Dataset columns (name: type, observed range):
- id: string, 9-16 chars
- submitter: string, 3-64 chars, nullable
- authors: string, 5-6.63k chars
- title: string, 7-245 chars
- comments: string, 1-482 chars, nullable
- journal-ref: string, 4-382 chars, nullable
- doi: string, 9-151 chars, nullable
- report-no: string, 984 distinct values
- categories: string, 5-108 chars
- license: string, 9 distinct values
- abstract: string, 83-3.41k chars
- versions: list, 1-20 items
- update_date: timestamp[s], 2007-05-23 to 2025-04-11
- authors_parsed: list, 1-427 items
- prompt: string, 166-3.49k chars
- label: string, 2 classes
- prob: float64, 0.5-0.98

Each record below lists these fields in this order, separated by `|`.
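To work with these records programmatically, a minimal loading sketch is shown below. It assumes the table is distributed as a Hugging Face dataset; the repository id `user/arxiv-new-dataset-labels` is a hypothetical placeholder, not the actual location.

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual location of this table.
ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# Inspect the schema and the first record.
print(ds.features)                  # column names/types matching the list above
row = ds[0]
print(row["id"], row["title"])      # e.g. 2406.13629, "InstructRAG: ..."
print(row["label"], row["prob"])    # e.g. no_new_dataset 0.945298
```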
2406.13629 | Zhepei Wei | Zhepei Wei, Wei-Lin Chen, Yu Meng | InstructRAG: Instructing Retrieval-Augmented Generation via
Self-Synthesized Rationales | ICLR 2025. Code: https://github.com/weizhepei/InstructRAG | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-augmented generation (RAG) has shown promising potential to enhance
the accuracy and factuality of language models (LMs). However, imperfect
retrievers or noisy corpora can introduce misleading or even erroneous
information to the retrieved contents, posing a significant challenge to the
generation quality. Existing RAG methods typically address this challenge by
directly predicting final answers despite potentially noisy inputs, resulting
in an implicit denoising process that is difficult to interpret and verify. On
the other hand, the acquisition of explicit denoising supervision is often
costly, involving significant human efforts. In this work, we propose
InstructRAG, where LMs explicitly learn the denoising process through
self-synthesized rationales -- First, we instruct the LM to explain how the
ground-truth answer is derived from retrieved documents. Then, these rationales
can be used either as demonstrations for in-context learning of explicit
denoising or as supervised fine-tuning data to train the model. Compared to
standard RAG approaches, InstructRAG requires no additional supervision, allows
for easier verification of the predicted answers, and effectively improves
generation accuracy. Experiments show InstructRAG consistently outperforms
existing RAG methods in both training-free and trainable scenarios, achieving a
relative improvement of 8.3% over the best baseline method on average across
five knowledge-intensive benchmarks. Extensive analysis indicates that
InstructRAG scales well with increased numbers of retrieved documents and
consistently exhibits robust denoising ability even in out-of-domain datasets,
demonstrating strong generalizability.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 15:25:29 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Aug 2024 15:48:49 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 00:46:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wei",
"Zhepei",
""
],
[
"Chen",
"Wei-Lin",
""
],
[
"Meng",
"Yu",
""
]
]
| TITLE: InstructRAG: Instructing Retrieval-Augmented Generation via
Self-Synthesized Rationales
ABSTRACT: Retrieval-augmented generation (RAG) has shown promising potential to enhance
the accuracy and factuality of language models (LMs). However, imperfect
retrievers or noisy corpora can introduce misleading or even erroneous
information to the retrieved contents, posing a significant challenge to the
generation quality. Existing RAG methods typically address this challenge by
directly predicting final answers despite potentially noisy inputs, resulting
in an implicit denoising process that is difficult to interpret and verify. On
the other hand, the acquisition of explicit denoising supervision is often
costly, involving significant human efforts. In this work, we propose
InstructRAG, where LMs explicitly learn the denoising process through
self-synthesized rationales -- First, we instruct the LM to explain how the
ground-truth answer is derived from retrieved documents. Then, these rationales
can be used either as demonstrations for in-context learning of explicit
denoising or as supervised fine-tuning data to train the model. Compared to
standard RAG approaches, InstructRAG requires no additional supervision, allows
for easier verification of the predicted answers, and effectively improves
generation accuracy. Experiments show InstructRAG consistently outperforms
existing RAG methods in both training-free and trainable scenarios, achieving a
relative improvement of 8.3% over the best baseline method on average across
five knowledge-intensive benchmarks. Extensive analysis indicates that
InstructRAG scales well with increased numbers of retrieved documents and
consistently exhibits robust denoising ability even in out-of-domain datasets,
demonstrating strong generalizability.
| no_new_dataset | 0.945298 |
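The record above suggests how the `prompt` field is derived: the `title` and `abstract` concatenated under `TITLE:` and `ABSTRACT:` headers. A small sketch of that construction follows, assuming this template holds for every row.

```python
def build_prompt(title: str, abstract: str) -> str:
    # Assumed template, inferred from the records in this table:
    #   "TITLE: <title>\nABSTRACT: <abstract>"
    return f"TITLE: {title}\nABSTRACT: {abstract}"

# Values taken from the first record above (abstract truncated for brevity).
title = "InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales"
abstract = "Retrieval-augmented generation (RAG) has shown promising potential to enhance ..."
print(build_prompt(title, abstract).splitlines()[0])  # TITLE: InstructRAG: ...
```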
2406.14314 | Omri Berkovitch | Omri Berkovitch, Sapir Caduri, Noam Kahlon, Anatoly Efros, Avi
Caciularu, Ido Dagan | Identifying User Goals from UI Trajectories | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Identifying underlying user goals and intents has been recognized as valuable
in various personalization-oriented settings, such as personalized agents,
improved search responses, advertising, user analytics, and more. In this
paper, we propose a new task, goal identification from observed UI trajectories,
aiming to infer the user's detailed intentions when performing a task within UI
environments. To support this task, we also introduce a novel evaluation
methodology designed to assess whether two intent descriptions can be
considered paraphrases within a specific UI environment. Furthermore, we
demonstrate how this task can leverage datasets designed for the inverse
problem of UI automation, utilizing Android and web datasets for our
experiments. To benchmark this task, we compare the performance of humans and
state-of-the-art models, specifically GPT-4 and Gemini-1.5 Pro, using our
proposed metric. The results reveal that both Gemini and GPT underperform
relative to human performance, underscoring the challenge of the proposed task
and the significant room for improvement. This work highlights the importance
of goal identification within UI trajectories, providing a foundation for
further exploration and advancement in this area.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 13:46:10 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Jun 2024 12:33:48 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 15:47:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Berkovitch",
"Omri",
""
],
[
"Caduri",
"Sapir",
""
],
[
"Kahlon",
"Noam",
""
],
[
"Efros",
"Anatoly",
""
],
[
"Caciularu",
"Avi",
""
],
[
"Dagan",
"Ido",
""
]
]
| TITLE: Identifying User Goals from UI Trajectories
ABSTRACT: Identifying underlying user goals and intents has been recognized as valuable
in various personalization-oriented settings, such as personalized agents,
improved search responses, advertising, user analytics, and more. In this
paper, we propose a new task, goal identification from observed UI trajectories,
aiming to infer the user's detailed intentions when performing a task within UI
environments. To support this task, we also introduce a novel evaluation
methodology designed to assess whether two intent descriptions can be
considered paraphrases within a specific UI environment. Furthermore, we
demonstrate how this task can leverage datasets designed for the inverse
problem of UI automation, utilizing Android and web datasets for our
experiments. To benchmark this task, we compare the performance of humans and
state-of-the-art models, specifically GPT-4 and Gemini-1.5 Pro, using our
proposed metric. The results reveal that both Gemini and GPT underperform
relative to human performance, underscoring the challenge of the proposed task
and the significant room for improvement. This work highlights the importance
of goal identification within UI trajectories, providing a foundation for
further exploration and advancement in this area.
| no_new_dataset | 0.943971 |
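The `versions` field stores one entry per arXiv revision with an RFC 2822-style timestamp (e.g. "Thu, 20 Jun 2024 13:46:10 GMT"). The sketch below, assuming every entry follows that format, parses the timestamps and picks the latest revision; the values are copied from the record above.

```python
from email.utils import parsedate_to_datetime

versions = [
    {"version": "v1", "created": "Thu, 20 Jun 2024 13:46:10 GMT"},
    {"version": "v2", "created": "Sun, 30 Jun 2024 12:33:48 GMT"},
    {"version": "v3", "created": "Mon, 3 Mar 2025 15:47:10 GMT"},
]

# Convert the RFC 2822 timestamps to timezone-aware datetimes and take the max.
parsed = [(v["version"], parsedate_to_datetime(v["created"])) for v in versions]
latest_version, latest_created = max(parsed, key=lambda item: item[1])
print(latest_version, latest_created.isoformat())  # v3 2025-03-03T15:47:10+00:00
```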
2406.14598 | Tinghao Xie | Tinghao Xie, Xiangyu Qi, Yi Zeng, Yangsibo Huang, Udari Madhushani
Sehwag, Kaixuan Huang, Luxi He, Boyi Wei, Dacheng Li, Ying Sheng, Ruoxi Jia,
Bo Li, Kai Li, Danqi Chen, Peter Henderson, Prateek Mittal | SORRY-Bench: Systematically Evaluating Large Language Model Safety
Refusal | Paper accepted to ICLR 2025 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating aligned large language models' (LLMs) ability to recognize and
reject unsafe user requests is crucial for safe, policy-compliant deployments.
Existing evaluation efforts, however, face three limitations that we address
with SORRY-Bench, our proposed benchmark. First, existing methods often use
coarse-grained taxonomies of unsafe topics, and are over-representing some
fine-grained topics. For example, among the ten existing datasets that we
evaluated, tests for refusals of self-harm instructions are over 3x less
represented than tests for fraudulent activities. SORRY-Bench improves on this
by using a fine-grained taxonomy of 44 potentially unsafe topics, and 440
class-balanced unsafe instructions, compiled through human-in-the-loop methods.
Second, linguistic characteristics and formatting of prompts are often
overlooked, like different languages, dialects, and more -- which are only
implicitly considered in many evaluations. We supplement SORRY-Bench with 20
diverse linguistic augmentations to systematically examine these effects.
Third, existing evaluations rely on large LLMs (e.g., GPT-4) for evaluation,
which can be computationally expensive. We investigate design choices for
creating a fast, accurate automated safety evaluator. By collecting 7K+ human
annotations and conducting a meta-evaluation of diverse LLM-as-a-judge designs,
we show that fine-tuned 7B LLMs can achieve accuracy comparable to GPT-4 scale
LLMs, with lower computational cost. Putting these together, we evaluate over
50 proprietary and open-weight LLMs on SORRY-Bench, analyzing their distinctive
safety refusal behaviors. We hope our effort provides a building block for
systematic evaluations of LLMs' safety refusal capabilities, in a balanced,
granular, and efficient manner. Benchmark demo, data, code, and models are
available through https://sorry-bench.github.io.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 17:56:07 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 21:45:36 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xie",
"Tinghao",
""
],
[
"Qi",
"Xiangyu",
""
],
[
"Zeng",
"Yi",
""
],
[
"Huang",
"Yangsibo",
""
],
[
"Sehwag",
"Udari Madhushani",
""
],
[
"Huang",
"Kaixuan",
""
],
[
"He",
"Luxi",
""
],
[
"Wei",
"Boyi",
""
],
[
"Li",
"Dacheng",
""
],
[
"Sheng",
"Ying",
""
],
[
"Jia",
"Ruoxi",
""
],
[
"Li",
"Bo",
""
],
[
"Li",
"Kai",
""
],
[
"Chen",
"Danqi",
""
],
[
"Henderson",
"Peter",
""
],
[
"Mittal",
"Prateek",
""
]
]
| TITLE: SORRY-Bench: Systematically Evaluating Large Language Model Safety
Refusal
ABSTRACT: Evaluating aligned large language models' (LLMs) ability to recognize and
reject unsafe user requests is crucial for safe, policy-compliant deployments.
Existing evaluation efforts, however, face three limitations that we address
with SORRY-Bench, our proposed benchmark. First, existing methods often use
coarse-grained taxonomies of unsafe topics, and are over-representing some
fine-grained topics. For example, among the ten existing datasets that we
evaluated, tests for refusals of self-harm instructions are over 3x less
represented than tests for fraudulent activities. SORRY-Bench improves on this
by using a fine-grained taxonomy of 44 potentially unsafe topics, and 440
class-balanced unsafe instructions, compiled through human-in-the-loop methods.
Second, linguistic characteristics and formatting of prompts are often
overlooked, like different languages, dialects, and more -- which are only
implicitly considered in many evaluations. We supplement SORRY-Bench with 20
diverse linguistic augmentations to systematically examine these effects.
Third, existing evaluations rely on large LLMs (e.g., GPT-4) for evaluation,
which can be computationally expensive. We investigate design choices for
creating a fast, accurate automated safety evaluator. By collecting 7K+ human
annotations and conducting a meta-evaluation of diverse LLM-as-a-judge designs,
we show that fine-tuned 7B LLMs can achieve accuracy comparable to GPT-4 scale
LLMs, with lower computational cost. Putting these together, we evaluate over
50 proprietary and open-weight LLMs on SORRY-Bench, analyzing their distinctive
safety refusal behaviors. We hope our effort provides a building block for
systematic evaluations of LLMs' safety refusal capabilities, in a balanced,
granular, and efficient manner. Benchmark demo, data, code, and models are
available through https://sorry-bench.github.io.
| no_new_dataset | 0.926437 |
2406.15304 | Michael Burgess Jr. | Michael Burgess, Jialiang Zhao, Laurence Willemet | Learning Object Compliance via Young's Modulus from Single Grasps using
Camera-Based Tactile Sensors | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Compliance is a useful parametrization of tactile information that humans
often utilize in manipulation tasks. It can be used to inform low-level
contact-rich actions or characterize objects at a high-level. In robotic
manipulation, existing approaches to estimate compliance have struggled to
generalize across both object shape and material. Using camera-based tactile
sensors, proprioception, and force measurements, we present a novel approach to
estimate object compliance as Young's modulus (E) from parallel grasps. We
evaluate our method over a novel dataset of 285 common objects, including a
wide array of shapes and materials with Young's moduli ranging from 5.0 kPa to
250 GPa. Combining analytical and data-driven approaches, we develop a hybrid
system using a multi-tower neural network to analyze a sequence of tactile
images from grasping. This system is shown to estimate the Young's modulus of
unseen objects within an order of magnitude at 74.2% accuracy across our
dataset. This is an improvement over purely analytical and data-driven
baselines which exhibit 28.9% and 65.0% accuracy respectively. Importantly,
this estimation system performs irrespective of object geometry and
demonstrates increased robustness across material types.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 15:15:18 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Sep 2024 01:22:55 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 17:24:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Burgess",
"Michael",
""
],
[
"Zhao",
"Jialiang",
""
],
[
"Willemet",
"Laurence",
""
]
]
| TITLE: Learning Object Compliance via Young's Modulus from Single Grasps using
Camera-Based Tactile Sensors
ABSTRACT: Compliance is a useful parametrization of tactile information that humans
often utilize in manipulation tasks. It can be used to inform low-level
contact-rich actions or characterize objects at a high-level. In robotic
manipulation, existing approaches to estimate compliance have struggled to
generalize across both object shape and material. Using camera-based tactile
sensors, proprioception, and force measurements, we present a novel approach to
estimate object compliance as Young's modulus (E) from parallel grasps. We
evaluate our method over a novel dataset of 285 common objects, including a
wide array of shapes and materials with Young's moduli ranging from 5.0 kPa to
250 GPa. Combining analytical and data-driven approaches, we develop a hybrid
system using a multi-tower neural network to analyze a sequence of tactile
images from grasping. This system is shown to estimate the Young's modulus of
unseen objects within an order of magnitude at 74.2% accuracy across our
dataset. This is an improvement over purely analytical and data-driven
baselines which exhibit 28.9% and 65.0% accuracy respectively. Importantly,
this estimation system performs irrespective of object geometry and
demonstrates increased robustness across material types.
| new_dataset | 0.970493 |
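The `label` column takes the two values seen in these rows, `no_new_dataset` and `new_dataset`, with `prob` apparently giving the classifier's confidence. A hedged sketch of selecting high-confidence `new_dataset` rows follows; `ds` is the object loaded in the earlier snippet, and the 0.9 threshold is an arbitrary illustration.

```python
# Keep only papers flagged as introducing a new dataset with high confidence.
new_dataset_rows = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)

print(len(new_dataset_rows))
for r in new_dataset_rows.select(range(min(3, len(new_dataset_rows)))):
    print(r["id"], r["prob"], r["title"][:60])
```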
2406.16655 | Peng Hu | Peng Hu, Sizhe Liu, Changjiang Gao, Xin Huang, Xue Han, Junlan Feng,
Chao Deng, and Shujian Huang | Large Language Models Are Cross-Lingual Knowledge-Free Reasoners | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models have demonstrated impressive reasoning capabilities
across multiple languages. However, the relationship between capabilities in
different languages is less explored. In this work, we decompose the process of
reasoning tasks into two separated components: knowledge retrieval and
knowledge-free reasoning, and analyze the relationship between cross-lingual
transferability and these two components. With adapted commonsense reasoning
datasets and constructed knowledge-free reasoning datasets, we show that the
knowledge-free reasoning capability can be nearly perfectly transferred across
various source-target language directions despite the secondary impact of
resource in some specific target languages, while cross-lingual knowledge
retrieval significantly hinders the transfer. Moreover, by analyzing the hidden
states and feed-forward network neuron activation during the reasoning, we show
that higher similarity of hidden representations and larger overlap of
activated neurons could explain the better cross-lingual transferability of
knowledge-free reasoning than knowledge retrieval. Thus, we hypothesize that
knowledge-free reasoning shares similar neurons in different languages for
reasoning, while knowledge is stored separately in different languages. Our
code and data is available at:
https://github.com/NJUNLP/Knowledge-Free-Reasoning.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 14:03:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Oct 2024 13:08:01 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 15:56:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hu",
"Peng",
""
],
[
"Liu",
"Sizhe",
""
],
[
"Gao",
"Changjiang",
""
],
[
"Huang",
"Xin",
""
],
[
"Han",
"Xue",
""
],
[
"Feng",
"Junlan",
""
],
[
"Deng",
"Chao",
""
],
[
"Huang",
"Shujian",
""
]
]
| TITLE: Large Language Models Are Cross-Lingual Knowledge-Free Reasoners
ABSTRACT: Large Language Models have demonstrated impressive reasoning capabilities
across multiple languages. However, the relationship between capabilities in
different languages is less explored. In this work, we decompose the process of
reasoning tasks into two separated components: knowledge retrieval and
knowledge-free reasoning, and analyze the relationship between cross-lingual
transferability and these two components. With adapted commonsense reasoning
datasets and constructed knowledge-free reasoning datasets, we show that the
knowledge-free reasoning capability can be nearly perfectly transferred across
various source-target language directions despite the secondary impact of
resource in some specific target languages, while cross-lingual knowledge
retrieval significantly hinders the transfer. Moreover, by analyzing the hidden
states and feed-forward network neuron activation during the reasoning, we show
that higher similarity of hidden representations and larger overlap of
activated neurons could explain the better cross-lingual transferability of
knowledge-free reasoning than knowledge retrieval. Thus, we hypothesize that
knowledge-free reasoning shares similar neurons in different languages for
reasoning, while knowledge is stored separately in different languages. Our
code and data is available at:
https://github.com/NJUNLP/Knowledge-Free-Reasoning.
| new_dataset | 0.962638 |
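The `authors_parsed` field splits each author into `[family name, given name, suffix]` (the suffix is empty in every row shown here). Assuming that ordering, the sketch below reassembles display names; the entries are copied from the record above.

```python
authors_parsed = [
    ["Hu", "Peng", ""],
    ["Liu", "Sizhe", ""],
    ["Gao", "Changjiang", ""],
]

def display_name(entry):
    # entry is [family, given, suffix]; join non-empty parts as "Given Family Suffix".
    family, given, suffix = entry
    return " ".join(part for part in (given, family, suffix) if part)

print(", ".join(display_name(a) for a in authors_parsed))
# Peng Hu, Sizhe Liu, Changjiang Gao
```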
2406.17800 | Meng Cui | Meng Cui, Xubo Liu, Haohe Liu, Jinzheng Zhao, Daoliang Li, Wenwu Wang | Fish Tracking, Counting, and Behaviour Analysis in Digital Aquaculture:
A Comprehensive Survey | null | Reviews in Aquaculture, 17(1), e13001 (2025) | 10.1111/raq.13001 | null | q-bio.QM cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Digital aquaculture leverages advanced technologies and data-driven methods,
providing substantial benefits over traditional aquaculture practices. This
paper presents a comprehensive review of three interconnected digital
aquaculture tasks, namely, fish tracking, counting, and behaviour analysis,
using a novel and unified approach. Unlike previous reviews which focused on
single modalities or individual tasks, we analyse vision-based (i.e. image- and
video-based), acoustic-based, and biosensor-based methods across all three
tasks. We examine their advantages, limitations, and applications, highlighting
recent advancements and identifying critical cross-cutting research gaps. The
review also includes emerging ideas such as applying multi-task learning and
large language models to address various aspects of fish monitoring, an
approach not previously explored in aquaculture literature. We identify the
major obstacles hindering research progress in this field, including the
scarcity of comprehensive fish datasets and the lack of unified evaluation
standards. To overcome the current limitations, we explore the potential of
using emerging technologies such as multimodal data fusion and deep learning to
improve the accuracy, robustness, and efficiency of integrated fish monitoring
systems. In addition, we provide a summary of existing datasets available for
fish tracking, counting, and behaviour analysis. This holistic perspective
offers a roadmap for future research, emphasizing the need for comprehensive
datasets and evaluation standards to facilitate meaningful comparisons between
technologies and to promote their practical implementations in real-world
settings.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 11:37:27 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Oct 2024 16:13:34 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 14:02:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cui",
"Meng",
""
],
[
"Liu",
"Xubo",
""
],
[
"Liu",
"Haohe",
""
],
[
"Zhao",
"Jinzheng",
""
],
[
"Li",
"Daoliang",
""
],
[
"Wang",
"Wenwu",
""
]
]
| TITLE: Fish Tracking, Counting, and Behaviour Analysis in Digital Aquaculture:
A Comprehensive Survey
ABSTRACT: Digital aquaculture leverages advanced technologies and data-driven methods,
providing substantial benefits over traditional aquaculture practices. This
paper presents a comprehensive review of three interconnected digital
aquaculture tasks, namely, fish tracking, counting, and behaviour analysis,
using a novel and unified approach. Unlike previous reviews which focused on
single modalities or individual tasks, we analyse vision-based (i.e. image- and
video-based), acoustic-based, and biosensor-based methods across all three
tasks. We examine their advantages, limitations, and applications, highlighting
recent advancements and identifying critical cross-cutting research gaps. The
review also includes emerging ideas such as applying multi-task learning and
large language models to address various aspects of fish monitoring, an
approach not previously explored in aquaculture literature. We identify the
major obstacles hindering research progress in this field, including the
scarcity of comprehensive fish datasets and the lack of unified evaluation
standards. To overcome the current limitations, we explore the potential of
using emerging technologies such as multimodal data fusion and deep learning to
improve the accuracy, robustness, and efficiency of integrated fish monitoring
systems. In addition, we provide a summary of existing datasets available for
fish tracking, counting, and behaviour analysis. This holistic perspective
offers a roadmap for future research, emphasizing the need for comprehensive
datasets and evaluation standards to facilitate meaningful comparisons between
technologies and to promote their practical implementations in real-world
settings.
| no_new_dataset | 0.949201 |
2406.17972 | Tianyu Du | Susan Athey, Herman Brunborg, Tianyu Du, Ayush Kanodia, Keyon Vafa | LABOR-LLM: Language-Based Occupational Representations with Large
Language Models | null | null | null | null | cs.LG cs.CL econ.EM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vafa et al. (2024) introduced a transformer-based econometric model, CAREER,
that predicts a worker's next job as a function of career history (an
"occupation model"). CAREER was initially estimated ("pre-trained") using a
large, unrepresentative resume dataset, which served as a "foundation model,"
and parameter estimation was continued ("fine-tuned") using data from a
representative survey. CAREER had better predictive performance than
benchmarks. This paper considers an alternative where the resume-based
foundation model is replaced by a large language model (LLM). We convert
tabular data from the survey into text files that resemble resumes and
fine-tune the LLMs using these text files with the objective to predict the
next token (word). The resulting fine-tuned LLM is used as an input to an
occupation model. Its predictive performance surpasses all prior models. We
demonstrate the value of fine-tuning and further show that by adding more
career data from a different population, fine-tuning smaller LLMs surpasses the
performance of fine-tuning larger models.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 23:07:18 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2024 06:39:43 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 04:10:03 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Athey",
"Susan",
""
],
[
"Brunborg",
"Herman",
""
],
[
"Du",
"Tianyu",
""
],
[
"Kanodia",
"Ayush",
""
],
[
"Vafa",
"Keyon",
""
]
]
| TITLE: LABOR-LLM: Language-Based Occupational Representations with Large
Language Models
ABSTRACT: Vafa et al. (2024) introduced a transformer-based econometric model, CAREER,
that predicts a worker's next job as a function of career history (an
"occupation model"). CAREER was initially estimated ("pre-trained") using a
large, unrepresentative resume dataset, which served as a "foundation model,"
and parameter estimation was continued ("fine-tuned") using data from a
representative survey. CAREER had better predictive performance than
benchmarks. This paper considers an alternative where the resume-based
foundation model is replaced by a large language model (LLM). We convert
tabular data from the survey into text files that resemble resumes and
fine-tune the LLMs using these text files with the objective to predict the
next token (word). The resulting fine-tuned LLM is used as an input to an
occupation model. Its predictive performance surpasses all prior models. We
demonstrate the value of fine-tuning and further show that by adding more
career data from a different population, fine-tuning smaller LLMs surpasses the
performance of fine-tuning larger models.
| no_new_dataset | 0.955569 |
2406.19653 | Justin Xu | Justin Xu, Jack Gallifant, Alistair E. W. Johnson, Matthew B. A.
McDermott | ACES: Automatic Cohort Extraction System for Event-Stream Datasets | [ICLR 2025] For the latest ACES online documentation, please see
https://eventstreamaces.readthedocs.io/en/latest/ | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Reproducibility remains a significant challenge in machine learning (ML) for
healthcare. Datasets, model pipelines, and even task or cohort definitions are
often private in this field, leading to a significant barrier in sharing,
iterating, and understanding ML results on electronic health record (EHR)
datasets. We address a significant part of this problem by introducing the
Automatic Cohort Extraction System (ACES) for event-stream data. This library
is designed to simultaneously simplify the development of tasks and cohorts for
ML in healthcare and also enable their reproduction, both at an exact level for
single datasets and at a conceptual level across datasets. To accomplish this,
ACES provides: (1) a highly intuitive and expressive domain-specific
configuration language for defining both dataset-specific concepts and
dataset-agnostic inclusion or exclusion criteria, and (2) a pipeline to
automatically extract patient records that meet these defined criteria from
real-world data. ACES can be automatically applied to any dataset in either the
Medical Event Data Standard (MEDS) or Event Stream GPT (ESGPT) formats, or to
*any* dataset in which the necessary task-specific predicates can be extracted
in an event-stream form. ACES has the potential to significantly lower the
barrier to entry for defining ML tasks in representation learning, redefine the
way researchers interact with EHR datasets, and significantly improve the state
of reproducibility for ML studies using this modality. ACES is available at:
https://github.com/justin13601/aces.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2024 04:48:05 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Oct 2024 22:55:24 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 01:47:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xu",
"Justin",
""
],
[
"Gallifant",
"Jack",
""
],
[
"Johnson",
"Alistair E. W.",
""
],
[
"McDermott",
"Matthew B. A.",
""
]
]
| TITLE: ACES: Automatic Cohort Extraction System for Event-Stream Datasets
ABSTRACT: Reproducibility remains a significant challenge in machine learning (ML) for
healthcare. Datasets, model pipelines, and even task or cohort definitions are
often private in this field, leading to a significant barrier in sharing,
iterating, and understanding ML results on electronic health record (EHR)
datasets. We address a significant part of this problem by introducing the
Automatic Cohort Extraction System (ACES) for event-stream data. This library
is designed to simultaneously simplify the development of tasks and cohorts for
ML in healthcare and also enable their reproduction, both at an exact level for
single datasets and at a conceptual level across datasets. To accomplish this,
ACES provides: (1) a highly intuitive and expressive domain-specific
configuration language for defining both dataset-specific concepts and
dataset-agnostic inclusion or exclusion criteria, and (2) a pipeline to
automatically extract patient records that meet these defined criteria from
real-world data. ACES can be automatically applied to any dataset in either the
Medical Event Data Standard (MEDS) or Event Stream GPT (ESGPT) formats, or to
*any* dataset in which the necessary task-specific predicates can be extracted
in an event-stream form. ACES has the potential to significantly lower the
barrier to entry for defining ML tasks in representation learning, redefine the
way researchers interact with EHR datasets, and significantly improve the state
of reproducibility for ML studies using this modality. ACES is available at:
https://github.com/justin13601/aces.
| no_new_dataset | 0.952353 |
2407.00617 | Yuheng Zhang | Yuheng Zhang, Dian Yu, Baolin Peng, Linfeng Song, Ye Tian, Mingyue
Huo, Nan Jiang, Haitao Mi, Dong Yu | Iterative Nash Policy Optimization: Aligning LLMs with General
Preferences via No-Regret Learning | null | null | null | null | cs.LG cs.AI cs.CL cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning with Human Feedback (RLHF) has achieved great success
in aligning large language models (LLMs) with human preferences. Prevalent RLHF
approaches are reward-based, following the Bradley-Terry (BT) model assumption,
which may not fully capture the complexity of human preferences. In this paper,
we explore RLHF under a general preference framework and approach it from a
game-theoretic perspective. Specifically, we formulate the problem as a
two-player game and propose a novel online algorithm, iterative Nash policy
optimization (INPO). The key idea is to let the policy play against itself via
no-regret learning, thereby approximating the Nash policy. Unlike previous
methods, INPO bypasses the need for estimating the expected win rate for
individual responses, which typically incurs high computational or annotation
costs. Instead, we introduce a new loss objective that is directly minimized
over a preference dataset. We provide theoretical analysis for our approach and
demonstrate its effectiveness through experiments on various representative
benchmarks. With an LLaMA-3-8B-based SFT model, INPO achieves a 42.6%
length-controlled win rate on AlpacaEval 2.0 and a 37.8% win rate on
Arena-Hard, showing substantial improvement over the state-of-the-art online
RLHF algorithms.
| [
{
"version": "v1",
"created": "Sun, 30 Jun 2024 08:00:34 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Jul 2024 09:51:26 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Oct 2024 04:07:39 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 03:41:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yuheng",
""
],
[
"Yu",
"Dian",
""
],
[
"Peng",
"Baolin",
""
],
[
"Song",
"Linfeng",
""
],
[
"Tian",
"Ye",
""
],
[
"Huo",
"Mingyue",
""
],
[
"Jiang",
"Nan",
""
],
[
"Mi",
"Haitao",
""
],
[
"Yu",
"Dong",
""
]
]
| TITLE: Iterative Nash Policy Optimization: Aligning LLMs with General
Preferences via No-Regret Learning
ABSTRACT: Reinforcement Learning with Human Feedback (RLHF) has achieved great success
in aligning large language models (LLMs) with human preferences. Prevalent RLHF
approaches are reward-based, following the Bradley-Terry (BT) model assumption,
which may not fully capture the complexity of human preferences. In this paper,
we explore RLHF under a general preference framework and approach it from a
game-theoretic perspective. Specifically, we formulate the problem as a
two-player game and propose a novel online algorithm, iterative Nash policy
optimization (INPO). The key idea is to let the policy play against itself via
no-regret learning, thereby approximating the Nash policy. Unlike previous
methods, INPO bypasses the need for estimating the expected win rate for
individual responses, which typically incurs high computational or annotation
costs. Instead, we introduce a new loss objective that is directly minimized
over a preference dataset. We provide theoretical analysis for our approach and
demonstrate its effectiveness through experiments on various representative
benchmarks. With an LLaMA-3-8B-based SFT model, INPO achieves a 42.6%
length-controlled win rate on AlpacaEval 2.0 and a 37.8% win rate on
Arena-Hard, showing substantial improvement over the state-of-the-art online
RLHF algorithms.
| no_new_dataset | 0.948775 |
2407.00886 | Aliyah Hsu | Aliyah R. Hsu, Georgia Zhou, Yeshwanth Cherapanamjeri, Yaxuan Huang,
Anobel Y. Odisho, Peter R. Carroll, Bin Yu | Efficient Automated Circuit Discovery in Transformers using Contextual
Decomposition | null | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automated mechanistic interpretation research has attracted great interest
due to its potential to scale explanations of neural network internals to large
models. Existing automated circuit discovery work relies on activation patching
or its approximations to identify subgraphs in models for specific tasks
(circuits). They often suffer from slow runtime, approximation errors, and
specific requirements of metrics, such as non-zero gradients. In this work, we
introduce contextual decomposition for transformers (CD-T) to build
interpretable circuits in large language models. CD-T can produce circuits of
arbitrary level of abstraction, and is the first able to produce circuits as
fine-grained as attention heads at specific sequence positions efficiently.
CD-T consists of a set of mathematical equations to isolate contribution of
model features. Through recursively computing contribution of all nodes in a
computational graph of a model using CD-T followed by pruning, we are able to
reduce circuit discovery runtime from hours to seconds compared to
state-of-the-art baselines. On three standard circuit evaluation datasets
(indirect object identification, greater-than comparisons, and docstring
completion), we demonstrate that CD-T outperforms ACDC and EAP by better
recovering the manual circuits with an average of 97% ROC AUC under low
runtimes. In addition, we provide evidence that faithfulness of CD-T circuits
is not due to random chance by showing our circuits are 80% more faithful than
random circuits of up to 60% of the original model size. Finally, we show CD-T
circuits are able to perfectly replicate original models' behavior
(faithfulness $ = 1$) using fewer nodes than the baselines for all tasks. Our
results underscore the great promise of CD-T for efficient automated
mechanistic interpretability, paving the way for new insights into the workings
of large language models.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 01:12:20 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Oct 2024 19:12:22 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 08:26:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hsu",
"Aliyah R.",
""
],
[
"Zhou",
"Georgia",
""
],
[
"Cherapanamjeri",
"Yeshwanth",
""
],
[
"Huang",
"Yaxuan",
""
],
[
"Odisho",
"Anobel Y.",
""
],
[
"Carroll",
"Peter R.",
""
],
[
"Yu",
"Bin",
""
]
]
| TITLE: Efficient Automated Circuit Discovery in Transformers using Contextual
Decomposition
ABSTRACT: Automated mechanistic interpretation research has attracted great interest
due to its potential to scale explanations of neural network internals to large
models. Existing automated circuit discovery work relies on activation patching
or its approximations to identify subgraphs in models for specific tasks
(circuits). They often suffer from slow runtime, approximation errors, and
specific requirements of metrics, such as non-zero gradients. In this work, we
introduce contextual decomposition for transformers (CD-T) to build
interpretable circuits in large language models. CD-T can produce circuits of
arbitrary level of abstraction, and is the first able to produce circuits as
fine-grained as attention heads at specific sequence positions efficiently.
CD-T consists of a set of mathematical equations to isolate contribution of
model features. Through recursively computing contribution of all nodes in a
computational graph of a model using CD-T followed by pruning, we are able to
reduce circuit discovery runtime from hours to seconds compared to
state-of-the-art baselines. On three standard circuit evaluation datasets
(indirect object identification, greater-than comparisons, and docstring
completion), we demonstrate that CD-T outperforms ACDC and EAP by better
recovering the manual circuits with an average of 97% ROC AUC under low
runtimes. In addition, we provide evidence that faithfulness of CD-T circuits
is not due to random chance by showing our circuits are 80% more faithful than
random circuits of up to 60% of the original model size. Finally, we show CD-T
circuits are able to perfectly replicate original models' behavior
(faithfulness $ = 1$) using fewer nodes than the baselines for all tasks. Our
results underscore the great promise of CD-T for efficient automated
mechanistic interpretability, paving the way for new insights into the workings
of large language models.
| no_new_dataset | 0.947088 |
2407.03257 | Han-Jia Ye | Han-Jia Ye, Huai-Hong Yin, De-Chuan Zhan, Wei-Lun Chao | Revisiting Nearest Neighbor for Tabular Data: A Deep Tabular Baseline
Two Decades Later | Accepted to ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The widespread enthusiasm for deep learning has recently expanded into the
domain of tabular data. Recognizing that the advancement in deep tabular
methods is often inspired by classical methods, e.g., integration of nearest
neighbors into neural networks, we investigate whether these classical methods
can be revitalized with modern techniques. We revisit a differentiable version
of $K$-nearest neighbors (KNN) -- Neighbourhood Components Analysis (NCA) --
originally designed to learn a linear projection to capture semantic
similarities between instances, and seek to gradually add modern deep learning
techniques on top. Surprisingly, our implementation of NCA using SGD and
without dimensionality reduction already achieves decent performance on tabular
data, in contrast to the results of using existing toolboxes like scikit-learn.
Further equipping NCA with deep representations and additional training
stochasticity significantly enhances its capability, being on par with the
leading tree-based method CatBoost and outperforming existing deep tabular
models in both classification and regression tasks on 300 datasets. We conclude
our paper by analyzing the factors behind these improvements, including loss
functions, prediction strategies, and deep architectures. The code is available
at https://github.com/qile2000/LAMDA-TALENT.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 16:38:57 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 16:38:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ye",
"Han-Jia",
""
],
[
"Yin",
"Huai-Hong",
""
],
[
"Zhan",
"De-Chuan",
""
],
[
"Chao",
"Wei-Lun",
""
]
]
| TITLE: Revisiting Nearest Neighbor for Tabular Data: A Deep Tabular Baseline
Two Decades Later
ABSTRACT: The widespread enthusiasm for deep learning has recently expanded into the
domain of tabular data. Recognizing that the advancement in deep tabular
methods is often inspired by classical methods, e.g., integration of nearest
neighbors into neural networks, we investigate whether these classical methods
can be revitalized with modern techniques. We revisit a differentiable version
of $K$-nearest neighbors (KNN) -- Neighbourhood Components Analysis (NCA) --
originally designed to learn a linear projection to capture semantic
similarities between instances, and seek to gradually add modern deep learning
techniques on top. Surprisingly, our implementation of NCA using SGD and
without dimensionality reduction already achieves decent performance on tabular
data, in contrast to the results of using existing toolboxes like scikit-learn.
Further equipping NCA with deep representations and additional training
stochasticity significantly enhances its capability, being on par with the
leading tree-based method CatBoost and outperforming existing deep tabular
models in both classification and regression tasks on 300 datasets. We conclude
our paper by analyzing the factors behind these improvements, including loss
functions, prediction strategies, and deep architectures. The code is available
at https://github.com/qile2000/LAMDA-TALENT.
| no_new_dataset | 0.942507 |
2407.03856 | Yi-Chen Li | Yi-Chen Li, Fuxiang Zhang, Wenjie Qiu, Lei Yuan, Chengxing Jia,
Zongzhang Zhang, Yang Yu, Bo An | Q-Adapter: Customizing Pre-trained LLMs to New Preferences with
Forgetting Mitigation | Camera ready version of ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs), trained on a large amount of corpus, have
demonstrated remarkable abilities. However, it may not be sufficient to
directly apply open-source LLMs like Llama to certain real-world scenarios,
since most of them are trained for \emph{general} purposes. Thus, the demands
for customizing publicly available LLMs emerge, but are currently
under-studied. In this work, we consider customizing pre-trained LLMs with new
human preferences. Specifically, the LLM should not only meet the new
preference but also preserve its original capabilities after customization.
Drawing inspiration from the observation that human preference can be expressed
as a reward model, we propose to cast LLM customization as optimizing the sum
of two reward functions, one of which (denoted as $r_1$) was used to pre-train
the LLM while the other (denoted as $r_2$) characterizes the new human
preference. The obstacle here is that both reward functions are unknown, making
the application of modern reinforcement learning methods infeasible. Thanks to
the residual Q-learning framework, we can restore the customized LLM with the
pre-trained LLM and the \emph{residual Q-function} without the reward function
$r_1$. Moreover, we find that for a fixed pre-trained LLM, the reward function
$r_2$ can be derived from the residual Q-function, enabling us to directly
learn the residual Q-function from the new human preference data upon the
Bradley-Terry model. We name our method Q-Adapter as it introduces an adapter
module to approximate the residual Q-function for customizing the pre-trained
LLM towards the new preference. Experiments based on the Llama-3.1 model on the
DSP dataset and HH-RLHF dataset illustrate the superior effectiveness of
Q-Adapter on both retaining existing knowledge and learning new preferences.
Code is available at https://github.com/mansicer/Q-Adapter.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2024 11:42:36 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Oct 2024 06:51:25 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Oct 2024 06:12:49 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 08:48:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Yi-Chen",
""
],
[
"Zhang",
"Fuxiang",
""
],
[
"Qiu",
"Wenjie",
""
],
[
"Yuan",
"Lei",
""
],
[
"Jia",
"Chengxing",
""
],
[
"Zhang",
"Zongzhang",
""
],
[
"Yu",
"Yang",
""
],
[
"An",
"Bo",
""
]
]
| TITLE: Q-Adapter: Customizing Pre-trained LLMs to New Preferences with
Forgetting Mitigation
ABSTRACT: Large Language Models (LLMs), trained on a large amount of corpus, have
demonstrated remarkable abilities. However, it may not be sufficient to
directly apply open-source LLMs like Llama to certain real-world scenarios,
since most of them are trained for \emph{general} purposes. Thus, the demands
for customizing publicly available LLMs emerge, but are currently
under-studied. In this work, we consider customizing pre-trained LLMs with new
human preferences. Specifically, the LLM should not only meet the new
preference but also preserve its original capabilities after customization.
Drawing inspiration from the observation that human preference can be expressed
as a reward model, we propose to cast LLM customization as optimizing the sum
of two reward functions, one of which (denoted as $r_1$) was used to pre-train
the LLM while the other (denoted as $r_2$) characterizes the new human
preference. The obstacle here is that both reward functions are unknown, making
the application of modern reinforcement learning methods infeasible. Thanks to
the residual Q-learning framework, we can restore the customized LLM with the
pre-trained LLM and the \emph{residual Q-function} without the reward function
$r_1$. Moreover, we find that for a fixed pre-trained LLM, the reward function
$r_2$ can be derived from the residual Q-function, enabling us to directly
learn the residual Q-function from the new human preference data upon the
Bradley-Terry model. We name our method Q-Adapter as it introduces an adapter
module to approximate the residual Q-function for customizing the pre-trained
LLM towards the new preference. Experiments based on the Llama-3.1 model on the
DSP dataset and HH-RLHF dataset illustrate the superior effectiveness of
Q-Adapter on both retaining existing knowledge and learning new preferences.
Code is available at https://github.com/mansicer/Q-Adapter.
| no_new_dataset | 0.94474 |
2407.04285 | Jiawei Xu | Jiawei Xu, Rui Yang, Shuang Qiu, Feng Luo, Meng Fang, Baoxiang Wang,
Lei Han | Tackling Data Corruption in Offline Reinforcement Learning via Sequence
Modeling | Accepted by ICLR2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning policy from offline datasets through offline reinforcement learning
(RL) holds promise for scaling data-driven decision-making while avoiding
unsafe and costly online interactions. However, real-world data collected from
sensors or humans often contains noise and errors, posing a significant
challenge for existing offline RL methods, particularly when the real-world
data is limited. Our study reveals that prior research focusing on adapting
predominant offline RL methods based on temporal difference learning still
falls short under data corruption when the dataset is limited. In contrast, we
discover that vanilla sequence modeling methods, such as Decision Transformer,
exhibit robustness against data corruption, even without specialized
modifications. To unlock the full potential of sequence modeling, we propose
Robust Decision Transformer (RDT) by incorporating three simple yet effective
robust techniques: embedding dropout to improve the model's robustness against
erroneous inputs, Gaussian weighted learning to mitigate the effects of
corrupted labels, and iterative data correction to eliminate corrupted data
from the source. Extensive experiments on MuJoCo, Kitchen, and Adroit tasks
demonstrate RDT's superior performance under various data corruption scenarios
compared to prior methods. Furthermore, RDT exhibits remarkable robustness in a
more challenging setting that combines training-time data corruption with
test-time observation perturbations. These results highlight the potential of
sequence modeling for learning from noisy or corrupted offline datasets,
thereby promoting the reliable application of offline RL in real-world
scenarios. Our code is available at
https://github.com/jiawei415/RobustDecisionTransformer.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2024 06:34:32 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 06:25:00 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2025 03:51:06 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 08:28:00 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xu",
"Jiawei",
""
],
[
"Yang",
"Rui",
""
],
[
"Qiu",
"Shuang",
""
],
[
"Luo",
"Feng",
""
],
[
"Fang",
"Meng",
""
],
[
"Wang",
"Baoxiang",
""
],
[
"Han",
"Lei",
""
]
]
| TITLE: Tackling Data Corruption in Offline Reinforcement Learning via Sequence
Modeling
ABSTRACT: Learning policy from offline datasets through offline reinforcement learning
(RL) holds promise for scaling data-driven decision-making while avoiding
unsafe and costly online interactions. However, real-world data collected from
sensors or humans often contains noise and errors, posing a significant
challenge for existing offline RL methods, particularly when the real-world
data is limited. Our study reveals that prior research focusing on adapting
predominant offline RL methods based on temporal difference learning still
falls short under data corruption when the dataset is limited. In contrast, we
discover that vanilla sequence modeling methods, such as Decision Transformer,
exhibit robustness against data corruption, even without specialized
modifications. To unlock the full potential of sequence modeling, we propose
Robust Decision Transformer (RDT) by incorporating three simple yet effective
robust techniques: embedding dropout to improve the model's robustness against
erroneous inputs, Gaussian weighted learning to mitigate the effects of
corrupted labels, and iterative data correction to eliminate corrupted data
from the source. Extensive experiments on MuJoCo, Kitchen, and Adroit tasks
demonstrate RDT's superior performance under various data corruption scenarios
compared to prior methods. Furthermore, RDT exhibits remarkable robustness in a
more challenging setting that combines training-time data corruption with
test-time observation perturbations. These results highlight the potential of
sequence modeling for learning from noisy or corrupted offline datasets,
thereby promoting the reliable application of offline RL in real-world
scenarios. Our code is available at
https://github.com/jiawei415/RobustDecisionTransformer.
| no_new_dataset | 0.946892 |
2407.04405 | Kai Ruan | Kai Ruan, Yilong Xu, Ze-Feng Gao, Yike Guo, Hao Sun, Ji-Rong Wen, Yang
Liu | Discovering physical laws with parallel combinatorial tree search | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Symbolic regression plays a crucial role in modern scientific research thanks
to its capability of discovering concise and interpretable mathematical
expressions from data. A grand challenge lies in the arduous search for
parsimonious and generalizable mathematical formulas, in an infinite search
space, while intending to fit the training data. Existing algorithms have faced
a critical bottleneck of accuracy and efficiency over a decade when handling
problems of complexity, which essentially hinders the pace of applying symbolic
regression for scientific exploration across interdisciplinary domains. To this
end, we introduce a parallel combinatorial tree search (PCTS) model to
efficiently distill generic mathematical expressions from limited data. Through
a series of extensive experiments, we demonstrate the superior accuracy and
efficiency of PCTS for equation discovery, which greatly outperforms the
state-of-the-art baseline models on over 200 synthetic and experimental
datasets (e.g., lifting its performance by up to 99% accuracy improvement and
one-order of magnitude speed up). PCTS represents a key advance in accurate and
efficient data-driven discovery of symbolic, interpretable models (e.g.,
underlying physical laws) and marks a pivotal transition towards scalable
symbolic learning.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2024 10:41:15 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 13:41:19 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 03:39:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ruan",
"Kai",
""
],
[
"Xu",
"Yilong",
""
],
[
"Gao",
"Ze-Feng",
""
],
[
"Guo",
"Yike",
""
],
[
"Sun",
"Hao",
""
],
[
"Wen",
"Ji-Rong",
""
],
[
"Liu",
"Yang",
""
]
]
| TITLE: Discovering physical laws with parallel combinatorial tree search
ABSTRACT: Symbolic regression plays a crucial role in modern scientific research thanks
to its capability of discovering concise and interpretable mathematical
expressions from data. A grand challenge lies in the arduous search for
parsimonious and generalizable mathematical formulas, in an infinite search
space, while intending to fit the training data. Existing algorithms have faced
a critical bottleneck of accuracy and efficiency over a decade when handling
problems of complexity, which essentially hinders the pace of applying symbolic
regression for scientific exploration across interdisciplinary domains. To this
end, we introduce a parallel combinatorial tree search (PCTS) model to
efficiently distill generic mathematical expressions from limited data. Through
a series of extensive experiments, we demonstrate the superior accuracy and
efficiency of PCTS for equation discovery, which greatly outperforms the
state-of-the-art baseline models on over 200 synthetic and experimental
datasets (e.g., lifting its performance by up to 99% accuracy improvement and
one-order of magnitude speed up). PCTS represents a key advance in accurate and
efficient data-driven discovery of symbolic, interpretable models (e.g.,
underlying physical laws) and marks a pivotal transition towards scalable
symbolic learning.
| no_new_dataset | 0.944331 |
2407.04495 | Kotaro Ikeda | Kotaro Ikeda, Tomoya Uda, Daisuke Okanohara, and Sosuke Ito | Speed-accuracy relations for the diffusion models: Wisdom from
nonequilibrium thermodynamics and optimal transport | 36 pages, 7 figures | null | null | null | cond-mat.stat-mech cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We discuss a connection between a generative model, called the diffusion
model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called
stochastic thermodynamics. Based on the techniques of stochastic
thermodynamics, we derive the speed-accuracy relations for the diffusion
models, which are inequalities that relate the accuracy of data generation to
the entropy production rate, which can be interpreted as the speed of the
diffusion dynamics in the absence of the non-conservative force. From a
stochastic thermodynamic perspective, our results provide a quantitative
insight into how best to generate data in diffusion models. The optimal
learning protocol is introduced by the geodesic of the space of the 2-Wasserstein
distance in optimal transport theory. We numerically illustrate the validity of
the speed-accuracy relations for the diffusion models with different noise
schedules and the different data. We numerically discuss our results for the
optimal and suboptimal learning protocols. We also show the inaccurate data
generation due to the non-conservative force, and the applicability of our
results to data generation from the real-world image datasets.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2024 13:35:14 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jul 2024 02:48:15 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Jul 2024 07:19:24 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Mar 2025 05:38:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ikeda",
"Kotaro",
""
],
[
"Uda",
"Tomoya",
""
],
[
"Okanohara",
"Daisuke",
""
],
[
"Ito",
"Sosuke",
""
]
]
| TITLE: Speed-accuracy relations for the diffusion models: Wisdom from
nonequilibrium thermodynamics and optimal transport
ABSTRACT: We discuss a connection between a generative model, called the diffusion
model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called
stochastic thermodynamics. Based on the techniques of stochastic
thermodynamics, we derive the speed-accuracy relations for the diffusion
models, which are inequalities that relate the accuracy of data generation to
the entropy production rate, which can be interpreted as the speed of the
diffusion dynamics in the absence of the non-conservative force. From a
stochastic thermodynamic perspective, our results provide a quantitative
insight into how best to generate data in diffusion models. The optimal
learning protocol is introduced by the geodesic of the space of the 2-Wasserstein
distance in optimal transport theory. We numerically illustrate the validity of
the speed-accuracy relations for the diffusion models with different noise
schedules and different data. We numerically discuss our results for the
optimal and suboptimal learning protocols. We also show the inaccurate data
generation due to the non-conservative force, and the applicability of our
results to data generation from the real-world image datasets.
| no_new_dataset | 0.951908 |
2407.04942 | Chenyang Cao | Chenyang Cao, Yucheng Xin, Silang Wu, Longxiang He, Zichen Yan, Junbo
Tan, Xueqian Wang | FOSP: Fine-tuning Offline Safe Policy through World Models | 32 pages, ICLR2025 | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | Offline Safe Reinforcement Learning (RL) seeks to address safety constraints
by learning from static datasets and restricting exploration. However, these
approaches heavily rely on the dataset and struggle to generalize to unseen
scenarios safely. In this paper, we aim to improve safety during the deployment
of vision-based robotic tasks through online fine-tuning of an offline pretrained
policy. To facilitate effective fine-tuning, we introduce model-based RL, which
is known for its data efficiency. Specifically, our method employs in-sample
optimization to improve offline training efficiency while incorporating
reachability guidance to ensure safety. After obtaining an offline safe policy,
a safe policy expansion approach is leveraged for online fine-tuning. The
performance of our method is validated on simulation benchmarks with five
vision-only tasks and through real-world robot deployment using limited data.
It demonstrates that our approach significantly improves the generalization of
offline policies to unseen safety-constrained scenarios. To the best of our
knowledge, this is the first work to explore offline-to-online RL for safe
generalization tasks.
| [
{
"version": "v1",
"created": "Sat, 6 Jul 2024 03:22:57 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 11:55:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cao",
"Chenyang",
""
],
[
"Xin",
"Yucheng",
""
],
[
"Wu",
"Silang",
""
],
[
"He",
"Longxiang",
""
],
[
"Yan",
"Zichen",
""
],
[
"Tan",
"Junbo",
""
],
[
"Wang",
"Xueqian",
""
]
]
| TITLE: FOSP: Fine-tuning Offline Safe Policy through World Models
ABSTRACT: Offline Safe Reinforcement Learning (RL) seeks to address safety constraints
by learning from static datasets and restricting exploration. However, these
approaches heavily rely on the dataset and struggle to generalize to unseen
scenarios safely. In this paper, we aim to improve safety during the deployment
of vision-based robotic tasks through online fine-tuning of an offline pretrained
policy. To facilitate effective fine-tuning, we introduce model-based RL, which
is known for its data efficiency. Specifically, our method employs in-sample
optimization to improve offline training efficiency while incorporating
reachability guidance to ensure safety. After obtaining an offline safe policy,
a safe policy expansion approach is leveraged for online fine-tuning. The
performance of our method is validated on simulation benchmarks with five
vision-only tasks and through real-world robot deployment using limited data.
It demonstrates that our approach significantly improves the generalization of
offline policies to unseen safety-constrained scenarios. To the best of our
knowledge, this is the first work to explore offline-to-online RL for safe
generalization tasks.
| no_new_dataset | 0.9463 |
2407.07516 | Omar Sherif Elassiouti | Omar S. EL-Assiouti, Ghada Hamed, Dina Khattab, Hala M. Ebied | HDKD: Hybrid Data-Efficient Knowledge Distillation Network for Medical
Image Classification | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision Transformers (ViTs) have achieved significant advancement in computer
vision tasks due to their powerful modeling capacity. However, their
performance notably degrades when trained with insufficient data due to lack of
inherent inductive biases. Distilling knowledge and inductive biases from a
Convolutional Neural Network (CNN) teacher has emerged as an effective strategy
for enhancing the generalization of ViTs on limited datasets. Previous
approaches to Knowledge Distillation (KD) have pursued two primary paths: some
focused solely on distilling the logit distribution from CNN teacher to ViT
student, neglecting the rich semantic information present in intermediate
features due to the structural differences between them. Others integrated
feature distillation along with logit distillation, yet this introduced
alignment operations that limits the amount of knowledge transferred due to
mismatched architectures and increased the computational overhead. To this end,
this paper presents Hybrid Data-efficient Knowledge Distillation (HDKD)
paradigm which employs a CNN teacher and a hybrid student. The choice of hybrid
student serves two main purposes. First, it leverages the strengths of both
convolutions and transformers while sharing the convolutional structure with
the teacher model. Second, this shared structure enables the direct application
of feature distillation without any information loss or additional
computational overhead. Additionally, we propose an efficient light-weight
convolutional block named Mobile Channel-Spatial Attention (MBCSA), which
serves as the primary convolutional block in both teacher and student models.
Extensive experiments on two medical public datasets showcase the superiority
of HDKD over other state-of-the-art models and its computational efficiency.
Source code at: https://github.com/omarsherif200/HDKD
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 10:09:12 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 23:17:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"EL-Assiouti",
"Omar S.",
""
],
[
"Hamed",
"Ghada",
""
],
[
"Khattab",
"Dina",
""
],
[
"Ebied",
"Hala M.",
""
]
]
| TITLE: HDKD: Hybrid Data-Efficient Knowledge Distillation Network for Medical
Image Classification
ABSTRACT: Vision Transformers (ViTs) have achieved significant advancement in computer
vision tasks due to their powerful modeling capacity. However, their
performance notably degrades when trained with insufficient data due to lack of
inherent inductive biases. Distilling knowledge and inductive biases from a
Convolutional Neural Network (CNN) teacher has emerged as an effective strategy
for enhancing the generalization of ViTs on limited datasets. Previous
approaches to Knowledge Distillation (KD) have pursued two primary paths: some
focused solely on distilling the logit distribution from CNN teacher to ViT
student, neglecting the rich semantic information present in intermediate
features due to the structural differences between them. Others integrated
feature distillation along with logit distillation, yet this introduced
alignment operations that limits the amount of knowledge transferred due to
mismatched architectures and increased the computational overhead. To this end,
this paper presents Hybrid Data-efficient Knowledge Distillation (HDKD)
paradigm which employs a CNN teacher and a hybrid student. The choice of hybrid
student serves two main purposes. First, it leverages the strengths of both
convolutions and transformers while sharing the convolutional structure with
the teacher model. Second, this shared structure enables the direct application
of feature distillation without any information loss or additional
computational overhead. Additionally, we propose an efficient light-weight
convolutional block named Mobile Channel-Spatial Attention (MBCSA), which
serves as the primary convolutional block in both teacher and student models.
Extensive experiments on two medical public datasets showcase the superiority
of HDKD over other state-of-the-art models and its computational efficiency.
Source code at: https://github.com/omarsherif200/HDKD
| no_new_dataset | 0.952309 |
2407.10223 | Lixu Wang | Chongyang Gao, Lixu Wang, Kaize Ding, Chenkai Weng, Xiao Wang, Qi Zhu | On Large Language Model Continual Unlearning | This paper has been accepted by ICLR 2025. The first two authors
contribute equally and they are ordered alphabetically | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While large language models have demonstrated impressive performance across
various domains and tasks, their security issues have become increasingly
severe. Machine unlearning has emerged as a representative approach for model
safety and security by removing the influence of undesired data on the target
model. However, these methods do not sufficiently consider that unlearning
requests in real-world scenarios are continuously emerging, especially in the
context of LLMs, which may lead to accumulated model utility loss that
eventually becomes unacceptable. Moreover, existing LLM unlearning methods
often ignore previous data access limitations due to privacy concerns and
copyright protection. Without previous data, the utility preservation during
unlearning is much harder. To overcome these challenges, we propose the OOO
framework that includes an Orthogonal low-rank adapter (LoRA) for continually
unlearning requested data and an Out-Of-Distribution (OOD) detector to measure
the similarity between input and unlearning data. The orthogonal LoRA achieves
parameter disentanglement among continual unlearning requests. The OOD detector
is trained with a novel contrastive entropy loss and utilizes a glocal-aware
scoring mechanism. During inference, our OOO framework can decide whether and
to what extent to load the unlearning LoRA based on the OOD detector's
predicted similarity between the input and the unlearned knowledge. Notably,
OOO's effectiveness does not rely on any retained data. We conducted extensive
experiments on OOO and state-of-the-art LLM unlearning methods across three
tasks and seven datasets. The results indicate that OOO consistently achieves
the best unlearning effectiveness and utility preservation, especially when
facing continuous unlearning requests. The source codes can be found at
https://github.com/GCYZSL/O3-LLM-UNLEARNING.
| [
{
"version": "v1",
"created": "Sun, 14 Jul 2024 14:26:17 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 01:21:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gao",
"Chongyang",
""
],
[
"Wang",
"Lixu",
""
],
[
"Ding",
"Kaize",
""
],
[
"Weng",
"Chenkai",
""
],
[
"Wang",
"Xiao",
""
],
[
"Zhu",
"Qi",
""
]
]
| TITLE: On Large Language Model Continual Unlearning
ABSTRACT: While large language models have demonstrated impressive performance across
various domains and tasks, their security issues have become increasingly
severe. Machine unlearning has emerged as a representative approach for model
safety and security by removing the influence of undesired data on the target
model. However, these methods do not sufficiently consider that unlearning
requests in real-world scenarios are continuously emerging, especially in the
context of LLMs, which may lead to accumulated model utility loss that
eventually becomes unacceptable. Moreover, existing LLM unlearning methods
often ignore previous data access limitations due to privacy concerns and
copyright protection. Without previous data, the utility preservation during
unlearning is much harder. To overcome these challenges, we propose the OOO
framework that includes an Orthogonal low-rank adapter (LoRA) for continually
unlearning requested data and an Out-Of-Distribution (OOD) detector to measure
the similarity between input and unlearning data. The orthogonal LoRA achieves
parameter disentanglement among continual unlearning requests. The OOD detector
is trained with a novel contrastive entropy loss and utilizes a glocal-aware
scoring mechanism. During inference, our OOO framework can decide whether and
to what extent to load the unlearning LoRA based on the OOD detector's
predicted similarity between the input and the unlearned knowledge. Notably,
OOO's effectiveness does not rely on any retained data. We conducted extensive
experiments on OOO and state-of-the-art LLM unlearning methods across three
tasks and seven datasets. The results indicate that OOO consistently achieves
the best unlearning effectiveness and utility preservation, especially when
facing continuous unlearning requests. The source codes can be found at
https://github.com/GCYZSL/O3-LLM-UNLEARNING.
| no_new_dataset | 0.949763 |
2407.10944 | Shachar Don-Yehiya | Shachar Don-Yehiya, Leshem Choshen, Omri Abend | Naturally Occurring Feedback is Common, Extractable and Useful | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Human feedback data is a critical component in developing language models.
However, collecting this feedback is costly and ultimately not scalable.
Inspired by the way human interlocutors provide spontaneous unsolicited
feedback to each other, we propose to extract feedback that users naturally
include when interacting with chat models. We manually annotated conversations
to confirm the presence of naturally occurring feedback in a standard corpus,
finding that as much as 30% of the chats include explicit feedback. Comparing
to older datasets, we find that naturally occurring feedback is more prevalent
in recent conversation datasets, suggesting that more than ever, naturally
occurring feedback can serve as a valuable resource for feedback data. We
propose a method for automatically extracting this feedback, and apply it to
over 1M conversations to obtain hundreds of thousands of feedback samples. The
extracted feedback shows promise: training with it improves over baseline
models and enhances model alignment to human preferences.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2024 17:41:34 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 13:41:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Don-Yehiya",
"Shachar",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Abend",
"Omri",
""
]
]
| TITLE: Naturally Occurring Feedback is Common, Extractable and Useful
ABSTRACT: Human feedback data is a critical component in developing language models.
However, collecting this feedback is costly and ultimately not scalable.
Inspired by the way human interlocutors provide spontaneous unsolicited
feedback to each other, we propose to extract feedback that users naturally
include when interacting with chat models. We manually annotated conversations
to confirm the presence of naturally occurring feedback in a standard corpus,
finding that as much as 30% of the chats include explicit feedback. Comparing
to older datasets, we find that naturally occurring feedback is more prevalent
in recent conversation datasets, suggesting that more than ever, naturally
occurring feedback can serve as a valuable resource for feedback data. We
propose a method for automatically extracting this feedback, and apply it to
over 1M conversations to obtain hundreds of thousands of feedback samples. The
extracted feedback shows promise: training with it improves over baseline
models and enhances model alignment to human preferences.
| no_new_dataset | 0.947914 |
2407.10967 | Haohong Lin | Haohong Lin, Wenhao Ding, Jian Chen, Laixi Shi, Jiacheng Zhu, Bo Li,
Ding Zhao | BECAUSE: Bilinear Causal Representation for Generalizable Offline
Model-based Reinforcement Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Offline model-based reinforcement learning (MBRL) enhances data efficiency by
utilizing pre-collected datasets to learn models and policies, especially in
scenarios where exploration is costly or infeasible. Nevertheless, its
performance often suffers from the objective mismatch between model and policy
learning, resulting in inferior performance despite accurate model predictions.
This paper first identifies that the primary source of this mismatch comes from the
underlying confounders present in offline data for MBRL. Subsequently, we
introduce \textbf{B}ilin\textbf{E}ar \textbf{CAUS}al
r\textbf{E}presentation~(BECAUSE), an algorithm to capture causal
representation for both states and actions to reduce the influence of the
distribution shift, thus mitigating the objective mismatch problem.
Comprehensive evaluations on 18 tasks that vary in data quality and environment
context demonstrate the superior performance of BECAUSE over existing offline
RL algorithms. We show the generalizability and robustness of BECAUSE under
fewer samples or larger numbers of confounders. Additionally, we offer
theoretical analysis of BECAUSE to prove its error bound and sample efficiency
when integrating causal representation into offline MBRL.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2024 17:59:23 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 01:19:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lin",
"Haohong",
""
],
[
"Ding",
"Wenhao",
""
],
[
"Chen",
"Jian",
""
],
[
"Shi",
"Laixi",
""
],
[
"Zhu",
"Jiacheng",
""
],
[
"Li",
"Bo",
""
],
[
"Zhao",
"Ding",
""
]
]
| TITLE: BECAUSE: Bilinear Causal Representation for Generalizable Offline
Model-based Reinforcement Learning
ABSTRACT: Offline model-based reinforcement learning (MBRL) enhances data efficiency by
utilizing pre-collected datasets to learn models and policies, especially in
scenarios where exploration is costly or infeasible. Nevertheless, its
performance often suffers from the objective mismatch between model and policy
learning, resulting in inferior performance despite accurate model predictions.
This paper first identifies that the primary source of this mismatch comes from the
underlying confounders present in offline data for MBRL. Subsequently, we
introduce \textbf{B}ilin\textbf{E}ar \textbf{CAUS}al
r\textbf{E}presentation~(BECAUSE), an algorithm to capture causal
representation for both states and actions to reduce the influence of the
distribution shift, thus mitigating the objective mismatch problem.
Comprehensive evaluations on 18 tasks that vary in data quality and environment
context demonstrate the superior performance of BECAUSE over existing offline
RL algorithms. We show the generalizability and robustness of BECAUSE under
fewer samples or larger numbers of confounders. Additionally, we offer
theoretical analysis of BECAUSE to prove its error bound and sample efficiency
when integrating causal representation into offline MBRL.
| no_new_dataset | 0.948632 |
2407.11734 | Alessandro Palma | Alessandro Palma, Till Richter, Hanyi Zhang, Manuel Lubetzki,
Alexander Tong, Andrea Dittadi, Fabian Theis | Multi-Modal and Multi-Attribute Generation of Single Cells with CFGen | 41 pages, 22 figures | The Thirteenth International Conference on Learning
Representations (2025) | null | null | q-bio.QM cs.LG q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Generative modeling of single-cell RNA-seq data is crucial for tasks like
trajectory inference, batch effect removal, and simulation of realistic
cellular data. However, recent deep generative models simulating synthetic
single cells from noise operate on pre-processed continuous gene expression
approximations, overlooking the discrete nature of single-cell data, which
limits their effectiveness and hinders the incorporation of robust noise
models. Additionally, aspects like controllable multi-modal and multi-label
generation of cellular data remain underexplored. This work introduces CellFlow
for Generation (CFGen), a flow-based conditional generative model that
preserves the inherent discreteness of single-cell data. CFGen generates
whole-genome multi-modal single-cell data reliably, improving the recovery of
crucial biological data characteristics while tackling relevant generative
tasks such as rare cell type augmentation and batch correction. We also
introduce a novel framework for compositional data generation using Flow
Matching. By showcasing CFGen on a diverse set of biological datasets and
settings, we provide evidence of its value to the fields of computational
biology and deep generative models.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2024 14:05:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:24:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Palma",
"Alessandro",
""
],
[
"Richter",
"Till",
""
],
[
"Zhang",
"Hanyi",
""
],
[
"Lubetzki",
"Manuel",
""
],
[
"Tong",
"Alexander",
""
],
[
"Dittadi",
"Andrea",
""
],
[
"Theis",
"Fabian",
""
]
]
| TITLE: Multi-Modal and Multi-Attribute Generation of Single Cells with CFGen
ABSTRACT: Generative modeling of single-cell RNA-seq data is crucial for tasks like
trajectory inference, batch effect removal, and simulation of realistic
cellular data. However, recent deep generative models simulating synthetic
single cells from noise operate on pre-processed continuous gene expression
approximations, overlooking the discrete nature of single-cell data, which
limits their effectiveness and hinders the incorporation of robust noise
models. Additionally, aspects like controllable multi-modal and multi-label
generation of cellular data remain underexplored. This work introduces CellFlow
for Generation (CFGen), a flow-based conditional generative model that
preserves the inherent discreteness of single-cell data. CFGen generates
whole-genome multi-modal single-cell data reliably, improving the recovery of
crucial biological data characteristics while tackling relevant generative
tasks such as rare cell type augmentation and batch correction. We also
introduce a novel framework for compositional data generation using Flow
Matching. By showcasing CFGen on a diverse set of biological datasets and
settings, we provide evidence of its value to the fields of computational
biology and deep generative models.
| no_new_dataset | 0.948728 |
2407.14154 | Am\^andio Faustino | Janez Bo\v{z}i\v{c}, Am\^andio R. Faustino, Boris Radovi\v{c}, Marco
Canini, Veljko Pejovi\'c | Where is the Testbed for my Federated Learning Research? | SEC 2024 | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | Progressing beyond centralized AI is of paramount importance, yet,
distributed AI solutions, in particular various federated learning (FL)
algorithms, are often not comprehensively assessed, which prevents the research
community from identifying the most promising approaches and practitioners from
being convinced that a certain solution is deployment-ready. The largest hurdle
towards FL algorithm evaluation is the difficulty of conducting real-world
experiments over a variety of FL client devices and different platforms, with
different datasets and data distribution, all while assessing various
dimensions of algorithm performance, such as inference accuracy, energy
consumption, and time to convergence, to name a few. In this paper, we present
CoLExT, a real-world testbed for FL research. CoLExT is designed to streamline
experimentation with custom FL algorithms in a rich testbed configuration
space, with a large number of heterogeneous edge devices, ranging from
single-board computers to smartphones, and provides real-time collection and
visualization of a variety of metrics through automatic instrumentation.
According to our evaluation, porting FL algorithms to CoLExT requires minimal
involvement from the developer, and the instrumentation introduces minimal
resource usage overhead. Furthermore, through an initial investigation
involving popular FL algorithms running on CoLExT, we reveal previously unknown
trade-offs, inefficiencies, and programming bugs.
| [
{
"version": "v1",
"created": "Fri, 19 Jul 2024 09:34:04 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 14:41:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Božič",
"Janez",
""
],
[
"Faustino",
"Amândio R.",
""
],
[
"Radovič",
"Boris",
""
],
[
"Canini",
"Marco",
""
],
[
"Pejović",
"Veljko",
""
]
]
| TITLE: Where is the Testbed for my Federated Learning Research?
ABSTRACT: Progressing beyond centralized AI is of paramount importance, yet,
distributed AI solutions, in particular various federated learning (FL)
algorithms, are often not comprehensively assessed, which prevents the research
community from identifying the most promising approaches and practitioners from
being convinced that a certain solution is deployment-ready. The largest hurdle
towards FL algorithm evaluation is the difficulty of conducting real-world
experiments over a variety of FL client devices and different platforms, with
different datasets and data distribution, all while assessing various
dimensions of algorithm performance, such as inference accuracy, energy
consumption, and time to convergence, to name a few. In this paper, we present
CoLExT, a real-world testbed for FL research. CoLExT is designed to streamline
experimentation with custom FL algorithms in a rich testbed configuration
space, with a large number of heterogeneous edge devices, ranging from
single-board computers to smartphones, and provides real-time collection and
visualization of a variety of metrics through automatic instrumentation.
According to our evaluation, porting FL algorithms to CoLExT requires minimal
involvement from the developer, and the instrumentation introduces minimal
resource usage overhead. Furthermore, through an initial investigation
involving popular FL algorithms running on CoLExT, we reveal previously unknown
trade-offs, inefficiencies, and programming bugs.
| no_new_dataset | 0.944587 |
2407.14985 | Xinyi Wang | Xinyi Wang, Antonis Antoniades, Yanai Elazar, Alfonso Amayuelas, Alon
Albalak, Kexun Zhang, William Yang Wang | Generalization v.s. Memorization: Tracing Language Models' Capabilities
Back to Pretraining Data | Accepted to ICLR 2025 | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The impressive capabilities of large language models (LLMs) have sparked
debate over whether these models genuinely generalize to unseen tasks or
predominantly rely on memorizing vast amounts of pretraining data. To explore
this issue, we introduce an extended concept of memorization, distributional
memorization, which measures the correlation between the LLM output
probabilities and the pretraining data frequency. To effectively capture
task-specific pretraining data frequency, we propose a novel task-gram language
model, which is built by counting the co-occurrence of semantically related
$n$-gram pairs from task inputs and outputs in the pretraining corpus. Using
the Pythia models trained on the Pile dataset, we evaluate four distinct tasks:
machine translation, factual question answering, world knowledge understanding,
and math reasoning. Our findings reveal varying levels of memorization, with
the strongest effect observed in factual question answering. Furthermore, while
model performance improves across all tasks as LLM size increases, only factual
question answering shows an increase in memorization, whereas machine
translation and reasoning tasks exhibit greater generalization, producing more
novel outputs. This study demonstrates that memorization plays a larger role in
simpler, knowledge-intensive tasks, while generalization is the key for harder,
reasoning-based tasks, providing a scalable method for analyzing large
pretraining corpora in greater depth.
| [
{
"version": "v1",
"created": "Sat, 20 Jul 2024 21:24:40 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 02:30:41 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Nov 2024 23:25:33 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Nov 2024 17:05:16 GMT"
},
{
"version": "v5",
"created": "Sun, 2 Mar 2025 03:27:58 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Xinyi",
""
],
[
"Antoniades",
"Antonis",
""
],
[
"Elazar",
"Yanai",
""
],
[
"Amayuelas",
"Alfonso",
""
],
[
"Albalak",
"Alon",
""
],
[
"Zhang",
"Kexun",
""
],
[
"Wang",
"William Yang",
""
]
]
| TITLE: Generalization v.s. Memorization: Tracing Language Models' Capabilities
Back to Pretraining Data
ABSTRACT: The impressive capabilities of large language models (LLMs) have sparked
debate over whether these models genuinely generalize to unseen tasks or
predominantly rely on memorizing vast amounts of pretraining data. To explore
this issue, we introduce an extended concept of memorization, distributional
memorization, which measures the correlation between the LLM output
probabilities and the pretraining data frequency. To effectively capture
task-specific pretraining data frequency, we propose a novel task-gram language
model, which is built by counting the co-occurrence of semantically related
$n$-gram pairs from task inputs and outputs in the pretraining corpus. Using
the Pythia models trained on the Pile dataset, we evaluate four distinct tasks:
machine translation, factual question answering, world knowledge understanding,
and math reasoning. Our findings reveal varying levels of memorization, with
the strongest effect observed in factual question answering. Furthermore, while
model performance improves across all tasks as LLM size increases, only factual
question answering shows an increase in memorization, whereas machine
translation and reasoning tasks exhibit greater generalization, producing more
novel outputs. This study demonstrates that memorization plays a larger role in
simpler, knowledge-intensive tasks, while generalization is the key for harder,
reasoning-based tasks, providing a scalable method for analyzing large
pretraining corpora in greater depth.
| no_new_dataset | 0.951594 |
2408.04591 | Hongjun Wang | Hongjun Wang, Sagar Vaze, Kai Han | HiLo: A Learning Framework for Generalized Category Discovery Robust to
Domain Shifts | v2: Accepted as a conference paper at ICLR 2025; Project page:
https://github.com/Visual-AI/hilo/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generalized Category Discovery (GCD) is a challenging task in which, given a
partially labelled dataset, models must categorize all unlabelled instances,
regardless of whether they come from labelled categories or from new ones. In
this paper, we challenge a remaining assumption in this task: that all images
share the same domain. Specifically, we introduce a new task and method to
handle GCD when the unlabelled data also contains images from different domains
to the labelled set. Our proposed `HiLo' networks extract High-level semantic
and Low-level domain features, before minimizing the mutual information between
the representations. Our intuition is that the clusterings based on domain
information and semantic information should be independent. We further extend
our method with a specialized domain augmentation tailored for the GCD task, as
well as a curriculum learning approach. Finally, we construct a benchmark from
corrupted fine-grained datasets as well as a large-scale evaluation on
DomainNet with real-world domain shifts, reimplementing a number of GCD
baselines in this setting. We demonstrate that HiLo outperforms SoTA category
discovery models by a large margin on all evaluations.
| [
{
"version": "v1",
"created": "Thu, 8 Aug 2024 17:04:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:35:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Hongjun",
""
],
[
"Vaze",
"Sagar",
""
],
[
"Han",
"Kai",
""
]
]
| TITLE: HiLo: A Learning Framework for Generalized Category Discovery Robust to
Domain Shifts
ABSTRACT: Generalized Category Discovery (GCD) is a challenging task in which, given a
partially labelled dataset, models must categorize all unlabelled instances,
regardless of whether they come from labelled categories or from new ones. In
this paper, we challenge a remaining assumption in this task: that all images
share the same domain. Specifically, we introduce a new task and method to
handle GCD when the unlabelled data also contains images from different domains
to the labelled set. Our proposed `HiLo' networks extract High-level semantic
and Low-level domain features, before minimizing the mutual information between
the representations. Our intuition is that the clusterings based on domain
information and semantic information should be independent. We further extend
our method with a specialized domain augmentation tailored for the GCD task, as
well as a curriculum learning approach. Finally, we construct a benchmark from
corrupted fine-grained datasets as well as a large-scale evaluation on
DomainNet with real-world domain shifts, reimplementing a number of GCD
baselines in this setting. We demonstrate that HiLo outperforms SoTA category
discovery models by a large margin on all evaluations.
| no_new_dataset | 0.941601 |
2408.04909 | Uri Berger | Uri Berger and Gabriel Stanovsky and Omri Abend and Lea Frermann | Surveying the Landscape of Image Captioning Evaluation: A Comprehensive
Taxonomy, Trends and Metrics Analysis | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of image captioning has recently been gaining popularity, and with
it the complex task of evaluating the quality of image captioning models. In
this work, we present the first survey and taxonomy of over 70 different image
captioning metrics and their usage in hundreds of papers, specifically designed
to help users select the most suitable metric for their needs. We find that
despite the diversity of proposed metrics, the vast majority of studies rely on
only five popular metrics, which we show to be weakly correlated with human
ratings. We hypothesize that combining a diverse set of metrics can enhance
correlation with human ratings. As an initial step, we demonstrate that a
linear regression-based ensemble method, which we call EnsembEval, trained on
one human ratings dataset, achieves improved correlation across five additional
datasets, showing there is a lot of room for improvement by leveraging a
diverse set of metrics.
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2024 07:31:06 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 12:40:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Berger",
"Uri",
""
],
[
"Stanovsky",
"Gabriel",
""
],
[
"Abend",
"Omri",
""
],
[
"Frermann",
"Lea",
""
]
]
| TITLE: Surveying the Landscape of Image Captioning Evaluation: A Comprehensive
Taxonomy, Trends and Metrics Analysis
ABSTRACT: The task of image captioning has recently been gaining popularity, and with
it the complex task of evaluating the quality of image captioning models. In
this work, we present the first survey and taxonomy of over 70 different image
captioning metrics and their usage in hundreds of papers, specifically designed
to help users select the most suitable metric for their needs. We find that
despite the diversity of proposed metrics, the vast majority of studies rely on
only five popular metrics, which we show to be weakly correlated with human
ratings. We hypothesize that combining a diverse set of metrics can enhance
correlation with human ratings. As an initial step, we demonstrate that a
linear regression-based ensemble method, which we call EnsembEval, trained on
one human ratings dataset, achieves improved correlation across five additional
datasets, showing there is a lot of room for improvement by leveraging a
diverse set of metrics.
| no_new_dataset | 0.942612 |
2408.07517 | Maximilian Baronig | Maximilian Baronig, Romain Ferrand, Silvester Sabathiel, Robert
Legenstein | Advancing Spatio-Temporal Processing in Spiking Neural Networks through
Adaptation | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by-sa/4.0/ | Implementations of spiking neural networks on neuromorphic hardware promise
orders of magnitude less power consumption than their non-spiking counterparts.
The standard neuron model for spike-based computation on such systems has long
been the leaky integrate-and-fire (LIF) neuron. A computationally light
augmentation of the LIF neuron model with an adaptation mechanism has recently
been shown to exhibit superior performance on spatio-temporal processing tasks.
The root of the superiority of these so-called adaptive LIF neurons however is
not well understood. In this article, we thoroughly analyze the dynamical,
computational, and learning properties of adaptive LIF neurons and networks
thereof. Our investigation reveals significant challenges related to stability
and parameterization when employing the conventional Euler-Forward
discretization for this class of models. We report a rigorous theoretical and
empirical demonstration that these challenges can be effectively addressed by
adopting an alternative discretization approach - the Symplectic Euler method,
allowing to improve over state-of-the-art performances on common event-based
benchmark datasets. Our further analysis of the computational properties of
networks of adaptive LIF neurons shows that they are particularly well suited
to exploit the spatio-temporal structure of input sequences without any
normalization techniques.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 12:49:58 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:42:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Baronig",
"Maximilian",
""
],
[
"Ferrand",
"Romain",
""
],
[
"Sabathiel",
"Silvester",
""
],
[
"Legenstein",
"Robert",
""
]
]
| TITLE: Advancing Spatio-Temporal Processing in Spiking Neural Networks through
Adaptation
ABSTRACT: Implementations of spiking neural networks on neuromorphic hardware promise
orders of magnitude less power consumption than their non-spiking counterparts.
The standard neuron model for spike-based computation on such systems has long
been the leaky integrate-and-fire (LIF) neuron. A computationally light
augmentation of the LIF neuron model with an adaptation mechanism has recently
been shown to exhibit superior performance on spatio-temporal processing tasks.
The root of the superiority of these so-called adaptive LIF neurons however is
not well understood. In this article, we thoroughly analyze the dynamical,
computational, and learning properties of adaptive LIF neurons and networks
thereof. Our investigation reveals significant challenges related to stability
and parameterization when employing the conventional Euler-Forward
discretization for this class of models. We report a rigorous theoretical and
empirical demonstration that these challenges can be effectively addressed by
adopting an alternative discretization approach - the Symplectic Euler method,
allowing to improve over state-of-the-art performances on common event-based
benchmark datasets. Our further analysis of the computational properties of
networks of adaptive LIF neurons shows that they are particularly well suited
to exploit the spatio-temporal structure of input sequences without any
normalization techniques.
| no_new_dataset | 0.943971 |
2408.08258 | Hossein Jafarinia | Hossein Jafarinia, Alireza Alipanah, Danial Hamdi, Saeed Razavi, Nahal
Mirzaie, Mohammad Hossein Rohban | Snuffy: Efficient Whole Slide Image Classifier | Accepted for ECCV 2024 | null | null | null | cs.CV cs.AI cs.LG cs.NE eess.IV | http://creativecommons.org/licenses/by/4.0/ | Whole Slide Image (WSI) classification with multiple instance learning (MIL)
in digital pathology faces significant computational challenges. Current
methods mostly rely on extensive self-supervised learning (SSL) for
satisfactory performance, requiring long training periods and considerable
computational resources. At the same time, no pre-training affects performance
due to domain shifts from natural images to WSIs. We introduce Snuffy
architecture, a novel MIL-pooling method based on sparse transformers that
mitigates performance loss with limited pre-training and enables continual
few-shot pre-training as a competitive option. Our sparsity pattern is tailored
for pathology and is theoretically proven to be a universal approximator with
the tightest probabilistic sharp bound on the number of layers for sparse
transformers, to date. We demonstrate Snuffy's effectiveness on CAMELYON16 and
TCGA Lung cancer datasets, achieving superior WSI and patch-level accuracies.
The code is available on https://github.com/jafarinia/snuffy.
| [
{
"version": "v1",
"created": "Thu, 15 Aug 2024 16:59:15 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Aug 2024 08:36:59 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 04:25:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jafarinia",
"Hossein",
""
],
[
"Alipanah",
"Alireza",
""
],
[
"Hamdi",
"Danial",
""
],
[
"Razavi",
"Saeed",
""
],
[
"Mirzaie",
"Nahal",
""
],
[
"Rohban",
"Mohammad Hossein",
""
]
]
| TITLE: Snuffy: Efficient Whole Slide Image Classifier
ABSTRACT: Whole Slide Image (WSI) classification with multiple instance learning (MIL)
in digital pathology faces significant computational challenges. Current
methods mostly rely on extensive self-supervised learning (SSL) for
satisfactory performance, requiring long training periods and considerable
computational resources. At the same time, no pre-training affects performance
due to domain shifts from natural images to WSIs. We introduce Snuffy
architecture, a novel MIL-pooling method based on sparse transformers that
mitigates performance loss with limited pre-training and enables continual
few-shot pre-training as a competitive option. Our sparsity pattern is tailored
for pathology and is theoretically proven to be a universal approximator with
the tightest probabilistic sharp bound on the number of layers for sparse
transformers, to date. We demonstrate Snuffy's effectiveness on CAMELYON16 and
TCGA Lung cancer datasets, achieving superior WSI and patch-level accuracies.
The code is available on https://github.com/jafarinia/snuffy.
| no_new_dataset | 0.945349 |
2408.08531 | Valdemar \v{S}v\'abensk\'y | Valdemar \v{S}v\'abensk\'y, Kristi\'an Tk\'a\v{c}ik, Aubrey Birdwell,
Richard Weiss, Ryan S. Baker, Pavel \v{C}eleda, Jan Vykopal, Jens Mache,
Ankur Chattopadhyay | Detecting Unsuccessful Students in Cybersecurity Exercises in Two
Different Learning Environments | Published in the FIE 2024 conference proceedings, see
https://doi.org/10.1109/FIE61694.2024.10893135 | null | 10.1109/FIE61694.2024.10893135 | null | cs.LG cs.AI cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This full paper in the research track evaluates the usage of data logged from
cybersecurity exercises in order to predict students who are potentially at
risk of performing poorly. Hands-on exercises are essential for learning since
they enable students to practice their skills. In cybersecurity, hands-on
exercises are often complex and require knowledge of many topics. Therefore,
students may miss solutions due to gaps in their knowledge and become
frustrated, which impedes their learning. Targeted aid by the instructor helps,
but since the instructor's time is limited, efficient ways to detect struggling
students are needed. This paper develops automated tools to predict when a
student is having difficulty. We formed a dataset with the actions of 313
students from two countries and two learning environments: KYPO CRP and
EDURange. These data are used in machine learning algorithms to predict the
success of students in exercises deployed in these environments. After
extracting features from the data, we trained and cross-validated eight
classifiers for predicting the exercise outcome and evaluated their predictive
power. The contribution of this paper is comparing two approaches to feature
engineering, modeling, and classification performance on data from two learning
environments. Using the features from either learning environment, we were able
to detect and distinguish between successful and struggling students. A
decision tree classifier achieved the highest balanced accuracy and sensitivity
with data from both learning environments. The results show that activity data
from cybersecurity exercises are suitable for predicting student success. In a
potential application, such models can aid instructors in detecting struggling
students and providing targeted help. We publish data and code for building
these models so that others can adopt or adapt them.
| [
{
"version": "v1",
"created": "Fri, 16 Aug 2024 04:57:54 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 18:15:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Švábenský",
"Valdemar",
""
],
[
"Tkáčik",
"Kristián",
""
],
[
"Birdwell",
"Aubrey",
""
],
[
"Weiss",
"Richard",
""
],
[
"Baker",
"Ryan S.",
""
],
[
"Čeleda",
"Pavel",
""
],
[
"Vykopal",
"Jan",
""
],
[
"Mache",
"Jens",
""
],
[
"Chattopadhyay",
"Ankur",
""
]
]
| TITLE: Detecting Unsuccessful Students in Cybersecurity Exercises in Two
Different Learning Environments
ABSTRACT: This full paper in the research track evaluates the usage of data logged from
cybersecurity exercises in order to predict students who are potentially at
risk of performing poorly. Hands-on exercises are essential for learning since
they enable students to practice their skills. In cybersecurity, hands-on
exercises are often complex and require knowledge of many topics. Therefore,
students may miss solutions due to gaps in their knowledge and become
frustrated, which impedes their learning. Targeted aid by the instructor helps,
but since the instructor's time is limited, efficient ways to detect struggling
students are needed. This paper develops automated tools to predict when a
student is having difficulty. We formed a dataset with the actions of 313
students from two countries and two learning environments: KYPO CRP and
EDURange. These data are used in machine learning algorithms to predict the
success of students in exercises deployed in these environments. After
extracting features from the data, we trained and cross-validated eight
classifiers for predicting the exercise outcome and evaluated their predictive
power. The contribution of this paper is comparing two approaches to feature
engineering, modeling, and classification performance on data from two learning
environments. Using the features from either learning environment, we were able
to detect and distinguish between successful and struggling students. A
decision tree classifier achieved the highest balanced accuracy and sensitivity
with data from both learning environments. The results show that activity data
from cybersecurity exercises are suitable for predicting student success. In a
potential application, such models can aid instructors in detecting struggling
students and providing targeted help. We publish data and code for building
these models so that others can adopt or adapt them.
| new_dataset | 0.980949 |
2408.08700 | Martin Hermann Paul Fuchs | Martin Hermann Paul Fuchs, Behnood Rasti, Beg\"um Demir | HyCoT: A Transformer-Based Autoencoder for Hyperspectral Image
Compression | Accepted at 14th IEEE GRSS Workshop on Hyperspectral Image and Signal
Processing: Evolution in Remote Sensing (WHISPERS), 2024 | null | 10.1109/WHISPERS65427.2024.10876514 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | The development of learning-based hyperspectral image (HSI) compression
models has recently attracted significant interest. Existing models
predominantly utilize convolutional filters, which capture only local
dependencies. Furthermore,they often incur high training costs and exhibit
substantial computational complexity. To address these limitations, in this
paper we propose the Hyperspectral Compression Transformer (HyCoT), a
transformer-based autoencoder for pixelwise HSI compression. Additionally, we
apply a simple yet effective training set reduction approach to accelerate the
training process. Experimental results on the HySpecNet-11k dataset demonstrate
that HyCoT surpasses the state of the art across various compression ratios by
over 1 dB of PSNR with significantly reduced computational requirements. Our
code and pre-trained weights are publicly available at
https://git.tu-berlin.de/rsim/hycot .
| [
{
"version": "v1",
"created": "Fri, 16 Aug 2024 12:27:46 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Nov 2024 15:47:59 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Fuchs",
"Martin Hermann Paul",
""
],
[
"Rasti",
"Behnood",
""
],
[
"Demir",
"Begüm",
""
]
]
| TITLE: HyCoT: A Transformer-Based Autoencoder for Hyperspectral Image
Compression
ABSTRACT: The development of learning-based hyperspectral image (HSI) compression
models has recently attracted significant interest. Existing models
predominantly utilize convolutional filters, which capture only local
dependencies. Furthermore, they often incur high training costs and exhibit
substantial computational complexity. To address these limitations, in this
paper we propose the Hyperspectral Compression Transformer (HyCoT), a
transformer-based autoencoder for pixelwise HSI compression. Additionally, we
apply a simple yet effective training set reduction approach to accelerate the
training process. Experimental results on the HySpecNet-11k dataset demonstrate
that HyCoT surpasses the state of the art across various compression ratios by
over 1 dB of PSNR with significantly reduced computational requirements. Our
code and pre-trained weights are publicly available at
https://git.tu-berlin.de/rsim/hycot .
| no_new_dataset | 0.948202 |
2408.09886 | Haixia Bi | Sihan Yang, Xuande Mi, Jiadong Feng, Haixia Bi, Hai Zhang and Jian Sun | Improved Baselines with Synchronized Encoding for Universal Medical
Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large foundation models, known for their strong zero-shot generalization
capabilities, can be applied to a wide range of downstream tasks. However,
developing foundation models for medical image segmentation poses a significant
challenge due to the domain gap between natural and medical images. While
fine-tuning techniques based on the Segment Anything Model (SAM) have been
explored, they primarily focus on scaling up data or refining inference
strategies without incorporating domain-specific architectural designs,
limiting their zero-shot performance. To optimize segmentation performance
under standard inference settings and provide a strong baseline for future
research, we introduce SyncSAM, which employs a synchronized dual-branch
encoder that integrates convolution and Transformer features in a synchronized
manner to enhance medical image encoding, and a multi-scale dual-branch decoder
to preserve image details. SyncSAM is trained on two of the largest medical
image segmentation datasets, SA-Med2D-20M and IMed-361M, resulting in a series
of pre-trained models for universal medical image segmentation. Experimental
results demonstrate that SyncSAM not only achieves state-of-the-art performance
on test sets but also exhibits strong zero-shot capabilities on unseen
datasets. The code and model weights are available at
https://github.com/Hhankyangg/SyncSAM.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2024 11:01:00 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 15:24:27 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 11:32:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Sihan",
""
],
[
"Mi",
"Xuande",
""
],
[
"Feng",
"Jiadong",
""
],
[
"Bi",
"Haixia",
""
],
[
"Zhang",
"Hai",
""
],
[
"Sun",
"Jian",
""
]
]
| TITLE: Improved Baselines with Synchronized Encoding for Universal Medical
Image Segmentation
ABSTRACT: Large foundation models, known for their strong zero-shot generalization
capabilities, can be applied to a wide range of downstream tasks. However,
developing foundation models for medical image segmentation poses a significant
challenge due to the domain gap between natural and medical images. While
fine-tuning techniques based on the Segment Anything Model (SAM) have been
explored, they primarily focus on scaling up data or refining inference
strategies without incorporating domain-specific architectural designs,
limiting their zero-shot performance. To optimize segmentation performance
under standard inference settings and provide a strong baseline for future
research, we introduce SyncSAM, which employs a synchronized dual-branch
encoder that integrates convolution and Transformer features in a synchronized
manner to enhance medical image encoding, and a multi-scale dual-branch decoder
to preserve image details. SyncSAM is trained on two of the largest medical
image segmentation datasets, SA-Med2D-20M and IMed-361M, resulting in a series
of pre-trained models for universal medical image segmentation. Experimental
results demonstrate that SyncSAM not only achieves state-of-the-art performance
on test sets but also exhibits strong zero-shot capabilities on unseen
datasets. The code and model weights are available at
https://github.com/Hhankyangg/SyncSAM.
| no_new_dataset | 0.944382 |
2408.11085 | Changkun Liu | Changkun Liu, Shuai Chen, Yash Bhalgat, Siyan Hu, Ming Cheng, Zirui
Wang, Victor Adrian Prisacariu, Tristan Braud | GS-CPR: Efficient Camera Pose Refinement via 3D Gaussian Splatting | Accepted to International Conference on Learning Representations
(ICLR) 2025. During the ICLR review process, we changed the name of our
framework from GSLoc to GS-CPR (Camera Pose Refinement), according to
reviewers' comments. The project page is available at
https://xrim-lab.github.io/GS-CPR/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We leverage 3D Gaussian Splatting (3DGS) as a scene representation and
propose a novel test-time camera pose refinement (CPR) framework, GS-CPR. This
framework enhances the localization accuracy of state-of-the-art absolute pose
regression and scene coordinate regression methods. The 3DGS model renders
high-quality synthetic images and depth maps to facilitate the establishment of
2D-3D correspondences. GS-CPR obviates the need for training feature extractors
or descriptors by operating directly on RGB images, utilizing the 3D foundation
model, MASt3R, for precise 2D matching. To improve the robustness of our model
in challenging outdoor environments, we incorporate an exposure-adaptive module
within the 3DGS framework. Consequently, GS-CPR enables efficient one-shot pose
refinement given a single RGB query and a coarse initial pose estimation. Our
proposed approach surpasses leading NeRF-based optimization methods in both
accuracy and runtime across indoor and outdoor visual localization benchmarks,
achieving new state-of-the-art accuracy on two indoor datasets. The project
page is available at https://xrim-lab.github.io/GS-CPR/.
| [
{
"version": "v1",
"created": "Tue, 20 Aug 2024 17:58:23 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Oct 2024 15:35:15 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Feb 2025 10:25:44 GMT"
},
{
"version": "v4",
"created": "Sat, 1 Mar 2025 08:23:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Changkun",
""
],
[
"Chen",
"Shuai",
""
],
[
"Bhalgat",
"Yash",
""
],
[
"Hu",
"Siyan",
""
],
[
"Cheng",
"Ming",
""
],
[
"Wang",
"Zirui",
""
],
[
"Prisacariu",
"Victor Adrian",
""
],
[
"Braud",
"Tristan",
""
]
]
| TITLE: GS-CPR: Efficient Camera Pose Refinement via 3D Gaussian Splatting
ABSTRACT: We leverage 3D Gaussian Splatting (3DGS) as a scene representation and
propose a novel test-time camera pose refinement (CPR) framework, GS-CPR. This
framework enhances the localization accuracy of state-of-the-art absolute pose
regression and scene coordinate regression methods. The 3DGS model renders
high-quality synthetic images and depth maps to facilitate the establishment of
2D-3D correspondences. GS-CPR obviates the need for training feature extractors
or descriptors by operating directly on RGB images, utilizing the 3D foundation
model, MASt3R, for precise 2D matching. To improve the robustness of our model
in challenging outdoor environments, we incorporate an exposure-adaptive module
within the 3DGS framework. Consequently, GS-CPR enables efficient one-shot pose
refinement given a single RGB query and a coarse initial pose estimation. Our
proposed approach surpasses leading NeRF-based optimization methods in both
accuracy and runtime across indoor and outdoor visual localization benchmarks,
achieving new state-of-the-art accuracy on two indoor datasets. The project
page is available at https://xrim-lab.github.io/GS-CPR/.
| no_new_dataset | 0.946448 |
2408.11561 | Muhammad Aqeel | Muhammad Aqeel, Shakiba Sharifi, Marco Cristani, Francesco Setti | Self-Supervised Iterative Refinement for Anomaly Detection in Industrial
Quality Control | Accepted to VISAPP 2025 | In Proceedings of the 20th International Joint Conference on
Computer Vision, Imaging and Computer Graphics Theory and Applications -
Volume 2: VISAPP, ISBN 978-989-758-728-3, ISSN 2184-4321, pages 173-183
(2025) | 10.5220/0013178100003912 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study introduces the Iterative Refinement Process (IRP), a robust
anomaly detection methodology designed for high-stakes industrial quality
control. The IRP enhances defect detection accuracy through a cyclic data
refinement strategy, iteratively removing misleading data points to improve
model performance and robustness. We validate the IRP's effectiveness using two
benchmark datasets, Kolektor SDD2 (KSDD2) and MVTec AD, covering a wide range
of industrial products and defect types. Our experimental results demonstrate
that the IRP consistently outperforms traditional anomaly detection models,
particularly in environments with high noise levels. This study highlights the
IRP's potential to significantly enhance anomaly detection processes in
industrial settings, effectively managing the challenges of sparse and noisy
data.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 12:15:20 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 15:04:03 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Aqeel",
"Muhammad",
""
],
[
"Sharifi",
"Shakiba",
""
],
[
"Cristani",
"Marco",
""
],
[
"Setti",
"Francesco",
""
]
]
| TITLE: Self-Supervised Iterative Refinement for Anomaly Detection in Industrial
Quality Control
ABSTRACT: This study introduces the Iterative Refinement Process (IRP), a robust
anomaly detection methodology designed for high-stakes industrial quality
control. The IRP enhances defect detection accuracy through a cyclic data
refinement strategy, iteratively removing misleading data points to improve
model performance and robustness. We validate the IRP's effectiveness using two
benchmark datasets, Kolektor SDD2 (KSDD2) and MVTec AD, covering a wide range
of industrial products and defect types. Our experimental results demonstrate
that the IRP consistently outperforms traditional anomaly detection models,
particularly in environments with high noise levels. This study highlights the
IRP's potential to significantly enhance anomaly detection processes in
industrial settings, effectively managing the challenges of sparse and noisy
data.
| no_new_dataset | 0.949389 |
2409.01281 | Jiace Zhu | Jiace Zhu, Yingtao Shen, Jie Zhao, An Zou | Path-Consistency: Prefix Enhancement for Efficient Inference in LLM | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | To enhance the reasoning capabilities of large language models (LLMs),
self-consistency has gained significant popularity by combining multiple
sampling with majority voting. However, the state-of-the-art self-consistency
approaches consume substantial computational resources and lead to significant
additional time costs due to the multiple sampling. This prevents its full
potential from being realized in scenarios where computational resources are
critical. To improve the inference efficiency, this paper introduces
\textit{path-consistency}, a method that leverages the confidence of answers
generated in earlier branches to identify the prefix of the most promising
path. By dynamically guiding the generation of subsequent branches based on
this prefix, the \textit{path-consistency} mitigates both the errors and
redundancies from random or less useful sampling in self-consistency. As a
result, it can significantly accelerate the inference process by reducing the
number of tokens generated. Our extensive empirical evaluation shows that the
\textit{path-consistency} achieves significant acceleration in inference
latency ranging from $7.8\%$ to $40.5\%$, while maintaining or even improving
task accuracy across different datasets, including mathematical reasoning,
common sense reasoning, symbolic reasoning, and code generation.
| [
{
"version": "v1",
"created": "Sun, 25 Aug 2024 01:45:53 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 09:13:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhu",
"Jiace",
""
],
[
"Shen",
"Yingtao",
""
],
[
"Zhao",
"Jie",
""
],
[
"Zou",
"An",
""
]
]
| TITLE: Path-Consistency: Prefix Enhancement for Efficient Inference in LLM
ABSTRACT: To enhance the reasoning capabilities of large language models (LLMs),
self-consistency has gained significant popularity by combining multiple
sampling with majority voting. However, the state-of-the-art self-consistency
approaches consume substantial computational resources and lead to significant
additional time costs due to the multiple sampling. This prevents its full
potential from being realized in scenarios where computational resources are
critical. To improve the inference efficiency, this paper introduces
\textit{path-consistency}, a method that leverages the confidence of answers
generated in earlier branches to identify the prefix of the most promising
path. By dynamically guiding the generation of subsequent branches based on
this prefix, the \textit{path-consistency} mitigates both the errors and
redundancies from random or less useful sampling in self-consistency. As a
result, it can significantly accelerate the inference process by reducing the
number of tokens generated. Our extensive empirical evaluation shows that the
\textit{path-consistency} achieves significant acceleration in inference
latency ranging from $7.8\%$ to $40.5\%$, while maintaining or even improving
task accuracy across different datasets, including mathematical reasoning,
common sense reasoning, symbolic reasoning, and code generation.
| no_new_dataset | 0.945147 |
2409.02143 | Zheng Chen | Ziwei Yang, Rikuto Kotoge, Xihao Piao, Zheng Chen, Lingwei Zhu, Peng
Gao, Yasuko Matsubara, Yasushi Sakurai, and Jimeng Sun | MLOmics: Benchmark for Machine Learning on Cancer Multi-Omics Data | Under review | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Framing the investigation of diverse cancers as a machine learning problem
has recently shown significant potential in multi-omics analysis and cancer
research. Empowering these successful machine learning models are the
high-quality training datasets with sufficient data volume and adequate
preprocessing. However, while several public data portals exist, including The
Cancer Genome Atlas (TCGA) multi-omics initiative and open databases such as
LinkedOmics, these databases are not off-the-shelf for existing machine
learning models. In this paper, we propose MLOmics, an open cancer multi-omics
benchmark aimed at better serving the development and evaluation of
bioinformatics and machine learning models. MLOmics contains 8,314 patient
samples covering all 32 cancer types with four omics types, stratified
features, and extensive baselines. Complementary support for downstream
analysis and bio-knowledge linking is also included to support
interdisciplinary analysis.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2024 22:04:08 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:08:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Ziwei",
""
],
[
"Kotoge",
"Rikuto",
""
],
[
"Piao",
"Xihao",
""
],
[
"Chen",
"Zheng",
""
],
[
"Zhu",
"Lingwei",
""
],
[
"Gao",
"Peng",
""
],
[
"Matsubara",
"Yasuko",
""
],
[
"Sakurai",
"Yasushi",
""
],
[
"Sun",
"Jimeng",
""
]
]
| TITLE: MLOmics: Benchmark for Machine Learning on Cancer Multi-Omics Data
ABSTRACT: Framing the investigation of diverse cancers as a machine learning problem
has recently shown significant potential in multi-omics analysis and cancer
research. Empowering these successful machine learning models are the
high-quality training datasets with sufficient data volume and adequate
preprocessing. However, while several public data portals exist, including The
Cancer Genome Atlas (TCGA) multi-omics initiative and open databases such as
LinkedOmics, these databases are not off-the-shelf for existing machine
learning models. In this paper, we propose MLOmics, an open cancer multi-omics
benchmark aimed at better serving the development and evaluation of
bioinformatics and machine learning models. MLOmics contains 8,314 patient
samples covering all 32 cancer types with four omics types, stratified
features, and extensive baselines. Complementary support for downstream
analysis and bio-knowledge linking is also included to support
interdisciplinary analysis.
| no_new_dataset | 0.938407 |
2409.03190 | Yike Zhang | Yike Zhang and Jack Noble | Post-mastoidectomy Surface Multi-View Synthesis from a Single Microscopy
Image | Submitted to Medical Imaging 2025: Image-Guided Procedures, Robotic
Interventions, and Modeling | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cochlear Implant (CI) procedures involve performing an invasive mastoidectomy
to insert an electrode array into the cochlea. In this paper, we introduce a
novel pipeline that is capable of generating synthetic multi-view videos from a
single CI microscope image. In our approach, we use a patient's pre-operative
CT scan to predict the post-mastoidectomy surface using a method designed for
this purpose. We manually align the surface with a selected microscope frame to
obtain an accurate initial pose of the reconstructed CT mesh relative to the
microscope. We then perform UV projection to transfer the colors from the frame
to surface textures. Novel views of the textured surface can be used to
generate a large dataset of synthetic frames with ground truth poses. We
evaluated the quality of synthetic views rendered using PyTorch3D and PyVista.
We found that both rendering engines lead to similarly high-quality synthetic
novel-view frames compared to ground truth, with a structural similarity index
for both methods averaging about 0.86. A large dataset of novel views with
known poses is critical for ongoing training of a method to automatically
estimate microscope pose for 2D to 3D registration with the pre-operative CT to
facilitate augmented reality surgery. This dataset will empower various
downstream tasks, such as integrating Augmented Reality (AR) in the OR,
tracking surgical tools, and supporting other video analysis studies.
| [
{
"version": "v1",
"created": "Sat, 31 Aug 2024 16:45:24 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 23:26:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yike",
""
],
[
"Noble",
"Jack",
""
]
]
| TITLE: Post-mastoidectomy Surface Multi-View Synthesis from a Single Microscopy
Image
ABSTRACT: Cochlear Implant (CI) procedures involve performing an invasive mastoidectomy
to insert an electrode array into the cochlea. In this paper, we introduce a
novel pipeline that is capable of generating synthetic multi-view videos from a
single CI microscope image. In our approach, we use a patient's pre-operative
CT scan to predict the post-mastoidectomy surface using a method designed for
this purpose. We manually align the surface with a selected microscope frame to
obtain an accurate initial pose of the reconstructed CT mesh relative to the
microscope. We then perform UV projection to transfer the colors from the frame
to surface textures. Novel views of the textured surface can be used to
generate a large dataset of synthetic frames with ground truth poses. We
evaluated the quality of synthetic views rendered using PyTorch3D and PyVista.
We found that both rendering engines lead to similarly high-quality synthetic
novel-view frames compared to ground truth, with a structural similarity index
for both methods averaging about 0.86. A large dataset of novel views with
known poses is critical for ongoing training of a method to automatically
estimate microscope pose for 2D to 3D registration with the pre-operative CT to
facilitate augmented reality surgery. This dataset will empower various
downstream tasks, such as integrating Augmented Reality (AR) in the OR,
tracking surgical tools, and supporting other video analysis studies.
| new_dataset | 0.940517 |
2409.06666 | Qingkai Fang | Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, Yang
Feng | LLaMA-Omni: Seamless Speech Interaction with Large Language Models | ICLR 2025 | null | null | null | cs.CL cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Models like GPT-4o enable real-time interaction with large language models
(LLMs) through speech, significantly enhancing user experience compared to
traditional text-based interaction. However, there is still a lack of
exploration on how to build speech interaction models based on open-source
LLMs. To address this, we propose LLaMA-Omni, a novel model architecture
designed for low-latency and high-quality speech interaction with LLMs.
LLaMA-Omni integrates a pretrained speech encoder, a speech adaptor, an LLM,
and a streaming speech decoder. It eliminates the need for speech
transcription, and can simultaneously generate text and speech responses
directly from speech instructions with extremely low latency. We build our
model based on the latest Llama-3.1-8B-Instruct model. To align the model with
speech interaction scenarios, we construct a dataset named InstructS2S-200K,
which includes 200K speech instructions and corresponding speech responses.
Experimental results show that compared to previous speech-language models,
LLaMA-Omni provides better responses in both content and style, with a response
latency as low as 226ms. Additionally, training LLaMA-Omni takes less than 3
days on just 4 GPUs, paving the way for the efficient development of
speech-language models in the future.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2024 17:34:34 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 12:59:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Fang",
"Qingkai",
""
],
[
"Guo",
"Shoutao",
""
],
[
"Zhou",
"Yan",
""
],
[
"Ma",
"Zhengrui",
""
],
[
"Zhang",
"Shaolei",
""
],
[
"Feng",
"Yang",
""
]
]
| TITLE: LLaMA-Omni: Seamless Speech Interaction with Large Language Models
ABSTRACT: Models like GPT-4o enable real-time interaction with large language models
(LLMs) through speech, significantly enhancing user experience compared to
traditional text-based interaction. However, there is still a lack of
exploration on how to build speech interaction models based on open-source
LLMs. To address this, we propose LLaMA-Omni, a novel model architecture
designed for low-latency and high-quality speech interaction with LLMs.
LLaMA-Omni integrates a pretrained speech encoder, a speech adaptor, an LLM,
and a streaming speech decoder. It eliminates the need for speech
transcription, and can simultaneously generate text and speech responses
directly from speech instructions with extremely low latency. We build our
model based on the latest Llama-3.1-8B-Instruct model. To align the model with
speech interaction scenarios, we construct a dataset named InstructS2S-200K,
which includes 200K speech instructions and corresponding speech responses.
Experimental results show that compared to previous speech-language models,
LLaMA-Omni provides better responses in both content and style, with a response
latency as low as 226ms. Additionally, training LLaMA-Omni takes less than 3
days on just 4 GPUs, paving the way for the efficient development of
speech-language models in the future.
| new_dataset | 0.954137 |
2409.11219 | Tianqi Chen | Tianqi Chen, Shujian Zhang, Mingyuan Zhou | Score Forgetting Distillation: A Swift, Data-Free Method for Machine
Unlearning in Diffusion Models | ICLR 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The machine learning community is increasingly recognizing the importance of
fostering trust and safety in modern generative AI (GenAI) models. We posit
machine unlearning (MU) as a crucial foundation for developing safe, secure,
and trustworthy GenAI models. Traditional MU methods often rely on stringent
assumptions and require access to real data. This paper introduces Score
Forgetting Distillation (SFD), an innovative MU approach that promotes the
forgetting of undesirable information in diffusion models by aligning the
conditional scores of "unsafe" classes or concepts with those of "safe" ones.
To eliminate the need for real data, our SFD framework incorporates a
score-based MU loss into the score distillation objective of a pretrained
diffusion model. This serves as a regularization term that preserves desired
generation capabilities while enabling the production of synthetic data through
a one-step generator. Our experiments on pretrained label-conditional and
text-to-image diffusion models demonstrate that our method effectively
accelerates the forgetting of target classes or concepts during generation,
while preserving the quality of other classes or concepts. This unlearned and
distilled diffusion not only pioneers a novel concept in MU but also
accelerates the generation speed of diffusion models. Our experiments and
studies on a range of diffusion models and datasets confirm that our approach
is generalizable, effective, and advantageous for MU in diffusion models. Code
is available at https://github.com/tqch/score-forgetting-distillation.
($\textbf{Warning:}$ This paper contains sexually explicit imagery, discussions
of pornography, racially-charged terminology, and other content that some
readers may find disturbing, distressing, and/or offensive.)
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 14:12:50 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 03:59:06 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 01:07:41 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Tianqi",
""
],
[
"Zhang",
"Shujian",
""
],
[
"Zhou",
"Mingyuan",
""
]
]
| TITLE: Score Forgetting Distillation: A Swift, Data-Free Method for Machine
Unlearning in Diffusion Models
ABSTRACT: The machine learning community is increasingly recognizing the importance of
fostering trust and safety in modern generative AI (GenAI) models. We posit
machine unlearning (MU) as a crucial foundation for developing safe, secure,
and trustworthy GenAI models. Traditional MU methods often rely on stringent
assumptions and require access to real data. This paper introduces Score
Forgetting Distillation (SFD), an innovative MU approach that promotes the
forgetting of undesirable information in diffusion models by aligning the
conditional scores of "unsafe" classes or concepts with those of "safe" ones.
To eliminate the need for real data, our SFD framework incorporates a
score-based MU loss into the score distillation objective of a pretrained
diffusion model. This serves as a regularization term that preserves desired
generation capabilities while enabling the production of synthetic data through
a one-step generator. Our experiments on pretrained label-conditional and
text-to-image diffusion models demonstrate that our method effectively
accelerates the forgetting of target classes or concepts during generation,
while preserving the quality of other classes or concepts. This unlearned and
distilled diffusion model not only pioneers a novel concept in MU but also
accelerates the generation speed of diffusion models. Our experiments and
studies on a range of diffusion models and datasets confirm that our approach
is generalizable, effective, and advantageous for MU in diffusion models. Code
is available at https://github.com/tqch/score-forgetting-distillation.
($\textbf{Warning:}$ This paper contains sexually explicit imagery, discussions
of pornography, racially-charged terminology, and other content that some
readers may find disturbing, distressing, and/or offensive.)
| no_new_dataset | 0.951504 |
2409.12326 | Yi Yang | Yi Yang, Lei Zhong, Huiping Zhuang | ReFu: Recursive Fusion for Exemplar-Free 3D Class-Incremental Learning | Accepted at WACV 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel Recursive Fusion model, dubbed ReFu, designed to
integrate point clouds and meshes for exemplar-free 3D Class-Incremental
Learning, where the model learns new 3D classes while retaining knowledge of
previously learned ones. Unlike existing methods that either rely on storing
historical data to mitigate forgetting or focus on single data modalities, ReFu
eliminates the need for exemplar storage while utilizing the complementary
strengths of both point clouds and meshes. To achieve this, we introduce a
recursive method which continuously accumulates knowledge by updating the
regularized auto-correlation matrix. Furthermore, we propose a fusion module,
featuring a Pointcloud-guided Mesh Attention Layer that learns correlations
between the two modalities. This mechanism effectively integrates point cloud
and mesh features, leading to more robust and stable continual learning.
Experiments across various datasets demonstrate that our proposed framework
outperforms existing methods in 3D class-incremental learning.
| [
{
"version": "v1",
"created": "Wed, 18 Sep 2024 21:44:33 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 20:55:27 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Yi",
""
],
[
"Zhong",
"Lei",
""
],
[
"Zhuang",
"Huiping",
""
]
]
| TITLE: ReFu: Recursive Fusion for Exemplar-Free 3D Class-Incremental Learning
ABSTRACT: We introduce a novel Recursive Fusion model, dubbed ReFu, designed to
integrate point clouds and meshes for exemplar-free 3D Class-Incremental
Learning, where the model learns new 3D classes while retaining knowledge of
previously learned ones. Unlike existing methods that either rely on storing
historical data to mitigate forgetting or focus on single data modalities, ReFu
eliminates the need for exemplar storage while utilizing the complementary
strengths of both point clouds and meshes. To achieve this, we introduce a
recursive method which continuously accumulates knowledge by updating the
regularized auto-correlation matrix. Furthermore, we propose a fusion module,
featuring a Pointcloud-guided Mesh Attention Layer that learns correlations
between the two modalities. This mechanism effectively integrates point cloud
and mesh features, leading to more robust and stable continual learning.
Experiments across various datasets demonstrate that our proposed framework
outperforms existing methods in 3D class-incremental learning.
| no_new_dataset | 0.952618 |
2409.13426 | Vladimir Guzov | Vladimir Guzov, Yifeng Jiang, Fangzhou Hong, Gerard Pons-Moll, Richard
Newcombe, C. Karen Liu, Yuting Ye and Lingni Ma | HMD^2: Environment-aware Motion Generation from Single Egocentric
Head-Mounted Device | International Conference on 3D Vision 2025 (3DV 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper investigates the generation of realistic full-body human motion
using a single head-mounted device with an outward-facing color camera and the
ability to perform visual SLAM. To address the ambiguity of this setup, we
present HMD^2, a novel system that balances motion reconstruction and
generation. From a reconstruction standpoint, it aims to maximally utilize the
camera streams to produce both analytical and learned features, including head
motion, SLAM point cloud, and image embeddings. On the generative front, HMD^2
employs a multi-modal conditional motion diffusion model with a Transformer
backbone to maintain temporal coherence of generated motions, and utilizes
autoregressive inpainting to facilitate online motion inference with minimal
latency (0.17 seconds). We show that our system provides an effective and
robust solution that scales to a diverse dataset of over 200 hours of motion in
complex indoor and outdoor environments.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 11:46:48 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 15:06:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guzov",
"Vladimir",
""
],
[
"Jiang",
"Yifeng",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Pons-Moll",
"Gerard",
""
],
[
"Newcombe",
"Richard",
""
],
[
"Liu",
"C. Karen",
""
],
[
"Ye",
"Yuting",
""
],
[
"Ma",
"Lingni",
""
]
]
| TITLE: HMD^2: Environment-aware Motion Generation from Single Egocentric
Head-Mounted Device
ABSTRACT: This paper investigates the generation of realistic full-body human motion
using a single head-mounted device with an outward-facing color camera and the
ability to perform visual SLAM. To address the ambiguity of this setup, we
present HMD^2, a novel system that balances motion reconstruction and
generation. From a reconstruction standpoint, it aims to maximally utilize the
camera streams to produce both analytical and learned features, including head
motion, SLAM point cloud, and image embeddings. On the generative front, HMD^2
employs a multi-modal conditional motion diffusion model with a Transformer
backbone to maintain temporal coherence of generated motions, and utilizes
autoregressive inpainting to facilitate online motion inference with minimal
latency (0.17 seconds). We show that our system provides an effective and
robust solution that scales to a diverse dataset of over 200 hours of motion in
complex indoor and outdoor environments.
| no_new_dataset | 0.947039 |
2409.13533 | Sagar Parekh | Sagar Parekh, Lauren Bramblett, Nicola Bezzo, and Dylan P. Losey | Using High-Level Patterns to Estimate How Humans Predict a Robot will
Behave | null | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | Humans interacting with robots often form predictions of what the robot will
do next. For instance, based on the recent behavior of an autonomous car, a
nearby human driver might predict that the car is going to remain in the same
lane. It is important for the robot to understand the human's prediction for
safe and seamless interaction: e.g., if the autonomous car knows the human
thinks it is not merging -- but the autonomous car actually intends to merge --
then the car can adjust its behavior to prevent an accident. Prior works
typically assume that humans make precise predictions of robot behavior.
However, recent research on human-human prediction suggests the opposite:
humans tend to approximate other agents by predicting their high-level
behaviors. We apply this finding to develop a second-order theory of mind
approach that enables robots to estimate how humans predict they will behave.
To extract these high-level predictions directly from data, we embed the recent
human and robot trajectories into a discrete latent space. Each element of this
latent space captures a different type of behavior (e.g., merging in front of
the human, remaining in the same lane) and decodes into a vector field across
the state space that is consistent with the underlying behavior type. We
hypothesize that our resulting high-level and coarse predictions of robot
behavior will correspond to actual human predictions. We provide initial
evidence in support of this hypothesis through proof-of-concept simulations,
testing our method's predictions against those of real users, and experiments
on a real-world interactive driving dataset.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 14:23:05 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:40:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Parekh",
"Sagar",
""
],
[
"Bramblett",
"Lauren",
""
],
[
"Bezzo",
"Nicola",
""
],
[
"Losey",
"Dylan P.",
""
]
]
| TITLE: Using High-Level Patterns to Estimate How Humans Predict a Robot will
Behave
ABSTRACT: Humans interacting with robots often form predictions of what the robot will
do next. For instance, based on the recent behavior of an autonomous car, a
nearby human driver might predict that the car is going to remain in the same
lane. It is important for the robot to understand the human's prediction for
safe and seamless interaction: e.g., if the autonomous car knows the human
thinks it is not merging -- but the autonomous car actually intends to merge --
then the car can adjust its behavior to prevent an accident. Prior works
typically assume that humans make precise predictions of robot behavior.
However, recent research on human-human prediction suggests the opposite:
humans tend to approximate other agents by predicting their high-level
behaviors. We apply this finding to develop a second-order theory of mind
approach that enables robots to estimate how humans predict they will behave.
To extract these high-level predictions directly from data, we embed the recent
human and robot trajectories into a discrete latent space. Each element of this
latent space captures a different type of behavior (e.g., merging in front of
the human, remaining in the same lane) and decodes into a vector field across
the state space that is consistent with the underlying behavior type. We
hypothesize that our resulting high-level and coarse predictions of robot
behavior will correspond to actual human predictions. We provide initial
evidence in support of this hypothesis through proof-of-concept simulations,
testing our method's predictions against those of real users, and experiments
on a real-world interactive driving dataset.
| no_new_dataset | 0.949995 |
2409.13555 | Danyang Liu | Danyang Liu, Mirella Lapata, Frank Keller | Generating Visual Stories with Grounded and Coreferent Characters | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Characters are important in narratives. They move the plot forward, create
emotional connections, and embody the story's themes. Visual storytelling
methods focus more on the plot and events relating to it, without building the
narrative around specific characters. As a result, the generated stories feel
generic, with character mentions being absent, vague, or incorrect. To mitigate
these issues, we introduce the new task of character-centric story generation
and present the first model capable of predicting visual stories with
consistently grounded and coreferent character mentions. Our model is finetuned
on a new dataset which we build on top of the widely used VIST benchmark.
Specifically, we develop an automated pipeline to enrich VIST with visual and
textual character coreference chains. We also propose new evaluation metrics to
measure the richness of characters and coreference in stories. Experimental
results show that our model generates stories with recurring characters that
are consistent and coreferent to a larger extent than baselines and
state-of-the-art systems.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 14:56:33 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 14:36:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Danyang",
""
],
[
"Lapata",
"Mirella",
""
],
[
"Keller",
"Frank",
""
]
]
| TITLE: Generating Visual Stories with Grounded and Coreferent Characters
ABSTRACT: Characters are important in narratives. They move the plot forward, create
emotional connections, and embody the story's themes. Visual storytelling
methods focus more on the plot and events relating to it, without building the
narrative around specific characters. As a result, the generated stories feel
generic, with character mentions being absent, vague, or incorrect. To mitigate
these issues, we introduce the new task of character-centric story generation
and present the first model capable of predicting visual stories with
consistently grounded and coreferent character mentions. Our model is finetuned
on a new dataset which we build on top of the widely used VIST benchmark.
Specifically, we develop an automated pipeline to enrich VIST with visual and
textual character coreference chains. We also propose new evaluation metrics to
measure the richness of characters and coreference in stories. Experimental
results show that our model generates stories with recurring characters that
are consistent and coreferent to a larger extent than baselines and
state-of-the-art systems.
| new_dataset | 0.958069 |
2409.18038 | M\'onika Farsang | Felix Resch, M\'onika Farsang, Radu Grosu | MMDVS-LF: Multi-Modal Dynamic Vision Sensor and Eye-Tracking Dataset for
Line Following | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Dynamic Vision Sensors (DVS) offer a unique advantage in control applications
due to their high temporal resolution and asynchronous event-based data. Still,
their adoption in machine learning algorithms remains limited. To address this
gap and promote the development of models that leverage the specific
characteristics of DVS data, we introduce the MMDVS-LF: Multi-Modal Dynamic
Vision Sensor and Eye-Tracking Dataset for Line Following. This comprehensive
dataset is the first to integrate multiple sensor modalities, including DVS
recordings and eye-tracking data from a small-scale standardized vehicle.
Additionally, the dataset includes RGB video, odometry, Inertial Measurement
Unit (IMU) data, and demographic data of drivers performing a line-following task.
With its diverse range of data, MMDVS-LF opens new opportunities for developing
event-based deep learning algorithms, just as the MNIST dataset did for
Convolutional Neural Networks.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 16:42:53 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:52:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Resch",
"Felix",
""
],
[
"Farsang",
"Mónika",
""
],
[
"Grosu",
"Radu",
""
]
]
| TITLE: MMDVS-LF: Multi-Modal Dynamic Vision Sensor and Eye-Tracking Dataset for
Line Following
ABSTRACT: Dynamic Vision Sensors (DVS) offer a unique advantage in control applications
due to their high temporal resolution and asynchronous event-based data. Still,
their adoption in machine learning algorithms remains limited. To address this
gap and promote the development of models that leverage the specific
characteristics of DVS data, we introduce the MMDVS-LF: Multi-Modal Dynamic
Vision Sensor and Eye-Tracking Dataset for Line Following. This comprehensive
dataset is the first to integrate multiple sensor modalities, including DVS
recordings and eye-tracking data from a small-scale standardized vehicle.
Additionally, the dataset includes RGB video, odometry, Inertial Measurement
Unit (IMU) data, and demographic data of drivers performing a line-following task.
With its diverse range of data, MMDVS-LF opens new opportunities for developing
event-based deep learning algorithms, just as the MNIST dataset did for
Convolutional Neural Networks.
| new_dataset | 0.957358 |
2409.18459 | Yuki Imajuku | Yuki Imajuku and Yoko Yamakata and Kiyoharu Aizawa | FoodMLLM-JP: Leveraging Multimodal Large Language Models for Japanese
Recipe Generation | 15 pages, 5 figures. We found errors in the calculation of evaluation
metrics, which were corrected in this version with
$\color{blue}{\text{modifications highlighted in blue}}$. Please also see the
Appendix | null | 10.1007/978-981-96-2054-8_30 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on food image understanding using recipe data has been a
long-standing focus due to the diversity and complexity of the data. Moreover,
food is inextricably linked to people's lives, making it a vital research area
for practical applications such as dietary management. Recent advancements in
Multimodal Large Language Models (MLLMs) have demonstrated remarkable
capabilities, not only in their vast knowledge but also in their ability to
handle languages naturally. While English is predominantly used, they can also
support multiple languages including Japanese. This suggests that MLLMs are
expected to significantly improve performance in food image understanding
tasks. We fine-tuned open MLLMs LLaVA-1.5 and Phi-3 Vision on a Japanese recipe
dataset and benchmarked their performance against the closed model GPT-4o. We
then evaluated the content of generated recipes, including ingredients and
cooking procedures, using 5,000 evaluation samples that comprehensively cover
Japanese food culture. Our evaluation demonstrates that the open models trained
on recipe data outperform GPT-4o, the current state-of-the-art model, in
ingredient generation. Our model achieved an F1 score of 0.531, surpassing
GPT-4o's F1 score of 0.481, indicating a higher level of accuracy. Furthermore,
our model exhibited comparable performance to GPT-4o in generating cooking
procedure text.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 05:43:22 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 15:04:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Imajuku",
"Yuki",
""
],
[
"Yamakata",
"Yoko",
""
],
[
"Aizawa",
"Kiyoharu",
""
]
]
| TITLE: FoodMLLM-JP: Leveraging Multimodal Large Language Models for Japanese
Recipe Generation
ABSTRACT: Research on food image understanding using recipe data has been a
long-standing focus due to the diversity and complexity of the data. Moreover,
food is inextricably linked to people's lives, making it a vital research area
for practical applications such as dietary management. Recent advancements in
Multimodal Large Language Models (MLLMs) have demonstrated remarkable
capabilities, not only in their vast knowledge but also in their ability to
handle languages naturally. While English is predominantly used, they can also
support multiple languages including Japanese. This suggests that MLLMs are
expected to significantly improve performance in food image understanding
tasks. We fine-tuned open MLLMs LLaVA-1.5 and Phi-3 Vision on a Japanese recipe
dataset and benchmarked their performance against the closed model GPT-4o. We
then evaluated the content of generated recipes, including ingredients and
cooking procedures, using 5,000 evaluation samples that comprehensively cover
Japanese food culture. Our evaluation demonstrates that the open models trained
on recipe data outperform GPT-4o, the current state-of-the-art model, in
ingredient generation. Our model achieved an F1 score of 0.531, surpassing
GPT-4o's F1 score of 0.481, indicating a higher level of accuracy. Furthermore,
our model exhibited comparable performance to GPT-4o in generating cooking
procedure text.
| no_new_dataset | 0.94256 |
2409.19764 | Donghyun Lee | Donghyun Lee, Yuhang Li, Youngeun Kim, Shiting Xiao, Priyadarshini
Panda | Spiking Transformer with Spatial-Temporal Attention | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Spike-based Transformer presents a compelling and energy-efficient
alternative to traditional Artificial Neural Network (ANN)-based Transformers,
achieving impressive results through sparse binary computations. However,
existing spike-based transformers predominantly focus on spatial attention
while neglecting crucial temporal dependencies inherent in spike-based
processing, leading to suboptimal feature representation and limited
performance. To address this limitation, we propose Spiking Transformer with
Spatial-Temporal Attention (STAtten), a simple and straightforward architecture
that efficiently integrates both spatial and temporal information in the
self-attention mechanism. STAtten introduces a block-wise computation strategy
that processes information in spatial-temporal chunks, enabling comprehensive
feature capture while maintaining the same computational complexity as previous
spatial-only approaches. Our method can be seamlessly integrated into existing
spike-based transformers without architectural overhaul. Extensive experiments
demonstrate that STAtten significantly improves the performance of existing
spike-based transformers across both static and neuromorphic datasets,
including CIFAR10/100, ImageNet, CIFAR10-DVS, and N-Caltech101. The code is
available at https://github.com/Intelligent-Computing-Lab-Yale/STAtten
| [
{
"version": "v1",
"created": "Sun, 29 Sep 2024 20:29:39 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 21:22:40 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 18:51:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lee",
"Donghyun",
""
],
[
"Li",
"Yuhang",
""
],
[
"Kim",
"Youngeun",
""
],
[
"Xiao",
"Shiting",
""
],
[
"Panda",
"Priyadarshini",
""
]
]
| TITLE: Spiking Transformer with Spatial-Temporal Attention
ABSTRACT: Spike-based Transformer presents a compelling and energy-efficient
alternative to traditional Artificial Neural Network (ANN)-based Transformers,
achieving impressive results through sparse binary computations. However,
existing spike-based transformers predominantly focus on spatial attention
while neglecting crucial temporal dependencies inherent in spike-based
processing, leading to suboptimal feature representation and limited
performance. To address this limitation, we propose Spiking Transformer with
Spatial-Temporal Attention (STAtten), a simple and straightforward architecture
that efficiently integrates both spatial and temporal information in the
self-attention mechanism. STAtten introduces a block-wise computation strategy
that processes information in spatial-temporal chunks, enabling comprehensive
feature capture while maintaining the same computational complexity as previous
spatial-only approaches. Our method can be seamlessly integrated into existing
spike-based transformers without architectural overhaul. Extensive experiments
demonstrate that STAtten significantly improves the performance of existing
spike-based transformers across both static and neuromorphic datasets,
including CIFAR10/100, ImageNet, CIFAR10-DVS, and N-Caltech101. The code is
available at https://github.com/Intelligent-Computing-Lab-Yale/STAtten
| no_new_dataset | 0.945551 |
2409.19835 | Yimian Dai PhD | Qun Dai and Chunyang Yuan and Yimian Dai and Yuxuan Li and Xiang Li
and Kang Ni and Jianhui Xu and Xiangbo Shu and Jian Yang | MoCoLSK: Modality Conditioned High-Resolution Downscaling for Land
Surface Temperature | Accepted by IEEE TGRS | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Land Surface Temperature (LST) is a critical parameter for environmental
studies, but directly obtaining high spatial resolution LST data remains
challenging due to the spatio-temporal trade-off in satellite remote sensing.
Guided LST downscaling has emerged as an alternative solution to overcome these
limitations, but current methods often neglect spatial non-stationarity, and
there is a lack of an open-source ecosystem for deep learning methods. In this
paper, we propose the Modality-Conditional Large Selective Kernel (MoCoLSK)
Network, a novel architecture that dynamically fuses multi-modal data through
modality-conditioned projections. MoCoLSK achieves a confluence of dynamic
receptive field adjustment and multi-modal feature fusion, leading to enhanced
LST prediction accuracy. Furthermore, we establish the GrokLST project, a
comprehensive open-source ecosystem featuring the GrokLST dataset, a
high-resolution benchmark, and the GrokLST toolkit, an open-source
PyTorch-based toolkit encapsulating MoCoLSK alongside 40+ state-of-the-art
approaches. Extensive experimental results validate MoCoLSK's effectiveness in
capturing complex dependencies and subtle variations within multispectral data,
outperforming existing methods in LST downscaling. Our code, dataset, and
toolkit are available at https://github.com/GrokCV/GrokLST.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 00:17:00 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 07:32:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dai",
"Qun",
""
],
[
"Yuan",
"Chunyang",
""
],
[
"Dai",
"Yimian",
""
],
[
"Li",
"Yuxuan",
""
],
[
"Li",
"Xiang",
""
],
[
"Ni",
"Kang",
""
],
[
"Xu",
"Jianhui",
""
],
[
"Shu",
"Xiangbo",
""
],
[
"Yang",
"Jian",
""
]
]
| TITLE: MoCoLSK: Modality Conditioned High-Resolution Downscaling for Land
Surface Temperature
ABSTRACT: Land Surface Temperature (LST) is a critical parameter for environmental
studies, but directly obtaining high spatial resolution LST data remains
challenging due to the spatio-temporal trade-off in satellite remote sensing.
Guided LST downscaling has emerged as an alternative solution to overcome these
limitations, but current methods often neglect spatial non-stationarity, and
there is a lack of an open-source ecosystem for deep learning methods. In this
paper, we propose the Modality-Conditional Large Selective Kernel (MoCoLSK)
Network, a novel architecture that dynamically fuses multi-modal data through
modality-conditioned projections. MoCoLSK achieves a confluence of dynamic
receptive field adjustment and multi-modal feature fusion, leading to enhanced
LST prediction accuracy. Furthermore, we establish the GrokLST project, a
comprehensive open-source ecosystem featuring the GrokLST dataset, a
high-resolution benchmark, and the GrokLST toolkit, an open-source
PyTorch-based toolkit encapsulating MoCoLSK alongside 40+ state-of-the-art
approaches. Extensive experimental results validate MoCoLSK's effectiveness in
capturing complex dependencies and subtle variations within multispectral data,
outperforming existing methods in LST downscaling. Our code, dataset, and
toolkit are available at https://github.com/GrokCV/GrokLST.
| new_dataset | 0.963643 |
2409.20164 | Fulong Ma | Fulong Ma, Weiqing Qi, Guoyang Zhao, Ming Liu, and Jun Ma | Erase, then Redraw: A Novel Data Augmentation Approach for Free Space
Detection Using Diffusion Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data augmentation is one of the most common tools in deep learning,
underpinning many recent advances in tasks such as classification,
detection, and semantic segmentation. The standard approach to data
augmentation involves simple transformations like rotation and flipping to
generate new images. However, these new images often lack diversity along the
main semantic dimensions within the data. Traditional data augmentation methods
cannot alter high-level semantic attributes such as the presence of vehicles,
trees, and buildings in a scene to enhance data diversity. In recent years, the
rapid development of generative models has injected new vitality into the field
of data augmentation. In this paper, we address the lack of diversity in data
augmentation for the road detection task by using a pre-trained text-to-image
diffusion model to parameterize image-to-image transformations. Our method
involves editing images using these diffusion models to change their semantics.
In essence, we achieve this goal by erasing instances of real objects from the
original dataset and generating new instances with similar semantics in the
erased regions using the diffusion model, thereby expanding the original
dataset. We evaluate our approach on the KITTI road dataset and achieve the
best results compared to other data augmentation methods, which demonstrates
the effectiveness of our proposed approach.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 10:21:54 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 13:14:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ma",
"Fulong",
""
],
[
"Qi",
"Weiqing",
""
],
[
"Zhao",
"Guoyang",
""
],
[
"Liu",
"Ming",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: Erase, then Redraw: A Novel Data Augmentation Approach for Free Space
Detection Using Diffusion Model
ABSTRACT: Data augmentation is one of the most common tools in deep learning,
underpinning many recent advances in tasks such as classification,
detection, and semantic segmentation. The standard approach to data
augmentation involves simple transformations like rotation and flipping to
generate new images. However, these new images often lack diversity along the
main semantic dimensions within the data. Traditional data augmentation methods
cannot alter high-level semantic attributes such as the presence of vehicles,
trees, and buildings in a scene to enhance data diversity. In recent years, the
rapid development of generative models has injected new vitality into the field
of data augmentation. In this paper, we address the lack of diversity in data
augmentation for the road detection task by using a pre-trained text-to-image
diffusion model to parameterize image-to-image transformations. Our method
involves editing images using these diffusion models to change their semantics.
In essence, we achieve this goal by erasing instances of real objects from the
original dataset and generating new instances with similar semantics in the
erased regions using the diffusion model, thereby expanding the original
dataset. We evaluate our approach on the KITTI road dataset and achieve the
best results compared to other data augmentation methods, which demonstrates
the effectiveness of our proposed approach.
| no_new_dataset | 0.952264 |
2409.20171 | Fulong Ma | Fulong Ma, Peng Hou, Yuxuan Liu, Yang Liu, Ming Liu, and Jun Ma | Annotation-Free Curb Detection Leveraging Altitude Difference Image | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Road curbs are considered as one of the crucial and ubiquitous traffic
features, which are essential for ensuring the safety of autonomous vehicles.
Current methods for detecting curbs primarily rely on camera imagery or LiDAR
point clouds. Image-based methods are vulnerable to fluctuations in lighting
conditions and exhibit poor robustness, while methods based on point clouds
circumvent the issues associated with lighting variations. However, significant
processing delays are typically encountered due to the large number of 3D
points contained in each frame of point cloud data.
Furthermore, the inherently unstructured characteristics of point clouds pose
challenges for integrating the latest deep learning advancements into point
cloud data applications. To address these issues, this work proposes an
annotation-free curb detection method leveraging Altitude Difference Image
(ADI), which effectively mitigates the aforementioned challenges. Given that
methods based on deep learning generally demand extensive, manually annotated
datasets, which are both expensive and labor-intensive to create, we present an
Automatic Curb Annotator (ACA) module. This module utilizes a deterministic
curb detection algorithm to automatically generate a vast quantity of training
data. Consequently, it facilitates the training of the curb detection model
without necessitating any manual annotation of data. Finally, by incorporating
a post-processing module, we manage to achieve state-of-the-art results on the
KITTI 3D curb dataset with considerably reduced processing delays compared to
existing methods, which underscores the effectiveness of our approach in curb
detection tasks.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 10:29:41 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 06:09:55 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 13:12:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ma",
"Fulong",
""
],
[
"Hou",
"Peng",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Liu",
"Yang",
""
],
[
"Liu",
"Ming",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: Annotation-Free Curb Detection Leveraging Altitude Difference Image
ABSTRACT: Road curbs are considered as one of the crucial and ubiquitous traffic
features, which are essential for ensuring the safety of autonomous vehicles.
Current methods for detecting curbs primarily rely on camera imagery or LiDAR
point clouds. Image-based methods are vulnerable to fluctuations in lighting
conditions and exhibit poor robustness, while methods based on point clouds
circumvent the issues associated with lighting variations. However, significant
processing delays are typically encountered due to the large number of 3D
points contained in each frame of point cloud data.
Furthermore, the inherently unstructured characteristics of point clouds pose
challenges for integrating the latest deep learning advancements into point
cloud data applications. To address these issues, this work proposes an
annotation-free curb detection method leveraging Altitude Difference Image
(ADI), which effectively mitigates the aforementioned challenges. Given that
methods based on deep learning generally demand extensive, manually annotated
datasets, which are both expensive and labor-intensive to create, we present an
Automatic Curb Annotator (ACA) module. This module utilizes a deterministic
curb detection algorithm to automatically generate a vast quantity of training
data. Consequently, it facilitates the training of the curb detection model
without necessitating any manual annotation of data. Finally, by incorporating
a post-processing module, we manage to achieve state-of-the-art results on the
KITTI 3D curb dataset with considerably reduced processing delays compared to
existing methods, which underscores the effectiveness of our approach in curb
detection tasks.
| no_new_dataset | 0.94256 |
2410.00564 | Jie Cheng | Jie Cheng, Ruixi Qiao, Yingwei Ma, Binhua Li, Gang Xiong, Qinghai
Miao, Yongbin Li, Yisheng Lv | Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model
Pretraining | Accepted by ICLR 2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant aspiration of offline reinforcement learning (RL) is to develop
a generalist agent with high capabilities from large and heterogeneous
datasets. However, prior approaches that scale offline RL either rely heavily
on expert trajectories or struggle to generalize to diverse unseen tasks.
Inspired by the excellent generalization of world model in conditional video
generation, we explore the potential of image observation-based world model for
scaling offline RL and enhancing generalization on novel tasks. In this paper,
we introduce JOWA: Jointly-Optimized World-Action model, an offline model-based
RL agent pretrained on multiple Atari games with 6 billion tokens of data to learn
general-purpose representation and decision-making ability. Our method jointly
optimizes a world-action model through a shared transformer backbone, which
stabilizes temporal difference learning with large models during pretraining.
Moreover, we propose a provably efficient and parallelizable planning algorithm
to compensate for the Q-value estimation error and thus search out better
policies. Experimental results indicate that our largest agent, with 150
million parameters, achieves 78.9% human-level performance on pretrained games
using only 10% subsampled offline data, outperforming existing state-of-the-art
large-scale offline RL baselines by 31.6% on average. Furthermore, JOWA scales
favorably with model capacity and can sample-efficiently transfer to novel
games using only 5k offline fine-tuning data (approximately 4 trajectories) per
game, demonstrating superior generalization. We will release codes and model
weights at https://github.com/CJReinforce/JOWA
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 10:25:03 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 13:41:43 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 02:59:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cheng",
"Jie",
""
],
[
"Qiao",
"Ruixi",
""
],
[
"Ma",
"Yingwei",
""
],
[
"Li",
"Binhua",
""
],
[
"Xiong",
"Gang",
""
],
[
"Miao",
"Qinghai",
""
],
[
"Li",
"Yongbin",
""
],
[
"Lv",
"Yisheng",
""
]
]
| TITLE: Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model
Pretraining
ABSTRACT: A significant aspiration of offline reinforcement learning (RL) is to develop
a generalist agent with high capabilities from large and heterogeneous
datasets. However, prior approaches that scale offline RL either rely heavily
on expert trajectories or struggle to generalize to diverse unseen tasks.
Inspired by the excellent generalization of world model in conditional video
generation, we explore the potential of image observation-based world model for
scaling offline RL and enhancing generalization on novel tasks. In this paper,
we introduce JOWA: Jointly-Optimized World-Action model, an offline model-based
RL agent pretrained on multiple Atari games with 6 billion tokens of data to learn
general-purpose representation and decision-making ability. Our method jointly
optimizes a world-action model through a shared transformer backbone, which
stabilizes temporal difference learning with large models during pretraining.
Moreover, we propose a provably efficient and parallelizable planning algorithm
to compensate for the Q-value estimation error and thus search out better
policies. Experimental results indicate that our largest agent, with 150
million parameters, achieves 78.9% human-level performance on pretrained games
using only 10% subsampled offline data, outperforming existing state-of-the-art
large-scale offline RL baselines by 31.6% on average. Furthermore, JOWA scales
favorably with model capacity and can sample-efficiently transfer to novel
games using only 5k offline fine-tuning data (approximately 4 trajectories) per
game, demonstrating superior generalization. We will release code and model
weights at https://github.com/CJReinforce/JOWA
| no_new_dataset | 0.944331 |
2410.00645 | Liangzu Peng | Liangzu Peng, Juan Elenter, Joshua Agterberg, Alejandro Ribeiro,
Ren\'e Vidal | TSVD: Bridging Theory and Practice in Continual Learning with
Pre-trained Models | 47 pages, 18 figures, 16 tables (v2, accepted to ICLR 2025) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of continual learning (CL) is to train a model that can solve
multiple tasks presented sequentially. Recent CL approaches have achieved
strong performance by leveraging large pre-trained models that generalize well
to downstream tasks. However, such methods lack theoretical guarantees, making
them prone to unexpected failures. Conversely, principled CL approaches often
fail to achieve competitive performance. In this work, we aim to bridge this
gap between theory and practice by designing a simple CL method that is
theoretically sound and highly performant. Specifically, we lift pre-trained
features into a higher dimensional space and formulate an over-parametrized
minimum-norm least-squares problem. We find that the lifted features are highly
ill-conditioned, potentially leading to large training errors (numerical
instability) and increased generalization errors. We address these challenges
by continually truncating the singular value decomposition (SVD) of the lifted
features. Our approach, termed TSVD, is stable with respect to the choice of
hyperparameters, can handle hundreds of tasks, and outperforms state-of-the-art
CL methods on multiple datasets. Importantly, our method satisfies a recurrence
relation throughout its continual learning process, which allows us to prove it
maintains small training and generalization errors by appropriately truncating
a fraction of SVD factors. This results in a stable continual learning method
with strong empirical performance and theoretical guarantees. Code available:
https://github.com/liangzu/tsvd.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 12:58:37 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:19:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Peng",
"Liangzu",
""
],
[
"Elenter",
"Juan",
""
],
[
"Agterberg",
"Joshua",
""
],
[
"Ribeiro",
"Alejandro",
""
],
[
"Vidal",
"René",
""
]
]
| TITLE: TSVD: Bridging Theory and Practice in Continual Learning with
Pre-trained Models
ABSTRACT: The goal of continual learning (CL) is to train a model that can solve
multiple tasks presented sequentially. Recent CL approaches have achieved
strong performance by leveraging large pre-trained models that generalize well
to downstream tasks. However, such methods lack theoretical guarantees, making
them prone to unexpected failures. Conversely, principled CL approaches often
fail to achieve competitive performance. In this work, we aim to bridge this
gap between theory and practice by designing a simple CL method that is
theoretically sound and highly performant. Specifically, we lift pre-trained
features into a higher dimensional space and formulate an over-parametrized
minimum-norm least-squares problem. We find that the lifted features are highly
ill-conditioned, potentially leading to large training errors (numerical
instability) and increased generalization errors. We address these challenges
by continually truncating the singular value decomposition (SVD) of the lifted
features. Our approach, termed TSVD, is stable with respect to the choice of
hyperparameters, can handle hundreds of tasks, and outperforms state-of-the-art
CL methods on multiple datasets. Importantly, our method satisfies a recurrence
relation throughout its continual learning process, which allows us to prove it
maintains small training and generalization errors by appropriately truncating
a fraction of SVD factors. This results in a stable continual learning method
with strong empirical performance and theoretical guarantees. Code available:
https://github.com/liangzu/tsvd.
| no_new_dataset | 0.943086 |
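The TSVD record above describes lifting pre-trained features into a higher-dimensional space, solving an over-parametrized minimum-norm least-squares problem, and continually truncating the SVD of the lifted features to keep the solve well conditioned. As a rough illustration only, the following NumPy sketch shows a truncated-SVD least-squares solve on randomly lifted features; the lifting map, truncation rank, and all variable names are assumptions made for this example and are not taken from the authors' released code.

```python
import numpy as np

def lift_features(phi, out_dim, rng):
    """Toy random-feature lifting of pre-trained features into a higher-dimensional
    space (illustrative; the paper's specific lifting map is not reproduced here)."""
    d = phi.shape[1]
    W = rng.standard_normal((d, out_dim)) / np.sqrt(d)
    return np.tanh(phi @ W)

def truncated_svd_solve(X, Y, rank):
    """Minimum-norm least-squares weights using only the top `rank` SVD factors,
    which discards the tiny singular values that make lifted features ill-conditioned."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    return Vt.T @ np.diag(1.0 / s) @ (U.T @ Y)  # weights W such that X @ W ~ Y

rng = np.random.default_rng(0)
phi = rng.standard_normal((256, 64))      # pre-trained features for one task (assumed shape)
targets = rng.standard_normal((256, 10))  # regression / one-hot targets (assumed shape)
Z = lift_features(phi, 2048, rng)
W = truncated_svd_solve(Z, targets, rank=200)
print(np.linalg.norm(Z @ W - targets))    # small training residual at this rank
```

Choosing the truncation rank trades a slightly larger training error for better conditioning, which is the stability-versus-accuracy balance the abstract refers to.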
2410.00722 | Giovanni Luca Marchetti | Vahid Shahverdi, Giovanni Luca Marchetti, Kathl\'en Kohn | On the Geometry and Optimization of Polynomial Convolutional Networks | Accepted at AISTATS 2025 | null | null | null | cs.LG math.AG | http://creativecommons.org/licenses/by/4.0/ | We study convolutional neural networks with monomial activation functions.
Specifically, we prove that their parameterization map is regular and is an
isomorphism almost everywhere, up to rescaling the filters. By leveraging
tools from algebraic geometry, we explore the geometric properties of the image
in function space of this map - typically referred to as neuromanifold. In
particular, we compute the dimension and the degree of the neuromanifold, which
measure the expressivity of the model, and describe its singularities.
Moreover, for a generic large dataset, we derive an explicit formula that
quantifies the number of critical points arising in the optimization of a
regression loss.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 14:13:05 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:18:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shahverdi",
"Vahid",
""
],
[
"Marchetti",
"Giovanni Luca",
""
],
[
"Kohn",
"Kathlén",
""
]
]
| TITLE: On the Geometry and Optimization of Polynomial Convolutional Networks
ABSTRACT: We study convolutional neural networks with monomial activation functions.
Specifically, we prove that their parameterization map is regular and is an
isomorphism almost everywhere, up to rescaling the filters. By leveraging
tools from algebraic geometry, we explore the geometric properties of the image
in function space of this map - typically referred to as neuromanifold. In
particular, we compute the dimension and the degree of the neuromanifold, which
measure the expressivity of the model, and describe its singularities.
Moreover, for a generic large dataset, we derive an explicit formula that
quantifies the number of critical points arising in the optimization of a
regression loss.
| no_new_dataset | 0.951953 |
2410.01337 | Bocheng Zeng | Bocheng Zeng, Qi Wang, Mengtao Yan, Yang Liu, Ruizhi Chengze, Yi
Zhang, Hongsheng Liu, Zidong Wang, Hao Sun | PhyMPGN: Physics-encoded Message Passing Graph Network for
spatiotemporal PDE systems | null | null | null | null | cs.LG cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving partial differential equations (PDEs) serves as a cornerstone for
modeling complex dynamical systems. Recent progress has demonstrated substantial
benefits of data-driven neural models for predicting spatiotemporal
dynamics (e.g., tremendous speedup gain compared with classical numerical
methods). However, most existing neural models rely on rich training data, have
limited extrapolation and generalization abilities, and struggle to produce
precise or reliable physical predictions under intricate conditions (e.g.,
irregular mesh or geometry, complex boundary conditions, diverse PDE
parameters, etc.). To this end, we propose a new graph learning approach,
namely, Physics-encoded Message Passing Graph Network (PhyMPGN), to model
spatiotemporal PDE systems on irregular meshes given small training datasets.
Specifically, we incorporate a GNN into a numerical integrator to approximate
the temporal marching of spatiotemporal dynamics for a given PDE system.
Considering that many physical phenomena are governed by diffusion processes,
we further design a learnable Laplace block, which encodes the discrete
Laplace-Beltrami operator, to aid and guide the GNN learning in a physically
feasible solution space. A boundary condition padding strategy is also designed
to improve the model convergence and accuracy. Extensive experiments
demonstrate that PhyMPGN is capable of accurately predicting various types of
spatiotemporal dynamics on coarse unstructured meshes, consistently achieves
state-of-the-art results, and outperforms other baselines with considerable
gains.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 08:54:18 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 10:11:42 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 02:50:30 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zeng",
"Bocheng",
""
],
[
"Wang",
"Qi",
""
],
[
"Yan",
"Mengtao",
""
],
[
"Liu",
"Yang",
""
],
[
"Chengze",
"Ruizhi",
""
],
[
"Zhang",
"Yi",
""
],
[
"Liu",
"Hongsheng",
""
],
[
"Wang",
"Zidong",
""
],
[
"Sun",
"Hao",
""
]
]
| TITLE: PhyMPGN: Physics-encoded Message Passing Graph Network for
spatiotemporal PDE systems
ABSTRACT: Solving partial differential equations (PDEs) serves as a cornerstone for
modeling complex dynamical systems. Recent progress has demonstrated substantial
benefits of data-driven neural models for predicting spatiotemporal
dynamics (e.g., tremendous speedup gain compared with classical numerical
methods). However, most existing neural models rely on rich training data, have
limited extrapolation and generalization abilities, and struggle to produce
precise or reliable physical predictions under intricate conditions (e.g.,
irregular mesh or geometry, complex boundary conditions, diverse PDE
parameters, etc.). To this end, we propose a new graph learning approach,
namely, Physics-encoded Message Passing Graph Network (PhyMPGN), to model
spatiotemporal PDE systems on irregular meshes given small training datasets.
Specifically, we incorporate a GNN into a numerical integrator to approximate
the temporal marching of spatiotemporal dynamics for a given PDE system.
Considering that many physical phenomena are governed by diffusion processes,
we further design a learnable Laplace block, which encodes the discrete
Laplace-Beltrami operator, to aid and guide the GNN learning in a physically
feasible solution space. A boundary condition padding strategy is also designed
to improve the model convergence and accuracy. Extensive experiments
demonstrate that PhyMPGN is capable of accurately predicting various types of
spatiotemporal dynamics on coarse unstructured meshes, consistently achieves
state-of-the-art results, and outperforms other baselines with considerable
gains.
| no_new_dataset | 0.948155 |
2410.01417 | Hong Li | Hong Li, Nanxi Li, Yuanjie Chen, Jianbin Zhu, Qinlu Guo, Cewu Lu,
Yong-Lu Li | The Labyrinth of Links: Navigating the Associative Maze of Multi-modal
LLMs | Accepted by ICLR 2025. Project page:
https://mvig-rhos.com/llm_inception | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-modal Large Language Models (MLLMs) have exhibited impressive
capability. However, recently many deficiencies of MLLMs have been found
compared to human intelligence, $\textit{e.g.}$, hallucination. To drive the
study of MLLMs, the community has dedicated efforts to building larger benchmarks with
complex tasks. In this paper, we propose benchmarking an essential but usually
overlooked intelligence: $\textbf{association}$, a human's basic capability to
link observation and prior practice memory. To comprehensively investigate
MLLM's performance on the association, we formulate the association task and
devise a standard benchmark based on adjective and verb semantic concepts.
Instead of costly data annotation and curation, we propose a convenient
$\textbf{annotation-free}$ construction method transforming the general dataset
for our association tasks. Simultaneously, we devise a rigorous data refinement
process to eliminate confusion in the raw dataset. Building on this database,
we establish three levels of association tasks: single-step, synchronous, and
asynchronous associations. Moreover, we conduct a comprehensive investigation
into the MLLMs' zero-shot association capabilities, addressing multiple
dimensions, including three distinct memory strategies, both open-source and
closed-source MLLMs, cutting-edge Mixture-of-Experts (MoE) models, and the
involvement of human experts. Our systematic investigation shows that current
open-source MLLMs consistently exhibit poor capability in our association
tasks, even the currently state-of-the-art GPT-4V(vision) also has a
significant gap compared to humans. We believe our benchmark would pave the way
for future MLLM studies. $\textit{Our data and code are available at:}$
https://mvig-rhos.com/llm_inception.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 10:58:54 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 00:41:36 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Hong",
""
],
[
"Li",
"Nanxi",
""
],
[
"Chen",
"Yuanjie",
""
],
[
"Zhu",
"Jianbin",
""
],
[
"Guo",
"Qinlu",
""
],
[
"Lu",
"Cewu",
""
],
[
"Li",
"Yong-Lu",
""
]
]
| TITLE: The Labyrinth of Links: Navigating the Associative Maze of Multi-modal
LLMs
ABSTRACT: Multi-modal Large Language Models (MLLMs) have exhibited impressive
capability. However, recently many deficiencies of MLLMs have been found
compared to human intelligence, $\textit{e.g.}$, hallucination. To drive the
study of MLLMs, the community has dedicated efforts to building larger benchmarks with
complex tasks. In this paper, we propose benchmarking an essential but usually
overlooked intelligence: $\textbf{association}$, a human's basic capability to
link observation and prior practice memory. To comprehensively investigate
MLLM's performance on the association, we formulate the association task and
devise a standard benchmark based on adjective and verb semantic concepts.
Instead of costly data annotation and curation, we propose a convenient
$\textbf{annotation-free}$ construction method transforming the general dataset
for our association tasks. Simultaneously, we devise a rigorous data refinement
process to eliminate confusion in the raw dataset. Building on this database,
we establish three levels of association tasks: single-step, synchronous, and
asynchronous associations. Moreover, we conduct a comprehensive investigation
into the MLLMs' zero-shot association capabilities, addressing multiple
dimensions, including three distinct memory strategies, both open-source and
closed-source MLLMs, cutting-edge Mixture-of-Experts (MoE) models, and the
involvement of human experts. Our systematic investigation shows that current
open-source MLLMs consistently exhibit poor capability in our association
tasks; even the current state-of-the-art GPT-4V(vision) still has a
significant gap compared to humans. We believe our benchmark would pave the way
for future MLLM studies. $\textit{Our data and code are available at:}$
https://mvig-rhos.com/llm_inception.
| no_new_dataset | 0.933734 |
2410.01746 | Emanuele Zappala | Emanuele Zappala | Leray-Schauder Mappings for Operator Learning | 13 pages, 2 figures, 1 table. Comments are welcome! v2: Theoretical
analysis expanded, several explanations regarding the experiments have been
added for improved clarity | null | null | null | cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | We present an algorithm for learning operators between Banach spaces, based
on the use of Leray-Schauder mappings to learn a finite-dimensional
approximation of compact subspaces. We show that the resulting method is a
universal approximator of (possibly nonlinear) operators. We demonstrate the
efficiency of the approach on two benchmark datasets showing it achieves
results comparable to state-of-the-art models.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 17:01:01 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:17:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zappala",
"Emanuele",
""
]
]
| TITLE: Leray-Schauder Mappings for Operator Learning
ABSTRACT: We present an algorithm for learning operators between Banach spaces, based
on the use of Leray-Schauder mappings to learn a finite-dimensional
approximation of compact subspaces. We show that the resulting method is a
universal approximator of (possibly nonlinear) operators. We demonstrate the
efficiency of the approach on two benchmark datasets showing it achieves
results comparable to state-of-the-art models.
| no_new_dataset | 0.944125 |
2410.02242 | Hyunwoo Lee | Hyunwoo Lee, Hayoung Choi, Hyunju Kim | Robust Weight Initialization for Tanh Neural Networks with Fixed Point
Analysis | ICLR 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | As a neural network's depth increases, it can improve generalization
performance. However, training deep networks is challenging due to gradient and
signal propagation issues. To address these challenges, extensive theoretical
research and various methods have been introduced. Despite these advances,
effective weight initialization methods for tanh neural networks remain
insufficiently investigated. This paper presents a novel weight initialization
method for neural networks with tanh activation function. Based on an analysis
of the fixed points of the function $\tanh(ax)$, the proposed method aims to
determine values of $a$ that mitigate activation saturation. A series of
experiments on various classification datasets and physics-informed neural
networks demonstrates that the proposed method outperforms Xavier
initialization methods (with or without normalization) in terms of robustness
across different network sizes, data efficiency, and convergence speed. Code is
available at https://github.com/1HyunwooLee/Tanh-Init
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 06:30:27 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 11:32:27 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lee",
"Hyunwoo",
""
],
[
"Choi",
"Hayoung",
""
],
[
"Kim",
"Hyunju",
""
]
]
| TITLE: Robust Weight Initialization for Tanh Neural Networks with Fixed Point
Analysis
ABSTRACT: As a neural network's depth increases, it can improve generalization
performance. However, training deep networks is challenging due to gradient and
signal propagation issues. To address these challenges, extensive theoretical
research and various methods have been introduced. Despite these advances,
effective weight initialization methods for tanh neural networks remain
insufficiently investigated. This paper presents a novel weight initialization
method for neural networks with tanh activation function. Based on an analysis
of the fixed points of the function $\tanh(ax)$, the proposed method aims to
determine values of $a$ that mitigate activation saturation. A series of
experiments on various classification datasets and physics-informed neural
networks demonstrates that the proposed method outperforms Xavier
initialization methods (with or without normalization) in terms of robustness
across different network sizes, data efficiency, and convergence speed. Code is
available at https://github.com/1HyunwooLee/Tanh-Init
| no_new_dataset | 0.943919 |
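The record above bases its weight initialization on the fixed points of $\tanh(ax)$. As a hedged illustration of that analysis, the sketch below computes the nonzero fixed point of $\tanh(ax)$ numerically and folds a slope parameter `a` into a simple fan-in-scaled initializer; the paper's exact initialization rule is not reproduced here, and the helper names and scaling are assumptions made for this example.

```python
import numpy as np

def tanh_fixed_point(a, tol=1e-10, max_iter=10_000):
    """Nonzero fixed point of f(x) = tanh(a*x), which exists only for a > 1.
    Fixed-point iteration converges because the nonzero fixed point is attracting."""
    if a <= 1.0:
        return 0.0  # only the trivial fixed point x = 0
    x = 1.0
    for _ in range(max_iter):
        x_new = np.tanh(a * x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

def init_layer(fan_in, fan_out, a, rng):
    """Toy initializer: with unit-variance inputs, pre-activations get standard
    deviation roughly `a`, so small `a` keeps tanh away from its saturated region.
    (Assumed rule for illustration, not the paper's exact method.)"""
    std = a / np.sqrt(fan_in)
    return rng.standard_normal((fan_in, fan_out)) * std

rng = np.random.default_rng(0)
for a in (0.5, 1.0, 1.5, 3.0):
    print(a, round(float(tanh_fixed_point(a)), 4))  # larger a -> fixed point nearer saturation
W = init_layer(512, 512, a=1.2, rng=rng)
```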
2410.02392 | Bastian Rieck | Rub\'en Ballester and Ernst R\"oell and Daniel B\=in Schmid and
Mathieu Alain and Sergio Escalera and Carles Casacuberta and Bastian Rieck | MANTRA: The Manifold Triangulations Assemblage | Accepted at ICLR 2025 (https://openreview.net/forum?id=X6y5CC44HM) | null | null | null | cs.LG math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rising interest in leveraging higher-order interactions present in
complex systems has led to a surge in more expressive models exploiting
higher-order structures in the data, especially in topological deep learning
(TDL), which designs neural networks on higher-order domains such as simplicial
complexes. However, progress in this field is hindered by the scarcity of
datasets for benchmarking these architectures. To address this gap, we
introduce MANTRA, the first large-scale, diverse, and intrinsically
higher-order dataset for benchmarking higher-order models, comprising over
43,000 and 250,000 triangulations of surfaces and three-dimensional manifolds,
respectively. With MANTRA, we assess several graph- and simplicial
complex-based models on three topological classification tasks. We demonstrate
that while simplicial complex-based neural networks generally outperform their
graph-based counterparts in capturing simple topological invariants, they also
struggle, suggesting a rethink of TDL. Thus, MANTRA serves as a benchmark for
assessing and advancing topological methods, leading the way for more effective
higher-order models.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 11:13:55 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:50:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ballester",
"Rubén",
""
],
[
"Röell",
"Ernst",
""
],
[
"Schmid",
"Daniel Bīn",
""
],
[
"Alain",
"Mathieu",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Casacuberta",
"Carles",
""
],
[
"Rieck",
"Bastian",
""
]
]
| TITLE: MANTRA: The Manifold Triangulations Assemblage
ABSTRACT: The rising interest in leveraging higher-order interactions present in
complex systems has led to a surge in more expressive models exploiting
higher-order structures in the data, especially in topological deep learning
(TDL), which designs neural networks on higher-order domains such as simplicial
complexes. However, progress in this field is hindered by the scarcity of
datasets for benchmarking these architectures. To address this gap, we
introduce MANTRA, the first large-scale, diverse, and intrinsically
higher-order dataset for benchmarking higher-order models, comprising over
43,000 and 250,000 triangulations of surfaces and three-dimensional manifolds,
respectively. With MANTRA, we assess several graph- and simplicial
complex-based models on three topological classification tasks. We demonstrate
that while simplicial complex-based neural networks generally outperform their
graph-based counterparts in capturing simple topological invariants, they also
struggle, suggesting a rethink of TDL. Thus, MANTRA serves as a benchmark for
assessing and advancing topological methods, leading the way for more effective
higher-order models.
| new_dataset | 0.967132 |
2410.03115 | Haoran Xu | Haoran Xu, Kenton Murray, Philipp Koehn, Hieu Hoang, Akiko Eriguchi,
Huda Khayrallah | X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality
Translation at Scale | Published as a conference paper at ICLR 2025 (spotlight) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have achieved remarkable success across various
NLP tasks with a focus on English due to English-centric pre-training and
limited multilingual data. In this work, we focus on the problem of
translation, and while some multilingual LLMs claim to support hundreds of
languages, models often fail to provide high-quality responses for mid- and
low-resource languages, leading to imbalanced performance heavily skewed in
favor of high-resource languages. We introduce **X-ALMA**, a model designed to
ensure top-tier performance across 50 diverse languages, regardless of their
resource levels. X-ALMA surpasses state-of-the-art open-source multilingual
LLMs, such as Aya-101 and Aya-23, in every single translation direction on the
FLORES-200 and WMT'23 test datasets according to COMET-22. This is achieved by
plug-and-play language-specific module architecture to prevent language
conflicts during training and a carefully designed training regimen with novel
optimization methods to maximize the translation performance. After the final
stage of the training regimen, our proposed **A**daptive **R**ejection
**P**reference **O**ptimization (**ARPO**) surpasses existing preference
optimization methods in translation tasks.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 03:17:27 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 05:16:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xu",
"Haoran",
""
],
[
"Murray",
"Kenton",
""
],
[
"Koehn",
"Philipp",
""
],
[
"Hoang",
"Hieu",
""
],
[
"Eriguchi",
"Akiko",
""
],
[
"Khayrallah",
"Huda",
""
]
]
| TITLE: X-ALMA: Plug & Play Modules and Adaptive Rejection for Quality
Translation at Scale
ABSTRACT: Large language models (LLMs) have achieved remarkable success across various
NLP tasks with a focus on English due to English-centric pre-training and
limited multilingual data. In this work, we focus on the problem of
translation, and while some multilingual LLMs claim to support hundreds of
languages, models often fail to provide high-quality responses for mid- and
low-resource languages, leading to imbalanced performance heavily skewed in
favor of high-resource languages. We introduce **X-ALMA**, a model designed to
ensure top-tier performance across 50 diverse languages, regardless of their
resource levels. X-ALMA surpasses state-of-the-art open-source multilingual
LLMs, such as Aya-101 and Aya-23, in every single translation direction on the
FLORES-200 and WMT'23 test datasets according to COMET-22. This is achieved by
a plug-and-play language-specific module architecture to prevent language
conflicts during training and a carefully designed training regimen with novel
optimization methods to maximize the translation performance. After the final
stage of the training regimen, our proposed **A**daptive **R**ejection
**P**reference **O**ptimization (**ARPO**) surpasses existing preference
optimization methods in translation tasks.
| no_new_dataset | 0.948442 |
2410.03246 | Oliver Hausd\"orfer | Oliver Hausd\"orfer, Alexander von Rohr, \'Eric Lefort and Angela
Schoellig | Latent Action Priors for Locomotion with Deep Reinforcement Learning | Submitted to IROS 2025 | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Reinforcement Learning (DRL) enables robots to learn complex behaviors
through interaction with the environment. However, due to the unrestricted
nature of the learning algorithms, the resulting solutions are often brittle
and appear unnatural. This is especially true for learning direct joint-level
torque control, as inductive biases are difficult to integrate into the
learning process. We propose an inductive bias for learning locomotion that is
especially useful for torque control: latent actions learned from a small
dataset of expert demonstrations. This prior allows the policy to directly
leverage knowledge contained in the expert's actions and facilitates more
efficient exploration. We observe that the agent is not restricted to the
reward levels of the demonstration, and performance in transfer tasks is
improved significantly. Latent action priors combined with style rewards for
imitation lead to a closer replication of the expert's behavior. Videos and
code are available at https://sites.google.com/view/latent-action-priors.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 09:10:56 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 09:12:55 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hausdörfer",
"Oliver",
""
],
[
"von Rohr",
"Alexander",
""
],
[
"Lefort",
"Éric",
""
],
[
"Schoellig",
"Angela",
""
]
]
| TITLE: Latent Action Priors for Locomotion with Deep Reinforcement Learning
ABSTRACT: Deep Reinforcement Learning (DRL) enables robots to learn complex behaviors
through interaction with the environment. However, due to the unrestricted
nature of the learning algorithms, the resulting solutions are often brittle
and appear unnatural. This is especially true for learning direct joint-level
torque control, as inductive biases are difficult to integrate into the
learning process. We propose an inductive bias for learning locomotion that is
especially useful for torque control: latent actions learned from a small
dataset of expert demonstrations. This prior allows the policy to directly
leverage knowledge contained in the expert's actions and facilitates more
efficient exploration. We observe that the agent is not restricted to the
reward levels of the demonstration, and performance in transfer tasks is
improved significantly. Latent action priors combined with style rewards for
imitation lead to a closer replication of the expert's behavior. Videos and
code are available at https://sites.google.com/view/latent-action-priors.
| no_new_dataset | 0.948155 |
2410.03524 | Yongchao Chen | Yongchao Chen, Harsh Jhamtani, Srinagesh Sharma, Chuchu Fan, Chi Wang | Steering Large Language Models between Code Execution and Textual
Reasoning | 32 pages, 12 figures, 12 tables | The Thirteenth International Conference on Learning
Representations (ICLR'2025) | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | While a lot of recent research focuses on enhancing the textual reasoning
capabilities of Large Language Models (LLMs) by optimizing the multi-agent
framework or reasoning chains, several benchmark tasks can be solved with 100\%
success through direct coding, which is more scalable and avoids the
computational overhead associated with textual iterating and searching. Textual
reasoning has inherent limitations in solving tasks with challenges in math,
logic, optimization, and search, which are unlikely to be solved by simply
scaling up the model and data size. The recently released OpenAI GPT Code
Interpreter and multi-agent frameworks such as AutoGen have demonstrated
remarkable proficiency in integrating code generation and execution to solve
complex tasks using LLMs. However, based on our experiments on 7 existing
popular methods for steering code/text generation in both single- and
multi-turn settings with 14 tasks and 6 types of LLMs (including the new
O1-preview), currently there is no optimal method to correctly steer LLMs to
write code when needed. We discover some interesting patterns in when models
use code vs. textual reasoning as task complexity and model size evolve, which
even result in an astonishing inverse scaling behavior. We also discover that
results from LLM-written code are not always better than using
textual reasoning, even if the task could be solved through code. To mitigate
the above issues, we propose three methods to better steer LLM code/text
generation and achieve a notable improvement. The costs of token lengths and
runtime are thoroughly discussed for all the methods. We believe the problem of
steering LLM code/text generation is critical for future research and has much
space for further improvement. Project Page, Datasets, and Codes are available
at https://yongchao98.github.io/CodeSteer/.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 15:44:47 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 15:54:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Yongchao",
""
],
[
"Jhamtani",
"Harsh",
""
],
[
"Sharma",
"Srinagesh",
""
],
[
"Fan",
"Chuchu",
""
],
[
"Wang",
"Chi",
""
]
]
| TITLE: Steering Large Language Models between Code Execution and Textual
Reasoning
ABSTRACT: While a lot of recent research focuses on enhancing the textual reasoning
capabilities of Large Language Models (LLMs) by optimizing the multi-agent
framework or reasoning chains, several benchmark tasks can be solved with 100\%
success through direct coding, which is more scalable and avoids the
computational overhead associated with textual iterating and searching. Textual
reasoning has inherent limitations in solving tasks with challenges in math,
logic, optimization, and search, which are unlikely to be solved by simply
scaling up the model and data size. The recently released OpenAI GPT Code
Interpreter and multi-agent frameworks such as AutoGen have demonstrated
remarkable proficiency in integrating code generation and execution to solve
complex tasks using LLMs. However, based on our experiments on 7 existing
popular methods for steering code/text generation in both single- and
multi-turn settings with 14 tasks and 6 types of LLMs (including the new
O1-preview), currently there is no optimal method to correctly steer LLMs to
write code when needed. We discover some interesting patterns in when models
use code vs. textual reasoning as task complexity and model size evolve, which
even result in an astonishing inverse scaling behavior. We also discover that
results from LLM-written code are not always better than using
textual reasoning, even if the task could be solved through code. To mitigate
the above issues, we propose three methods to better steer LLM code/text
generation and achieve a notable improvement. The costs of token lengths and
runtime are thoroughly discussed for all the methods. We believe the problem of
steering LLM code/text generation is critical for future research and has much
space for further improvement. Project Page, Datasets, and Codes are available
at https://yongchao98.github.io/CodeSteer/.
| no_new_dataset | 0.946597 |
2410.03878 | Yue Zhang | Yue Zhang, Zhiyang Xu, Ying Shen, Parisa Kordjamshidi, Lifu Huang | SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language
Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating the 3D world into large language models (3D-based LLMs) has been
a promising research direction for 3D scene understanding. However, current
3D-based LLMs fall short in situated understanding due to two key limitations:
1) existing 3D datasets are constructed from a global perspective of the 3D
scenes and lack situated context. 2) the architectures of existing 3D-based
LLMs lack explicit alignment between the spatial representations of 3D scenes
and natural language, limiting their performance in tasks requiring precise
spatial reasoning. We address these issues by introducing a scalable situated
3D dataset, named Spartun3D, that incorporates various situated spatial
reasoning tasks. Furthermore, we propose Spartun3D-LLM, built on an existing
3D-based LLM but integrated with a novel situated spatial alignment module,
aiming to enhance the alignment between 3D visual representations and their
corresponding textual descriptions. Experimental results demonstrate that both
our proposed dataset and alignment module significantly enhance the situated
spatial understanding of 3D-based LLMs.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 19:22:20 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 15:22:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Yue",
""
],
[
"Xu",
"Zhiyang",
""
],
[
"Shen",
"Ying",
""
],
[
"Kordjamshidi",
"Parisa",
""
],
[
"Huang",
"Lifu",
""
]
]
| TITLE: SPARTUN3D: Situated Spatial Understanding of 3D World in Large Language
Models
ABSTRACT: Integrating the 3D world into large language models (3D-based LLMs) has been
a promising research direction for 3D scene understanding. However, current
3D-based LLMs fall short in situated understanding due to two key limitations:
1) existing 3D datasets are constructed from a global perspective of the 3D
scenes and lack situated context. 2) the architectures of existing 3D-based
LLMs lack explicit alignment between the spatial representations of 3D scenes
and natural language, limiting their performance in tasks requiring precise
spatial reasoning. We address these issues by introducing a scalable situated
3D dataset, named Spartun3D, that incorporates various situated spatial
reasoning tasks. Furthermore, we propose Spartun3D-LLM, built on an existing
3D-based LLM but integrated with a novel situated spatial alignment module,
aiming to enhance the alignment between 3D visual representations and their
corresponding textual descriptions. Experimental results demonstrate that both
our proposed dataset and alignment module significantly enhance the situated
spatial understanding of 3D-based LLMs.
| new_dataset | 0.966156 |
2410.04343 | Zhenrui Yue | Zhenrui Yue, Honglei Zhuang, Aijun Bai, Kai Hui, Rolf Jagerman, Hansi
Zeng, Zhen Qin, Dong Wang, Xuanhui Wang, Michael Bendersky | Inference Scaling for Long-Context Retrieval Augmented Generation | ICLR 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The scaling of inference computation has unlocked the potential of
long-context large language models (LLMs) across diverse settings. For
knowledge-intensive tasks, the increased compute is often allocated to
incorporate more external knowledge. However, without effectively utilizing
such knowledge, solely expanding context does not always enhance performance.
In this work, we investigate inference scaling for retrieval augmented
generation (RAG), exploring the combination of multiple strategies beyond
simply increasing the quantity of knowledge, including in-context learning and
iterative prompting. These strategies provide additional flexibility to scale
test-time computation (e.g., by increasing retrieved documents or generation
steps), thereby enhancing LLMs' ability to effectively acquire and utilize
contextual information. We address two key questions: (1) How does RAG
performance benefit from the scaling of inference computation when optimally
configured? (2) Can we predict the optimal test-time compute allocation for a
given budget by modeling the relationship between RAG performance and inference
parameters? Our observations reveal that increasing inference computation leads
to nearly linear gains in RAG performance when optimally allocated, a
relationship we describe as the inference scaling laws for RAG. Building on
this, we further develop the computation allocation model to estimate RAG
performance across different inference configurations. The model predicts
optimal inference parameters under various computation constraints, which align
closely with the experimental results. By applying these optimal
configurations, we demonstrate that scaling inference compute on long-context
LLMs achieves up to 58.9% gains on benchmark datasets compared to standard RAG.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 03:42:15 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 19:44:37 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yue",
"Zhenrui",
""
],
[
"Zhuang",
"Honglei",
""
],
[
"Bai",
"Aijun",
""
],
[
"Hui",
"Kai",
""
],
[
"Jagerman",
"Rolf",
""
],
[
"Zeng",
"Hansi",
""
],
[
"Qin",
"Zhen",
""
],
[
"Wang",
"Dong",
""
],
[
"Wang",
"Xuanhui",
""
],
[
"Bendersky",
"Michael",
""
]
]
| TITLE: Inference Scaling for Long-Context Retrieval Augmented Generation
ABSTRACT: The scaling of inference computation has unlocked the potential of
long-context large language models (LLMs) across diverse settings. For
knowledge-intensive tasks, the increased compute is often allocated to
incorporate more external knowledge. However, without effectively utilizing
such knowledge, solely expanding context does not always enhance performance.
In this work, we investigate inference scaling for retrieval augmented
generation (RAG), exploring the combination of multiple strategies beyond
simply increasing the quantity of knowledge, including in-context learning and
iterative prompting. These strategies provide additional flexibility to scale
test-time computation (e.g., by increasing retrieved documents or generation
steps), thereby enhancing LLMs' ability to effectively acquire and utilize
contextual information. We address two key questions: (1) How does RAG
performance benefit from the scaling of inference computation when optimally
configured? (2) Can we predict the optimal test-time compute allocation for a
given budget by modeling the relationship between RAG performance and inference
parameters? Our observations reveal that increasing inference computation leads
to nearly linear gains in RAG performance when optimally allocated, a
relationship we describe as the inference scaling laws for RAG. Building on
this, we further develop the computation allocation model to estimate RAG
performance across different inference configurations. The model predicts
optimal inference parameters under various computation constraints, which align
closely with the experimental results. By applying these optimal
configurations, we demonstrate that scaling inference compute on long-context
LLMs achieves up to 58.9% gains on benchmark datasets compared to standard RAG.
| no_new_dataset | 0.947088 |
2410.04642 | Alexander Atanasov | Alexander Atanasov, Alexandru Meterez, James B. Simon, Cengiz Pehlevan | The Optimization Landscape of SGD Across the Feature Learning Strength | ICLR 2025 Final Copy, 40 Pages, 45 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider neural networks (NNs) where the final layer is down-scaled by a
fixed hyperparameter $\gamma$. Recent work has identified $\gamma$ as
controlling the strength of feature learning. As $\gamma$ increases, network
evolution changes from "lazy" kernel dynamics to "rich" feature-learning
dynamics, with a host of associated benefits including improved performance on
common tasks. In this work, we conduct a thorough empirical investigation of
the effect of scaling $\gamma$ across a variety of models and datasets in the
online training setting. We first examine the interaction of $\gamma$ with the
learning rate $\eta$, identifying several scaling regimes in the
$\gamma$-$\eta$ plane which we explain theoretically using a simple model. We
find that the optimal learning rate $\eta^*$ scales non-trivially with
$\gamma$. In particular, $\eta^* \propto \gamma^2$ when $\gamma \ll 1$ and
$\eta^* \propto \gamma^{2/L}$ when $\gamma \gg 1$ for a feed-forward network of
depth $L$. Using this optimal learning rate scaling, we proceed with an
empirical study of the under-explored "ultra-rich" $\gamma \gg 1$ regime. We
find that networks in this regime display characteristic loss curves, starting
with a long plateau followed by a drop-off, sometimes followed by one or more
additional staircase steps. We find networks of different large $\gamma$ values
optimize along similar trajectories up to a reparameterization of time. We
further find that optimal online performance is often found at large $\gamma$
and could be missed if this hyperparameter is not tuned. Our findings indicate
that analytical study of the large-$\gamma$ limit may yield useful insights
into the dynamics of representation learning in performant models.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2024 22:30:14 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2024 12:28:22 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 18:16:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Atanasov",
"Alexander",
""
],
[
"Meterez",
"Alexandru",
""
],
[
"Simon",
"James B.",
""
],
[
"Pehlevan",
"Cengiz",
""
]
]
| TITLE: The Optimization Landscape of SGD Across the Feature Learning Strength
ABSTRACT: We consider neural networks (NNs) where the final layer is down-scaled by a
fixed hyperparameter $\gamma$. Recent work has identified $\gamma$ as
controlling the strength of feature learning. As $\gamma$ increases, network
evolution changes from "lazy" kernel dynamics to "rich" feature-learning
dynamics, with a host of associated benefits including improved performance on
common tasks. In this work, we conduct a thorough empirical investigation of
the effect of scaling $\gamma$ across a variety of models and datasets in the
online training setting. We first examine the interaction of $\gamma$ with the
learning rate $\eta$, identifying several scaling regimes in the
$\gamma$-$\eta$ plane which we explain theoretically using a simple model. We
find that the optimal learning rate $\eta^*$ scales non-trivially with
$\gamma$. In particular, $\eta^* \propto \gamma^2$ when $\gamma \ll 1$ and
$\eta^* \propto \gamma^{2/L}$ when $\gamma \gg 1$ for a feed-forward network of
depth $L$. Using this optimal learning rate scaling, we proceed with an
empirical study of the under-explored "ultra-rich" $\gamma \gg 1$ regime. We
find that networks in this regime display characteristic loss curves, starting
with a long plateau followed by a drop-off, sometimes followed by one or more
additional staircase steps. We find networks of different large $\gamma$ values
optimize along similar trajectories up to a reparameterization of time. We
further find that optimal online performance is often found at large $\gamma$
and could be missed if this hyperparameter is not tuned. Our findings indicate
that analytical study of the large-$\gamma$ limit may yield useful insights
into the dynamics of representation learning in performant models.
| no_new_dataset | 0.941922 |
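The record above varies a fixed down-scaling factor $\gamma$ on the final layer and reports optimal learning-rate scalings $\eta^* \propto \gamma^2$ for $\gamma \ll 1$ and $\eta^* \propto \gamma^{2/L}$ for $\gamma \gg 1$. The PyTorch sketch below shows one way such a $\gamma$-scaled network and learning-rate rule could be wired up, using only the scalings stated in the abstract; the architecture, base learning rate, and function names are illustrative assumptions rather than the authors' setup.

```python
import torch
import torch.nn as nn

class GammaScaledMLP(nn.Module):
    """Feed-forward network whose final output is multiplied by a fixed gamma,
    the feature-learning-strength knob discussed in the record above."""
    def __init__(self, d_in, d_hidden, d_out, depth, gamma):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth - 1):
            layers += [nn.Linear(d, d_hidden), nn.ReLU()]
            d = d_hidden
        layers.append(nn.Linear(d, d_out))
        self.net = nn.Sequential(*layers)
        self.gamma = gamma

    def forward(self, x):
        return self.gamma * self.net(x)

def scaled_lr(base_lr, gamma, depth):
    """Learning-rate rescaling following the regimes reported in the abstract:
    eta* ~ gamma**2 for gamma << 1 and eta* ~ gamma**(2/depth) for gamma >> 1."""
    return base_lr * (gamma ** 2 if gamma <= 1.0 else gamma ** (2.0 / depth))

model = GammaScaledMLP(d_in=32, d_hidden=128, d_out=1, depth=4, gamma=8.0)
optimizer = torch.optim.SGD(model.parameters(), lr=scaled_lr(0.1, gamma=8.0, depth=4))
```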
2410.04660 | Xiaorui Su | Xiaorui Su, Yibo Wang, Shanghua Gao, Xiaolong Liu, Valentina
Giunchiglia, Djork-Arn\'e Clevert, and Marinka Zitnik | KGARevion: An AI Agent for Knowledge-Intensive Biomedical QA | null | ICLR2025 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Biomedical reasoning integrates structured, codified knowledge with tacit,
experience-driven insights. Depending on the context, quantity, and nature of
available evidence, researchers and clinicians use diverse strategies,
including rule-based, prototype-based, and case-based reasoning. Effective
medical AI models must handle this complexity while ensuring reliability and
adaptability. We introduce KGARevion, a knowledge graph-based agent that
answers knowledge-intensive questions. Upon receiving a query, KGARevion
generates relevant triplets by leveraging the latent knowledge embedded in a
large language model. It then verifies these triplets against a grounded
knowledge graph, filtering out errors and retaining only accurate, contextually
relevant information for the final answer. This multi-step process strengthens
reasoning, adapts to different models of medical inference, and outperforms
retrieval-augmented generation-based approaches that lack effective
verification mechanisms. Evaluations on medical QA benchmarks show that
KGARevion improves accuracy by over 5.2% over 15 models in handling complex
medical queries. To further assess its effectiveness, we curated three new
medical QA datasets with varying levels of semantic complexity, where KGARevion
improved accuracy by 10.4%. The agent integrates with different LLMs and
biomedical knowledge graphs for broad applicability across knowledge-intensive
tasks. We evaluated KGARevion on AfriMed-QA, a newly introduced dataset focused
on African healthcare, demonstrating its strong zero-shot generalization to
underrepresented medical contexts.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 00:17:37 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 18:23:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Su",
"Xiaorui",
""
],
[
"Wang",
"Yibo",
""
],
[
"Gao",
"Shanghua",
""
],
[
"Liu",
"Xiaolong",
""
],
[
"Giunchiglia",
"Valentina",
""
],
[
"Clevert",
"Djork-Arné",
""
],
[
"Zitnik",
"Marinka",
""
]
]
| TITLE: KGARevion: An AI Agent for Knowledge-Intensive Biomedical QA
ABSTRACT: Biomedical reasoning integrates structured, codified knowledge with tacit,
experience-driven insights. Depending on the context, quantity, and nature of
available evidence, researchers and clinicians use diverse strategies,
including rule-based, prototype-based, and case-based reasoning. Effective
medical AI models must handle this complexity while ensuring reliability and
adaptability. We introduce KGARevion, a knowledge graph-based agent that
answers knowledge-intensive questions. Upon receiving a query, KGARevion
generates relevant triplets by leveraging the latent knowledge embedded in a
large language model. It then verifies these triplets against a grounded
knowledge graph, filtering out errors and retaining only accurate, contextually
relevant information for the final answer. This multi-step process strengthens
reasoning, adapts to different models of medical inference, and outperforms
retrieval-augmented generation-based approaches that lack effective
verification mechanisms. Evaluations on medical QA benchmarks show that
KGARevion improves accuracy by over 5.2% over 15 models in handling complex
medical queries. To further assess its effectiveness, we curated three new
medical QA datasets with varying levels of semantic complexity, where KGARevion
improved accuracy by 10.4%. The agent integrates with different LLMs and
biomedical knowledge graphs for broad applicability across knowledge-intensive
tasks. We evaluated KGARevion on AfriMed-QA, a newly introduced dataset focused
on African healthcare, demonstrating its strong zero-shot generalization to
underrepresented medical contexts.
| new_dataset | 0.963984 |
2410.04810 | Haokun Chen | Haokun Chen, Hang Li, Yao Zhang, Jinhe Bi, Gengyuan Zhang, Yueqi
Zhang, Philip Torr, Jindong Gu, Denis Krompass, Volker Tresp | FedBiP: Heterogeneous One-Shot Federated Learning with Personalized
Latent Diffusion Models | CVPR 2025 | null | null | null | cs.LG cs.CV cs.DC cs.MM | http://creativecommons.org/licenses/by/4.0/ | One-Shot Federated Learning (OSFL), a special decentralized machine learning
paradigm, has recently gained significant attention. OSFL requires only a
single round of client data or model upload, which reduces communication costs
and mitigates privacy threats compared to traditional FL. Despite these
promising prospects, existing methods face challenges due to client data
heterogeneity and limited data quantity when applied to real-world OSFL
systems. Recently, Latent Diffusion Models (LDM) have shown remarkable
advancements in synthesizing high-quality images through pretraining on
large-scale datasets, thereby presenting a potential solution to overcome these
issues. However, directly applying pretrained LDM to heterogeneous OSFL results
in significant distribution shifts in synthetic data, leading to performance
degradation in classification models trained on such data. This issue is
particularly pronounced in rare domains, such as medical imaging, which are
underrepresented in LDM's pretraining data. To address this challenge, we
propose Federated Bi-Level Personalization (FedBiP), which personalizes the
pretrained LDM at both instance-level and concept-level. Hereby, FedBiP
synthesizes images following the client's local data distribution without
compromising the privacy regulations. FedBiP is also the first approach to
simultaneously address feature space heterogeneity and client data scarcity in
OSFL. Our method is validated through extensive experiments on three OSFL
benchmarks with feature space heterogeneity, as well as on challenging medical
and satellite image datasets with label heterogeneity. The results demonstrate
the effectiveness of FedBiP, which substantially outperforms other OSFL
methods.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 07:45:18 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 17:18:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Haokun",
""
],
[
"Li",
"Hang",
""
],
[
"Zhang",
"Yao",
""
],
[
"Bi",
"Jinhe",
""
],
[
"Zhang",
"Gengyuan",
""
],
[
"Zhang",
"Yueqi",
""
],
[
"Torr",
"Philip",
""
],
[
"Gu",
"Jindong",
""
],
[
"Krompass",
"Denis",
""
],
[
"Tresp",
"Volker",
""
]
]
| TITLE: FedBiP: Heterogeneous One-Shot Federated Learning with Personalized
Latent Diffusion Models
ABSTRACT: One-Shot Federated Learning (OSFL), a special decentralized machine learning
paradigm, has recently gained significant attention. OSFL requires only a
single round of client data or model upload, which reduces communication costs
and mitigates privacy threats compared to traditional FL. Despite these
promising prospects, existing methods face challenges due to client data
heterogeneity and limited data quantity when applied to real-world OSFL
systems. Recently, Latent Diffusion Models (LDM) have shown remarkable
advancements in synthesizing high-quality images through pretraining on
large-scale datasets, thereby presenting a potential solution to overcome these
issues. However, directly applying pretrained LDM to heterogeneous OSFL results
in significant distribution shifts in synthetic data, leading to performance
degradation in classification models trained on such data. This issue is
particularly pronounced in rare domains, such as medical imaging, which are
underrepresented in LDM's pretraining data. To address this challenge, we
propose Federated Bi-Level Personalization (FedBiP), which personalizes the
pretrained LDM at both instance-level and concept-level. Hereby, FedBiP
synthesizes images following the client's local data distribution without
compromising the privacy regulations. FedBiP is also the first approach to
simultaneously address feature space heterogeneity and client data scarcity in
OSFL. Our method is validated through extensive experiments on three OSFL
benchmarks with feature space heterogeneity, as well as on challenging medical
and satellite image datasets with label heterogeneity. The results demonstrate
the effectiveness of FedBiP, which substantially outperforms other OSFL
methods.
| no_new_dataset | 0.952131 |
2410.04870 | Bingrui Li | Bingrui Li, Wei Huang, Andi Han, Zhanpeng Zhou, Taiji Suzuki, Jun Zhu,
Jianfei Chen | On the Optimization and Generalization of Two-layer Transformers with
Sign Gradient Descent | 79 pages, 19 figures, ICLR 2025 Spotlight | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The Adam optimizer is widely used for transformer optimization in practice,
which makes understanding the underlying optimization mechanisms an important
problem. However, due to the Adam's complexity, theoretical analysis of how it
optimizes transformers remains a challenging task. Fortunately, Sign Gradient
Descent (SignGD) serves as an effective surrogate for Adam. Despite its
simplicity, theoretical understanding of how SignGD optimizes transformers
still lags behind. In this work, we study how SignGD optimizes a two-layer
transformer -- consisting of a softmax attention layer with trainable query-key
parameterization followed by a linear layer -- on a linearly separable noisy
dataset. We identify four stages in the training dynamics, each exhibiting
intriguing behaviors. Based on the training dynamics, we prove the fast
convergence but poor generalization of the learned transformer on the noisy
dataset. We also show that Adam behaves similarly to SignGD in terms of both
optimization and generalization in this setting. Additionally, we find that the
poor generalization of SignGD is not solely due to data noise, suggesting that
both SignGD and Adam require high-quality data for real-world tasks. Finally,
experiments on synthetic and real-world datasets empirically support our
theoretical results.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 09:36:43 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 10:01:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Bingrui",
""
],
[
"Huang",
"Wei",
""
],
[
"Han",
"Andi",
""
],
[
"Zhou",
"Zhanpeng",
""
],
[
"Suzuki",
"Taiji",
""
],
[
"Zhu",
"Jun",
""
],
[
"Chen",
"Jianfei",
""
]
]
| TITLE: On the Optimization and Generalization of Two-layer Transformers with
Sign Gradient Descent
ABSTRACT: The Adam optimizer is widely used for transformer optimization in practice,
which makes understanding the underlying optimization mechanisms an important
problem. However, due to the Adam's complexity, theoretical analysis of how it
optimizes transformers remains a challenging task. Fortunately, Sign Gradient
Descent (SignGD) serves as an effective surrogate for Adam. Despite its
simplicity, theoretical understanding of how SignGD optimizes transformers
still lags behind. In this work, we study how SignGD optimizes a two-layer
transformer -- consisting of a softmax attention layer with trainable query-key
parameterization followed by a linear layer -- on a linearly separable noisy
dataset. We identify four stages in the training dynamics, each exhibiting
intriguing behaviors. Based on the training dynamics, we prove the fast
convergence but poor generalization of the learned transformer on the noisy
dataset. We also show that Adam behaves similarly to SignGD in terms of both
optimization and generalization in this setting. Additionally, we find that the
poor generalization of SignGD is not solely due to data noise, suggesting that
both SignGD and Adam require high-quality data for real-world tasks. Finally,
experiments on synthetic and real-world datasets empirically support our
theoretical results.
| no_new_dataset | 0.944228 |
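The record above analyzes Sign Gradient Descent (SignGD), in which each parameter moves by a fixed step in the direction of the sign of its gradient, as a surrogate for Adam. For reference, a minimal NumPy sketch of the SignGD update on a toy quadratic loss follows; the learning rate and loss function are arbitrary choices for illustration only.

```python
import numpy as np

def signgd_step(params, grads, lr):
    """One step of Sign Gradient Descent: p <- p - lr * sign(grad(p))."""
    return {name: p - lr * np.sign(grads[name]) for name, p in params.items()}

# Toy usage on the quadratic loss L(w) = 0.5 * (w - 3)^2, whose gradient is (w - 3).
params = {"w": np.array(0.0)}
for _ in range(100):
    grads = {"w": params["w"] - 3.0}
    params = signgd_step(params, grads, lr=0.05)
print(params["w"])  # approaches 3, then oscillates within about one step size (lr)
```

Because the step ignores the gradient magnitude, SignGD converges quickly but cannot settle much closer than roughly `lr` to the minimizer, one of the behaviors that makes it a convenient stand-in for Adam in this kind of analysis.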
2410.05243 | Boyu Gou | Boyu Gou, Ruohan Wang, Boyuan Zheng, Yanan Xie, Cheng Chang, Yiheng
Shu, Huan Sun, Yu Su | Navigating the Digital World as Humans Do: Universal Visual Grounding
for GUI Agents | Accepted to ICLR 2025 (Oral) | null | null | null | cs.AI cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) are transforming the capabilities of
graphical user interface (GUI) agents, facilitating their transition from
controlled simulations to complex, real-world applications across various
platforms. However, the effectiveness of these agents hinges on the robustness
of their grounding capability. Current GUI agents predominantly utilize
text-based representations such as HTML or accessibility trees, which, despite
their utility, often introduce noise, incompleteness, and increased
computational overhead. In this paper, we advocate a human-like embodiment for
GUI agents that perceive the environment entirely visually and directly perform
pixel-level operations on the GUI. The key is visual grounding models that can
accurately map diverse referring expressions of GUI elements to their
coordinates on the GUI across different platforms. We show that a simple
recipe, which includes web-based synthetic data and slight adaptation of the
LLaVA architecture, is surprisingly effective for training such visual
grounding models. We collect the largest dataset for GUI visual grounding so
far, containing 10M GUI elements and their referring expressions over 1.3M
screenshots, and use it to train UGround, a strong universal visual grounding
model for GUI agents. Empirical results on six benchmarks spanning three
categories (grounding, offline agent, and online agent) show that 1) UGround
substantially outperforms existing visual grounding models for GUI agents, by
up to 20% absolute, and 2) agents with UGround outperform state-of-the-art
agents, despite the fact that existing agents use additional text-based input
while ours only uses visual perception. These results provide strong support
for the feasibility and promises of GUI agents that navigate the digital world
as humans do.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 17:47:50 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 18:39:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gou",
"Boyu",
""
],
[
"Wang",
"Ruohan",
""
],
[
"Zheng",
"Boyuan",
""
],
[
"Xie",
"Yanan",
""
],
[
"Chang",
"Cheng",
""
],
[
"Shu",
"Yiheng",
""
],
[
"Sun",
"Huan",
""
],
[
"Su",
"Yu",
""
]
]
| TITLE: Navigating the Digital World as Humans Do: Universal Visual Grounding
for GUI Agents
ABSTRACT: Multimodal large language models (MLLMs) are transforming the capabilities of
graphical user interface (GUI) agents, facilitating their transition from
controlled simulations to complex, real-world applications across various
platforms. However, the effectiveness of these agents hinges on the robustness
of their grounding capability. Current GUI agents predominantly utilize
text-based representations such as HTML or accessibility trees, which, despite
their utility, often introduce noise, incompleteness, and increased
computational overhead. In this paper, we advocate a human-like embodiment for
GUI agents that perceive the environment entirely visually and directly perform
pixel-level operations on the GUI. The key is visual grounding models that can
accurately map diverse referring expressions of GUI elements to their
coordinates on the GUI across different platforms. We show that a simple
recipe, which includes web-based synthetic data and slight adaptation of the
LLaVA architecture, is surprisingly effective for training such visual
grounding models. We collect the largest dataset for GUI visual grounding so
far, containing 10M GUI elements and their referring expressions over 1.3M
screenshots, and use it to train UGround, a strong universal visual grounding
model for GUI agents. Empirical results on six benchmarks spanning three
categories (grounding, offline agent, and online agent) show that 1) UGround
substantially outperforms existing visual grounding models for GUI agents, by
up to 20% absolute, and 2) agents with UGround outperform state-of-the-art
agents, despite the fact that existing agents use additional text-based input
while ours only uses visual perception. These results provide strong support
for the feasibility and promises of GUI agents that navigate the digital world
as humans do.
| new_dataset | 0.951997 |
2410.05643 | Yongxin Guo | Yongxin Guo, Jingyu Liu, Mingda Li, Qingbin Liu, Xi Chen, Xiaoying
Tang | TRACE: Temporal Grounding Video LLM via Causal Event Modeling | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video Temporal Grounding (VTG) is a crucial capability for video
understanding models and plays a vital role in downstream tasks such as video
browsing and editing. To effectively handle various tasks simultaneously and
enable zero-shot prediction, there is a growing trend in employing video LLMs
for VTG tasks. However, current video LLM-based methods rely exclusively on
natural language generation, lacking the ability to model the clear structure
inherent in videos, which restricts their effectiveness in tackling VTG tasks.
To address this issue, this paper first formally introduces the causal event
modeling framework, which represents video LLM outputs as sequences of events,
and predicts the current event using previous events, video inputs, and textual
instructions. Each event consists of three components: timestamps, salient
scores, and textual captions. We then propose a novel task-interleaved video
LLM called TRACE to effectively implement the causal event modeling framework
in practice. TRACE processes visual frames, timestamps, salient scores, and
text as distinct tasks, employing various encoders and decoding heads for each.
Task tokens are arranged in an interleaved sequence according to the causal
event modeling framework's formulation. Extensive experiments on various VTG
tasks and datasets demonstrate the superior performance of TRACE compared to
state-of-the-art video LLMs. Our model and code are available at
https://github.com/gyxxyg/TRACE.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 02:46:30 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Nov 2024 08:58:14 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 10:28:30 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Yongxin",
""
],
[
"Liu",
"Jingyu",
""
],
[
"Li",
"Mingda",
""
],
[
"Liu",
"Qingbin",
""
],
[
"Chen",
"Xi",
""
],
[
"Tang",
"Xiaoying",
""
]
]
| TITLE: TRACE: Temporal Grounding Video LLM via Causal Event Modeling
ABSTRACT: Video Temporal Grounding (VTG) is a crucial capability for video
understanding models and plays a vital role in downstream tasks such as video
browsing and editing. To effectively handle various tasks simultaneously and
enable zero-shot prediction, there is a growing trend in employing video LLMs
for VTG tasks. However, current video LLM-based methods rely exclusively on
natural language generation, lacking the ability to model the clear structure
inherent in videos, which restricts their effectiveness in tackling VTG tasks.
To address this issue, this paper first formally introduces the causal event
modeling framework, which represents video LLM outputs as sequences of events,
and predicts the current event using previous events, video inputs, and textual
instructions. Each event consists of three components: timestamps, salient
scores, and textual captions. We then propose a novel task-interleaved video
LLM called TRACE to effectively implement the causal event modeling framework
in practice. TRACE processes visual frames, timestamps, salient scores, and
text as distinct tasks, employing various encoders and decoding heads for each.
Task tokens are arranged in an interleaved sequence according to the causal
event modeling framework's formulation. Extensive experiments on various VTG
tasks and datasets demonstrate the superior performance of TRACE compared to
state-of-the-art video LLMs. Our model and code are available at
https://github.com/gyxxyg/TRACE.
| no_new_dataset | 0.949902 |
2410.06232 | William Dorrell Mr | Will Dorrell and Kyle Hsu and Luke Hollingsworth and Jin Hwa Lee and
Jiajun Wu and Chelsea Finn and Peter E Latham and Tim EJ Behrens and James CR
Whittington | Range, not Independence, Drives Modularity in Biologically Inspired
Representations | 47 pages, 17 figures. WD and KH contributed equally; LH and JHL
contributed equally | null | null | null | q-bio.NC cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Why do biological and artificial neurons sometimes modularise, each encoding
a single meaningful variable, and sometimes entangle their representation of
many variables? In this work, we develop a theory of when biologically inspired
networks -- those that are nonnegative and energy efficient -- modularise their
representation of source variables (sources). We derive necessary and
sufficient conditions on a sample of sources that determine whether the neurons
in an optimal biologically-inspired linear autoencoder modularise. Our theory
applies to any dataset, extending far beyond the case of statistical
independence studied in previous work. Rather we show that sources modularise
if their support is ``sufficiently spread''. From this theory, we extract and
validate predictions in a variety of empirical studies on how data distribution
affects modularisation in nonlinear feedforward and recurrent neural networks
trained on supervised and unsupervised tasks. Furthermore, we apply these ideas
to neuroscience data, showing that range independence can be used to understand
the mixing or modularising of spatial and reward information in entorhinal
recordings in seemingly conflicting experiments. Further, we use these results
to suggest alternate origins of mixed-selectivity, beyond the predominant
theory of flexible nonlinear classification. In sum, our theory prescribes
precise conditions on when neural activities modularise, providing tools for
inducing and elucidating modular representations in brains and machines.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 17:41:37 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jan 2025 09:20:48 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 20:40:21 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dorrell",
"Will",
""
],
[
"Hsu",
"Kyle",
""
],
[
"Hollingsworth",
"Luke",
""
],
[
"Lee",
"Jin Hwa",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Latham",
"Peter E",
""
],
[
"Behrens",
"Tim EJ",
""
],
[
"Whittington",
"James CR",
""
]
]
| TITLE: Range, not Independence, Drives Modularity in Biologically Inspired
Representations
ABSTRACT: Why do biological and artificial neurons sometimes modularise, each encoding
a single meaningful variable, and sometimes entangle their representation of
many variables? In this work, we develop a theory of when biologically inspired
networks -- those that are nonnegative and energy efficient -- modularise their
representation of source variables (sources). We derive necessary and
sufficient conditions on a sample of sources that determine whether the neurons
in an optimal biologically-inspired linear autoencoder modularise. Our theory
applies to any dataset, extending far beyond the case of statistical
independence studied in previous work. Rather we show that sources modularise
if their support is ``sufficiently spread''. From this theory, we extract and
validate predictions in a variety of empirical studies on how data distribution
affects modularisation in nonlinear feedforward and recurrent neural networks
trained on supervised and unsupervised tasks. Furthermore, we apply these ideas
to neuroscience data, showing that range independence can be used to understand
the mixing or modularising of spatial and reward information in entorhinal
recordings in seemingly conflicting experiments. Further, we use these results
to suggest alternate origins of mixed-selectivity, beyond the predominant
theory of flexible nonlinear classification. In sum, our theory prescribes
precise conditions on when neural activities modularise, providing tools for
inducing and elucidating modular representations in brains and machines.
| no_new_dataset | 0.947914 |
2410.06526 | Kaijing Ma | Kaijing Ma, Xinrun Du, Yunran Wang, Haoran Zhang, Zhoufutu Wen,
Xingwei Qu, Jian Yang, Jiaheng Liu, Minghao Liu, Xiang Yue, Wenhao Huang, Ge
Zhang | KOR-Bench: Benchmarking Language Models on Knowledge-Orthogonal
Reasoning Tasks | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce Knowledge-Orthogonal Reasoning (KOR), a concept
aimed at minimizing reliance on domain-specific knowledge, enabling more
accurate evaluation of models' reasoning abilities in out-of-distribution
settings. Based on this concept, we propose the Knowledge-Orthogonal Reasoning
Benchmark (KOR-Bench), encompassing five task categories: Operation, Logic,
Cipher, Puzzle, and Counterfactual. KOR-Bench emphasizes models' effectiveness
in applying new rule descriptions to solve novel rule-driven questions.
O1-Preview and O1-Mini achieve accuracies of 72.88% and 70.16%, surpassing
Claude-3.5-Sonnet and GPT-4o (58.96% and 58.00%), highlighting the
effectiveness of KOR-Bench. We perform detailed analyses, identifying
bottlenecks in the Cipher task with Stepwise Prompting, where two rounds of
Self-Correction yield optimal results. We evaluate performance across three
integrated tasks, explore the impact of Tricks on the Puzzle task, and
visualize rule-focused attention. Additionally, we conduct an ablation study on
dataset size, benchmark correlations, and zero-shot and three-shot "only
questions" experiments. KOR-Bench aims to enhance reasoning evaluation and
support further research in this area.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 03:56:50 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Oct 2024 03:51:29 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 12:34:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ma",
"Kaijing",
""
],
[
"Du",
"Xinrun",
""
],
[
"Wang",
"Yunran",
""
],
[
"Zhang",
"Haoran",
""
],
[
"Wen",
"Zhoufutu",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Yang",
"Jian",
""
],
[
"Liu",
"Jiaheng",
""
],
[
"Liu",
"Minghao",
""
],
[
"Yue",
"Xiang",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Zhang",
"Ge",
""
]
]
| TITLE: KOR-Bench: Benchmarking Language Models on Knowledge-Orthogonal
Reasoning Tasks
ABSTRACT: In this paper, we introduce Knowledge-Orthogonal Reasoning (KOR), a concept
aimed at minimizing reliance on domain-specific knowledge, enabling more
accurate evaluation of models' reasoning abilities in out-of-distribution
settings. Based on this concept, we propose the Knowledge-Orthogonal Reasoning
Benchmark (KOR-Bench), encompassing five task categories: Operation, Logic,
Cipher, Puzzle, and Counterfactual. KOR-Bench emphasizes models' effectiveness
in applying new rule descriptions to solve novel rule-driven questions.
O1-Preview and O1-Mini achieve accuracies of 72.88% and 70.16%, surpassing
Claude-3.5-Sonnet and GPT-4o (58.96% and 58.00%), highlighting the
effectiveness of KOR-Bench. We perform detailed analyses, identifying
bottlenecks in the Cipher task with Stepwise Prompting, where two rounds of
Self-Correction yield optimal results. We evaluate performance across three
integrated tasks, explore the impact of Tricks on the Puzzle task, and
visualize rule-focused attention. Additionally, we conduct an ablation study on
dataset size, benchmark correlations, and zero-shot and three-shot "only
questions" experiments. KOR-Bench aims to enhance reasoning evaluation and
support further research in this area.
| no_new_dataset | 0.897111 |
2410.06614 | Stephen Hausler | Stephen Hausler and Peyman Moghadam | Pair-VPR: Place-Aware Pre-training and Contrastive Pair Classification
for Visual Place Recognition with Vision Transformers | null | null | 10.1109/LRA.2025.3546512 | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a novel joint training method for Visual Place
Recognition (VPR), which simultaneously learns a global descriptor and a pair
classifier for re-ranking. The pair classifier can predict whether a given pair
of images are from the same place or not. The network only comprises Vision
Transformer components for both the encoder and the pair classifier, and both
components are trained using their respective class tokens. In existing VPR
methods, typically the network is initialized using pre-trained weights from a
generic image dataset such as ImageNet. In this work we propose an alternative
pre-training strategy, by using Siamese Masked Image Modelling as a
pre-training task. We propose a Place-aware image sampling procedure from a
collection of large VPR datasets for pre-training our model, to learn visual
features tuned specifically for VPR. By re-using the Mask Image Modelling
encoder and decoder weights in the second stage of training, Pair-VPR can
achieve state-of-the-art VPR performance across five benchmark datasets with a
ViT-B encoder, along with further improvements in localization recall with
larger encoders. The Pair-VPR website is:
https://csiro-robotics.github.io/Pair-VPR.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 07:09:46 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 08:59:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hausler",
"Stephen",
""
],
[
"Moghadam",
"Peyman",
""
]
]
| TITLE: Pair-VPR: Place-Aware Pre-training and Contrastive Pair Classification
for Visual Place Recognition with Vision Transformers
ABSTRACT: In this work we propose a novel joint training method for Visual Place
Recognition (VPR), which simultaneously learns a global descriptor and a pair
classifier for re-ranking. The pair classifier can predict whether a given pair
of images are from the same place or not. The network only comprises Vision
Transformer components for both the encoder and the pair classifier, and both
components are trained using their respective class tokens. In existing VPR
methods, typically the network is initialized using pre-trained weights from a
generic image dataset such as ImageNet. In this work we propose an alternative
pre-training strategy, by using Siamese Masked Image Modelling as a
pre-training task. We propose a Place-aware image sampling procedure from a
collection of large VPR datasets for pre-training our model, to learn visual
features tuned specifically for VPR. By re-using the Masked Image Modelling
encoder and decoder weights in the second stage of training, Pair-VPR can
achieve state-of-the-art VPR performance across five benchmark datasets with a
ViT-B encoder, along with further improvements in localization recall with
larger encoders. The Pair-VPR website is:
https://csiro-robotics.github.io/Pair-VPR.
| no_new_dataset | 0.949201 |
2410.07672 | Yougang Lyu | Yougang Lyu, Lingyong Yan, Zihan Wang, Dawei Yin, Pengjie Ren, Maarten
de Rijke, Zhaochun Ren | MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference
Optimization | ICLR 2025 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As large language models (LLMs) are rapidly advancing and achieving
near-human capabilities on specific tasks, aligning them with human values is
becoming more urgent. In scenarios where LLMs outperform humans, we face a
weak-to-strong alignment problem where we need to effectively align strong
student LLMs through weak supervision generated by weak teachers. Existing
alignment methods mainly focus on strong-to-weak alignment and self-alignment
settings, and it is impractical to adapt them to the much harder weak-to-strong
alignment setting. To fill this gap, we propose a multi-agent contrastive
preference optimization (MACPO) framework. MACPO facilitates weak teachers and
strong students to learn from each other by iteratively reinforcing unfamiliar
positive behaviors while penalizing familiar negative ones. To achieve this, we
devise a mutual positive behavior augmentation strategy to encourage weak
teachers and strong students to learn from each other's positive behavior and
further provide higher quality positive behavior for the next iteration.
Additionally, we propose a hard negative behavior construction strategy to
induce weak teachers and strong students to generate familiar negative behavior
by fine-tuning on negative behavioral data. Experimental results on the HH-RLHF
and PKU-SafeRLHF datasets, evaluated using both automatic metrics and human
judgments, demonstrate that MACPO simultaneously improves the alignment
performance of strong students and weak teachers. Moreover, as the number of
weak teachers increases, MACPO achieves better weak-to-strong alignment
performance through more iteration optimization rounds.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 07:29:35 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 06:25:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lyu",
"Yougang",
""
],
[
"Yan",
"Lingyong",
""
],
[
"Wang",
"Zihan",
""
],
[
"Yin",
"Dawei",
""
],
[
"Ren",
"Pengjie",
""
],
[
"de Rijke",
"Maarten",
""
],
[
"Ren",
"Zhaochun",
""
]
]
| TITLE: MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference
Optimization
ABSTRACT: As large language models (LLMs) are rapidly advancing and achieving
near-human capabilities on specific tasks, aligning them with human values is
becoming more urgent. In scenarios where LLMs outperform humans, we face a
weak-to-strong alignment problem where we need to effectively align strong
student LLMs through weak supervision generated by weak teachers. Existing
alignment methods mainly focus on strong-to-weak alignment and self-alignment
settings, and it is impractical to adapt them to the much harder weak-to-strong
alignment setting. To fill this gap, we propose a multi-agent contrastive
preference optimization (MACPO) framework. MACPO facilitates weak teachers and
strong students to learn from each other by iteratively reinforcing unfamiliar
positive behaviors while penalizing familiar negative ones. To achieve this, we
devise a mutual positive behavior augmentation strategy to encourage weak
teachers and strong students to learn from each other's positive behavior and
further provide higher quality positive behavior for the next iteration.
Additionally, we propose a hard negative behavior construction strategy to
induce weak teachers and strong students to generate familiar negative behavior
by fine-tuning on negative behavioral data. Experimental results on the HH-RLHF
and PKU-SafeRLHF datasets, evaluated using both automatic metrics and human
judgments, demonstrate that MACPO simultaneously improves the alignment
performance of strong students and weak teachers. Moreover, as the number of
weak teachers increases, MACPO achieves better weak-to-strong alignment
performance through more iteration optimization rounds.
| no_new_dataset | 0.950457 |
2410.07864 | Songming Liu | Songming Liu, Lingxuan Wu, Bangguo Li, Hengkai Tan, Huayu Chen,
Zhengyi Wang, Ke Xu, Hang Su, Jun Zhu | RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation | 10 pages, conference | null | null | null | cs.RO cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bimanual manipulation is essential in robotics, yet developing foundation
models is extremely challenging due to the inherent complexity of coordinating
two robot arms (leading to multi-modal action distributions) and the scarcity
of training data. In this paper, we present the Robotics Diffusion Transformer
(RDT), a pioneering diffusion foundation model for bimanual manipulation. RDT
builds on diffusion models to effectively represent multi-modality, with
innovative designs of a scalable Transformer to deal with the heterogeneity of
multi-modal inputs and to capture the nonlinearity and high frequency of
robotic data. To address data scarcity, we further introduce a Physically
Interpretable Unified Action Space, which can unify the action representations
of various robots while preserving the physical meanings of original actions,
facilitating learning transferrable physical knowledge. With these designs, we
managed to pre-train RDT on the largest collection of multi-robot datasets to
date and scaled it up to 1.2B parameters, which is the largest diffusion-based
foundation model for robotic manipulation. We finally fine-tuned RDT on a
self-created multi-task bimanual dataset with over 6K episodes to refine its
manipulation capabilities. Experiments on real robots demonstrate that RDT
significantly outperforms existing methods. It exhibits zero-shot
generalization to unseen objects and scenes, understands and follows language
instructions, learns new skills with just 1~5 demonstrations, and effectively
handles complex, dexterous tasks. We refer to
https://rdt-robotics.github.io/rdt-robotics/ for the code and videos.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 12:33:46 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 08:57:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Songming",
""
],
[
"Wu",
"Lingxuan",
""
],
[
"Li",
"Bangguo",
""
],
[
"Tan",
"Hengkai",
""
],
[
"Chen",
"Huayu",
""
],
[
"Wang",
"Zhengyi",
""
],
[
"Xu",
"Ke",
""
],
[
"Su",
"Hang",
""
],
[
"Zhu",
"Jun",
""
]
]
| TITLE: RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation
ABSTRACT: Bimanual manipulation is essential in robotics, yet developing foundation
models is extremely challenging due to the inherent complexity of coordinating
two robot arms (leading to multi-modal action distributions) and the scarcity
of training data. In this paper, we present the Robotics Diffusion Transformer
(RDT), a pioneering diffusion foundation model for bimanual manipulation. RDT
builds on diffusion models to effectively represent multi-modality, with
innovative designs of a scalable Transformer to deal with the heterogeneity of
multi-modal inputs and to capture the nonlinearity and high frequency of
robotic data. To address data scarcity, we further introduce a Physically
Interpretable Unified Action Space, which can unify the action representations
of various robots while preserving the physical meanings of original actions,
facilitating learning transferrable physical knowledge. With these designs, we
managed to pre-train RDT on the largest collection of multi-robot datasets to
date and scaled it up to 1.2B parameters, which is the largest diffusion-based
foundation model for robotic manipulation. We finally fine-tuned RDT on a
self-created multi-task bimanual dataset with over 6K episodes to refine its
manipulation capabilities. Experiments on real robots demonstrate that RDT
significantly outperforms existing methods. It exhibits zero-shot
generalization to unseen objects and scenes, understands and follows language
instructions, learns new skills with just 1~5 demonstrations, and effectively
handles complex, dexterous tasks. We refer to
https://rdt-robotics.github.io/rdt-robotics/ for the code and videos.
| no_new_dataset | 0.938011 |
2410.08452 | Yagnik Bandyopadhyay | Yagnik Bandyopadhyay, Harshil Avlani, and Houlong L. Zhuang | Kolmogorov-Arnold Neural Networks for High-Entropy Alloys Design | null | null | 10.1088/1361-651X/adbb83 | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | A wide range of deep learning-based machine learning techniques are
extensively applied to the design of high-entropy alloys (HEAs), yielding
numerous valuable insights. Kolmogorov-Arnold Networks (KAN) is a recently
developed architecture that aims to improve both the accuracy and
interpretability of input features. In this work, we explore three different
datasets for HEA design and demonstrate the application of KAN for both
classification and regression models. In the first example, we use a KAN
classification model to predict the probability of single-phase formation in
high-entropy carbide ceramics based on various properties such as mixing
enthalpy and valence electron concentration. In the second example, we employ a
KAN regression model to predict the yield strength and ultimate tensile
strength of HEAs based on their chemical composition and process conditions
including annealing time, cold rolling percentage, and homogenization
temperature. The third example involves a KAN classification model to determine
whether a certain composition is an HEA or non-HEA, followed by a KAN regressor
model to predict the bulk modulus of the identified HEA, aiming to identify
HEAs with high bulk modulus. In all three examples, KAN either outperforms or
matches the performance of the multilayer perceptron (MLP) in terms of metrics
such as F1 score for classification and Mean Square Error (MSE) and coefficient
of determination (R2) for regression, demonstrating the efficacy of
KAN in handling both classification and regression tasks. We provide a
promising direction for future research to explore advanced machine learning
techniques, which lead to more accurate predictions and better interpretability
of complex materials, ultimately accelerating the discovery and optimization of
HEAs with desirable properties.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 01:48:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bandyopadhyay",
"Yagnik",
""
],
[
"Avlani",
"Harshil",
""
],
[
"Zhuang",
"Houlong L.",
""
]
]
| TITLE: Kolmogorov-Arnold Neural Networks for High-Entropy Alloys Design
ABSTRACT: A wide range of deep learning-based machine learning techniques are
extensively applied to the design of high-entropy alloys (HEAs), yielding
numerous valuable insights. Kolmogorov-Arnold Networks (KAN) is a recently
developed architecture that aims to improve both the accuracy and
interpretability of input features. In this work, we explore three different
datasets for HEA design and demonstrate the application of KAN for both
classification and regression models. In the first example, we use a KAN
classification model to predict the probability of single-phase formation in
high-entropy carbide ceramics based on various properties such as mixing
enthalpy and valence electron concentration. In the second example, we employ a
KAN regression model to predict the yield strength and ultimate tensile
strength of HEAs based on their chemical composition and process conditions
including annealing time, cold rolling percentage, and homogenization
temperature. The third example involves a KAN classification model to determine
whether a certain composition is an HEA or non-HEA, followed by a KAN regressor
model to predict the bulk modulus of the identified HEA, aiming to identify
HEAs with high bulk modulus. In all three examples, KAN either outperforms or
matches the performance of the multilayer perceptron (MLP) in terms of metrics
such as F1 score for classification and Mean Square Error (MSE) and coefficient
of determination (R2) for regression, demonstrating the efficacy of
KAN in handling both classification and regression tasks. We provide a
promising direction for future research to explore advanced machine learning
techniques, which lead to more accurate predictions and better interpretability
of complex materials, ultimately accelerating the discovery and optimization of
HEAs with desirable properties.
| no_new_dataset | 0.949716 |
2410.08454 | Yanxi Wang | Jiaxing Hao, Yanxi Wang, Zhigang Chang, Hongmin Gao, Zihao Cheng, Chen
Wu, Xin Zhao, Peiye Fang and Rachmat Muwardi | HorGait: A Hybrid Model for Accurate Gait Recognition in LiDAR Point
Cloud Planar Projections | null | null | 10.1109/ACCESS.2025.3547759 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gait recognition is a remote biometric technology that utilizes the dynamic
characteristics of human movement to identify individuals even under various
extreme lighting conditions. Due to the limitation in spatial perception
capability inherent in 2D gait representations, LiDAR can directly capture 3D
gait features and represent them as point clouds, reducing environmental and
lighting interference in recognition while significantly advancing privacy
protection. For complex 3D representations, shallow networks fail to achieve
accurate recognition, making vision Transformers the most prevalent method.
However, the prevalence of dumb patches has limited the widespread use of
Transformer architecture in gait recognition. This paper proposes a method
named HorGait, which utilizes a hybrid model with a Transformer architecture
for gait recognition on the planar projection of 3D point clouds from LiDAR.
Specifically, it employs a hybrid model structure called LHM Block to achieve
input adaptation, long-range, and high-order spatial interaction of the
Transformer architecture. Additionally, it uses large convolutional kernel CNNs
to segment the input representation, replacing attention windows to reduce dumb
patches. We conducted extensive experiments, and the results show that HorGait
achieves state-of-the-art performance among Transformer architecture methods on
the SUSTech1K dataset, verifying that the hybrid model can complete the full
Transformer process and perform better in point cloud planar projection. The
outstanding performance of HorGait offers new insights for the future
application of the Transformer architecture in gait recognition.
| [
{
"version": "v1",
"created": "Fri, 11 Oct 2024 02:12:41 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 01:59:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Hao",
"Jiaxing",
""
],
[
"Wang",
"Yanxi",
""
],
[
"Chang",
"Zhigang",
""
],
[
"Gao",
"Hongmin",
""
],
[
"Cheng",
"Zihao",
""
],
[
"Wu",
"Chen",
""
],
[
"Zhao",
"Xin",
""
],
[
"Fang",
"Peiye",
""
],
[
"Muwardi",
"Rachmat",
""
]
]
| TITLE: HorGait: A Hybrid Model for Accurate Gait Recognition in LiDAR Point
Cloud Planar Projections
ABSTRACT: Gait recognition is a remote biometric technology that utilizes the dynamic
characteristics of human movement to identify individuals even under various
extreme lighting conditions. Due to the limitation in spatial perception
capability inherent in 2D gait representations, LiDAR can directly capture 3D
gait features and represent them as point clouds, reducing environmental and
lighting interference in recognition while significantly advancing privacy
protection. For complex 3D representations, shallow networks fail to achieve
accurate recognition, making vision Transformers the most prevalent method.
However, the prevalence of dumb patches has limited the widespread use of
Transformer architecture in gait recognition. This paper proposes a method
named HorGait, which utilizes a hybrid model with a Transformer architecture
for gait recognition on the planar projection of 3D point clouds from LiDAR.
Specifically, it employs a hybrid model structure called LHM Block to achieve
input adaptation, long-range, and high-order spatial interaction of the
Transformer architecture. Additionally, it uses large convolutional kernel CNNs
to segment the input representation, replacing attention windows to reduce dumb
patches. We conducted extensive experiments, and the results show that HorGait
achieves state-of-the-art performance among Transformer architecture methods on
the SUSTech1K dataset, verifying that the hybrid model can complete the full
Transformer process and perform better in point cloud planar projection. The
outstanding performance of HorGait offers new insights for the future
application of the Transformer architecture in gait recognition.
| no_new_dataset | 0.949059 |
2410.09724 | Jixuan Leng | Jixuan Leng, Chengsong Huang, Banghua Zhu, Jiaxin Huang | Taming Overconfidence in LLMs: Reward Calibration in RLHF | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language model calibration refers to the alignment between the confidence of
the model and the actual performance of its responses. While previous studies
point out the overconfidence phenomenon in Large Language Models (LLMs) and
show that LLMs trained with Reinforcement Learning from Human Feedback (RLHF)
are overconfident with a more sharpened output probability, in this study, we
reveal that RLHF tends to lead models to express verbalized overconfidence in
their own responses. We investigate the underlying cause of this overconfidence
and demonstrate that reward models used for Proximal Policy Optimization (PPO)
exhibit inherent biases towards high-confidence scores regardless of the actual
quality of responses. Building upon this insight, we propose two PPO variants:
PPO-M: PPO with Calibrated Reward Modeling and PPO-C: PPO with Calibrated
Reward Calculation. PPO-M integrates explicit confidence scores in reward model
training, which calibrates reward models to better capture the alignment
between response quality and verbalized confidence. PPO-C adjusts the reward
score during PPO based on the difference between the current reward and the
exponential average of past rewards. Both PPO-M and PPO-C can be seamlessly
integrated into the current PPO pipeline and do not require additional golden
labels. We evaluate our methods on both Llama3-8B and Mistral-7B across six
diverse datasets including multiple-choice and open-ended generation.
Experimental results demonstrate that both of our methods can reduce
calibration error and maintain performance comparable to standard PPO. We
further show that they could preserve model capabilities in open-ended
conversational settings.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 04:48:40 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 23:36:40 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Leng",
"Jixuan",
""
],
[
"Huang",
"Chengsong",
""
],
[
"Zhu",
"Banghua",
""
],
[
"Huang",
"Jiaxin",
""
]
]
| TITLE: Taming Overconfidence in LLMs: Reward Calibration in RLHF
ABSTRACT: Language model calibration refers to the alignment between the confidence of
the model and the actual performance of its responses. While previous studies
point out the overconfidence phenomenon in Large Language Models (LLMs) and
show that LLMs trained with Reinforcement Learning from Human Feedback (RLHF)
are overconfident with a more sharpened output probability, in this study, we
reveal that RLHF tends to lead models to express verbalized overconfidence in
their own responses. We investigate the underlying cause of this overconfidence
and demonstrate that reward models used for Proximal Policy Optimization (PPO)
exhibit inherent biases towards high-confidence scores regardless of the actual
quality of responses. Building upon this insight, we propose two PPO variants:
PPO-M: PPO with Calibrated Reward Modeling and PPO-C: PPO with Calibrated
Reward Calculation. PPO-M integrates explicit confidence scores in reward model
training, which calibrates reward models to better capture the alignment
between response quality and verbalized confidence. PPO-C adjusts the reward
score during PPO based on the difference between the current reward and the
exponential average of past rewards. Both PPO-M and PPO-C can be seamlessly
integrated into the current PPO pipeline and do not require additional golden
labels. We evaluate our methods on both Llama3-8B and Mistral-7B across six
diverse datasets including multiple-choice and open-ended generation.
Experimental results demonstrate that both of our methods can reduce
calibration error and maintain performance comparable to standard PPO. We
further show that they could preserve model capabilities in open-ended
conversational settings.
| no_new_dataset | 0.943295 |
2410.10010 | Muhammad Gohar Javed | Muhammad Gohar Javed, Chuan Guo, Li Cheng and Xingyu Li | InterMask: 3D Human Interaction Generation via Collaborative Masked
Modeling | Project webpage: https://gohar-malik.github.io/intermask | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating realistic 3D human-human interactions from textual descriptions
remains a challenging task. Existing approaches, typically based on diffusion
models, often produce results lacking realism and fidelity. In this work, we
introduce InterMask, a novel framework for generating human interactions using
collaborative masked modeling in discrete space. InterMask first employs a
VQ-VAE to transform each motion sequence into a 2D discrete motion token map.
Unlike traditional 1D VQ token maps, it better preserves fine-grained
spatio-temporal details and promotes spatial awareness within each token.
Building on this representation, InterMask utilizes a generative masked
modeling framework to collaboratively model the tokens of two interacting
individuals. This is achieved by employing a transformer architecture
specifically designed to capture complex spatio-temporal inter-dependencies.
During training, it randomly masks the motion tokens of both individuals and
learns to predict them. For inference, starting from fully masked sequences, it
progressively fills in the tokens for both individuals. With its enhanced
motion representation, dedicated architecture, and effective learning strategy,
InterMask achieves state-of-the-art results, producing high-fidelity and
diverse human interactions. It outperforms previous methods, achieving an FID
of $5.154$ (vs $5.535$ of in2IN) on the InterHuman dataset and $0.399$ (vs
$5.207$ of InterGen) on the InterX dataset. Additionally, InterMask seamlessly
supports reaction generation without the need for model redesign or
fine-tuning.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 21:11:04 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2024 23:22:41 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 07:42:20 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Javed",
"Muhammad Gohar",
""
],
[
"Guo",
"Chuan",
""
],
[
"Cheng",
"Li",
""
],
[
"Li",
"Xingyu",
""
]
]
| TITLE: InterMask: 3D Human Interaction Generation via Collaborative Masked
Modeling
ABSTRACT: Generating realistic 3D human-human interactions from textual descriptions
remains a challenging task. Existing approaches, typically based on diffusion
models, often produce results lacking realism and fidelity. In this work, we
introduce InterMask, a novel framework for generating human interactions using
collaborative masked modeling in discrete space. InterMask first employs a
VQ-VAE to transform each motion sequence into a 2D discrete motion token map.
Unlike traditional 1D VQ token maps, it better preserves fine-grained
spatio-temporal details and promotes spatial awareness within each token.
Building on this representation, InterMask utilizes a generative masked
modeling framework to collaboratively model the tokens of two interacting
individuals. This is achieved by employing a transformer architecture
specifically designed to capture complex spatio-temporal inter-dependencies.
During training, it randomly masks the motion tokens of both individuals and
learns to predict them. For inference, starting from fully masked sequences, it
progressively fills in the tokens for both individuals. With its enhanced
motion representation, dedicated architecture, and effective learning strategy,
InterMask achieves state-of-the-art results, producing high-fidelity and
diverse human interactions. It outperforms previous methods, achieving an FID
of $5.154$ (vs $5.535$ of in2IN) on the InterHuman dataset and $0.399$ (vs
$5.207$ of InterGen) on the InterX dataset. Additionally, InterMask seamlessly
supports reaction generation without the need for model redesign or
fine-tuning.
| no_new_dataset | 0.944382 |
2410.10322 | Binghui Li | Binghui Li, Zhixuan Pan, Kaifeng Lyu, Jian Li | Feature Averaging: An Implicit Bias of Gradient Descent Leading to
Non-Robustness in Neural Networks | Published as a conference paper at ICLR 2025; 72 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate a particular implicit bias in gradient descent
training, which we term "Feature Averaging," and argue that it is one of the
principal factors contributing to the non-robustness of deep neural networks.
We show that, even when multiple discriminative features are present in the
input data, neural networks trained by gradient descent tend to rely on an
average (or a certain combination) of these features for classification, rather
than distinguishing and leveraging each feature individually. Specifically, we
provide a detailed theoretical analysis of the training dynamics of two-layer
ReLU networks on a binary classification task, where the data distribution
consists of multiple clusters with mutually orthogonal centers. We rigorously
prove that gradient descent biases the network towards feature averaging, where
the weights of each hidden neuron represent an average of the cluster centers
(each corresponding to a distinct feature), thereby making the network
vulnerable to input perturbations aligned with the negative direction of the
averaged features. On the positive side, we demonstrate that this vulnerability
can be mitigated through more granular supervision. In particular, we prove
that a two-layer ReLU network can achieve optimal robustness when trained to
classify individual features rather than merely the original binary classes.
Finally, we validate our theoretical findings with experiments on synthetic
datasets, MNIST, and CIFAR-10, and confirm the prevalence of feature averaging
and its impact on adversarial robustness. We hope these theoretical and
empirical insights deepen the understanding of how gradient descent shapes
feature learning and adversarial robustness, and how more detailed supervision
can enhance robustness.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 09:28:32 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 04:06:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Binghui",
""
],
[
"Pan",
"Zhixuan",
""
],
[
"Lyu",
"Kaifeng",
""
],
[
"Li",
"Jian",
""
]
]
| TITLE: Feature Averaging: An Implicit Bias of Gradient Descent Leading to
Non-Robustness in Neural Networks
ABSTRACT: In this work, we investigate a particular implicit bias in gradient descent
training, which we term "Feature Averaging," and argue that it is one of the
principal factors contributing to the non-robustness of deep neural networks.
We show that, even when multiple discriminative features are present in the
input data, neural networks trained by gradient descent tend to rely on an
average (or a certain combination) of these features for classification, rather
than distinguishing and leveraging each feature individually. Specifically, we
provide a detailed theoretical analysis of the training dynamics of two-layer
ReLU networks on a binary classification task, where the data distribution
consists of multiple clusters with mutually orthogonal centers. We rigorously
prove that gradient descent biases the network towards feature averaging, where
the weights of each hidden neuron represent an average of the cluster centers
(each corresponding to a distinct feature), thereby making the network
vulnerable to input perturbations aligned with the negative direction of the
averaged features. On the positive side, we demonstrate that this vulnerability
can be mitigated through more granular supervision. In particular, we prove
that a two-layer ReLU network can achieve optimal robustness when trained to
classify individual features rather than merely the original binary classes.
Finally, we validate our theoretical findings with experiments on synthetic
datasets, MNIST, and CIFAR-10, and confirm the prevalence of feature averaging
and its impact on adversarial robustness. We hope these theoretical and
empirical insights deepen the understanding of how gradient descent shapes
feature learning and adversarial robustness, and how more detailed supervision
can enhance robustness.
| no_new_dataset | 0.948632 |
2410.11019 | Jing Liang | Jing Liang, He Yin, Xuewei Qi, Jong Jin Park, Min Sun, Rajasimman
Madhivanan, Dinesh Manocha | ET-Former: Efficient Triplane Deformable Attention for 3D Semantic Scene
Completion From Monocular Camera | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ET-Former, a novel end-to-end algorithm for semantic scene
completion using a single monocular camera. Our approach generates a semantic
occupancy map from a single RGB observation while simultaneously providing
uncertainty estimates for semantic predictions. By designing a triplane-based
deformable attention mechanism, our approach improves geometric understanding
of the scene compared to other SOTA approaches and reduces noise in semantic
predictions. Additionally, through the use of a Conditional Variational
AutoEncoder (CVAE), we estimate the uncertainties of these predictions. The
generated semantic and uncertainty maps will help formulate navigation
strategies that facilitate safe and permissible decision making in the future.
Evaluated on the Semantic-KITTI dataset, ET-Former achieves the highest
Intersection over Union (IoU) and mean IoU (mIoU) scores while maintaining the
lowest GPU memory usage, surpassing state-of-the-art (SOTA) methods. It
improves the SOTA scores of IoU from 44.71 to 51.49 and mIoU from 15.04 to
16.30 on the SemanticKITTI test set, with a notably low training memory consumption of
10.9 GB. Project page: https://github.com/jingGM/ET-Former.git.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 19:14:49 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 18:48:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liang",
"Jing",
""
],
[
"Yin",
"He",
""
],
[
"Qi",
"Xuewei",
""
],
[
"Park",
"Jong Jin",
""
],
[
"Sun",
"Min",
""
],
[
"Madhivanan",
"Rajasimman",
""
],
[
"Manocha",
"Dinesh",
""
]
]
| TITLE: ET-Former: Efficient Triplane Deformable Attention for 3D Semantic Scene
Completion From Monocular Camera
ABSTRACT: We introduce ET-Former, a novel end-to-end algorithm for semantic scene
completion using a single monocular camera. Our approach generates a semantic
occupancy map from a single RGB observation while simultaneously providing
uncertainty estimates for semantic predictions. By designing a triplane-based
deformable attention mechanism, our approach improves geometric understanding
of the scene compared to other SOTA approaches and reduces noise in semantic
predictions. Additionally, through the use of a Conditional Variational
AutoEncoder (CVAE), we estimate the uncertainties of these predictions. The
generated semantic and uncertainty maps will help formulate navigation
strategies that facilitate safe and permissible decision making in the future.
Evaluated on the Semantic-KITTI dataset, ET-Former achieves the highest
Intersection over Union (IoU) and mean IoU (mIoU) scores while maintaining the
lowest GPU memory usage, surpassing state-of-the-art (SOTA) methods. It
improves the SOTA scores of IoU from 44.71 to 51.49 and mIoU from 15.04 to
16.30 on the SemanticKITTI test set, with a notably low training memory consumption of
10.9 GB. Project page: https://github.com/jingGM/ET-Former.git.
| no_new_dataset | 0.945801 |
2410.11112 | Alan T. L. Bacellar | Alan T. L. Bacellar, Zachary Susskind, Mauricio Breternitz Jr., Eugene
John, Lizy K. John, Priscila M. V. Lima and Felipe M. G. Fran\c{c}a | Differentiable Weightless Neural Networks | null | International Conference on Machine Learning (ICML) 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the Differentiable Weightless Neural Network (DWN), a model
based on interconnected lookup tables. Training of DWNs is enabled by a novel
Extended Finite Difference technique for approximate differentiation of binary
values. We propose Learnable Mapping, Learnable Reduction, and Spectral
Regularization to further improve the accuracy and efficiency of these models.
We evaluate DWNs in three edge computing contexts: (1) an FPGA-based hardware
accelerator, where they demonstrate superior latency, throughput, energy
efficiency, and model area compared to state-of-the-art solutions, (2) a
low-power microcontroller, where they achieve preferable accuracy to XGBoost
while subject to stringent memory constraints, and (3) ultra-low-cost chips,
where they consistently outperform small models in both accuracy and projected
hardware area. DWNs also compare favorably against leading approaches for
tabular datasets, with higher average rank. Overall, our work positions DWNs as
a pioneering solution for edge-compatible high-throughput neural networks.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 21:43:48 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 18:00:19 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Nov 2024 13:59:05 GMT"
},
{
"version": "v4",
"created": "Fri, 6 Dec 2024 18:23:05 GMT"
},
{
"version": "v5",
"created": "Sun, 2 Mar 2025 17:48:06 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bacellar",
"Alan T. L.",
""
],
[
"Susskind",
"Zachary",
""
],
[
"Breternitz",
"Mauricio",
"Jr."
],
[
"John",
"Eugene",
""
],
[
"John",
"Lizy K.",
""
],
[
"Lima",
"Priscila M. V.",
""
],
[
"França",
"Felipe M. G.",
""
]
]
| TITLE: Differentiable Weightless Neural Networks
ABSTRACT: We introduce the Differentiable Weightless Neural Network (DWN), a model
based on interconnected lookup tables. Training of DWNs is enabled by a novel
Extended Finite Difference technique for approximate differentiation of binary
values. We propose Learnable Mapping, Learnable Reduction, and Spectral
Regularization to further improve the accuracy and efficiency of these models.
We evaluate DWNs in three edge computing contexts: (1) an FPGA-based hardware
accelerator, where they demonstrate superior latency, throughput, energy
efficiency, and model area compared to state-of-the-art solutions, (2) a
low-power microcontroller, where they achieve preferable accuracy to XGBoost
while subject to stringent memory constraints, and (3) ultra-low-cost chips,
where they consistently outperform small models in both accuracy and projected
hardware area. DWNs also compare favorably against leading approaches for
tabular datasets, with higher average rank. Overall, our work positions DWNs as
a pioneering solution for edge-compatible high-throughput neural networks.
| no_new_dataset | 0.945701 |
2410.11502 | Chao Qian | Rong-Xi Tan, Ke Xue, Shen-Huan Lyu, Haopu Shang, Yao Wang, Yaoyuan
Wang, Sheng Fu, Chao Qian | Offline Model-Based Optimization by Learning to Rank | ICLR 2025 | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline model-based optimization (MBO) aims to identify a design that
maximizes a black-box function using only a fixed, pre-collected dataset of
designs and their corresponding scores. A common approach in offline MBO is to
train a regression-based surrogate model by minimizing mean squared error (MSE)
and then find the best design within this surrogate model by different
optimizers (e.g., gradient ascent). However, a critical challenge is the risk
of out-of-distribution errors, i.e., the surrogate model may typically
overestimate the scores and mislead the optimizers into suboptimal regions.
Prior works have attempted to address this issue in various ways, such as using
regularization techniques and ensemble learning to enhance the robustness of
the model, but it still remains. In this paper, we argue that regression models
trained with MSE are not well-aligned with the primary goal of offline MBO,
which is to select promising designs rather than to predict their scores
precisely. Notably, if a surrogate model can maintain the order of candidate
designs based on their relative score relationships, it can produce the best
designs even without precise predictions. To validate it, we conduct
experiments to compare the relationship between the quality of the final
designs and MSE, finding that the correlation is really very weak. In contrast,
a metric that measures order-maintaining quality shows a significantly stronger
correlation. Based on this observation, we propose learning a ranking-based
model that leverages learning to rank techniques to prioritize promising
designs based on their relative scores. We show that the generalization error
on ranking loss can be well bounded. Empirical results across diverse tasks
demonstrate the superior performance of our proposed ranking-based models over
twenty existing methods.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 11:15:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 11:38:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tan",
"Rong-Xi",
""
],
[
"Xue",
"Ke",
""
],
[
"Lyu",
"Shen-Huan",
""
],
[
"Shang",
"Haopu",
""
],
[
"Wang",
"Yao",
""
],
[
"Wang",
"Yaoyuan",
""
],
[
"Fu",
"Sheng",
""
],
[
"Qian",
"Chao",
""
]
]
| TITLE: Offline Model-Based Optimization by Learning to Rank
ABSTRACT: Offline model-based optimization (MBO) aims to identify a design that
maximizes a black-box function using only a fixed, pre-collected dataset of
designs and their corresponding scores. A common approach in offline MBO is to
train a regression-based surrogate model by minimizing mean squared error (MSE)
and then find the best design within this surrogate model by different
optimizers (e.g., gradient ascent). However, a critical challenge is the risk
of out-of-distribution errors, i.e., the surrogate model may typically
overestimate the scores and mislead the optimizers into suboptimal regions.
Prior works have attempted to address this issue in various ways, such as using
regularization techniques and ensemble learning to enhance the robustness of
the model, but the issue still remains. In this paper, we argue that regression models
trained with MSE are not well-aligned with the primary goal of offline MBO,
which is to select promising designs rather than to predict their scores
precisely. Notably, if a surrogate model can maintain the order of candidate
designs based on their relative score relationships, it can produce the best
designs even without precise predictions. To validate it, we conduct
experiments to compare the relationship between the quality of the final
designs and MSE, finding that the correlation is very weak. In contrast,
a metric that measures order-maintaining quality shows a significantly stronger
correlation. Based on this observation, we propose learning a ranking-based
model that leverages learning to rank techniques to prioritize promising
designs based on their relative scores. We show that the generalization error
on ranking loss can be well bounded. Empirical results across diverse tasks
demonstrate the superior performance of our proposed ranking-based models over
twenty existing methods.
| no_new_dataset | 0.945551 |
2410.12085 | Fengyu Gao | Fengyu Gao, Ruida Zhou, Tianhao Wang, Cong Shen, Jing Yang | Data-adaptive Differentially Private Prompt Synthesis for In-Context
Learning | Accepted to ICLR 2025 | null | null | null | cs.CR cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) rely on the contextual information embedded in
examples/demonstrations to perform in-context learning (ICL). To mitigate the
risk of LLMs potentially leaking private information contained in examples in
the prompt, we introduce a novel data-adaptive differentially private algorithm
called AdaDPSyn to generate synthetic examples from the private dataset and
then use these synthetic examples to perform ICL. The objective of AdaDPSyn is
to adaptively adjust the noise level in the data synthesis mechanism according
to the inherent statistical properties of the data, thereby preserving high ICL
accuracy while maintaining formal differential privacy guarantees. A key
innovation in AdaDPSyn is the Precision-Focused Iterative Radius Reduction
technique, which dynamically refines the aggregation radius - the scope of data
grouping for noise addition - based on patterns observed in data clustering,
thereby minimizing the amount of additive noise. We conduct extensive
experiments on standard benchmarks and compare AdaDPSyn with the DP few-shot
generation algorithm (Tang et al., 2023). The experiments demonstrate that
AdaDPSyn not only outperforms DP few-shot generation, but also maintains high
accuracy levels close to those of non-private baselines, providing an effective
solution for ICL with privacy protection.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 22:06:30 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 06:29:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gao",
"Fengyu",
""
],
[
"Zhou",
"Ruida",
""
],
[
"Wang",
"Tianhao",
""
],
[
"Shen",
"Cong",
""
],
[
"Yang",
"Jing",
""
]
]
| TITLE: Data-adaptive Differentially Private Prompt Synthesis for In-Context
Learning
ABSTRACT: Large Language Models (LLMs) rely on the contextual information embedded in
examples/demonstrations to perform in-context learning (ICL). To mitigate the
risk of LLMs potentially leaking private information contained in examples in
the prompt, we introduce a novel data-adaptive differentially private algorithm
called AdaDPSyn to generate synthetic examples from the private dataset and
then use these synthetic examples to perform ICL. The objective of AdaDPSyn is
to adaptively adjust the noise level in the data synthesis mechanism according
to the inherent statistical properties of the data, thereby preserving high ICL
accuracy while maintaining formal differential privacy guarantees. A key
innovation in AdaDPSyn is the Precision-Focused Iterative Radius Reduction
technique, which dynamically refines the aggregation radius - the scope of data
grouping for noise addition - based on patterns observed in data clustering,
thereby minimizing the amount of additive noise. We conduct extensive
experiments on standard benchmarks and compare AdaDPSyn with the DP few-shot
generation algorithm (Tang et al., 2023). The experiments demonstrate that
AdaDPSyn not only outperforms DP few-shot generation, but also maintains high
accuracy levels close to those of non-private baselines, providing an effective
solution for ICL with privacy protection.
| no_new_dataset | 0.947527 |
2410.12343 | Yang Liu Aron | Zihao Zhou, Yang Liu, Xianghong Xu, Qian Li | Federated Temporal Graph Clustering | 8 pages, 1 figure | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | Temporal graph clustering is a complex task that involves discovering
meaningful structures in dynamic graphs where relationships and entities change
over time. Existing methods typically require centralized data collection,
which poses significant privacy and communication challenges. In this work, we
introduce a novel Federated Temporal Graph Clustering (FTGC) framework that
enables decentralized training of graph neural networks (GNNs) across multiple
clients, ensuring data privacy throughout the process. Our approach
incorporates a temporal aggregation mechanism to effectively capture the
evolution of graph structures over time and a federated optimization strategy
to collaboratively learn high-quality clustering representations. By preserving
data privacy and reducing communication overhead, our framework achieves
competitive performance on temporal graph datasets, making it a promising
solution for privacy-sensitive, real-world applications involving dynamic data.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 08:04:57 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2025 09:58:53 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 12:15:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhou",
"Zihao",
""
],
[
"Liu",
"Yang",
""
],
[
"Xu",
"Xianghong",
""
],
[
"Li",
"Qian",
""
]
]
| TITLE: Federated Temporal Graph Clustering
ABSTRACT: Temporal graph clustering is a complex task that involves discovering
meaningful structures in dynamic graphs where relationships and entities change
over time. Existing methods typically require centralized data collection,
which poses significant privacy and communication challenges. In this work, we
introduce a novel Federated Temporal Graph Clustering (FTGC) framework that
enables decentralized training of graph neural networks (GNNs) across multiple
clients, ensuring data privacy throughout the process. Our approach
incorporates a temporal aggregation mechanism to effectively capture the
evolution of graph structures over time and a federated optimization strategy
to collaboratively learn high-quality clustering representations. By preserving
data privacy and reducing communication overhead, our framework achieves
competitive performance on temporal graph datasets, making it a promising
solution for privacy-sensitive, real-world applications involving dynamic data.
| no_new_dataset | 0.949389 |
2410.12952 | Mingyang Chen | Mingyang Chen, Haoze Sun, Tianpeng Li, Fan Yang, Hao Liang, Keer Lu,
Bin Cui, Wentao Zhang, Zenan Zhou, Weipeng Chen | Facilitating Multi-turn Function Calling for LLMs via Compositional
Instruction Tuning | Accepted to ICLR 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have exhibited significant potential in
performing diverse tasks, including the ability to call functions or use
external tools to enhance their performance. While current research on function
calling by LLMs primarily focuses on single-turn interactions, this paper
addresses the overlooked necessity for LLMs to engage in multi-turn function
calling--critical for handling compositional, real-world queries that require
planning with functions rather than merely invoking them. To facilitate this, we
introduce an approach, BUTTON, which generates synthetic compositional
instruction tuning data via bottom-up instruction construction and top-down
trajectory generation. In the bottom-up phase, we generate simple atomic tasks
based on real-world scenarios and build compositional tasks using heuristic
strategies based on atomic tasks. Corresponding function definitions are then
synthesized for these compositional tasks. The top-down phase features a
multi-agent environment where interactions among simulated humans, assistants,
and tools are utilized to gather multi-turn function calling trajectories. This
approach ensures task compositionality and allows for effective function and
trajectory generation by examining atomic tasks within compositional tasks. We
produce a dataset BUTTONInstruct comprising 8k data points and demonstrate its
effectiveness through extensive experiments across various LLMs.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 18:40:26 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 02:27:02 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Mingyang",
""
],
[
"Sun",
"Haoze",
""
],
[
"Li",
"Tianpeng",
""
],
[
"Yang",
"Fan",
""
],
[
"Liang",
"Hao",
""
],
[
"Lu",
"Keer",
""
],
[
"Cui",
"Bin",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Zhou",
"Zenan",
""
],
[
"Chen",
"Weipeng",
""
]
]
| TITLE: Facilitating Multi-turn Function Calling for LLMs via Compositional
Instruction Tuning
ABSTRACT: Large Language Models (LLMs) have exhibited significant potential in
performing diverse tasks, including the ability to call functions or use
external tools to enhance their performance. While current research on function
calling by LLMs primarily focuses on single-turn interactions, this paper
addresses the overlooked necessity for LLMs to engage in multi-turn function
calling--critical for handling compositional, real-world queries that require
planning with functions rather than merely invoking them. To facilitate this, we
introduce an approach, BUTTON, which generates synthetic compositional
instruction tuning data via bottom-up instruction construction and top-down
trajectory generation. In the bottom-up phase, we generate simple atomic tasks
based on real-world scenarios and build compositional tasks using heuristic
strategies based on atomic tasks. Corresponding function definitions are then
synthesized for these compositional tasks. The top-down phase features a
multi-agent environment where interactions among simulated humans, assistants,
and tools are utilized to gather multi-turn function calling trajectories. This
approach ensures task compositionality and allows for effective function and
trajectory generation by examining atomic tasks within compositional tasks. We
produce a dataset BUTTONInstruct comprising 8k data points and demonstrate its
effectiveness through extensive experiments across various LLMs.
| new_dataset | 0.952042 |
2410.13085 | Peng Xia | Peng Xia, Kangyu Zhu, Haoran Li, Tianze Wang, Weijia Shi, Sheng Wang,
Linjun Zhang, James Zou, Huaxiu Yao | MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language
Models | ICLR 2025 | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) has demonstrated significant potential in
healthcare, particularly in disease diagnosis and treatment planning. Recent
progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new
possibilities for interactive diagnostic tools. However, these models often
suffer from factual hallucination, which can lead to incorrect diagnoses.
Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to
address these issues. However, the amount of high-quality data and distribution
shifts between training data and deployment data limit the application of
fine-tuning methods. Although RAG is lightweight and effective, existing
RAG-based approaches are not sufficiently general to different medical domains
and can potentially cause misalignment issues, both between modalities and
between the model and the ground truth. In this paper, we propose a versatile
multimodal RAG system, MMed-RAG, designed to enhance the factuality of
Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an
adaptive retrieved contexts selection method, and a provable RAG-based
preference fine-tuning strategy. These innovations make the RAG process
sufficiently general and reliable, significantly improving alignment when
introducing retrieved contexts. Experimental results across five medical
datasets (involving radiology, ophthalmology, pathology) on medical VQA and
report generation demonstrate that MMed-RAG can achieve an average improvement
of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available
in https://github.com/richard-peng-xia/MMed-RAG.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 23:03:27 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:08:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xia",
"Peng",
""
],
[
"Zhu",
"Kangyu",
""
],
[
"Li",
"Haoran",
""
],
[
"Wang",
"Tianze",
""
],
[
"Shi",
"Weijia",
""
],
[
"Wang",
"Sheng",
""
],
[
"Zhang",
"Linjun",
""
],
[
"Zou",
"James",
""
],
[
"Yao",
"Huaxiu",
""
]
]
| TITLE: MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language
Models
ABSTRACT: Artificial Intelligence (AI) has demonstrated significant potential in
healthcare, particularly in disease diagnosis and treatment planning. Recent
progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new
possibilities for interactive diagnostic tools. However, these models often
suffer from factual hallucination, which can lead to incorrect diagnoses.
Fine-tuning and retrieval-augmented generation (RAG) have emerged as methods to
address these issues. However, the amount of high-quality data and distribution
shifts between training data and deployment data limit the application of
fine-tuning methods. Although RAG is lightweight and effective, existing
RAG-based approaches are not sufficiently general to different medical domains
and can potentially cause misalignment issues, both between modalities and
between the model and the ground truth. In this paper, we propose a versatile
multimodal RAG system, MMed-RAG, designed to enhance the factuality of
Med-LVLMs. Our approach introduces a domain-aware retrieval mechanism, an
adaptive retrieved contexts selection method, and a provable RAG-based
preference fine-tuning strategy. These innovations make the RAG process
sufficiently general and reliable, significantly improving alignment when
introducing retrieved contexts. Experimental results across five medical
datasets (involving radiology, ophthalmology, pathology) on medical VQA and
report generation demonstrate that MMed-RAG can achieve an average improvement
of 43.8% in the factual accuracy of Med-LVLMs. Our data and code are available
in https://github.com/richard-peng-xia/MMed-RAG.
| no_new_dataset | 0.949949 |
2410.13213 | Xiang Shu | Caigao Jiang, Xiang Shu, Hong Qian, Xingyu Lu, Jun Zhou, Aimin Zhou,
Yang Yu | LLMOPT: Learning to Define and Solve General Optimization Problems from
Scratch | null | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimization problems are prevalent across various scenarios. Formulating and
then solving optimization problems described by natural language often requires
highly specialized human expertise, which could block the widespread
application of optimization-based decision making. To automate problem
formulation and solving, leveraging large language models (LLMs) has emerged as
a potential way. However, this kind of approach suffers from the issue of
optimization generalization. Namely, the accuracy of most current LLM-based
methods and the generality of optimization problem types that they can model
are still limited. In this paper, we propose a unified learning-based framework
called LLMOPT to boost optimization generalization. Starting from the natural
language descriptions of optimization problems and a pre-trained LLM, LLMOPT
constructs the introduced five-element formulation as a universal model for
learning to define diverse optimization problem types. Then, LLMOPT employs the
multi-instruction tuning to enhance both problem formalization and solver code
generation accuracy and generality. After that, to prevent hallucinations in
LLMs, such as sacrificing solving accuracy to avoid execution errors, the model
alignment and self-correction mechanisms are adopted in LLMOPT. We evaluate the
optimization generalization ability of LLMOPT and compared methods across six
real-world datasets covering roughly 20 fields such as health, environment,
energy and manufacturing, etc. Extensive experiment results show that LLMOPT is
able to model various optimization problem types such as linear/nonlinear
programming, mixed integer programming, and combinatorial optimization, and
achieves a notable 11.08% average solving accuracy improvement compared with
the state-of-the-art methods. The code is available at
https://github.com/caigaojiang/LLMOPT.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 04:37:37 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:20:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jiang",
"Caigao",
""
],
[
"Shu",
"Xiang",
""
],
[
"Qian",
"Hong",
""
],
[
"Lu",
"Xingyu",
""
],
[
"Zhou",
"Jun",
""
],
[
"Zhou",
"Aimin",
""
],
[
"Yu",
"Yang",
""
]
]
| TITLE: LLMOPT: Learning to Define and Solve General Optimization Problems from
Scratch
ABSTRACT: Optimization problems are prevalent across various scenarios. Formulating and
then solving optimization problems described by natural language often requires
highly specialized human expertise, which could block the widespread
application of optimization-based decision making. To automate problem
formulation and solving, leveraging large language models (LLMs) has emerged as
a potential way. However, this kind of approach suffers from the issue of
optimization generalization. Namely, the accuracy of most current LLM-based
methods and the generality of optimization problem types that they can model
are still limited. In this paper, we propose a unified learning-based framework
called LLMOPT to boost optimization generalization. Starting from the natural
language descriptions of optimization problems and a pre-trained LLM, LLMOPT
constructs the introduced five-element formulation as a universal model for
learning to define diverse optimization problem types. Then, LLMOPT employs the
multi-instruction tuning to enhance both problem formalization and solver code
generation accuracy and generality. After that, to prevent hallucinations in
LLMs, such as sacrificing solving accuracy to avoid execution errors, the model
alignment and self-correction mechanisms are adopted in LLMOPT. We evaluate the
optimization generalization ability of LLMOPT and compared methods across six
real-world datasets covering roughly 20 fields such as health, environment,
energy and manufacturing, etc. Extensive experiment results show that LLMOPT is
able to model various optimization problem types such as linear/nonlinear
programming, mixed integer programming, and combinatorial optimization, and
achieves a notable 11.08% average solving accuracy improvement compared with
the state-of-the-art methods. The code is available at
https://github.com/caigaojiang/LLMOPT.
| no_new_dataset | 0.944434 |
2410.13586 | Xinyi Yuan | Xinyi Yuan, Zhiwei Shang, Zifan Wang, Chenkai Wang, Zhao Shan, Meixin
Zhu, Chenjia Bai, Xuelong Li, Weiwei Wan, Kensuke Harada | Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models demonstrate superior performance in capturing complex
distributions from large-scale datasets, providing a promising solution for
quadrupedal locomotion control. However, the robustness of the diffusion
planner is inherently dependent on the diversity of the pre-collected datasets.
To mitigate this issue, we propose a two-stage learning framework to enhance
the capability of the diffusion planner under a limited dataset
(reward-agnostic). Through the offline stage, the diffusion planner learns the
joint distribution of state-action sequences from expert datasets without using
reward labels. Subsequently, we perform the online interaction in the
simulation environment based on the trained offline planner, which
significantly diversifies the original behavior and thus improves the
robustness. Specifically, we propose a novel weak preference labeling method
without the ground-truth reward or human preferences. The proposed method
exhibits superior stability and velocity tracking accuracy in pacing, trotting,
and bounding gait under different speeds and can perform a zero-shot transfer
to the real Unitree Go1 robots. The project website for this paper is at
https://shangjaven.github.io/preference-aligned-diffusion-legged.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 14:21:32 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:24:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yuan",
"Xinyi",
""
],
[
"Shang",
"Zhiwei",
""
],
[
"Wang",
"Zifan",
""
],
[
"Wang",
"Chenkai",
""
],
[
"Shan",
"Zhao",
""
],
[
"Zhu",
"Meixin",
""
],
[
"Bai",
"Chenjia",
""
],
[
"Li",
"Xuelong",
""
],
[
"Wan",
"Weiwei",
""
],
[
"Harada",
"Kensuke",
""
]
]
| TITLE: Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control
ABSTRACT: Diffusion models demonstrate superior performance in capturing complex
distributions from large-scale datasets, providing a promising solution for
quadrupedal locomotion control. However, the robustness of the diffusion
planner is inherently dependent on the diversity of the pre-collected datasets.
To mitigate this issue, we propose a two-stage learning framework to enhance
the capability of the diffusion planner under a limited dataset
(reward-agnostic). Through the offline stage, the diffusion planner learns the
joint distribution of state-action sequences from expert datasets without using
reward labels. Subsequently, we perform the online interaction in the
simulation environment based on the trained offline planner, which
significantly diversifies the original behavior and thus improves the
robustness. Specifically, we propose a novel weak preference labeling method
without the ground-truth reward or human preferences. The proposed method
exhibits superior stability and velocity tracking accuracy in pacing, trotting,
and bounding gait under different speeds and can perform a zero-shot transfer
to the real Unitree Go1 robots. The project website for this paper is at
https://shangjaven.github.io/preference-aligned-diffusion-legged.
| no_new_dataset | 0.949902 |
2410.13757 | Zichen Zhu | Zichen Zhu, Hao Tang, Yansi Li, Dingye Liu, Hongshen Xu, Kunyao Lan,
Danyang Zhang, Yixuan Jiang, Hao Zhou, Chenrun Wang, Situo Zhang, Liangtai
Sun, Yixiao Wang, Yuheng Sun, Lu Chen, Kai Yu | MobA: Multifaceted Memory-Enhanced Adaptive Planning for Efficient
Mobile Task Automation | NAACL 2025 Demo Track | null | null | null | cs.MA cs.AI cs.CL cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing Multimodal Large Language Model (MLLM)-based agents face significant
challenges in handling complex GUI (Graphical User Interface) interactions on
devices. These challenges arise from the dynamic and structured nature of GUI
environments, which integrate text, images, and spatial relationships, as well
as the variability in action spaces across different pages and tasks. To
address these limitations, we propose MobA, a novel MLLM-based mobile assistant
system. MobA introduces an adaptive planning module that incorporates a
reflection mechanism for error recovery and dynamically adjusts plans to align
with the real environment contexts and action module's execution capacity.
Additionally, a multifaceted memory module provides comprehensive memory
support to enhance adaptability and efficiency. We also present MobBench, a
dataset designed for complex mobile interactions. Experimental results on
MobBench and AndroidArena demonstrate MobA's ability to handle dynamic GUI
environments and perform complex mobile tasks.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 16:53:50 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 07:34:35 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhu",
"Zichen",
""
],
[
"Tang",
"Hao",
""
],
[
"Li",
"Yansi",
""
],
[
"Liu",
"Dingye",
""
],
[
"Xu",
"Hongshen",
""
],
[
"Lan",
"Kunyao",
""
],
[
"Zhang",
"Danyang",
""
],
[
"Jiang",
"Yixuan",
""
],
[
"Zhou",
"Hao",
""
],
[
"Wang",
"Chenrun",
""
],
[
"Zhang",
"Situo",
""
],
[
"Sun",
"Liangtai",
""
],
[
"Wang",
"Yixiao",
""
],
[
"Sun",
"Yuheng",
""
],
[
"Chen",
"Lu",
""
],
[
"Yu",
"Kai",
""
]
]
| TITLE: MobA: Multifaceted Memory-Enhanced Adaptive Planning for Efficient
Mobile Task Automation
ABSTRACT: Existing Multimodal Large Language Model (MLLM)-based agents face significant
challenges in handling complex GUI (Graphical User Interface) interactions on
devices. These challenges arise from the dynamic and structured nature of GUI
environments, which integrate text, images, and spatial relationships, as well
as the variability in action spaces across different pages and tasks. To
address these limitations, we propose MobA, a novel MLLM-based mobile assistant
system. MobA introduces an adaptive planning module that incorporates a
reflection mechanism for error recovery and dynamically adjusts plans to align
with the real environment contexts and action module's execution capacity.
Additionally, a multifaceted memory module provides comprehensive memory
support to enhance adaptability and efficiency. We also present MobBench, a
dataset designed for complex mobile interactions. Experimental results on
MobBench and AndroidArena demonstrate MobA's ability to handle dynamic GUI
environments and perform complex mobile tasks.
| new_dataset | 0.957238 |
2410.13770 | Antonio Sclocchi | Antonio Sclocchi, Alessandro Favero, Noam Itzhak Levi, Matthieu Wyart | Probing the Latent Hierarchical Structure of Data via Diffusion Models | 10 pages, 6 figures | null | null | null | stat.ML cond-mat.dis-nn cs.LG | http://creativecommons.org/licenses/by/4.0/ | High-dimensional data must be highly structured to be learnable. Although the
compositional and hierarchical nature of data is often put forward to explain
learnability, quantitative measurements establishing these properties are
scarce. Likewise, accessing the latent variables underlying such a data
structure remains a challenge. In this work, we show that forward-backward
experiments in diffusion-based models, where data is noised and then denoised
to generate new samples, are a promising tool to probe the latent structure of
data. We predict in simple hierarchical models that, in this process, changes
in data occur by correlated chunks, with a length scale that diverges at a
noise level where a phase transition is known to take place. Remarkably, we
confirm this prediction in both text and image datasets using state-of-the-art
diffusion models. Our results show how latent variable changes manifest in the
data and establish how to measure these effects in real data using diffusion
models.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 17:08:39 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 20:28:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sclocchi",
"Antonio",
""
],
[
"Favero",
"Alessandro",
""
],
[
"Levi",
"Noam Itzhak",
""
],
[
"Wyart",
"Matthieu",
""
]
]
| TITLE: Probing the Latent Hierarchical Structure of Data via Diffusion Models
ABSTRACT: High-dimensional data must be highly structured to be learnable. Although the
compositional and hierarchical nature of data is often put forward to explain
learnability, quantitative measurements establishing these properties are
scarce. Likewise, accessing the latent variables underlying such a data
structure remains a challenge. In this work, we show that forward-backward
experiments in diffusion-based models, where data is noised and then denoised
to generate new samples, are a promising tool to probe the latent structure of
data. We predict in simple hierarchical models that, in this process, changes
in data occur by correlated chunks, with a length scale that diverges at a
noise level where a phase transition is known to take place. Remarkably, we
confirm this prediction in both text and image datasets using state-of-the-art
diffusion models. Our results show how latent variable changes manifest in the
data and establish how to measure these effects in real data using diffusion
models.
| no_new_dataset | 0.954095 |
2410.14109 | Seong Ho Pahng | Seong Ho Pahng and Sahand Hormoz | Improving Graph Neural Networks by Learning Continuous Edge Directions | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism
that resembles diffusion over undirected graphs, which often leads to
homogenization of node features and reduced discriminative power in tasks such
as node classification. Our key insight for addressing this limitation is to
assign fuzzy edge directions -- that can vary continuously from node $i$
pointing to node $j$ to vice versa -- to the edges of a graph so that features
can preferentially flow in one direction between nodes to enable long-range
information transmission across the graph. We also introduce a novel
complex-valued Laplacian for directed graphs with fuzzy edges where the real
and imaginary parts represent information flow in opposite directions. Using
this Laplacian, we propose a general framework, called Continuous Edge
Direction (CoED) GNN, for learning on graphs with fuzzy edges and prove its
expressivity limits using a generalization of the Weisfeiler-Leman (WL) graph
isomorphism test for directed graphs with fuzzy edges. Our architecture
aggregates neighbor features scaled by the learned edge directions and
processes the aggregated messages from in-neighbors and out-neighbors
separately alongside the self-features of the nodes. Since continuous edge
directions are differentiable, they can be learned jointly with the GNN weights
via gradient-based optimization. CoED GNN is particularly well-suited for graph
ensemble data where the graph structure remains fixed but multiple realizations
of node features are available, such as in gene regulatory networks, web
connectivity graphs, and power grids. We demonstrate through extensive
experiments on both synthetic and real graph ensemble datasets that learning
continuous edge directions significantly improves performance both for
undirected and directed graphs compared with existing methods.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 01:34:35 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 20:41:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pahng",
"Seong Ho",
""
],
[
"Hormoz",
"Sahand",
""
]
]
| TITLE: Improving Graph Neural Networks by Learning Continuous Edge Directions
ABSTRACT: Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism
that resembles diffusion over undirected graphs, which often leads to
homogenization of node features and reduced discriminative power in tasks such
as node classification. Our key insight for addressing this limitation is to
assign fuzzy edge directions -- that can vary continuously from node $i$
pointing to node $j$ to vice versa -- to the edges of a graph so that features
can preferentially flow in one direction between nodes to enable long-range
information transmission across the graph. We also introduce a novel
complex-valued Laplacian for directed graphs with fuzzy edges where the real
and imaginary parts represent information flow in opposite directions. Using
this Laplacian, we propose a general framework, called Continuous Edge
Direction (CoED) GNN, for learning on graphs with fuzzy edges and prove its
expressivity limits using a generalization of the Weisfeiler-Leman (WL) graph
isomorphism test for directed graphs with fuzzy edges. Our architecture
aggregates neighbor features scaled by the learned edge directions and
processes the aggregated messages from in-neighbors and out-neighbors
separately alongside the self-features of the nodes. Since continuous edge
directions are differentiable, they can be learned jointly with the GNN weights
via gradient-based optimization. CoED GNN is particularly well-suited for graph
ensemble data where the graph structure remains fixed but multiple realizations
of node features are available, such as in gene regulatory networks, web
connectivity graphs, and power grids. We demonstrate through extensive
experiments on both synthetic and real graph ensemble datasets that learning
continuous edge directions significantly improves performance both for
undirected and directed graphs compared with existing methods.
| no_new_dataset | 0.955236 |
2410.14493 | Kaixin Lin | Jiajing Wu, Kaixin Lin, Dan Lin, Bozhao Zhang, Zhiying Wu, Jianzhong
Su | Safeguarding Blockchain Ecosystem: Understanding and Detecting Attack
Transactions on Cross-chain Bridges | Accepted by WWW 2025. Please cite the conference version of this
paper, e.g., "Jiajing Wu, Kaixin Lin, Dan Lin, Bozhao Zhang, Zhiying Wu,
Jianzhong Su. Safeguarding Blockchain Ecosystem: Understanding and Detecting
Attack Transactions on Cross-chain Bridges. In Proceedings of the ACM Web
Conference 2025 (WWW, 2025)" | null | 10.1145/3696410.3714604 | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Cross-chain bridges are essential decentralized applications (DApps) to
facilitate interoperability between different blockchain networks. Unlike
regular DApps, the functionality of cross-chain bridges relies on the
collaboration of information both on and off the chain, which exposes them to a
wider risk of attacks. According to our statistics, attacks on cross-chain
bridges have resulted in losses of nearly 4.3 billion dollars since 2021.
Therefore, it is particularly necessary to understand and detect attacks on
cross-chain bridges. In this paper, we collect the largest number of
cross-chain bridge attack incidents to date, including 49 attacks that occurred
between June 2021 and September 2024. Our analysis reveals that attacks against
cross-chain business logic cause significantly more damage than those that do
not. These cross-chain attacks exhibit different patterns compared to normal
transactions in terms of call structure, which effectively indicates potential
attack behaviors. Given the significant losses in these cases and the scarcity
of related research, this paper aims to detect attacks against cross-chain
business logic, and propose the BridgeGuard tool. Specifically, BridgeGuard
models cross-chain transactions from a graph perspective, and employs a
two-stage detection framework comprising global and local graph mining to
identify attack patterns in cross-chain transactions. We conduct multiple
experiments on the datasets with 203 attack transactions and 40,000 normal
cross-chain transactions. The results show that BridgeGuard achieves a recall
score 36.32\% higher than that of state-of-the-art tools and can detect
unknown attack transactions.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 14:25:05 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 02:59:24 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wu",
"Jiajing",
""
],
[
"Lin",
"Kaixin",
""
],
[
"Lin",
"Dan",
""
],
[
"Zhang",
"Bozhao",
""
],
[
"Wu",
"Zhiying",
""
],
[
"Su",
"Jianzhong",
""
]
]
| TITLE: Safeguarding Blockchain Ecosystem: Understanding and Detecting Attack
Transactions on Cross-chain Bridges
ABSTRACT: Cross-chain bridges are essential decentralized applications (DApps) to
facilitate interoperability between different blockchain networks. Unlike
regular DApps, the functionality of cross-chain bridges relies on the
collaboration of information both on and off the chain, which exposes them to a
wider risk of attacks. According to our statistics, attacks on cross-chain
bridges have resulted in losses of nearly 4.3 billion dollars since 2021.
Therefore, it is particularly necessary to understand and detect attacks on
cross-chain bridges. In this paper, we collect the largest number of
cross-chain bridge attack incidents to date, including 49 attacks that occurred
between June 2021 and September 2024. Our analysis reveals that attacks against
cross-chain business logic cause significantly more damage than those that do
not. These cross-chain attacks exhibit different patterns compared to normal
transactions in terms of call structure, which effectively indicates potential
attack behaviors. Given the significant losses in these cases and the scarcity
of related research, this paper aims to detect attacks against cross-chain
business logic, and propose the BridgeGuard tool. Specifically, BridgeGuard
models cross-chain transactions from a graph perspective, and employs a
two-stage detection framework comprising global and local graph mining to
identify attack patterns in cross-chain transactions. We conduct multiple
experiments on the datasets with 203 attack transactions and 40,000 normal
cross-chain transactions. The results show that BridgeGuard achieves a recall
score 36.32\% higher than that of state-of-the-art tools and can detect
unknown attack transactions.
| no_new_dataset | 0.933491 |
2410.14723 | XinFu Li | Xinfu Li, Junying Zhang, Xindi Ma | BeniFul: Backdoor Defense via Middle Feature Analysis for Deep Neural
Networks | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backdoor defenses have recently become important in resisting backdoor
attacks in deep neural networks (DNNs), where attackers implant backdoors into
the DNN model by injecting backdoor samples into the training dataset. Although
there are many defense methods to achieve backdoor detection for DNN inputs and
backdoor elimination for DNN models, they still have not presented a clear
explanation of the relationship between these two missions. In this paper, we
use the features from the middle layer of the DNN model to analyze the
difference between backdoor and benign samples and propose Backdoor
Consistency, which indicates that at least one backdoor exists in the DNN model
if the backdoor trigger is detected exactly on input. By analyzing the middle
features, we design an effective and comprehensive backdoor defense method
named BeniFul, which consists of two parts: a gray-box backdoor input detection
and a white-box backdoor elimination. Specifically, we use the reconstruction
distance from the Variational Auto-Encoder and model inference results to
implement backdoor input detection and a feature distance loss to achieve
backdoor elimination. Experimental results on CIFAR-10 and Tiny ImageNet
against five state-of-the-art attacks demonstrate that our BeniFul exhibits a
great defense capability in backdoor input detection and backdoor elimination.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 13:14:55 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Xinfu",
""
],
[
"Zhang",
"Junying",
""
],
[
"Ma",
"Xindi",
""
]
]
| TITLE: BeniFul: Backdoor Defense via Middle Feature Analysis for Deep Neural
Networks
ABSTRACT: Backdoor defenses have recently become important in resisting backdoor
attacks in deep neural networks (DNNs), where attackers implant backdoors into
the DNN model by injecting backdoor samples into the training dataset. Although
there are many defense methods to achieve backdoor detection for DNN inputs and
backdoor elimination for DNN models, they still have not presented a clear
explanation of the relationship between these two missions. In this paper, we
use the features from the middle layer of the DNN model to analyze the
difference between backdoor and benign samples and propose Backdoor
Consistency, which indicates that at least one backdoor exists in the DNN model
if the backdoor trigger is detected exactly on input. By analyzing the middle
features, we design an effective and comprehensive backdoor defense method
named BeniFul, which consists of two parts: a gray-box backdoor input detection
and a white-box backdoor elimination. Specifically, we use the reconstruction
distance from the Variational Auto-Encoder and model inference results to
implement backdoor input detection and a feature distance loss to achieve
backdoor elimination. Experimental results on CIFAR-10 and Tiny ImageNet
against five state-of-the-art attacks demonstrate that our BeniFul exhibits a
great defense capability in backdoor input detection and backdoor elimination.
| no_new_dataset | 0.942348 |
2410.14853 | Wanyu Du | Wanyu Du, Song Feng, James Gung, Lijia Sun, Yi Zhang, Saab Mansour,
Yanjun Qi | DFlow: Diverse Dialogue Flow Simulation with Large Language Models | 16 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Developing language model-based dialogue agents requires effective data to
train models that can follow specific task logic. However, most existing data
simulation methods focus on increasing diversity in language, topics, or
dialogue acts at the utterance level, largely neglecting a critical aspect of
task logic diversity at the dialogue level. This paper proposes a novel data
simulation method designed to enhance the diversity of synthetic dialogues by
focusing on task execution logic. Our method uses LLMs to generate decision
tree-structured task plans, which enables the derivation of diverse dialogue
trajectories for a given task. Each trajectory, referred to as a "dialog flow",
guides the generation of a multi-turn dialogue that follows a unique
trajectory. We apply this method to generate a task-oriented dialogue dataset
comprising 3,886 dialogue flows across 15 different domains. We validate the
effectiveness of this dataset using the next action prediction task, where
models fine-tuned on our dataset outperform strong baselines, including GPT-4.
Upon acceptance of this paper, we plan to release the code and data publicly.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 20:35:28 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 23:22:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Du",
"Wanyu",
""
],
[
"Feng",
"Song",
""
],
[
"Gung",
"James",
""
],
[
"Sun",
"Lijia",
""
],
[
"Zhang",
"Yi",
""
],
[
"Mansour",
"Saab",
""
],
[
"Qi",
"Yanjun",
""
]
]
| TITLE: DFlow: Diverse Dialogue Flow Simulation with Large Language Models
ABSTRACT: Developing language model-based dialogue agents requires effective data to
train models that can follow specific task logic. However, most existing data
simulation methods focus on increasing diversity in language, topics, or
dialogue acts at the utterance level, largely neglecting a critical aspect of
task logic diversity at the dialogue level. This paper proposes a novel data
simulation method designed to enhance the diversity of synthetic dialogues by
focusing on task execution logic. Our method uses LLMs to generate decision
tree-structured task plans, which enables the derivation of diverse dialogue
trajectories for a given task. Each trajectory, referred to as a "dialog flow",
guides the generation of a multi-turn dialogue that follows a unique
trajectory. We apply this method to generate a task-oriented dialogue dataset
comprising 3,886 dialogue flows across 15 different domains. We validate the
effectiveness of this dataset using the next action prediction task, where
models fine-tuned on our dataset outperform strong baselines, including GPT-4.
Upon acceptance of this paper, we plan to release the code and data publicly.
| new_dataset | 0.954816 |
2410.15744 | Yankai Jiang | Yankai Jiang, Wenhui Lei, Xiaofan Zhang, Shaoting Zhang | Unleashing the Potential of Vision-Language Pre-Training for 3D
Zero-Shot Lesion Segmentation via Mask-Attribute Alignment | Accepted as ICLR 2025 conference paper | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in medical vision-language pre-training models have
driven significant progress in zero-shot disease recognition. However,
transferring image-level knowledge to pixel-level tasks, such as lesion
segmentation in 3D CT scans, remains a critical challenge. Due to the
complexity and variability of pathological visual characteristics, existing
methods struggle to align fine-grained lesion features not encountered during
training with disease-related textual representations. In this paper, we
present Malenia, a novel multi-scale lesion-level mask-attribute alignment
framework, specifically designed for 3D zero-shot lesion segmentation. Malenia
improves the compatibility between mask representations and their associated
elemental attributes, explicitly linking the visual features of unseen lesions
with the extensible knowledge learned from previously seen ones. Furthermore,
we design a Cross-Modal Knowledge Injection module to enhance both visual and
textual features with mutually beneficial information, effectively guiding the
generation of segmentation results. Comprehensive experiments across three
datasets and 12 lesion categories validate the superior performance of Malenia.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 08:01:58 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 16:58:17 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jiang",
"Yankai",
""
],
[
"Lei",
"Wenhui",
""
],
[
"Zhang",
"Xiaofan",
""
],
[
"Zhang",
"Shaoting",
""
]
]
| TITLE: Unleashing the Potential of Vision-Language Pre-Training for 3D
Zero-Shot Lesion Segmentation via Mask-Attribute Alignment
ABSTRACT: Recent advancements in medical vision-language pre-training models have
driven significant progress in zero-shot disease recognition. However,
transferring image-level knowledge to pixel-level tasks, such as lesion
segmentation in 3D CT scans, remains a critical challenge. Due to the
complexity and variability of pathological visual characteristics, existing
methods struggle to align fine-grained lesion features not encountered during
training with disease-related textual representations. In this paper, we
present Malenia, a novel multi-scale lesion-level mask-attribute alignment
framework, specifically designed for 3D zero-shot lesion segmentation. Malenia
improves the compatibility between mask representations and their associated
elemental attributes, explicitly linking the visual features of unseen lesions
with the extensible knowledge learned from previously seen ones. Furthermore,
we design a Cross-Modal Knowledge Injection module to enhance both visual and
textual features with mutually beneficial information, effectively guiding the
generation of segmentation results. Comprehensive experiments across three
datasets and 12 lesion categories validate the superior performance of Malenia.
| no_new_dataset | 0.941007 |
2410.16251 | Baixiang Huang | Baixiang Huang, Canyu Chen, Xiongxiao Xu, Ali Payani, Kai Shu | Can Knowledge Editing Really Correct Hallucinations? | ICLR 2025. Main paper: 10 pages; total: 34 pages (including
appendix). The first two authors contributed equally to this work. Code,
data, results, and additional resources are available on the project website:
https://llm-editing.github.io | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) suffer from hallucinations, referring to the
non-factual information in generated content, despite their superior capacities
across tasks. Meanwhile, knowledge editing has been developed as a new popular
paradigm to correct erroneous factual knowledge encoded in LLMs with the
advantage of avoiding retraining from scratch. However, a common issue of
existing evaluation datasets for knowledge editing is that they do not ensure
that LLMs actually generate hallucinated answers to the evaluation questions
before editing. When LLMs are evaluated on such datasets after being edited by
different techniques, it is hard to directly adopt the performance to assess
the effectiveness of different knowledge editing methods in correcting
hallucinations. Thus, the fundamental question remains insufficiently
validated: Can knowledge editing really correct hallucinations in LLMs? We
proposed HalluEditBench to holistically benchmark knowledge editing methods in
correcting real-world hallucinations. First, we rigorously construct a massive
hallucination dataset with 9 domains, 26 topics and more than 6,000
hallucinations. Then, we assess the performance of knowledge editing methods in
a holistic way on five dimensions including Efficacy, Generalization,
Portability, Locality, and Robustness. Through HalluEditBench, we have provided
new insights into the potentials and limitations of different knowledge editing
methods in correcting hallucinations, which could inspire future improvements
and facilitate progress in the field of knowledge editing.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 17:55:54 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Oct 2024 18:00:01 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 15:37:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Huang",
"Baixiang",
""
],
[
"Chen",
"Canyu",
""
],
[
"Xu",
"Xiongxiao",
""
],
[
"Payani",
"Ali",
""
],
[
"Shu",
"Kai",
""
]
]
| TITLE: Can Knowledge Editing Really Correct Hallucinations?
ABSTRACT: Large Language Models (LLMs) suffer from hallucinations, referring to the
non-factual information in generated content, despite their superior capacities
across tasks. Meanwhile, knowledge editing has been developed as a new popular
paradigm to correct erroneous factual knowledge encoded in LLMs with the
advantage of avoiding retraining from scratch. However, a common issue of
existing evaluation datasets for knowledge editing is that they do not ensure
that LLMs actually generate hallucinated answers to the evaluation questions
before editing. When LLMs are evaluated on such datasets after being edited by
different techniques, it is hard to directly adopt the performance to assess
the effectiveness of different knowledge editing methods in correcting
hallucinations. Thus, the fundamental question remains insufficiently
validated: Can knowledge editing really correct hallucinations in LLMs? We
proposed HalluEditBench to holistically benchmark knowledge editing methods in
correcting real-world hallucinations. First, we rigorously construct a massive
hallucination dataset with 9 domains, 26 topics and more than 6,000
hallucinations. Then, we assess the performance of knowledge editing methods in
a holistic way on five dimensions including Efficacy, Generalization,
Portability, Locality, and Robustness. Through HalluEditBench, we have provided
new insights into the potentials and limitations of different knowledge editing
methods in correcting hallucinations, which could inspire future improvements
and facilitate progress in the field of knowledge editing.
| new_dataset | 0.956715 |
2410.18084 | Lingdong Kong | Hengwei Bian and Lingdong Kong and Haozhe Xie and Liang Pan and Yu
Qiao and Ziwei Liu | DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes | ICLR 2025 Spotlight; 35 pages, 18 figures, 15 tables; Project Page at
https://dynamic-city.github.io/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Urban scene generation has been developing rapidly recently. However,
existing methods primarily focus on generating static and single-frame scenes,
overlooking the inherently dynamic nature of real-world driving environments.
In this work, we introduce DynamicCity, a novel 4D occupancy generation
framework capable of generating large-scale, high-quality dynamic 4D scenes
with semantics. DynamicCity mainly consists of two key models. 1) A VAE model
for learning HexPlane as the compact 4D representation. Instead of using naive
averaging operations, DynamicCity employs a novel Projection Module to
effectively compress 4D features into six 2D feature maps for HexPlane
construction, which significantly enhances HexPlane fitting quality (up to
12.56 mIoU gain). Furthermore, we utilize an Expansion & Squeeze Strategy to
reconstruct 3D feature volumes in parallel, which improves both network
training efficiency and reconstruction accuracy compared to naively querying each 3D
point (up to 7.05 mIoU gain, 2.06x training speedup, and 70.84% memory
reduction). 2) A DiT-based diffusion model for HexPlane generation. To make
HexPlane feasible for DiT generation, a Padded Rollout Operation is proposed to
reorganize all six feature planes of the HexPlane as a squared 2D feature map.
In particular, various conditions could be introduced in the diffusion or
sampling process, supporting versatile 4D generation applications, such as
trajectory- and command-driven generation, inpainting, and layout-conditioned
generation. Extensive experiments on the CarlaSC and Waymo datasets demonstrate
that DynamicCity significantly outperforms existing state-of-the-art 4D
occupancy generation methods across multiple metrics. The code and models have
been released to facilitate future research.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 17:59:58 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 04:31:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bian",
"Hengwei",
""
],
[
"Kong",
"Lingdong",
""
],
[
"Xie",
"Haozhe",
""
],
[
"Pan",
"Liang",
""
],
[
"Qiao",
"Yu",
""
],
[
"Liu",
"Ziwei",
""
]
]
| TITLE: DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes
ABSTRACT: Urban scene generation has been developing rapidly recently. However,
existing methods primarily focus on generating static and single-frame scenes,
overlooking the inherently dynamic nature of real-world driving environments.
In this work, we introduce DynamicCity, a novel 4D occupancy generation
framework capable of generating large-scale, high-quality dynamic 4D scenes
with semantics. DynamicCity mainly consists of two key models. 1) A VAE model
for learning HexPlane as the compact 4D representation. Instead of using naive
averaging operations, DynamicCity employs a novel Projection Module to
effectively compress 4D features into six 2D feature maps for HexPlane
construction, which significantly enhances HexPlane fitting quality (up to
12.56 mIoU gain). Furthermore, we utilize an Expansion & Squeeze Strategy to
reconstruct 3D feature volumes in parallel, which improves both network
training efficiency and reconstruction accuracy compared to naively querying each 3D
point (up to 7.05 mIoU gain, 2.06x training speedup, and 70.84% memory
reduction). 2) A DiT-based diffusion model for HexPlane generation. To make
HexPlane feasible for DiT generation, a Padded Rollout Operation is proposed to
reorganize all six feature planes of the HexPlane as a squared 2D feature map.
In particular, various conditions could be introduced in the diffusion or
sampling process, supporting versatile 4D generation applications, such as
trajectory- and command-driven generation, inpainting, and layout-conditioned
generation. Extensive experiments on the CarlaSC and Waymo datasets demonstrate
that DynamicCity significantly outperforms existing state-of-the-art 4D
occupancy generation methods across multiple metrics. The code and models have
been released to facilitate future research.
| no_new_dataset | 0.949995 |
2410.19631 | Julien Roy | Ihor Neporozhnii, Julien Roy, Emmanuel Bengio, Jason Hartford | Efficient Biological Data Acquisition through Inference Set Design | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In drug discovery, highly automated high-throughput laboratories are used to
screen a large number of compounds in search of effective drugs. These
experiments are expensive, so one might hope to reduce their cost by only
experimenting on a subset of the compounds, and predicting the outcomes of the
remaining experiments. In this work, we model this scenario as a sequential
subset selection problem: we aim to select the smallest set of candidates in
order to achieve some desired level of accuracy for the system as a whole. Our
key observation is that, if there is heterogeneity in the difficulty of the
prediction problem across the input space, selectively obtaining the labels for
the hardest examples in the acquisition pool will leave only the relatively
easy examples to remain in the inference set, leading to better overall system
performance. We call this mechanism inference set design, and propose the use
of a confidence-based active learning solution to prune out these challenging
examples. Our algorithm includes an explicit stopping criterion that interrupts
the acquisition loop when it is sufficiently confident that the system has
reached the target performance. Our empirical studies on image and molecular
datasets, as well as a real-world large-scale biological assay, show that
active learning for inference set design leads to significant reduction in
experimental cost while retaining high system performance.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 15:34:03 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Nov 2024 17:51:33 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 23:46:21 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Neporozhnii",
"Ihor",
""
],
[
"Roy",
"Julien",
""
],
[
"Bengio",
"Emmanuel",
""
],
[
"Hartford",
"Jason",
""
]
]
| TITLE: Efficient Biological Data Acquisition through Inference Set Design
ABSTRACT: In drug discovery, highly automated high-throughput laboratories are used to
screen a large number of compounds in search of effective drugs. These
experiments are expensive, so one might hope to reduce their cost by only
experimenting on a subset of the compounds, and predicting the outcomes of the
remaining experiments. In this work, we model this scenario as a sequential
subset selection problem: we aim to select the smallest set of candidates in
order to achieve some desired level of accuracy for the system as a whole. Our
key observation is that, if there is heterogeneity in the difficulty of the
prediction problem across the input space, selectively obtaining the labels for
the hardest examples in the acquisition pool will leave only the relatively
easy examples to remain in the inference set, leading to better overall system
performance. We call this mechanism inference set design, and propose the use
of a confidence-based active learning solution to prune out these challenging
examples. Our algorithm includes an explicit stopping criterion that interrupts
the acquisition loop when it is sufficiently confident that the system has
reached the target performance. Our empirical studies on image and molecular
datasets, as well as a real-world large-scale biological assay, show that
active learning for inference set design leads to significant reduction in
experimental cost while retaining high system performance.
| no_new_dataset | 0.947962 |
2410.20026 | Hao Ding | Hao Ding, Yuqian Zhang, Wenzheng Cheng, Xinyu Wang, Xu Lian, Chenhao
Yu, Hongchao Shu, Ji Woong Kim, Axel Krieger, Mathias Unberath | Towards Robust Algorithms for Surgical Phase Recognition via Digital
Twin Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surgical phase recognition (SPR) is an integral component of surgical data
science, enabling high-level surgical analysis. End-to-end trained neural
networks that predict surgical phase directly from videos have shown excellent
performance on benchmarks. However, these models struggle with robustness due
to non-causal associations in the training set. Our goal is to improve model
robustness to variations in the surgical videos by leveraging the digital twin
(DT) paradigm -- an intermediary layer to separate high-level analysis (SPR)
from low-level processing. As a proof of concept, we present a DT
representation-based framework for SPR from videos. The framework employs
vision foundation models with reliable low-level scene understanding to craft
DT representation. We embed the DT representation in place of raw video inputs
in the state-of-the-art SPR model. The framework is trained on the Cholec80
dataset and evaluated on out-of-distribution (OOD) and corrupted test samples.
Contrary to the vulnerability of the baseline model, our framework demonstrates
strong robustness on both OOD and corrupted samples, with a video-level
accuracy of 80.3 on a highly corrupted Cholec80 test set, 67.9 on the
challenging CRCD dataset, and 99.8 on an internal robotic surgery dataset,
outperforming the baseline by 3.9, 16.8, and 90.9 respectively. We also find
that using DT representation as an augmentation to the raw input can
significantly improve model robustness. Our findings lend support to the thesis
that DT representations are effective in enhancing model robustness. Future
work will seek to improve the feature informativeness and incorporate
interpretability for a more comprehensive framework.
| [
{
"version": "v1",
"created": "Sat, 26 Oct 2024 00:49:06 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 02:45:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ding",
"Hao",
""
],
[
"Zhang",
"Yuqian",
""
],
[
"Cheng",
"Wenzheng",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Lian",
"Xu",
""
],
[
"Yu",
"Chenhao",
""
],
[
"Shu",
"Hongchao",
""
],
[
"Kim",
"Ji Woong",
""
],
[
"Krieger",
"Axel",
""
],
[
"Unberath",
"Mathias",
""
]
]
| TITLE: Towards Robust Algorithms for Surgical Phase Recognition via Digital
Twin Representation
ABSTRACT: Surgical phase recognition (SPR) is an integral component of surgical data
science, enabling high-level surgical analysis. End-to-end trained neural
networks that predict surgical phase directly from videos have shown excellent
performance on benchmarks. However, these models struggle with robustness due
to non-causal associations in the training set. Our goal is to improve model
robustness to variations in the surgical videos by leveraging the digital twin
(DT) paradigm -- an intermediary layer to separate high-level analysis (SPR)
from low-level processing. As a proof of concept, we present a DT
representation-based framework for SPR from videos. The framework employs
vision foundation models with reliable low-level scene understanding to craft
DT representation. We embed the DT representation in place of raw video inputs
in the state-of-the-art SPR model. The framework is trained on the Cholec80
dataset and evaluated on out-of-distribution (OOD) and corrupted test samples.
Contrary to the vulnerability of the baseline model, our framework demonstrates
strong robustness on both OOD and corrupted samples, with a video-level
accuracy of 80.3 on a highly corrupted Cholec80 test set, 67.9 on the
challenging CRCD dataset, and 99.8 on an internal robotic surgery dataset,
outperforming the baseline by 3.9, 16.8, and 90.9 respectively. We also find
that using DT representation as an augmentation to the raw input can
significantly improve model robustness. Our findings lend support to the thesis
that DT representations are effective in enhancing model robustness. Future
work will seek to improve the feature informativeness and incorporate
interpretability for a more comprehensive framework.
| no_new_dataset | 0.951278 |
2410.21629 | Pratheba Selvaraju | Pratheba Selvaraju, Victoria Fernandez Abrevaya, Timo Bolkart, Rick
Akkerman, Tianyu Ding, Faezeh Amjadi, Ilya Zharkov | OFER: Occluded Face Expression Reconstruction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Reconstructing 3D face models from a single image is an inherently ill-posed
problem, which becomes even more challenging in the presence of occlusions. In
addition to fewer available observations, occlusions introduce an extra source
of ambiguity where multiple reconstructions can be equally valid. Despite the
ubiquity of the problem, very few methods address its multi-hypothesis nature.
In this paper we introduce OFER, a novel approach for single-image 3D face
reconstruction that can generate plausible, diverse, and expressive 3D faces,
even under strong occlusions. Specifically, we train two diffusion models to
generate the shape and expression coefficients of a face parametric model,
conditioned on the input image. This approach captures the multi-modal nature
of the problem, generating a distribution of solutions as output. However, to
maintain consistency across diverse expressions, the challenge is to select the
best matching shape. To achieve this, we propose a novel ranking mechanism that
sorts the outputs of the shape diffusion network based on predicted shape
accuracy scores. We evaluate our method using standard benchmarks and introduce
CO-545, a new protocol and dataset designed to assess the accuracy of
expressive faces under occlusion. Our results show improved performance over
occlusion-based methods, while also enabling the generation of diverse
expressions for a given image.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 00:21:26 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 19:16:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Selvaraju",
"Pratheba",
""
],
[
"Abrevaya",
"Victoria Fernandez",
""
],
[
"Bolkart",
"Timo",
""
],
[
"Akkerman",
"Rick",
""
],
[
"Ding",
"Tianyu",
""
],
[
"Amjadi",
"Faezeh",
""
],
[
"Zharkov",
"Ilya",
""
]
]
| TITLE: OFER: Occluded Face Expression Reconstruction
ABSTRACT: Reconstructing 3D face models from a single image is an inherently ill-posed
problem, which becomes even more challenging in the presence of occlusions. In
addition to fewer available observations, occlusions introduce an extra source
of ambiguity where multiple reconstructions can be equally valid. Despite the
ubiquity of the problem, very few methods address its multi-hypothesis nature.
In this paper we introduce OFER, a novel approach for single-image 3D face
reconstruction that can generate plausible, diverse, and expressive 3D faces,
even under strong occlusions. Specifically, we train two diffusion models to
generate the shape and expression coefficients of a face parametric model,
conditioned on the input image. This approach captures the multi-modal nature
of the problem, generating a distribution of solutions as output. However, to
maintain consistency across diverse expressions, the challenge is to select the
best matching shape. To achieve this, we propose a novel ranking mechanism that
sorts the outputs of the shape diffusion network based on predicted shape
accuracy scores. We evaluate our method using standard benchmarks and introduce
CO-545, a new protocol and dataset designed to assess the accuracy of
expressive faces under occlusion. Our results show improved performance over
occlusion-based methods, while also enabling the generation of diverse
expressions for a given image.
| new_dataset | 0.963609 |
2410.22729 | Joseph Janssen | Vincent Guan, Joseph Janssen, Hossein Rahmani, Andrew Warren, Stephen
Zhang, Elina Robeva, Geoffrey Schiebinger | Identifying Drift, Diffusion, and Causal Structure from Temporal
Snapshots | null | null | null | null | stat.ML cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | Stochastic differential equations (SDEs) are a fundamental tool for modelling
dynamic processes, including gene regulatory networks (GRNs), contaminant
transport, financial markets, and image generation. However, learning the
underlying SDE from data is a challenging task, especially if individual
trajectories are not observable. Motivated by burgeoning research in
single-cell datasets, we present the first comprehensive approach for jointly
identifying the drift and diffusion of an SDE from its temporal marginals.
Assuming linear drift and additive diffusion, we prove that these parameters
are identifiable from marginals if and only if the initial distribution lacks
any generalized rotational symmetries. We further prove that the causal graph
of any SDE with additive diffusion can be recovered from the SDE parameters. To
complement this theory, we adapt entropy-regularized optimal transport to
handle anisotropic diffusion, and introduce APPEX (Alternating Projection
Parameter Estimation from $X_0$), an iterative algorithm designed to estimate
the drift, diffusion, and causal graph of an additive noise SDE, solely from
temporal marginals. We show that APPEX iteratively decreases Kullback-Leibler
divergence to the true solution, and demonstrate its effectiveness on simulated
data from linear additive noise SDEs.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 06:28:21 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 00:23:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guan",
"Vincent",
""
],
[
"Janssen",
"Joseph",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Warren",
"Andrew",
""
],
[
"Zhang",
"Stephen",
""
],
[
"Robeva",
"Elina",
""
],
[
"Schiebinger",
"Geoffrey",
""
]
]
| TITLE: Identifying Drift, Diffusion, and Causal Structure from Temporal
Snapshots
ABSTRACT: Stochastic differential equations (SDEs) are a fundamental tool for modelling
dynamic processes, including gene regulatory networks (GRNs), contaminant
transport, financial markets, and image generation. However, learning the
underlying SDE from data is a challenging task, especially if individual
trajectories are not observable. Motivated by burgeoning research in
single-cell datasets, we present the first comprehensive approach for jointly
identifying the drift and diffusion of an SDE from its temporal marginals.
Assuming linear drift and additive diffusion, we prove that these parameters
are identifiable from marginals if and only if the initial distribution lacks
any generalized rotational symmetries. We further prove that the causal graph
of any SDE with additive diffusion can be recovered from the SDE parameters. To
complement this theory, we adapt entropy-regularized optimal transport to
handle anisotropic diffusion, and introduce APPEX (Alternating Projection
Parameter Estimation from $X_0$), an iterative algorithm designed to estimate
the drift, diffusion, and causal graph of an additive noise SDE, solely from
temporal marginals. We show that APPEX iteratively decreases Kullback-Leibler
divergence to the true solution, and demonstrate its effectiveness on simulated
data from linear additive noise SDEs.
| no_new_dataset | 0.943556 |
2410.23208 | Michael Beukman | Michael Matthews, Michael Beukman, Chris Lu, Jakob Foerster | Kinetix: Investigating the Training of General Agents through Open-Ended
Physics-Based Control Tasks | ICLR 2025 Oral. The first two authors contributed equally. Project
page located at: https://kinetix-env.github.io/ | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While large models trained with self-supervised learning on offline datasets
have shown remarkable capabilities in text and image domains, achieving the
same generalisation for agents that act in sequential decision problems remains
an open challenge. In this work, we take a step towards this goal by
procedurally generating tens of millions of 2D physics-based tasks and using
these to train a general reinforcement learning (RL) agent for physical
control. To this end, we introduce Kinetix: an open-ended space of
physics-based RL environments that can represent tasks ranging from robotic
locomotion and grasping to video games and classic RL environments, all within
a unified framework. Kinetix makes use of our novel hardware-accelerated
physics engine Jax2D that allows us to cheaply simulate billions of environment
steps during training. Our trained agent exhibits strong physical reasoning
capabilities in 2D space, being able to zero-shot solve unseen human-designed
environments. Furthermore, fine-tuning this general agent on tasks of interest
shows significantly stronger performance than training an RL agent *tabula
rasa*. This includes solving some environments that standard RL training
completely fails at. We believe this demonstrates the feasibility of large
scale, mixed-quality pre-training for online RL and we hope that Kinetix will
serve as a useful framework to investigate this further.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 16:59:41 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:29:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Matthews",
"Michael",
""
],
[
"Beukman",
"Michael",
""
],
[
"Lu",
"Chris",
""
],
[
"Foerster",
"Jakob",
""
]
]
| TITLE: Kinetix: Investigating the Training of General Agents through Open-Ended
Physics-Based Control Tasks
ABSTRACT: While large models trained with self-supervised learning on offline datasets
have shown remarkable capabilities in text and image domains, achieving the
same generalisation for agents that act in sequential decision problems remains
an open challenge. In this work, we take a step towards this goal by
procedurally generating tens of millions of 2D physics-based tasks and using
these to train a general reinforcement learning (RL) agent for physical
control. To this end, we introduce Kinetix: an open-ended space of
physics-based RL environments that can represent tasks ranging from robotic
locomotion and grasping to video games and classic RL environments, all within
a unified framework. Kinetix makes use of our novel hardware-accelerated
physics engine Jax2D that allows us to cheaply simulate billions of environment
steps during training. Our trained agent exhibits strong physical reasoning
capabilities in 2D space, being able to zero-shot solve unseen human-designed
environments. Furthermore, fine-tuning this general agent on tasks of interest
shows significantly stronger performance than training an RL agent *tabula
rasa*. This includes solving some environments that standard RL training
completely fails at. We believe this demonstrates the feasibility of large
scale, mixed-quality pre-training for online RL and we hope that Kinetix will
serve as a useful framework to investigate this further.
| no_new_dataset | 0.946151 |
2410.23751 | Yedu Krishna P | S Balasubramanian, M Sai Subramaniam, Sai Sriram Talasu, Yedu Krishna
P, Manepalli Pranav Phanindra Sai, Ravi Mukkamala and Darshan Gera | EXACFS -- A CIL Method to mitigate Catastrophic Forgetting | null | null | 10.1145/3702250.3702267 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) excel at learning from static datasets but
struggle with continual learning, where data arrives sequentially. Catastrophic
forgetting, the phenomenon of forgetting previously learned knowledge, is a
primary challenge. This paper introduces EXponentially Averaged Class-wise
Feature Significance (EXACFS) to mitigate this issue in the class incremental
learning (CIL) setting. By estimating the significance of model features for
each learned class using loss gradients, gradually aging the significance
through the incremental tasks and preserving the significant features through a
distillation loss, EXACFS effectively balances remembering old knowledge
(stability) and learning new knowledge (plasticity). Extensive experiments on
CIFAR-100 and ImageNet-100 demonstrate EXACFS's superior performance in
preserving stability while acquiring plasticity.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 09:11:56 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:30:42 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Balasubramanian",
"S",
""
],
[
"Subramaniam",
"M Sai",
""
],
[
"Talasu",
"Sai Sriram",
""
],
[
"P",
"Yedu Krishna",
""
],
[
"Sai",
"Manepalli Pranav Phanindra",
""
],
[
"Mukkamala",
"Ravi",
""
],
[
"Gera",
"Darshan",
""
]
]
| TITLE: EXACFS -- A CIL Method to mitigate Catastrophic Forgetting
ABSTRACT: Deep neural networks (DNNs) excel at learning from static datasets but
struggle with continual learning, where data arrives sequentially. Catastrophic
forgetting, the phenomenon of forgetting previously learned knowledge, is a
primary challenge. This paper introduces EXponentially Averaged Class-wise
Feature Significance (EXACFS) to mitigate this issue in the class incremental
learning (CIL) setting. By estimating the significance of model features for
each learned class using loss gradients, gradually aging the significance
through the incremental tasks and preserving the significant features through a
distillation loss, EXACFS effectively balances remembering old knowledge
(stability) and learning new knowledge (plasticity). Extensive experiments on
CIFAR-100 and ImageNet-100 demonstrate EXACFS's superior performance in
preserving stability while acquiring plasticity.
| no_new_dataset | 0.94868 |