id: stringlengths 9-16 | submitter: stringlengths 3-64 ⌀ | authors: stringlengths 5-6.63k | title: stringlengths 7-245 | comments: stringlengths 1-482 ⌀ | journal-ref: stringlengths 4-382 ⌀ | doi: stringlengths 9-151 ⌀ | report-no: stringclasses 984 values | categories: stringlengths 5-108 | license: stringclasses 9 values | abstract: stringlengths 83-3.41k | versions: listlengths 1-20 | update_date: timestamp[s] 2007-05-23 to 2025-04-11 | authors_parsed: listlengths 1-427 | prompt: stringlengths 166-3.49k | label: stringclasses 2 values | prob: float64 0.5-0.98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.03462 | Ahmed Njifenjou | Ahmed Njifenjou, Virgile Sucal, Bassam Jabaian, Fabrice Lef\`evre | Open-Source Large Language Models as Multilingual Crowdworkers:
Synthesizing Open-Domain Dialogues in Several Languages With No Examples in
Targets and No Machine Translation | null | null | null | null | cs.CL cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | The prevailing paradigm in the domain of Open-Domain Dialogue agents
predominantly focuses on the English language, encompassing both models and
datasets. Furthermore, the financial and temporal investments required for
crowdsourcing such datasets for finetuning are substantial, particularly when
multiple languages are involved. Fortunately, advancements in Large Language
Models (LLMs) have unveiled a plethora of possibilities across diverse tasks.
Specifically, instruction-tuning has enabled LLMs to execute tasks based on
natural language instructions, occasionally surpassing the performance of human
crowdworkers. Additionally, these models possess the capability to function in
various languages within a single thread. Consequently, to generate new samples
in different languages, we propose leveraging these capabilities to replicate
the data collection process. We introduce a pipeline for generating Open-Domain
Dialogue data in multiple Target Languages using LLMs, with demonstrations
provided in a unique Source Language. By eschewing explicit Machine Translation
in this approach, we enhance the adherence to language-specific nuances. We
apply this methodology to the PersonaChat dataset. To enhance the openness of
generated dialogues and mimic real-life scenarios, we added the notion of speech
events corresponding to the type of conversation the speakers are involved in
and also that of common ground, which represents the premises of a conversation.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 12:52:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Njifenjou",
"Ahmed",
""
],
[
"Sucal",
"Virgile",
""
],
[
"Jabaian",
"Bassam",
""
],
[
"Lefèvre",
"Fabrice",
""
]
]
| TITLE: Open-Source Large Language Models as Multilingual Crowdworkers:
Synthesizing Open-Domain Dialogues in Several Languages With No Examples in
Targets and No Machine Translation
ABSTRACT: The prevailing paradigm in the domain of Open-Domain Dialogue agents
predominantly focuses on the English language, encompassing both models and
datasets. Furthermore, the financial and temporal investments required for
crowdsourcing such datasets for finetuning are substantial, particularly when
multiple languages are involved. Fortunately, advancements in Large Language
Models (LLMs) have unveiled a plethora of possibilities across diverse tasks.
Specifically, instruction-tuning has enabled LLMs to execute tasks based on
natural language instructions, occasionally surpassing the performance of human
crowdworkers. Additionally, these models possess the capability to function in
various languages within a single thread. Consequently, to generate new samples
in different languages, we propose leveraging these capabilities to replicate
the data collection process. We introduce a pipeline for generating Open-Domain
Dialogue data in multiple Target Languages using LLMs, with demonstrations
provided in a unique Source Language. By eschewing explicit Machine Translation
in this approach, we enhance the adherence to language-specific nuances. We
apply this methodology to the PersonaChat dataset. To enhance the openness of
generated dialogues and mimic real-life scenarios, we added the notion of speech
events corresponding to the type of conversation the speakers are involved in
and also that of common ground, which represents the premises of a conversation.
| no_new_dataset | 0.887497 |
2503.03476 | Xiaoyi Wei | Jiaxin Tu, Xiaoyi Wei, Yueqi Zhang, Taixian Hou, Xiaofei Gao, Zhiyan
Dong, Peng Zhai, and Lihua Zhang | Continuous Control of Diverse Skills in Quadruped Robots Without
Complete Expert Datasets | Accepted by ICRA 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning diverse skills for quadruped robots presents significant challenges,
such as mastering complex transitions between different skills and handling
tasks of varying difficulty. Existing imitation learning methods, while
successful, rely on expensive datasets to reproduce expert behaviors. Inspired
by introspective learning, we propose Progressive Adversarial Self-Imitation
Skill Transition (PASIST), a novel method that eliminates the need for complete
expert datasets. PASIST autonomously explores and selects high-quality
trajectories based on predefined target poses instead of demonstrations,
leveraging the Generative Adversarial Self-Imitation Learning (GASIL)
framework. To further enhance learning, we develop a skill selection module to
mitigate mode collapse by balancing the weights of skills with varying levels
of difficulty. Through these methods, PASIST is able to reproduce skills
corresponding to the target pose while achieving smooth and natural transitions
between them. Evaluations on both simulation platforms and the Solo 8 robot
confirm the effectiveness of PASIST, offering an efficient alternative to
expert-driven learning.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:12:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tu",
"Jiaxin",
""
],
[
"Wei",
"Xiaoyi",
""
],
[
"Zhang",
"Yueqi",
""
],
[
"Hou",
"Taixian",
""
],
[
"Gao",
"Xiaofei",
""
],
[
"Dong",
"Zhiyan",
""
],
[
"Zhai",
"Peng",
""
],
[
"Zhang",
"Lihua",
""
]
]
| TITLE: Continuous Control of Diverse Skills in Quadruped Robots Without
Complete Expert Datasets
ABSTRACT: Learning diverse skills for quadruped robots presents significant challenges,
such as mastering complex transitions between different skills and handling
tasks of varying difficulty. Existing imitation learning methods, while
successful, rely on expensive datasets to reproduce expert behaviors. Inspired
by introspective learning, we propose Progressive Adversarial Self-Imitation
Skill Transition (PASIST), a novel method that eliminates the need for complete
expert datasets. PASIST autonomously explores and selects high-quality
trajectories based on predefined target poses instead of demonstrations,
leveraging the Generative Adversarial Self-Imitation Learning (GASIL)
framework. To further enhance learning, we develop a skill selection module to
mitigate mode collapse by balancing the weights of skills with varying levels
of difficulty. Through these methods, PASIST is able to reproduce skills
corresponding to the target pose while achieving smooth and natural transitions
between them. Evaluations on both simulation platforms and the Solo 8 robot
confirm the effectiveness of PASIST, offering an efficient alternative to
expert-driven learning.
| no_new_dataset | 0.950088 |
2503.03485 | Soumya Ghosh | Alexis Chevalier, Soumya Ghosh, Urvi Awasthi, James Watkins, Julia
Bieniewska, Nichita Mitrea, Olga Kotova, Kirill Shkura, Andrew Noble, Michael
Steinbaugh, Julien Delile, Christoph Meier, Leonid Zhukov, Iya Khalil,
Srayanta Mukherjee, Judith Mueller | TEDDY: A Family Of Foundation Models For Understanding Single Cell
Biology | null | null | null | null | cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the biological mechanism of disease is critical for medicine,
and in particular drug discovery. AI-powered analysis of genome-scale
biological data holds great potential in this regard. The increasing
availability of single-cell RNA sequencing data has enabled the development of
large foundation models for disease biology. However, existing foundation
models either do not improve or only modestly improve over task-specific models
in downstream applications. Here, we explored two avenues for improving the
state-of-the-art. First, we scaled the pre-training dataset to 116 million
cells, which is larger than those used by previous models. Second, we leveraged
the availability of large-scale biological annotations as a form of supervision
during pre-training. We trained the TEDDY family of models comprising six
transformer-based state-of-the-art single-cell foundation models with 70
million, 160 million, and 400 million parameters. We vetted our models on two
downstream evaluation tasks -- identifying the underlying disease state of
held-out donors not seen during training and distinguishing healthy cells from
diseased ones for disease conditions and donors not seen during training.
Scaling experiments showed that performance improved predictably with both data
volume and parameter count. Our models showed substantial improvement over
existing work on the first task and more muted improvements on the second.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:24:57 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chevalier",
"Alexis",
""
],
[
"Ghosh",
"Soumya",
""
],
[
"Awasthi",
"Urvi",
""
],
[
"Watkins",
"James",
""
],
[
"Bieniewska",
"Julia",
""
],
[
"Mitrea",
"Nichita",
""
],
[
"Kotova",
"Olga",
""
],
[
"Shkura",
"Kirill",
""
],
[
"Noble",
"Andrew",
""
],
[
"Steinbaugh",
"Michael",
""
],
[
"Delile",
"Julien",
""
],
[
"Meier",
"Christoph",
""
],
[
"Zhukov",
"Leonid",
""
],
[
"Khalil",
"Iya",
""
],
[
"Mukherjee",
"Srayanta",
""
],
[
"Mueller",
"Judith",
""
]
]
| TITLE: TEDDY: A Family Of Foundation Models For Understanding Single Cell
Biology
ABSTRACT: Understanding the biological mechanism of disease is critical for medicine,
and in particular drug discovery. AI-powered analysis of genome-scale
biological data holds great potential in this regard. The increasing
availability of single-cell RNA sequencing data has enabled the development of
large foundation models for disease biology. However, existing foundation
models either do not improve or only modestly improve over task-specific models
in downstream applications. Here, we explored two avenues for improving the
state-of-the-art. First, we scaled the pre-training dataset to 116 million
cells, which is larger than those used by previous models. Second, we leveraged
the availability of large-scale biological annotations as a form of supervision
during pre-training. We trained the TEDDY family of models comprising six
transformer-based state-of-the-art single-cell foundation models with 70
million, 160 million, and 400 million parameters. We vetted our models on two
downstream evaluation tasks -- identifying the underlying disease state of
held-out donors not seen during training and distinguishing healthy cells from
diseased ones for disease conditions and donors not seen during training.
Scaling experiments showed that performance improved predictably with both data
volume and parameter count. Our models showed substantial improvement over
existing work on the first task and more muted improvements on the second.
| no_new_dataset | 0.948298 |
2503.03486 | Maresa Schr\"oder | Maresa Schr\"oder, Valentyn Melnychuk, Stefan Feuerriegel | Differentially Private Learners for Heterogeneous Treatment Effects | Published at ICLR 2025 | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Patient data is widely used to estimate heterogeneous treatment effects and
thus understand the effectiveness and safety of drugs. Yet, patient data
includes highly sensitive information that must be kept private. In this work,
we aim to estimate the conditional average treatment effect (CATE) from
observational data under differential privacy. Specifically, we present
DP-CATE, a novel framework for CATE estimation that is Neyman-orthogonal and
further ensures differential privacy of the estimates. Our framework is highly
general: it applies to any two-stage CATE meta-learner with a Neyman-orthogonal
loss function, and any machine learning model can be used for nuisance
estimation. We further provide an extension of our DP-CATE, where we employ
RKHS regression to release the complete CATE function while ensuring
differential privacy. We demonstrate our DP-CATE across various experiments
using synthetic and real-world datasets. To the best of our knowledge, we are
the first to provide a framework for CATE estimation that is Neyman-orthogonal
and differentially private.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:24:58 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Schröder",
"Maresa",
""
],
[
"Melnychuk",
"Valentyn",
""
],
[
"Feuerriegel",
"Stefan",
""
]
]
| TITLE: Differentially Private Learners for Heterogeneous Treatment Effects
ABSTRACT: Patient data is widely used to estimate heterogeneous treatment effects and
thus understand the effectiveness and safety of drugs. Yet, patient data
includes highly sensitive information that must be kept private. In this work,
we aim to estimate the conditional average treatment effect (CATE) from
observational data under differential privacy. Specifically, we present
DP-CATE, a novel framework for CATE estimation that is Neyman-orthogonal and
further ensures differential privacy of the estimates. Our framework is highly
general: it applies to any two-stage CATE meta-learner with a Neyman-orthogonal
loss function, and any machine learning model can be used for nuisance
estimation. We further provide an extension of our DP-CATE, where we employ
RKHS regression to release the complete CATE function while ensuring
differential privacy. We demonstrate our DP-CATE across various experiments
using synthetic and real-world datasets. To the best of our knowledge, we are
the first to provide a framework for CATE estimation that is Neyman-orthogonal
and differentially private.
| no_new_dataset | 0.948775 |
2503.03499 | Wonjun Kang | Wonjun Kang, Kevin Galim, Yuchen Zeng, Minjae Lee, Hyung Il Koo, Nam
Ik Cho | State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for
State Space Models | Code is available at https://github.com/furiosa-ai/ssm-state-tuning | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State Space Models (SSMs) have emerged as efficient alternatives to
Transformers, mitigating their quadratic computational cost. However, the
application of Parameter-Efficient Fine-Tuning (PEFT) methods to SSMs remains
largely unexplored. In particular, prompt-based methods like Prompt Tuning and
Prefix-Tuning, which are widely used in Transformers, do not perform well on
SSMs. To address this, we propose state-based methods as a superior alternative
to prompt-based methods. This new family of methods naturally stems from the
architectural characteristics of SSMs. State-based methods adjust state-related
features directly instead of depending on external prompts. Furthermore, we
introduce a novel state-based PEFT method: State-offset Tuning. At every
timestep, our method directly affects the state at the current step, leading to
more effective adaptation. Through extensive experiments across diverse
datasets, we demonstrate the effectiveness of our method. Code is available at
https://github.com/furiosa-ai/ssm-state-tuning.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:44:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Kang",
"Wonjun",
""
],
[
"Galim",
"Kevin",
""
],
[
"Zeng",
"Yuchen",
""
],
[
"Lee",
"Minjae",
""
],
[
"Koo",
"Hyung Il",
""
],
[
"Cho",
"Nam Ik",
""
]
]
| TITLE: State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for
State Space Models
ABSTRACT: State Space Models (SSMs) have emerged as efficient alternatives to
Transformers, mitigating their quadratic computational cost. However, the
application of Parameter-Efficient Fine-Tuning (PEFT) methods to SSMs remains
largely unexplored. In particular, prompt-based methods like Prompt Tuning and
Prefix-Tuning, which are widely used in Transformers, do not perform well on
SSMs. To address this, we propose state-based methods as a superior alternative
to prompt-based methods. This new family of methods naturally stems from the
architectural characteristics of SSMs. State-based methods adjust state-related
features directly instead of depending on external prompts. Furthermore, we
introduce a novel state-based PEFT method: State-offset Tuning. At every
timestep, our method directly affects the state at the current step, leading to
more effective adaptation. Through extensive experiments across diverse
datasets, we demonstrate the effectiveness of our method. Code is available at
https://github.com/furiosa-ai/ssm-state-tuning.
| no_new_dataset | 0.94743 |
2503.03500 | Karuna K Chandra | Arvindh Arun, Karuna K Chandra, Akshit Sinha, Balakumar Velayutham,
Jashn Arora, Manish Jain, Ponnurangam Kumaraguru | Topo Goes Political: TDA-Based Controversy Detection in Imbalanced
Reddit Political Data | null | null | 10.1145/3701716.3717535 | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | The detection of controversial content in political discussions on the
Internet is a critical challenge in maintaining healthy digital discourse.
Unlike much of the existing literature that relies on synthetically balanced
data, our work preserves the natural distribution of controversial and
non-controversial posts. This real-world imbalance highlights a core challenge
that needs to be addressed for practical deployment. Our study re-evaluates
well-established methods for detecting controversial content. We curate our own
dataset focusing on the Indian political context that preserves the natural
distribution of controversial content, with only 12.9% of the posts in our
dataset being controversial. This disparity reflects the true imbalance in
real-world political discussions and highlights a critical limitation in the
existing evaluation methods. Benchmarking on datasets that model data imbalance
is vital for ensuring real-world applicability. Thus, in this work, (i) we
release our dataset, with an emphasis on class imbalance, that focuses on the
Indian political context, (ii) we evaluate existing methods from this domain on
this dataset and demonstrate their limitations in the imbalanced setting, (iii)
we introduce an intuitive metric to measure a model's robustness to class
imbalance, (iv) we also incorporate ideas from the domain of Topological Data
Analysis, specifically Persistent Homology, to curate features that provide
richer representations of the data. Furthermore, we benchmark models trained
with topological features against established baselines.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:46:39 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Arun",
"Arvindh",
""
],
[
"Chandra",
"Karuna K",
""
],
[
"Sinha",
"Akshit",
""
],
[
"Velayutham",
"Balakumar",
""
],
[
"Arora",
"Jashn",
""
],
[
"Jain",
"Manish",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
]
| TITLE: Topo Goes Political: TDA-Based Controversy Detection in Imbalanced
Reddit Political Data
ABSTRACT: The detection of controversial content in political discussions on the
Internet is a critical challenge in maintaining healthy digital discourse.
Unlike much of the existing literature that relies on synthetically balanced
data, our work preserves the natural distribution of controversial and
non-controversial posts. This real-world imbalance highlights a core challenge
that needs to be addressed for practical deployment. Our study re-evaluates
well-established methods for detecting controversial content. We curate our own
dataset focusing on the Indian political context that preserves the natural
distribution of controversial content, with only 12.9% of the posts in our
dataset being controversial. This disparity reflects the true imbalance in
real-world political discussions and highlights a critical limitation in the
existing evaluation methods. Benchmarking on datasets that model data imbalance
is vital for ensuring real-world applicability. Thus, in this work, (i) we
release our dataset, with an emphasis on class imbalance, that focuses on the
Indian political context, (ii) we evaluate existing methods from this domain on
this dataset and demonstrate their limitations in the imbalanced setting, (iii)
we introduce an intuitive metric to measure a model's robustness to class
imbalance, (iv) we also incorporate ideas from the domain of Topological Data
Analysis, specifically Persistent Homology, to curate features that provide
richer representations of the data. Furthermore, we benchmark models trained
with topological features against established baselines.
| new_dataset | 0.961606 |
2503.03501 | Gavriel Habib | Gavriel Habib, Noa Barzilay, Or Shimshi, Rami Ben-Ari, Nir Darshan | CarGait: Cross-Attention based Re-ranking for Gait recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gait recognition is a computer vision task that identifies individuals based
on their walking patterns. Gait recognition performance is commonly evaluated
by ranking a gallery of candidates and measuring the accuracy at the top
Rank-$K$. Existing models are typically single-staged, i.e. searching for the
probe's nearest neighbors in a gallery using a single global feature
representation. Although these models typically excel at retrieving the correct
identity within the top-$K$ predictions, they struggle when hard negatives
appear in the top short-list, leading to relatively low performance at the
highest ranks (e.g., Rank-1). In this paper, we introduce CarGait, a
Cross-Attention Re-ranking method for gait recognition that involves
re-ordering the top-$K$ list leveraging the fine-grained correlations between
pairs of gait sequences through cross-attention between gait strips. This
re-ranking scheme can be adapted to existing single-stage models to enhance
their final results. We demonstrate the capabilities of CarGait by extensive
experiments on three common gait datasets, Gait3D, GREW, and OU-MVLP, and seven
different gait models, showing consistent improvements in Rank-1,5 accuracy,
superior results over existing re-ranking methods, and strong baselines.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:47:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Habib",
"Gavriel",
""
],
[
"Barzilay",
"Noa",
""
],
[
"Shimshi",
"Or",
""
],
[
"Ben-Ari",
"Rami",
""
],
[
"Darshan",
"Nir",
""
]
]
| TITLE: CarGait: Cross-Attention based Re-ranking for Gait recognition
ABSTRACT: Gait recognition is a computer vision task that identifies individuals based
on their walking patterns. Gait recognition performance is commonly evaluated
by ranking a gallery of candidates and measuring the accuracy at the top
Rank-$K$. Existing models are typically single-staged, i.e. searching for the
probe's nearest neighbors in a gallery using a single global feature
representation. Although these models typically excel at retrieving the correct
identity within the top-$K$ predictions, they struggle when hard negatives
appear in the top short-list, leading to relatively low performance at the
highest ranks (e.g., Rank-1). In this paper, we introduce CarGait, a
Cross-Attention Re-ranking method for gait recognition that involves
re-ordering the top-$K$ list leveraging the fine-grained correlations between
pairs of gait sequences through cross-attention between gait strips. This
re-ranking scheme can be adapted to existing single-stage models to enhance
their final results. We demonstrate the capabilities of CarGait by extensive
experiments on three common gait datasets, Gait3D, GREW, and OU-MVLP, and seven
different gait models, showing consistent improvements in Rank-1,5 accuracy,
superior results over existing re-ranking methods, and strong baselines.
| no_new_dataset | 0.94545 |
2503.03512 | Ali Erkan | Ali Erkan and Tunga G\"ung\"or | An Aspect Extraction Framework using Different Embedding Types, Learning
Models, and Dependency Structure | Aspect-based Sentiment Analysis, Aspect Extraction, Natural Language
Processing, Machine Learning, Deep Neural Networks, Turkish | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Aspect-based sentiment analysis has gained significant attention in recent
years due to its ability to provide fine-grained insights for sentiment
expressions related to specific features of entities. An important component of
aspect-based sentiment analysis is aspect extraction, which involves
identifying and extracting aspect terms from text. Effective aspect extraction
serves as the foundation for accurate sentiment analysis at the aspect level.
In this paper, we propose aspect extraction models that use different types of
embeddings for words and part-of-speech tags and that combine several learning
models. We also propose tree positional encoding that is based on dependency
parsing output to better capture the aspect positions in sentences. In
addition, a new aspect extraction dataset is built for Turkish by machine
translating an English dataset in a controlled setting. The experiments
conducted on two Turkish datasets showed that the proposed models mostly
outperform the studies that use the same datasets, and incorporating tree
positional encoding increases the performance of the models.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 13:57:48 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Erkan",
"Ali",
""
],
[
"Güngör",
"Tunga",
""
]
]
| TITLE: An Aspect Extraction Framework using Different Embedding Types, Learning
Models, and Dependency Structure
ABSTRACT: Aspect-based sentiment analysis has gained significant attention in recent
years due to its ability to provide fine-grained insights for sentiment
expressions related to specific features of entities. An important component of
aspect-based sentiment analysis is aspect extraction, which involves
identifying and extracting aspect terms from text. Effective aspect extraction
serves as the foundation for accurate sentiment analysis at the aspect level.
In this paper, we propose aspect extraction models that use different types of
embeddings for words and part-of-speech tags and that combine several learning
models. We also propose tree positional encoding that is based on dependency
parsing output to better capture the aspect positions in sentences. In
addition, a new aspect extraction dataset is built for Turkish by machine
translating an English dataset in a controlled setting. The experiments
conducted on two Turkish datasets showed that the proposed models mostly
outperform the studies that use the same datasets, and incorporating tree
positional encoding increases the performance of the models.
| new_dataset | 0.960063 |
2503.03523 | Jun Yan | Maryam Al Shami, Jun Yan, Emmanuel Thepie Fapi | O-RAN xApps Conflict Management using Graph Convolutional Networks | 9 pages, 10 figures | null | null | null | cs.NI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Open Radio Access Network (O-RAN) adopts a flexible, open, and virtualized
structure with standardized interfaces, reducing dependency on a single
supplier. Conflict management in O-RAN refers to the process of identifying and
resolving conflicts between network applications. xApps are applications
deployed at the RAN Intelligent Controller (RIC) that leverage advanced AI/ML
algorithms to make dynamic decisions for network optimization. The lack of a
unified mechanism to coordinate and prioritize the actions of different
applications can create three types of conflicts (direct, indirect, and
implicit). In our paper, we introduce a novel data-driven GCN-based method
called Graph-based xApps Conflict and Root Cause Analysis Engine (GRACE) based
on Graph Convolutional Network (GCN). It detects three types of conflicts
(direct, indirect, and implicit) and pinpoints the root causes (xApps). GRACE
captures the complex and hidden dependencies among the xApps, the controlled
parameters, and the KPIs in O-RAN to detect possible conflicts. Then, it
identifies the root causes (xApps) contributing to the detected conflicts. The
proposed method was tested on highly imbalanced datasets where the number of
conflict instances ranges from 40% to 10%. The model is tested in a setting
that simulates real-world scenarios where conflicts are rare to assess its
performance and generalizability. Experimental results demonstrate an
exceptional performance, achieving a high F1-score greater than 98% for all the
case studies.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:07:29 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Shami",
"Maryam Al",
""
],
[
"Yan",
"Jun",
""
],
[
"Fapi",
"Emmanuel Thepie",
""
]
]
| TITLE: O-RAN xApps Conflict Management using Graph Convolutional Networks
ABSTRACT: Open Radio Access Network (O-RAN) adopts a flexible, open, and virtualized
structure with standardized interfaces, reducing dependency on a single
supplier. Conflict management in O-RAN refers to the process of identifying and
resolving conflicts between network applications. xApps are applications
deployed at the RAN Intelligent Controller (RIC) that leverage advanced AI/ML
algorithms to make dynamic decisions for network optimization. The lack of a
unified mechanism to coordinate and prioritize the actions of different
applications can create three types of conflicts (direct, indirect, and
implicit). In our paper, we introduce a novel data-driven GCN-based method
called Graph-based xApps Conflict and Root Cause Analysis Engine (GRACE) based
on Graph Convolutional Network (GCN). It detects three types of conflicts
(direct, indirect, and implicit) and pinpoints the root causes (xApps). GRACE
captures the complex and hidden dependencies among the xApps, the controlled
parameters, and the KPIs in O-RAN to detect possible conflicts. Then, it
identifies the root causes (xApps) contributing to the detected conflicts. The
proposed method was tested on highly imbalanced datasets where the number of
conflict instances ranges from 40% to 10%. The model is tested in a setting
that simulates real-world scenarios where conflicts are rare to assess its
performance and generalizability. Experimental results demonstrate an
exceptional performance, achieving a high F1-score greater than 98% for all the
case studies.
| no_new_dataset | 0.948202 |
2503.03529 | David Johnson | David S. Johnson | Higher Stakes, Healthier Trust? An Application-Grounded Approach to
Assessing Healthy Trust in High-Stakes Human-AI Collaboration | 11 pages, 5 figures; submitted to IJCAI 2025 | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Human-AI collaboration is increasingly promoted to improve high-stakes
decision-making, yet its benefits have not been fully realized.
Application-grounded evaluations are needed to better evaluate methods for
improving collaboration but often require domain experts, making studies costly
and limiting their generalizability. Current evaluation methods are constrained
by limited public datasets and reliance on proxy tasks. To address these
challenges, we propose an application-grounded framework for large-scale,
online evaluations of vision-based decision-making tasks. The framework
introduces Blockies, a parametric approach for generating datasets of simulated
diagnostic tasks, offering control over the traits and biases in the data used
to train real-world models. These tasks are designed to be easy to learn but
difficult to master, enabling participation by non-experts. The framework also
incorporates storytelling and monetary incentives to manipulate perceived task
stakes. An initial empirical study demonstrated that the high-stakes condition
significantly reduced healthy distrust of AI, despite longer decision-making
times. These findings underscore the importance of perceived stakes in
fostering healthy distrust and demonstrate the framework's potential for
scalable evaluation of high-stakes Human-AI collaboration.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:11:19 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Johnson",
"David S.",
""
]
]
| TITLE: Higher Stakes, Healthier Trust? An Application-Grounded Approach to
Assessing Healthy Trust in High-Stakes Human-AI Collaboration
ABSTRACT: Human-AI collaboration is increasingly promoted to improve high-stakes
decision-making, yet its benefits have not been fully realized.
Application-grounded evaluations are needed to better evaluate methods for
improving collaboration but often require domain experts, making studies costly
and limiting their generalizability. Current evaluation methods are constrained
by limited public datasets and reliance on proxy tasks. To address these
challenges, we propose an application-grounded framework for large-scale,
online evaluations of vision-based decision-making tasks. The framework
introduces Blockies, a parametric approach for generating datasets of simulated
diagnostic tasks, offering control over the traits and biases in the data used
to train real-world models. These tasks are designed to be easy to learn but
difficult to master, enabling participation by non-experts. The framework also
incorporates storytelling and monetary incentives to manipulate perceived task
stakes. An initial empirical study demonstrated that the high-stakes condition
significantly reduced healthy distrust of AI, despite longer decision-making
times. These findings underscore the importance of perceived stakes in
fostering healthy distrust and demonstrate the framework's potential for
scalable evaluation of high-stakes Human-AI collaboration.
| no_new_dataset | 0.949856 |
2503.03535 | Po-Chien Luan | Po-Chien Luan, Yang Gao, Celine Demonsant, Alexandre Alahi | Unified Human Localization and Trajectory Prediction with Monocular
Vision | ICRA 2025 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Conventional human trajectory prediction models rely on clean curated data,
requiring specialized equipment or manual labeling, which is often impractical
for robotic applications. The existing predictors tend to overfit to clean
observations, affecting their robustness when used with noisy inputs. In this
work, we propose MonoTransmotion (MT), a Transformer-based framework that uses
only a monocular camera to jointly solve localization and prediction tasks. Our
framework has two main modules: Bird's Eye View (BEV) localization and
trajectory prediction. The BEV localization module estimates the position of a
person using 2D human poses, enhanced by a novel directional loss for smoother
sequential localizations. The trajectory prediction module predicts future
motion from these estimates. We show that by jointly training both tasks with
our unified framework, our method is more robust in real-world scenarios made
of noisy inputs. We validate our MT network on both curated and non-curated
datasets. On the curated dataset, MT achieves around 12% improvement over
baseline models on BEV localization and trajectory prediction. On the real-world
non-curated dataset, experimental results indicate that MT maintains similar
performance levels, highlighting its robustness and generalization capability.
The code is available at https://github.com/vita-epfl/MonoTransmotion.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:18:39 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Luan",
"Po-Chien",
""
],
[
"Gao",
"Yang",
""
],
[
"Demonsant",
"Celine",
""
],
[
"Alahi",
"Alexandre",
""
]
]
| TITLE: Unified Human Localization and Trajectory Prediction with Monocular
Vision
ABSTRACT: Conventional human trajectory prediction models rely on clean curated data,
requiring specialized equipment or manual labeling, which is often impractical
for robotic applications. The existing predictors tend to overfit to clean
observations, affecting their robustness when used with noisy inputs. In this
work, we propose MonoTransmotion (MT), a Transformer-based framework that uses
only a monocular camera to jointly solve localization and prediction tasks. Our
framework has two main modules: Bird's Eye View (BEV) localization and
trajectory prediction. The BEV localization module estimates the position of a
person using 2D human poses, enhanced by a novel directional loss for smoother
sequential localizations. The trajectory prediction module predicts future
motion from these estimates. We show that by jointly training both tasks with
our unified framework, our method is more robust in real-world scenarios made
of noisy inputs. We validate our MT network on both curated and non-curated
datasets. On the curated dataset, MT achieves around 12% improvement over
baseline models on BEV localization and trajectory prediction. On the real-world
non-curated dataset, experimental results indicate that MT maintains similar
performance levels, highlighting its robustness and generalization capability.
The code is available at https://github.com/vita-epfl/MonoTransmotion.
| no_new_dataset | 0.950088 |
2503.03543 | Dragos Costea | Dragos Costea, Alina Marcu, Marius Leordeanu | A self-supervised cyclic neural-analytic approach for novel view
synthesis and 3D reconstruction | Published in BMVC 2024, 10 pages, 4 figures | British Machine Vision Conference (BMVC), 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating novel views from recorded videos is crucial for enabling
autonomous UAV navigation. Recent advancements in neural rendering have
facilitated the rapid development of methods capable of rendering new
trajectories. However, these methods often fail to generalize well to regions
far from the training data without an optimized flight path, leading to
suboptimal reconstructions. We propose a self-supervised cyclic neural-analytic
pipeline that combines high-quality neural rendering outputs with precise
geometric insights from analytical methods. Our solution improves RGB and mesh
reconstructions for novel view synthesis, especially in undersampled areas and
regions that are completely different from the training dataset. We use an
effective transformer-based architecture for image reconstruction to refine and
adapt the synthesis process, enabling effective handling of novel, unseen poses
without relying on extensive labeled datasets. Our findings demonstrate
substantial improvements in rendering novel views and also in 3D
reconstruction, which to the best of our knowledge is a first, setting a new
standard for autonomous navigation in complex outdoor environments.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:28:01 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Costea",
"Dragos",
""
],
[
"Marcu",
"Alina",
""
],
[
"Leordeanu",
"Marius",
""
]
]
| TITLE: A self-supervised cyclic neural-analytic approach for novel view
synthesis and 3D reconstruction
ABSTRACT: Generating novel views from recorded videos is crucial for enabling
autonomous UAV navigation. Recent advancements in neural rendering have
facilitated the rapid development of methods capable of rendering new
trajectories. However, these methods often fail to generalize well to regions
far from the training data without an optimized flight path, leading to
suboptimal reconstructions. We propose a self-supervised cyclic neural-analytic
pipeline that combines high-quality neural rendering outputs with precise
geometric insights from analytical methods. Our solution improves RGB and mesh
reconstructions for novel view synthesis, especially in undersampled areas and
regions that are completely different from the training dataset. We use an
effective transformer-based architecture for image reconstruction to refine and
adapt the synthesis process, enabling effective handling of novel, unseen poses
without relying on extensive labeled datasets. Our findings demonstrate
substantial improvements in rendering novel views and also in 3D
reconstruction, which to the best of our knowledge is a first, setting a new
standard for autonomous navigation in complex outdoor environments.
| no_new_dataset | 0.948728 |
2503.03548 | Milin Patel | Milin Patel, Rolf Jung | Simulation-Based Performance Evaluation of 3D Object Detection Methods
with Deep Learning for a LiDAR Point Cloud Dataset in a SOTIF-related Use
Case | null | Proceedings of the 10th International Conference on Vehicle
Technology and Intelligent Transport Systems VEHITS - Volume 1, 415-426, 2024
, Angers, France | 10.5220/0012707300003702 | null | cs.CV cs.LG cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Safety of the Intended Functionality (SOTIF) addresses sensor performance
limitations and deep learning-based object detection insufficiencies to ensure
the intended functionality of Automated Driving Systems (ADS). This paper
presents a methodology examining the adaptability and performance evaluation of
the 3D object detection methods on a LiDAR point cloud dataset generated by
simulating a SOTIF-related Use Case. The major contributions of this paper
include defining and modelling a SOTIF-related Use Case with 21 diverse weather
conditions and generating a LiDAR point cloud dataset suitable for application
of 3D object detection methods. The dataset consists of 547 frames,
encompassing clear, cloudy, rainy weather conditions, corresponding to
different times of the day, including noon, sunset, and night. Employing
MMDetection3D and OpenPCDET toolkits, the performance of State-of-the-Art
(SOTA) 3D object detection methods is evaluated and compared by testing the
pre-trained Deep Learning (DL) models on the generated dataset using Average
Precision (AP) and Recall metrics.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:32:32 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Patel",
"Milin",
""
],
[
"Jung",
"Rolf",
""
]
]
| TITLE: Simulation-Based Performance Evaluation of 3D Object Detection Methods
with Deep Learning for a LiDAR Point Cloud Dataset in a SOTIF-related Use
Case
ABSTRACT: Safety of the Intended Functionality (SOTIF) addresses sensor performance
limitations and deep learning-based object detection insufficiencies to ensure
the intended functionality of Automated Driving Systems (ADS). This paper
presents a methodology examining the adaptability and performance evaluation of
the 3D object detection methods on a LiDAR point cloud dataset generated by
simulating a SOTIF-related Use Case. The major contributions of this paper
include defining and modelling a SOTIF-related Use Case with 21 diverse weather
conditions and generating a LiDAR point cloud dataset suitable for application
of 3D object detection methods. The dataset consists of 547 frames,
encompassing clear, cloudy, rainy weather conditions, corresponding to
different times of the day, including noon, sunset, and night. Employing
MMDetection3D and OpenPCDET toolkits, the performance of State-of-the-Art
(SOTA) 3D object detection methods is evaluated and compared by testing the
pre-trained Deep Learning (DL) models on the generated dataset using Average
Precision (AP) and Recall metrics.
| new_dataset | 0.966569 |
2503.03556 | Xiaomeng Zhu | Xiaomeng Zhu, Yuyang Li, Leiyao Cui, Pengfei Li, Huan-ang Gao, Yixin
Zhu, Hao Zhao | Afford-X: Generalizable and Slim Affordance Reasoning for Task-oriented
Manipulation | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Object affordance reasoning, the ability to infer object functionalities
based on physical properties, is fundamental for task-oriented planning and
activities in both humans and Artificial Intelligence (AI). This capability,
required for planning and executing daily activities in a task-oriented manner,
relies on commonsense knowledge of object physics and functionalities,
extending beyond simple object recognition. Current computational models for
affordance reasoning from perception lack generalizability, limiting their
applicability in novel scenarios. Meanwhile, comprehensive Large Language
Models (LLMs) with emerging reasoning capabilities are challenging to deploy on
local devices for task-oriented manipulations. Here, we introduce LVIS-Aff, a
large-scale dataset comprising 1,496 tasks and 119k images, designed to enhance
the generalizability of affordance reasoning from perception. Utilizing this
dataset, we develop Afford-X, an end-to-end trainable affordance reasoning
model that incorporates Verb Attention and Bi-Fusion modules to improve
multi-modal understanding. This model achieves up to a 12.1% performance
improvement over the best-reported results from non-LLM methods, while also
demonstrating a 1.2% enhancement compared to our previous conference paper.
Additionally, it maintains a compact 187M parameter size and infers nearly 50
times faster than the GPT-4V API. Our work demonstrates the potential for
efficient, generalizable affordance reasoning models that can be deployed on
local devices for task-oriented manipulations. We showcase Afford-X's
effectiveness in enabling task-oriented manipulations for robots across various
tasks and environments, underscoring its efficiency and broad implications for
advancing robotics and AI systems in real-world applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 14:44:53 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhu",
"Xiaomeng",
""
],
[
"Li",
"Yuyang",
""
],
[
"Cui",
"Leiyao",
""
],
[
"Li",
"Pengfei",
""
],
[
"Gao",
"Huan-ang",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Zhao",
"Hao",
""
]
]
| TITLE: Afford-X: Generalizable and Slim Affordance Reasoning for Task-oriented
Manipulation
ABSTRACT: Object affordance reasoning, the ability to infer object functionalities
based on physical properties, is fundamental for task-oriented planning and
activities in both humans and Artificial Intelligence (AI). This capability,
required for planning and executing daily activities in a task-oriented manner,
relies on commonsense knowledge of object physics and functionalities,
extending beyond simple object recognition. Current computational models for
affordance reasoning from perception lack generalizability, limiting their
applicability in novel scenarios. Meanwhile, comprehensive Large Language
Models (LLMs) with emerging reasoning capabilities are challenging to deploy on
local devices for task-oriented manipulations. Here, we introduce LVIS-Aff, a
large-scale dataset comprising 1,496 tasks and 119k images, designed to enhance
the generalizability of affordance reasoning from perception. Utilizing this
dataset, we develop Afford-X, an end-to-end trainable affordance reasoning
model that incorporates Verb Attention and Bi-Fusion modules to improve
multi-modal understanding. This model achieves up to a 12.1% performance
improvement over the best-reported results from non-LLM methods, while also
demonstrating a 1.2% enhancement compared to our previous conference paper.
Additionally, it maintains a compact 187M parameter size and infers nearly 50
times faster than the GPT-4V API. Our work demonstrates the potential for
efficient, generalizable affordance reasoning models that can be deployed on
local devices for task-oriented manipulations. We showcase Afford-X's
effectiveness in enabling task-oriented manipulations for robots across various
tasks and environments, underscoring its efficiency and broad implications for
advancing robotics and AI systems in real-world applications.
| new_dataset | 0.962072 |
2503.03573 | Jonas Dube | Jonas Dube, Julius K\"uhn, Chen Wang, Sonal Mistry, Guido Klemz, Alice
Galdi, Thorsten Kamps | Triple Evaporation of Bialkali Antimonide Photocathodes and
Photoemission Characterization at the PhoTEx Experiment | The following article has been submitted to Journal of Applied
Physics | null | null | null | physics.acc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The development of high-performance photocathodes is essential for generating
high-brightness electron beams required by existing and future accelerators.
This work introduces a state-of-the-art triple evaporation growth system
designed for bialkali antimonide photocathodes. By enabling the simultaneous
deposition of all three materials, this system significantly enhances vacuum
stability and the reproducibility of photocathode fabrication. Complementing
this, the novel characterization system PhoTEx allows spatially and spectrally
resolved measurements of key photocathode parameters, such as quantum
efficiency (QE), mean transverse energy (MTE), reflectance and lifetime.
Crucially, all measurements are performed within a single compact setup,
without moving the sample, preserving ultra-high vacuum conditions. The
spectrally resolved measurement of the reflectance allows the investigation of
the color. Photocathode colorimetry may provide valuable insights into material
homogeneity and aging. A Na-K-Sb photocathode was grown using the triple
evaporation method, achieving an initial QE of $5.5\,\%$ at $520\,$nm. The
photocathode was characterized at PhoTEx over two months, demonstrating
consistent MTE measurements and a dataset with spectral response, reflectance
and colorimetry data. Together, the triple evaporation growth system and PhoTEx
mark a significant advancement in optimizing photocathodes with exceptional
performance, paving the way for brighter and more stable electron sources for
next-generation accelerator facilities.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:01:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Dube",
"Jonas",
""
],
[
"Kühn",
"Julius",
""
],
[
"Wang",
"Chen",
""
],
[
"Mistry",
"Sonal",
""
],
[
"Klemz",
"Guido",
""
],
[
"Galdi",
"Alice",
""
],
[
"Kamps",
"Thorsten",
""
]
]
| TITLE: Triple Evaporation of Bialkali Antimonide Photocathodes and
Photoemission Characterization at the PhoTEx Experiment
ABSTRACT: The development of high-performance photocathodes is essential for generating
high-brightness electron beams required by existing and future accelerators.
This work introduces a state-of-the-art triple evaporation growth system
designed for bialkali antimonide photocathodes. By enabling the simultaneous
deposition of all three materials, this system significantly enhances vacuum
stability and the reproducibility of photocathode fabrication. Complementing
this, the novel characterization system PhoTEx allows spatially and spectrally
resolved measurements of key photocathode parameters, such as quantum
efficiency (QE), mean transverse energy (MTE), reflectance and lifetime.
Crucially, all measurements are performed within a single compact setup,
without moving the sample, preserving ultra-high vacuum conditions. The
spectrally resolved measurement of the reflectance allows the investigation of
the color. Photocathode colorimetry may provide valuable insights into material
homogeneity and aging. A Na-K-Sb photocathode was grown using the triple
evaporation method, achieving an initial QE of $5.5\,\%$ at $520\,$nm. The
photocathode was characterized at PhoTEx over two months, demonstrating
consistent MTE measurements and a dataset with spectral response, reflectance
and colorimetry data. Together, the triple evaporation growth system and PhoTEx
mark a significant advancement in optimizing photocathodes with exceptional
performance, paving the way for brighter and more stable electron sources for
next-generation accelerator facilities.
| no_new_dataset | 0.947769 |
2503.03607 | Keqi Chen | Keqi Chen, Zekai Sun, Yuhua Wen, Huijun Lian, Yingming Gao, Ya Li | Psy-Insight: Explainable Multi-turn Bilingual Dataset for Mental Health
Counseling | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The in-context learning capabilities of large language models (LLMs) show
great potential in mental health support. However, the lack of counseling
datasets, particularly in Chinese corpora, restricts their application in this
field. To address this, we constructed Psy-Insight, the first mental
health-oriented explainable multi-task bilingual dataset. We collected
face-to-face multi-turn counseling dialogues, which are annotated with
multi-task labels and conversation process explanations. Our annotations
include psychotherapy, emotion, strategy, and topic labels, as well as
turn-level reasoning and session-level guidance. Psy-Insight is not only
suitable for tasks such as label recognition but also meets the need for
training LLMs to act as empathetic counselors through logical reasoning.
Experiments show that training LLMs on Psy-Insight enables the models to not
only mimic the conversation style but also understand the underlying strategies
and reasoning of counseling.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:44:21 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Keqi",
""
],
[
"Sun",
"Zekai",
""
],
[
"Wen",
"Yuhua",
""
],
[
"Lian",
"Huijun",
""
],
[
"Gao",
"Yingming",
""
],
[
"Li",
"Ya",
""
]
]
| TITLE: Psy-Insight: Explainable Multi-turn Bilingual Dataset for Mental Health
Counseling
ABSTRACT: The in-context learning capabilities of large language models (LLMs) show
great potential in mental health support. However, the lack of counseling
datasets, particularly in Chinese corpora, restricts their application in this
field. To address this, we constructed Psy-Insight, the first mental
health-oriented explainable multi-task bilingual dataset. We collected
face-to-face multi-turn counseling dialogues, which are annotated with
multi-task labels and conversation process explanations. Our annotations
include psychotherapy, emotion, strategy, and topic labels, as well as
turn-level reasoning and session-level guidance. Psy-Insight is not only
suitable for tasks such as label recognition but also meets the need for
training LLMs to act as empathetic counselors through logical reasoning.
Experiments show that training LLMs on Psy-Insight enables the models to not
only mimic the conversation style but also understand the underlying strategies
and reasoning of counseling.
| new_dataset | 0.959611 |
2503.03609 | Lingli Cao | Lingli Cao and He Zhang and Shanshan Li and Danyang Li and Yanjing
Yang and Chenxing Zhong and Xin Zhou and Yue Xie | Enhancing the Accuracy and Comprehensibility in Architectural Tactics
Detection via Small Model-Augmented Prompt Engineering | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Architectural tactics (ATs), as the concrete implementation of architectural
decisions in code, address non-functional requirements of software systems. Due
to the implicit nature of architectural knowledge in code implementation,
developers may risk inadvertently altering or removing these tactics during
code modifications or optimizations. Such unintended changes can trigger
architectural erosion, gradually undermining the system's original design.
While many researchers have proposed machine learning-based methods to improve
the accuracy of detecting ATs in code, the black-box nature and the required
architectural domain knowledge pose significant challenges for developers in
verifying the results. Effective verification requires not only accurate
detection results but also interpretable explanations that enhance their
comprehensibility. However, this is a critical gap in current research. Large
language models (LLMs) can generate easily interpretable ATs detection comments
if they have domain knowledge. Fine-tuning LLMs to acquire domain knowledge
faces challenges such as catastrophic forgetting and hardware constraints.
Thus, we propose Prmt4TD, a small model-augmented prompting framework to
enhance the accuracy and comprehensibility of ATs detection. Combining
fine-tuned small models with In-Context Learning can also reduce fine-tuning
costs while equipping the LLM with additional domain knowledge. Prmt4TD can
leverage the remarkable processing and reasoning capabilities of LLMs to
generate easily interpretable ATs detection results. Our evaluation results
demonstrate that Prmt4TD achieves accuracy (\emph{F1-score}) improvement of
13\%-23\% on the ATs balanced dataset and enhances the comprehensibility of the
detection results.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:47:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cao",
"Lingli",
""
],
[
"Zhang",
"He",
""
],
[
"Li",
"Shanshan",
""
],
[
"Li",
"Danyang",
""
],
[
"Yang",
"Yanjing",
""
],
[
"Zhong",
"Chenxing",
""
],
[
"Zhou",
"Xin",
""
],
[
"Xie",
"Yue",
""
]
]
| TITLE: Enhancing the Accuracy and Comprehensibility in Architectural Tactics
Detection via Small Model-Augmented Prompt Engineering
ABSTRACT: Architectural tactics (ATs), as the concrete implementation of architectural
decisions in code, address non-functional requirements of software systems. Due
to the implicit nature of architectural knowledge in code implementation,
developers may risk inadvertently altering or removing these tactics during
code modifications or optimizations. Such unintended changes can trigger
architectural erosion, gradually undermining the system's original design.
While many researchers have proposed machine learning-based methods to improve
the accuracy of detecting ATs in code, the black-box nature and the required
architectural domain knowledge pose significant challenges for developers in
verifying the results. Effective verification requires not only accurate
detection results but also interpretable explanations that enhance their
comprehensibility. However, this is a critical gap in current research. Large
language models (LLMs) can generate easily interpretable ATs detection comments
if they have domain knowledge. Fine-tuning LLMs to acquire domain knowledge
faces challenges such as catastrophic forgetting and hardware constraints.
Thus, we propose Prmt4TD, a small model-augmented prompting framework to
enhance the accuracy and comprehensibility of ATs detection. Combining
fine-tuned small models with In-Context Learning can also reduce fine-tuning
costs while equipping the LLM with additional domain knowledge. Prmt4TD can
leverage the remarkable processing and reasoning capabilities of LLMs to
generate easily interpretable ATs detection results. Our evaluation results
demonstrate that Prmt4TD achieves accuracy (\emph{F1-score}) improvement of
13\%-23\% on the ATs balanced dataset and enhances the comprehensibility of the
detection results.
| no_new_dataset | 0.953057 |
2503.03613 | Songlong Xing | Songlong Xing, Zhengyu Zhao, Nicu Sebe | CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards
Zero-shot Adversarial Robustness of CLIP | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Despite its prevalent use in image-text matching tasks in a zero-shot manner,
CLIP has been shown to be highly vulnerable to adversarial perturbations added
onto images. Recent studies propose to finetune the vision encoder of CLIP with
adversarial samples generated on the fly, and show improved robustness against
adversarial attacks on a spectrum of downstream datasets, a property termed as
zero-shot robustness. In this paper, we show that malicious perturbations that
seek to maximise the classification loss lead to `falsely stable' images, and
propose to leverage the pre-trained vision encoder of CLIP to counterattack
such adversarial images during inference to achieve robustness. Our paradigm is
simple and training-free, providing the first method to defend CLIP from
adversarial attacks at test time, which is orthogonal to existing methods
aiming to boost zero-shot adversarial robustness of CLIP. We conduct
experiments across 16 classification datasets, and demonstrate stable and
consistent gains compared to test-time defence methods adapted from existing
adversarial robustness studies that do not rely on external networks, without
noticeably impairing performance on clean images. We also show that our
paradigm can be employed on CLIP models that have been adversarially finetuned
to further enhance their robustness at test time. Our code is available
\href{https://github.com/Sxing2/CLIP-Test-time-Counterattacks}{here}.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 15:51:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Xing",
"Songlong",
""
],
[
"Zhao",
"Zhengyu",
""
],
[
"Sebe",
"Nicu",
""
]
]
| TITLE: CLIP is Strong Enough to Fight Back: Test-time Counterattacks towards
Zero-shot Adversarial Robustness of CLIP
ABSTRACT: Despite its prevalent use in image-text matching tasks in a zero-shot manner,
CLIP has been shown to be highly vulnerable to adversarial perturbations added
onto images. Recent studies propose to finetune the vision encoder of CLIP with
adversarial samples generated on the fly, and show improved robustness against
adversarial attacks on a spectrum of downstream datasets, a property termed as
zero-shot robustness. In this paper, we show that malicious perturbations that
seek to maximise the classification loss lead to `falsely stable' images, and
propose to leverage the pre-trained vision encoder of CLIP to counterattack
such adversarial images during inference to achieve robustness. Our paradigm is
simple and training-free, providing the first method to defend CLIP from
adversarial attacks at test time, which is orthogonal to existing methods
aiming to boost zero-shot adversarial robustness of CLIP. We conduct
experiments across 16 classification datasets, and demonstrate stable and
consistent gains compared to test-time defence methods adapted from existing
adversarial robustness studies that do not rely on external networks, without
noticeably impairing performance on clean images. We also show that our
paradigm can be employed on CLIP models that have been adversarially finetuned
to further enhance their robustness at test time. Our code is available
\href{https://github.com/Sxing2/CLIP-Test-time-Counterattacks}{here}.
| no_new_dataset | 0.946001 |
2503.03622 | Arun Ganesh | Arun Ganesh, Ryan McKenna, Brendan McMahan, Adam Smith, Fan Wu | It's My Data Too: Private ML for Datasets with Multi-User Training
Examples | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We initiate a study of algorithms for model training with user-level
differential privacy (DP), where each example may be attributed to multiple
users, which we call the multi-attribution model. We first provide a carefully
chosen definition of user-level DP under the multi-attribution model. Training
in the multi-attribution model is facilitated by solving the contribution
bounding problem, i.e. the problem of selecting a subset of the dataset for
which each user is associated with a limited number of examples. We propose a
greedy baseline algorithm for the contribution bounding problem. We then
empirically study this algorithm for a synthetic logistic regression task and a
transformer training task, including studying variants of this baseline
algorithm that optimize the subset chosen using different techniques and
criteria. We find that the baseline algorithm remains competitive with its
variants in most settings, and build a better understanding of the practical
importance of a bias-variance tradeoff inherent in solutions to the
contribution bounding problem.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:02:09 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ganesh",
"Arun",
""
],
[
"McKenna",
"Ryan",
""
],
[
"McMahan",
"Brendan",
""
],
[
"Smith",
"Adam",
""
],
[
"Wu",
"Fan",
""
]
]
| TITLE: It's My Data Too: Private ML for Datasets with Multi-User Training
Examples
ABSTRACT: We initiate a study of algorithms for model training with user-level
differential privacy (DP), where each example may be attributed to multiple
users, which we call the multi-attribution model. We first provide a carefully
chosen definition of user-level DP under the multi-attribution model. Training
in the multi-attribution model is facilitated by solving the contribution
bounding problem, i.e. the problem of selecting a subset of the dataset for
which each user is associated with a limited number of examples. We propose a
greedy baseline algorithm for the contribution bounding problem. We then
empirically study this algorithm for a synthetic logistic regression task and a
transformer training task, including studying variants of this baseline
algorithm that optimize the subset chosen using different techniques and
criteria. We find that the baseline algorithm remains competitive with its
variants in most settings, and build a better understanding of the practical
importance of a bias-variance tradeoff inherent in solutions to the
contribution bounding problem.
| no_new_dataset | 0.946051 |
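The contribution bounding problem described in the record above lends itself to a short illustration. The sketch below is a hedged reconstruction, not the paper's algorithm: the data representation (a list of (example_id, attributed_users) pairs), the `max_per_user` cap name, and the first-come-first-kept greedy rule are all assumptions made for illustration.

```python
from collections import defaultdict

def greedy_contribution_bounding(examples, max_per_user):
    """Greedy baseline sketch: scan examples in order and keep one only if
    every user attributed to it is still below the contribution cap."""
    counts = defaultdict(int)   # user id -> number of kept examples so far
    kept = []
    for example_id, users in examples:
        if all(counts[u] < max_per_user for u in users):
            kept.append(example_id)
            for u in users:
                counts[u] += 1
    return kept

# Usage: examples = [(0, {"alice", "bob"}), (1, {"alice"}), (2, {"carol"})]
# greedy_contribution_bounding(examples, max_per_user=1) -> [0, 2]
```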
2503.03625 | Anastasia Georgiou | Anastasia Georgiou, Daniel Jungen, Luise Kaven, Verena Hunstig,
Constantine Frangakis, Ioannis Kevrekidis and Alexander Mitsos | Deterministic Global Optimization of the Acquisition Function in
Bayesian Optimization: To Do or Not To Do? | 32 pages, 7 figures, 7 tables | null | null | null | math.OC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Bayesian Optimization (BO) with Gaussian Processes relies on optimizing an
acquisition function to determine sampling. We investigate the advantages and
disadvantages of using a deterministic global solver (MAiNGO) compared to
conventional local and stochastic global solvers (L-BFGS-B and multi-start,
respectively) for the optimization of the acquisition function. For CPU
efficiency, we set a time limit for MAiNGO, taking the best point as optimal.
We perform repeated numerical experiments, initially using the Muller-Brown
potential as a benchmark function, utilizing the lower confidence bound
acquisition function; we further validate our findings with three alternative
benchmark functions. Statistical analysis reveals that when the acquisition
function is more exploitative (as opposed to exploratory), BO with MAiNGO
converges in fewer iterations than with the local solvers. However, when the
dataset lacks diversity, or when the acquisition function is overly
exploitative, BO with MAiNGO, compared to the local solvers, is more likely to
converge to a local rather than a globally near-optimal solution of the
black-box function. L-BFGS-B and multi-start mitigate this risk in BO by
introducing stochasticity in the selection of the next sampling point, which
enhances the exploration of uncharted regions in the search space and reduces
dependence on acquisition function hyperparameters. Ultimately, suboptimal
optimization of poorly chosen acquisition functions may be preferable to their
optimal solution. When the acquisition function is more exploratory, BO with
MAiNGO, multi-start, and L-BFGS-B achieve comparable probabilities of
convergence to a globally near-optimal solution (although BO with MAiNGO may
require more iterations to converge under these conditions).
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:05:26 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Georgiou",
"Anastasia",
""
],
[
"Jungen",
"Daniel",
""
],
[
"Kaven",
"Luise",
""
],
[
"Hunstig",
"Verena",
""
],
[
"Frangakis",
"Constantine",
""
],
[
"Kevrekidis",
"Ioannis",
""
],
[
"Mitsos",
"Alexander",
""
]
]
| TITLE: Deterministic Global Optimization of the Acquisition Function in
Bayesian Optimization: To Do or Not To Do?
ABSTRACT: Bayesian Optimization (BO) with Gaussian Processes relies on optimizing an
acquisition function to determine sampling. We investigate the advantages and
disadvantages of using a deterministic global solver (MAiNGO) compared to
conventional local and stochastic global solvers (L-BFGS-B and multi-start,
respectively) for the optimization of the acquisition function. For CPU
efficiency, we set a time limit for MAiNGO, taking the best point as optimal.
We perform repeated numerical experiments, initially using the Muller-Brown
potential as a benchmark function, utilizing the lower confidence bound
acquisition function; we further validate our findings with three alternative
benchmark functions. Statistical analysis reveals that when the acquisition
function is more exploitative (as opposed to exploratory), BO with MAiNGO
converges in fewer iterations than with the local solvers. However, when the
dataset lacks diversity, or when the acquisition function is overly
exploitative, BO with MAiNGO, compared to the local solvers, is more likely to
converge to a local rather than a globally near-optimal solution of the
black-box function. L-BFGS-B and multi-start mitigate this risk in BO by
introducing stochasticity in the selection of the next sampling point, which
enhances the exploration of uncharted regions in the search space and reduces
dependence on acquisition function hyperparameters. Ultimately, suboptimal
optimization of poorly chosen acquisition functions may be preferable to their
optimal solution. When the acquisition function is more exploratory, BO with
MAiNGO, multi-start, and L-BFGS-B achieve comparable probabilities of
convergence to a globally near-optimal solution (although BO with MAiNGO may
require more iterations to converge under these conditions).
| no_new_dataset | 0.939582 |
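As an aside on the multi-start L-BFGS-B baseline mentioned in the abstract above, a minimal acquisition-optimization sketch looks roughly as follows. This is illustrative only: the lower-confidence-bound form, the `beta` exploration weight, and a `gp` object exposing a scikit-learn-style `predict(..., return_std=True)` are assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize

def lcb(x, gp, beta=2.0):
    """Lower confidence bound acquisition (minimized for a minimization problem)."""
    mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
    return float(mu[0] - beta * sigma[0])

def propose_next_point(gp, bounds, n_starts=20, seed=0):
    """Multi-start L-BFGS-B over the acquisition: refine random starts, keep the best."""
    rng = np.random.default_rng(seed)
    lows, highs = zip(*bounds)
    starts = rng.uniform(lows, highs, size=(n_starts, len(bounds)))
    best_x, best_val = None, np.inf
    for x0 in starts:
        res = minimize(lcb, x0, args=(gp,), method="L-BFGS-B", bounds=bounds)
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x
```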
2503.03637 | WooJin Jung | Woo-Jin Jung, Dong-Hee Paek, and Seung-Hyun Kong | 4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis | 24 pages | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object
detection, as it enhances object density by leveraging ground truth bounding
boxes (GT bboxes). However, directly applying GT-Aug to 4D Radar tensor data
overlooks important measurements outside the GT bboxes-such as
sidelobes-leading to synthetic distributions that deviate from real-world 4D
Radar data. To address this limitation, we propose 4D Radar Ground Truth
Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then
converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS)
module, which explicitly accounts for measurements both inside and outside GT
bboxes. In doing so, it produces 4D Radar data distributions that more closely
resemble real-world measurements, thereby improving object detection accuracy.
Experiments on the K-Radar dataset show that the proposed method achieves
improved performance compared to conventional GT-Aug in object detection for 4D
Radar. The implementation code is available at
https://github.com/kaist-avelab/K-Radar.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:16:46 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Jung",
"Woo-Jin",
""
],
[
"Paek",
"Dong-Hee",
""
],
[
"Kong",
"Seung-Hyun",
""
]
]
| TITLE: 4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis
ABSTRACT: Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object
detection, as it enhances object density by leveraging ground truth bounding
boxes (GT bboxes). However, directly applying GT-Aug to 4D Radar tensor data
overlooks important measurements outside the GT bboxes-such as
sidelobes-leading to synthetic distributions that deviate from real-world 4D
Radar data. To address this limitation, we propose 4D Radar Ground Truth
Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then
converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS)
module, which explicitly accounts for measurements both inside and outside GT
bboxes. In doing so, it produces 4D Radar data distributions that more closely
resemble real-world measurements, thereby improving object detection accuracy.
Experiments on the K-Radar dataset show that the proposed method achieves
improved performance compared to conventional GT-Aug in object detection for 4D
Radar. The implementation code is available at
https://github.com/kaist-avelab/K-Radar.
| no_new_dataset | 0.952794 |
2503.03640 | Yuezhe Tian | Yuezhe Tian, Kangchen Yao, Xiaoyang Yu | An Adaptive Underwater Image Enhancement Framework via Multi-Domain
Fusion and Color Compensation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Underwater optical imaging is severely degraded by light absorption,
scattering, and color distortion, hindering visibility and accurate image
analysis. This paper presents an adaptive enhancement framework integrating
illumination compensation, multi-domain filtering, and dynamic color
correction. A hybrid illumination compensation strategy combining CLAHE, Gamma
correction, and Retinex enhances visibility. A two-stage filtering process,
including spatial-domain (Gaussian, Bilateral, Guided) and frequency-domain
(Fourier, Wavelet) methods, effectively reduces noise while preserving details.
To correct color distortion, an adaptive color compensation (ACC) model
estimates spectral attenuation and water type to combine RCP, DCP, and MUDCP
dynamically. Finally, a perceptually guided color balance mechanism ensures
natural color restoration. Experimental results on benchmark datasets
demonstrate superior performance over state-of-the-art methods in contrast
enhancement, color correction, and structural preservation, making the
framework robust for underwater imaging applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:19:56 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tian",
"Yuezhe",
""
],
[
"Yao",
"Kangchen",
""
],
[
"Yu",
"Xiaoyang",
""
]
]
| TITLE: An Adaptive Underwater Image Enhancement Framework via Multi-Domain
Fusion and Color Compensation
ABSTRACT: Underwater optical imaging is severely degraded by light absorption,
scattering, and color distortion, hindering visibility and accurate image
analysis. This paper presents an adaptive enhancement framework integrating
illumination compensation, multi-domain filtering, and dynamic color
correction. A hybrid illumination compensation strategy combining CLAHE, Gamma
correction, and Retinex enhances visibility. A two-stage filtering process,
including spatial-domain (Gaussian, Bilateral, Guided) and frequency-domain
(Fourier, Wavelet) methods, effectively reduces noise while preserving details.
To correct color distortion, an adaptive color compensation (ACC) model
estimates spectral attenuation and water type to combine RCP, DCP, and MUDCP
dynamically. Finally, a perceptually guided color balance mechanism ensures
natural color restoration. Experimental results on benchmark datasets
demonstrate superior performance over state-of-the-art methods in contrast
enhancement, color correction, and structural preservation, making the
framework robust for underwater imaging applications.
| no_new_dataset | 0.950778 |
2503.03652 | Re'em Harel | Re'em Harel and Niv Gilboa and Yuval Pinter | Token-Level Privacy in Large Language Models | null | null | null | null | cs.CL cs.CR | http://creativecommons.org/licenses/by/4.0/ | The use of language models as remote services requires transmitting private
information to external providers, raising significant privacy concerns. This
process not only risks exposing sensitive data to untrusted service providers
but also leaves it vulnerable to interception by eavesdroppers. Existing
privacy-preserving methods for natural language processing (NLP) interactions
primarily rely on semantic similarity, overlooking the role of contextual
information. In this work, we introduce dchi-stencil, a novel token-level
privacy-preserving mechanism that integrates contextual and semantic
information while ensuring strong privacy guarantees under the dchi
differential privacy framework, achieving 2epsilon-dchi-privacy. By
incorporating both semantic and contextual nuances, dchi-stencil achieves a
robust balance between privacy and utility. We evaluate dchi-stencil using
state-of-the-art language models and diverse datasets, achieving comparable and
even better trade-off between utility and privacy compared to existing methods.
This work highlights the potential of dchi-stencil to set a new standard for
privacy-preserving NLP in modern, high-risk applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:27:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Harel",
"Re'em",
""
],
[
"Gilboa",
"Niv",
""
],
[
"Pinter",
"Yuval",
""
]
]
| TITLE: Token-Level Privacy in Large Language Models
ABSTRACT: The use of language models as remote services requires transmitting private
information to external providers, raising significant privacy concerns. This
process not only risks exposing sensitive data to untrusted service providers
but also leaves it vulnerable to interception by eavesdroppers. Existing
privacy-preserving methods for natural language processing (NLP) interactions
primarily rely on semantic similarity, overlooking the role of contextual
information. In this work, we introduce dchi-stencil, a novel token-level
privacy-preserving mechanism that integrates contextual and semantic
information while ensuring strong privacy guarantees under the dchi
differential privacy framework, achieving 2epsilon-dchi-privacy. By
incorporating both semantic and contextual nuances, dchi-stencil achieves a
robust balance between privacy and utility. We evaluate dchi-stencil using
state-of-the-art language models and diverse datasets, achieving comparable and
even better trade-off between utility and privacy compared to existing methods.
This work highlights the potential of dchi-stencil to set a new standard for
privacy-preserving NLP in modern, high-risk applications.
| no_new_dataset | 0.945045 |
2503.03654 | Jessica Hoffmann | Jessica Hoffmann, Christiane Ahlheim, Zac Yu, Aria Walfrand, Jarvis
Jin, Marie Tano, Ahmad Beirami, Erin van Liemt, Nithum Thain, Hakim Sidahmed
and Lucas Dixon | Improving Neutral Point of View Text Generation through
Parameter-Efficient Reinforcement Learning and a Small-Scale High-Quality
Dataset | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper describes the construction of a dataset and the evaluation of
training methods to improve generative large language models' (LLMs) ability to
answer queries on sensitive topics with a Neutral Point of View (NPOV), i.e.,
to provide significantly more informative, diverse and impartial answers. The
dataset, the SHQ-NPOV dataset, comprises 300 high-quality, human-written
quadruplets: a query on a sensitive topic, an answer, an NPOV rating, and a set
of links to source texts elaborating the various points of view. The first key
contribution of this paper is a new methodology to create such datasets through
iterative rounds of human peer-critique and annotator training, which we
release alongside the dataset. The second key contribution is the
identification of a highly effective training regime for parameter-efficient
reinforcement learning (PE-RL) to improve NPOV generation. We compare and
extensively evaluate PE-RL and multiple baselines-including LoRA finetuning (a
strong baseline), SFT and RLHF.
PE-RL not only improves on overall NPOV quality compared to the strongest
baseline ($97.06\%\rightarrow 99.08\%$), but also scores much higher on
features linguists identify as key to separating good answers from the best
answers ($60.25\%\rightarrow 85.21\%$ for presence of supportive details,
$68.74\%\rightarrow 91.43\%$ for absence of oversimplification). A qualitative
analysis corroborates this. Finally, our evaluation finds no statistical
differences between results on topics that appear in the training dataset and
those on separated evaluation topics, which provides strong evidence that our
approach to training PE-RL exhibits very effective out of topic generalization.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:32:47 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Hoffmann",
"Jessica",
""
],
[
"Ahlheim",
"Christiane",
""
],
[
"Yu",
"Zac",
""
],
[
"Walfrand",
"Aria",
""
],
[
"Jin",
"Jarvis",
""
],
[
"Tano",
"Marie",
""
],
[
"Beirami",
"Ahmad",
""
],
[
"van Liemt",
"Erin",
""
],
[
"Thain",
"Nithum",
""
],
[
"Sidahmed",
"Hakim",
""
],
[
"Dixon",
"Lucas",
""
]
]
| TITLE: Improving Neutral Point of View Text Generation through
Parameter-Efficient Reinforcement Learning and a Small-Scale High-Quality
Dataset
ABSTRACT: This paper describes the construction of a dataset and the evaluation of
training methods to improve generative large language models' (LLMs) ability to
answer queries on sensitive topics with a Neutral Point of View (NPOV), i.e.,
to provide significantly more informative, diverse and impartial answers. The
dataset, the SHQ-NPOV dataset, comprises 300 high-quality, human-written
quadruplets: a query on a sensitive topic, an answer, an NPOV rating, and a set
of links to source texts elaborating the various points of view. The first key
contribution of this paper is a new methodology to create such datasets through
iterative rounds of human peer-critique and annotator training, which we
release alongside the dataset. The second key contribution is the
identification of a highly effective training regime for parameter-efficient
reinforcement learning (PE-RL) to improve NPOV generation. We compare and
extensively evaluate PE-RL and multiple baselines-including LoRA finetuning (a
strong baseline), SFT and RLHF.
PE-RL not only improves on overall NPOV quality compared to the strongest
baseline ($97.06\%\rightarrow 99.08\%$), but also scores much higher on
features linguists identify as key to separating good answers from the best
answers ($60.25\%\rightarrow 85.21\%$ for presence of supportive details,
$68.74\%\rightarrow 91.43\%$ for absence of oversimplification). A qualitative
analysis corroborates this. Finally, our evaluation finds no statistical
differences between results on topics that appear in the training dataset and
those on separated evaluation topics, which provides strong evidence that our
approach to training PE-RL exhibits very effective out of topic generalization.
| new_dataset | 0.797162 |
2503.03655 | Thomas P\"ollabauer | Thomas P\"ollabauer, Michael Gasser, Tristan Wirth, Sarah Berkei,
Volker Knauthe, Arjan Kuijper | Improving 6D Object Pose Estimation of metallic Household and Industry
Objects | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 6D object pose estimation suffers from reduced accuracy when applied to
metallic objects. We set out to improve the state-of-the-art by addressing
challenges such as reflections and specular highlights in industrial
applications. Our novel BOP-compatible dataset, featuring a diverse set of
metallic objects (cans, household, and industrial items) under various lighting
and background conditions, provides additional geometric and visual cues. We
demonstrate that these cues can be effectively leveraged to enhance overall
performance. To illustrate the usefulness of the additional features, we
improve upon the GDRNPP algorithm by introducing an additional keypoint
prediction and material estimator head in order to improve spatial scene
understanding. Evaluations on the new dataset show improved accuracy for
metallic objects, supporting the hypothesis that additional geometric and
visual cues can improve learning.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:35:15 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Pöllabauer",
"Thomas",
""
],
[
"Gasser",
"Michael",
""
],
[
"Wirth",
"Tristan",
""
],
[
"Berkei",
"Sarah",
""
],
[
"Knauthe",
"Volker",
""
],
[
"Kuijper",
"Arjan",
""
]
]
| TITLE: Improving 6D Object Pose Estimation of metallic Household and Industry
Objects
ABSTRACT: 6D object pose estimation suffers from reduced accuracy when applied to
metallic objects. We set out to improve the state-of-the-art by addressing
challenges such as reflections and specular highlights in industrial
applications. Our novel BOP-compatible dataset, featuring a diverse set of
metallic objects (cans, household, and industrial items) under various lighting
and background conditions, provides additional geometric and visual cues. We
demonstrate that these cues can be effectively leveraged to enhance overall
performance. To illustrate the usefulness of the additional features, we
improve upon the GDRNPP algorithm by introducing an additional keypoint
prediction and material estimator head in order to improve spatial scene
understanding. Evaluations on the new dataset show improved accuracy for
metallic objects, supporting the hypothesis that additional geometric and
visual cues can improve learning.
| new_dataset | 0.956022 |
2503.03684 | Alina Basharat | Alina Basharat, Yijun Bian, Ping Xu and Zhi Tian | Towards Trustworthy Federated Learning | null | null | null | null | cs.LG cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops a comprehensive framework to address three critical
trustworthy challenges in federated learning (FL): robustness against Byzantine
attacks, fairness, and privacy preservation. To improve the system's defense
against Byzantine attacks that send malicious information to bias the system's
performance, we develop a Two-sided Norm Based Screening (TNBS) mechanism,
which allows the central server to crop the gradients that have the l lowest
norms and h highest norms. TNBS functions as a screening tool to filter out
potential malicious participants whose gradients are far from the honest ones.
To promote egalitarian fairness, we adopt the q-fair federated learning
(q-FFL). Furthermore, we adopt a differential privacy-based scheme to prevent
raw data at local clients from being inferred by curious parties. Convergence
guarantees are provided for the proposed framework under different scenarios.
Experimental results on real datasets demonstrate that the proposed framework
effectively improves robustness and fairness while managing the trade-off
between privacy and accuracy. This work appears to be the first study that
experimentally and theoretically addresses fairness, privacy, and robustness in
trustworthy FL.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:25:20 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Basharat",
"Alina",
""
],
[
"Bian",
"Yijun",
""
],
[
"Xu",
"Ping",
""
],
[
"Tian",
"Zhi",
""
]
]
| TITLE: Towards Trustworthy Federated Learning
ABSTRACT: This paper develops a comprehensive framework to address three critical
trustworthy challenges in federated learning (FL): robustness against Byzantine
attacks, fairness, and privacy preservation. To improve the system's defense
against Byzantine attacks that send malicious information to bias the system's
performance, we develop a Two-sided Norm Based Screening (TNBS) mechanism,
which allows the central server to crop the gradients that have the l lowest
norms and h highest norms. TNBS functions as a screening tool to filter out
potential malicious participants whose gradients are far from the honest ones.
To promote egalitarian fairness, we adopt the q-fair federated learning
(q-FFL). Furthermore, we adopt a differential privacy-based scheme to prevent
raw data at local clients from being inferred by curious parties. Convergence
guarantees are provided for the proposed framework under different scenarios.
Experimental results on real datasets demonstrate that the proposed framework
effectively improves robustness and fairness while managing the trade-off
between privacy and accuracy. This work appears to be the first study that
experimentally and theoretically addresses fairness, privacy, and robustness in
trustworthy FL.
| no_new_dataset | 0.947088 |
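The Two-sided Norm Based Screening step in the record above is concrete enough for a brief sketch. Treat it as an assumption-laden illustration: client gradients are modeled as flattened tensors, and the plain averaging of the surviving gradients is a simplification made here, not necessarily the paper's aggregation rule.

```python
import torch

def tnbs_aggregate(gradients, l, h):
    """Drop the l smallest-norm and h largest-norm client gradients,
    then average the survivors (simple-mean variant for illustration)."""
    norms = torch.stack([g.norm() for g in gradients])
    order = torch.argsort(norms)              # client indices, ascending by norm
    kept = order[l: len(gradients) - h]       # crop both tails
    return torch.stack([gradients[int(i)] for i in kept]).mean(dim=0)

# Usage: with 10 clients, l=2 and h=2 keep the 6 middle-norm gradients.
```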
2503.03686 | Rui Ye | Rui Ye, Shuo Tang, Rui Ge, Yaxin Du, Zhenfei Yin, Siheng Chen, Jing
Shao | MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems | 26 pages, 7 figures | null | null | null | cs.CL cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LLM-based multi-agent systems (MAS) have shown significant potential in
tackling diverse tasks. However, to design effective MAS, existing approaches
heavily rely on manual configurations or multiple calls of advanced LLMs,
resulting in inadaptability and high inference costs. In this paper, we
simplify the process of building an MAS by reframing it as a generative
language task, where the input is a user query and the output is a
corresponding MAS. To address this novel task, we unify the representation of
MAS as executable code and propose a consistency-oriented data construction
pipeline to create a high-quality dataset comprising coherent and consistent
query-MAS pairs. Using this dataset, we train MAS-GPT, an open-source
medium-sized LLM that is capable of generating query-adaptive MAS within a
single LLM inference. The generated MAS can be seamlessly applied to process
user queries and deliver high-quality responses. Extensive experiments on 9
benchmarks and 5 LLMs show that the proposed MAS-GPT consistently outperforms
10+ baseline MAS methods on diverse settings, indicating MAS-GPT's high
effectiveness, efficiency and strong generalization ability. Code will be
available at https://github.com/rui-ye/MAS-GPT.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:27:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ye",
"Rui",
""
],
[
"Tang",
"Shuo",
""
],
[
"Ge",
"Rui",
""
],
[
"Du",
"Yaxin",
""
],
[
"Yin",
"Zhenfei",
""
],
[
"Chen",
"Siheng",
""
],
[
"Shao",
"Jing",
""
]
]
| TITLE: MAS-GPT: Training LLMs to Build LLM-based Multi-Agent Systems
ABSTRACT: LLM-based multi-agent systems (MAS) have shown significant potential in
tackling diverse tasks. However, to design effective MAS, existing approaches
heavily rely on manual configurations or multiple calls of advanced LLMs,
resulting in inadaptability and high inference costs. In this paper, we
simplify the process of building an MAS by reframing it as a generative
language task, where the input is a user query and the output is a
corresponding MAS. To address this novel task, we unify the representation of
MAS as executable code and propose a consistency-oriented data construction
pipeline to create a high-quality dataset comprising coherent and consistent
query-MAS pairs. Using this dataset, we train MAS-GPT, an open-source
medium-sized LLM that is capable of generating query-adaptive MAS within a
single LLM inference. The generated MAS can be seamlessly applied to process
user queries and deliver high-quality responses. Extensive experiments on 9
benchmarks and 5 LLMs show that the proposed MAS-GPT consistently outperforms
10+ baseline MAS methods on diverse settings, indicating MAS-GPT's high
effectiveness, efficiency and strong generalization ability. Code will be
available at https://github.com/rui-ye/MAS-GPT.
| new_dataset | 0.963403 |
2503.03689 | Zhao Yang | Zhao Yang, Zezhong Qian, Xiaofan Li, Weixiang Xu, Gongpeng Zhao,
Ruohong Yu, Lingsi Zhu and Longjun Liu | DualDiff+: Dual-Branch Diffusion for High-Fidelity Video Generation with
Reward Guidance | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and high-fidelity driving scene reconstruction demands the effective
utilization of comprehensive scene information as conditional inputs. Existing
methods predominantly rely on 3D bounding boxes and BEV road maps for
foreground and background control, which fail to capture the full complexity of
driving scenes and adequately integrate multimodal information. In this work,
we present DualDiff, a dual-branch conditional diffusion model designed to
enhance driving scene generation across multiple views and video sequences.
Specifically, we introduce Occupancy Ray-shape Sampling (ORS) as a conditional
input, offering rich foreground and background semantics alongside 3D spatial
geometry to precisely control the generation of both elements. To improve the
synthesis of fine-grained foreground objects, particularly complex and distant
ones, we propose a Foreground-Aware Mask (FGM) denoising loss function.
Additionally, we develop the Semantic Fusion Attention (SFA) mechanism to
dynamically prioritize relevant information and suppress noise, enabling more
effective multimodal fusion. Finally, to ensure high-quality image-to-video
generation, we introduce the Reward-Guided Diffusion (RGD) framework, which
maintains global consistency and semantic coherence in generated videos.
Extensive experiments demonstrate that DualDiff achieves state-of-the-art
(SOTA) performance across multiple datasets. On the NuScenes dataset, DualDiff
reduces the FID score by 4.09% compared to the best baseline. In downstream
tasks, such as BEV segmentation, our method improves vehicle mIoU by 4.50% and
road mIoU by 1.70%, while in BEV 3D object detection, the foreground mAP
increases by 1.46%. Code will be made available at
https://github.com/yangzhaojason/DualDiff.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:31:45 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Zhao",
""
],
[
"Qian",
"Zezhong",
""
],
[
"Li",
"Xiaofan",
""
],
[
"Xu",
"Weixiang",
""
],
[
"Zhao",
"Gongpeng",
""
],
[
"Yu",
"Ruohong",
""
],
[
"Zhu",
"Lingsi",
""
],
[
"Liu",
"Longjun",
""
]
]
| TITLE: DualDiff+: Dual-Branch Diffusion for High-Fidelity Video Generation with
Reward Guidance
ABSTRACT: Accurate and high-fidelity driving scene reconstruction demands the effective
utilization of comprehensive scene information as conditional inputs. Existing
methods predominantly rely on 3D bounding boxes and BEV road maps for
foreground and background control, which fail to capture the full complexity of
driving scenes and adequately integrate multimodal information. In this work,
we present DualDiff, a dual-branch conditional diffusion model designed to
enhance driving scene generation across multiple views and video sequences.
Specifically, we introduce Occupancy Ray-shape Sampling (ORS) as a conditional
input, offering rich foreground and background semantics alongside 3D spatial
geometry to precisely control the generation of both elements. To improve the
synthesis of fine-grained foreground objects, particularly complex and distant
ones, we propose a Foreground-Aware Mask (FGM) denoising loss function.
Additionally, we develop the Semantic Fusion Attention (SFA) mechanism to
dynamically prioritize relevant information and suppress noise, enabling more
effective multimodal fusion. Finally, to ensure high-quality image-to-video
generation, we introduce the Reward-Guided Diffusion (RGD) framework, which
maintains global consistency and semantic coherence in generated videos.
Extensive experiments demonstrate that DualDiff achieves state-of-the-art
(SOTA) performance across multiple datasets. On the NuScenes dataset, DualDiff
reduces the FID score by 4.09% compared to the best baseline. In downstream
tasks, such as BEV segmentation, our method improves vehicle mIoU by 4.50% and
road mIoU by 1.70%, while in BEV 3D object detection, the foreground mAP
increases by 1.46%. Code will be made available at
https://github.com/yangzhaojason/DualDiff.
| no_new_dataset | 0.951908 |
2503.03693 | Ungsik Kim | Ungsik Kim | ILLC: Iterative Layer-by-Layer Compression for Enhancing Structural
Faithfulness in SpArX | 8 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of Explainable Artificial Intelligence (XAI), argumentative XAI
approaches have been proposed to represent the internal reasoning process of
deep neural networks in a more transparent way by interpreting hidden nodes as
arguements. However, as the number of layers increases, existing compression
methods simplify all layers at once, which lead to high accumulative
information loss. To compensate for this, we propose an iterative
layer-by-layer compression technique in which each layer is compressed
separately and the reduction error in the next layer is immediately compensated
for, thereby improving the overall input-output and structural fidelity of the
model. Experiments on the Breast Cancer Diagnosis dataset show that, compared
to traditional compression, the method reduces input-output and structural
unfaithfulness, and maintains a more consistent attack-support relationship in
the Argumentative Explanation scheme. This is significant because it provides a
new way to make complex MLP models more compact while still conveying their
internal inference logic without distortion.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:43:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Kim",
"Ungsik",
""
]
]
| TITLE: ILLC: Iterative Layer-by-Layer Compression for Enhancing Structural
Faithfulness in SpArX
ABSTRACT: In the field of Explainable Artificial Intelligence (XAI), argumentative XAI
approaches have been proposed to represent the internal reasoning process of
deep neural networks in a more transparent way by interpreting hidden nodes as
arguements. However, as the number of layers increases, existing compression
methods simplify all layers at once, which lead to high accumulative
information loss. To compensate for this, we propose an iterative
layer-by-layer compression technique in which each layer is compressed
separately and the reduction error in the next layer is immediately compensated
for, thereby improving the overall input-output and structural fidelity of the
model. Experiments on the Breast Cancer Diagnosis dataset show that, compared
to traditional compression, the method reduces input-output and structural
unfaithfulness, and maintains a more consistent attack-support relationship in
the Argumentative Explanation scheme. This is significant because it provides a
new way to make complex MLP models more compact while still conveying their
internal inference logic without distortion.
| no_new_dataset | 0.944485 |
2503.03702 | Jiyue Jiang | Jiyue Jiang, Alfred Kar Yin Truong, Yanyu Chen, Qinghang Bao, Sheng
Wang, Pengan Chen, Jiuming Wang, Lingpeng Kong, Yu Li, Chuan Wu | Developing and Utilizing a Large-Scale Cantonese Dataset for
Multi-Tasking in Large Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-quality data resources play a crucial role in learning large language
models (LLMs), particularly for low-resource languages like Cantonese. Despite
having more than 85 million native speakers, Cantonese is still considered a
low-resource language in the field of natural language processing (NLP) due to
factors such as the dominance of Mandarin, lack of cohesion within the
Cantonese-speaking community, diversity in character encoding and input
methods, and the tendency of overseas Cantonese speakers to prefer using
English. In addition, rich colloquial vocabulary of Cantonese, English
loanwords, and code-switching characteristics add to the complexity of corpus
collection and processing. To address these challenges, we collect Cantonese
texts from a variety of sources, including open source corpora, Hong
Kong-specific forums, Wikipedia, and Common Crawl data. We conduct rigorous
data processing through language filtering, quality filtering, content
filtering, and de-duplication steps, successfully constructing a high-quality
Cantonese corpus of over 2 billion tokens for training large language models.
We further refined the model through supervised fine-tuning (SFT) on curated
Cantonese tasks, enhancing its ability to handle specific applications. Upon
completion of the training, the model achieves state-of-the-art (SOTA)
performance on four Cantonese benchmarks. After training on our dataset, the
model also exhibits improved performance on other mainstream language tasks.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:53:07 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Jiang",
"Jiyue",
""
],
[
"Truong",
"Alfred Kar Yin",
""
],
[
"Chen",
"Yanyu",
""
],
[
"Bao",
"Qinghang",
""
],
[
"Wang",
"Sheng",
""
],
[
"Chen",
"Pengan",
""
],
[
"Wang",
"Jiuming",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Li",
"Yu",
""
],
[
"Wu",
"Chuan",
""
]
]
| TITLE: Developing and Utilizing a Large-Scale Cantonese Dataset for
Multi-Tasking in Large Language Models
ABSTRACT: High-quality data resources play a crucial role in learning large language
models (LLMs), particularly for low-resource languages like Cantonese. Despite
having more than 85 million native speakers, Cantonese is still considered a
low-resource language in the field of natural language processing (NLP) due to
factors such as the dominance of Mandarin, lack of cohesion within the
Cantonese-speaking community, diversity in character encoding and input
methods, and the tendency of overseas Cantonese speakers to prefer using
English. In addition, rich colloquial vocabulary of Cantonese, English
loanwords, and code-switching characteristics add to the complexity of corpus
collection and processing. To address these challenges, we collect Cantonese
texts from a variety of sources, including open source corpora, Hong
Kong-specific forums, Wikipedia, and Common Crawl data. We conduct rigorous
data processing through language filtering, quality filtering, content
filtering, and de-duplication steps, successfully constructing a high-quality
Cantonese corpus of over 2 billion tokens for training large language models.
We further refined the model through supervised fine-tuning (SFT) on curated
Cantonese tasks, enhancing its ability to handle specific applications. Upon
completion of the training, the model achieves state-of-the-art (SOTA)
performance on four Cantonese benchmarks. After training on our dataset, the
model also exhibits improved performance on other mainstream language tasks.
| no_new_dataset | 0.603706 |
2503.03706 | Ruben Doste | Ruben Doste, Julia Camps, Zhinuo Jenny Wang, Lucas Arantes Berg, Maxx
Holmes, Hannah Smith, Marcel Beetz, Lei Li, Abhirup Banerjee, Vicente Grau,
Blanca Rodriguez | An Automated Computational Pipeline for Generating Large-Scale Cohorts
of Patient-Specific Ventricular Models in Electromechanical In Silico Trials | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | In recent years, human in silico trials have gained significant traction as a
powerful approach to evaluate the effects of drugs, clinical interventions, and
medical devices. In silico trials not only minimise patient risks but also
reduce reliance on animal testing. However, the implementation of in silico
trials presents several time-consuming challenges. It requires the creation of
large cohorts of virtual patients. Each virtual patient is described by their
anatomy with a volumetric mesh and electrophysiological and mechanical dynamics
through mathematical equations and parameters. Furthermore, simulated
conditions need definition including stimulation protocols and therapy
evaluation. For large virtual cohorts, this requires automatic and efficient
pipelines for generation of corresponding files. In this work, we present a
computational pipeline to automatically create large virtual patient cohort
files to conduct large-scale in silico trials through cardiac electromechanical
simulations. The pipeline generates the files describing meshes, labels, and
data required for the simulations directly from unprocessed surface meshes. We
applied the pipeline to generate over 100 virtual patients from various
datasets and performed simulations to demonstrate capacity to conduct in silico
trials for virtual patients using verified and validated electrophysiology and
electromechanics models for the context of use. The proposed pipeline is
adaptable to accommodate different types of ventricular geometries and mesh
processing tools, ensuring its versatility in handling diverse clinical
datasets. By establishing an automated framework for large scale simulation
studies as required for in silico trials and providing open-source code, our
work aims to support scalable, personalised cardiac simulations in research and
clinical applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:56:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Doste",
"Ruben",
""
],
[
"Camps",
"Julia",
""
],
[
"Wang",
"Zhinuo Jenny",
""
],
[
"Berg",
"Lucas Arantes",
""
],
[
"Holmes",
"Maxx",
""
],
[
"Smith",
"Hannah",
""
],
[
"Beetz",
"Marcel",
""
],
[
"Li",
"Lei",
""
],
[
"Banerjee",
"Abhirup",
""
],
[
"Grau",
"Vicente",
""
],
[
"Rodriguez",
"Blanca",
""
]
]
| TITLE: An Automated Computational Pipeline for Generating Large-Scale Cohorts
of Patient-Specific Ventricular Models in Electromechanical In Silico Trials
ABSTRACT: In recent years, human in silico trials have gained significant traction as a
powerful approach to evaluate the effects of drugs, clinical interventions, and
medical devices. In silico trials not only minimise patient risks but also
reduce reliance on animal testing. However, the implementation of in silico
trials presents several time-consuming challenges. It requires the creation of
large cohorts of virtual patients. Each virtual patient is described by their
anatomy with a volumetric mesh and electrophysiological and mechanical dynamics
through mathematical equations and parameters. Furthermore, simulated
conditions need definition including stimulation protocols and therapy
evaluation. For large virtual cohorts, this requires automatic and efficient
pipelines for generation of corresponding files. In this work, we present a
computational pipeline to automatically create large virtual patient cohort
files to conduct large-scale in silico trials through cardiac electromechanical
simulations. The pipeline generates the files describing meshes, labels, and
data required for the simulations directly from unprocessed surface meshes. We
applied the pipeline to generate over 100 virtual patients from various
datasets and performed simulations to demonstrate capacity to conduct in silico
trials for virtual patients using verified and validated electrophysiology and
electromechanics models for the context of use. The proposed pipeline is
adaptable to accommodate different types of ventricular geometries and mesh
processing tools, ensuring its versatility in handling diverse clinical
datasets. By establishing an automated framework for large scale simulation
studies as required for in silico trials and providing open-source code, our
work aims to support scalable, personalised cardiac simulations in research and
clinical applications.
| no_new_dataset | 0.949949 |
2503.03707 | Annie Chen | Annie S. Chen, Alec M. Lessing, Yuejiang Liu, Chelsea Finn | Curating Demonstrations using Online Experience | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Many robot demonstration datasets contain heterogeneous demonstrations of
varying quality. This heterogeneity may benefit policy pre-training, but can
hinder robot performance when used with a final imitation learning objective.
In particular, some strategies in the data may be less reliable than others or
may be underrepresented in the data, leading to poor performance when such
strategies are sampled at test time. Moreover, such unreliable or
underrepresented strategies can be difficult even for people to discern, and
sifting through demonstration datasets is time-consuming and costly. On the
other hand, policy performance when trained on such demonstrations can reflect
the reliability of different strategies. We thus propose for robots to
self-curate based on online robot experience (Demo-SCORE). More specifically,
we train and cross-validate a classifier to discern successful policy roll-outs
from unsuccessful ones and use the classifier to filter heterogeneous
demonstration datasets. Our experiments in simulation and the real world show
that Demo-SCORE can effectively identify suboptimal demonstrations without
manual curation. Notably, Demo-SCORE achieves over 15-35% higher absolute
success rate in the resulting policy compared to the base policy trained with
all original demonstrations.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 17:58:16 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Annie S.",
""
],
[
"Lessing",
"Alec M.",
""
],
[
"Liu",
"Yuejiang",
""
],
[
"Finn",
"Chelsea",
""
]
]
| TITLE: Curating Demonstrations using Online Experience
ABSTRACT: Many robot demonstration datasets contain heterogeneous demonstrations of
varying quality. This heterogeneity may benefit policy pre-training, but can
hinder robot performance when used with a final imitation learning objective.
In particular, some strategies in the data may be less reliable than others or
may be underrepresented in the data, leading to poor performance when such
strategies are sampled at test time. Moreover, such unreliable or
underrepresented strategies can be difficult even for people to discern, and
sifting through demonstration datasets is time-consuming and costly. On the
other hand, policy performance when trained on such demonstrations can reflect
the reliability of different strategies. We thus propose for robots to
self-curate based on online robot experience (Demo-SCORE). More specifically,
we train and cross-validate a classifier to discern successful policy roll-outs
from unsuccessful ones and use the classifier to filter heterogeneous
demonstration datasets. Our experiments in simulation and the real world show
that Demo-SCORE can effectively identify suboptimal demonstrations without
manual curation. Notably, Demo-SCORE achieves over 15-35% higher absolute
success rate in the resulting policy compared to the base policy trained with
all original demonstrations.
| no_new_dataset | 0.952175 |
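The classifier-based filtering idea in the Demo-SCORE record above can be illustrated with a toy sketch. The feature representation of roll-outs and demonstrations, the logistic-regression classifier, and the 0.5 retention threshold are all assumptions made here; the paper's pipeline is more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def filter_demos(demo_feats, rollout_feats, rollout_success, threshold=0.5):
    """Fit a success classifier on online roll-outs, report its cross-validated
    accuracy, then keep only demonstrations scored as likely-successful."""
    clf = LogisticRegression(max_iter=1000)
    cv_pred = cross_val_predict(clf, rollout_feats, rollout_success, cv=5)
    print("held-out accuracy:", float((cv_pred == rollout_success).mean()))
    clf.fit(rollout_feats, rollout_success)
    scores = clf.predict_proba(demo_feats)[:, 1]
    return np.where(scores >= threshold)[0]   # indices of demonstrations to retain
```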
2503.03726 | Jun Yang | Jun Yang, Wenjie Xue, Sahar Ghavidel, Steven L. Waslander | Active 6D Pose Estimation for Textureless Objects using Multi-View RGB
Frames | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Estimating the 6D pose of textureless objects from RGB images is an important
problem in robotics. Due to appearance ambiguities, rotational symmetries, and
severe occlusions, single-view based 6D pose estimators are still unable to
handle a wide range of objects, motivating research towards multi-view pose
estimation and next-best-view prediction that addresses these limitations. In
this work, we propose a comprehensive active perception framework for
estimating the 6D poses of textureless objects using only RGB images. Our
approach is built upon a key idea: decoupling the 6D pose estimation into a
sequential two-step process can greatly improve both accuracy and efficiency.
First, we estimate the 3D translation of each object, resolving scale and depth
ambiguities inherent to RGB images. These estimates are then used to simplify
the subsequent task of determining the 3D orientation, which we achieve through
canonical scale template matching. Building on this formulation, we then
introduce an active perception strategy that predicts the next best camera
viewpoint to capture an RGB image, effectively reducing object pose uncertainty
and enhancing pose accuracy. We evaluate our method on the public ROBI dataset
as well as on a transparent object dataset that we created. When evaluated
using the same camera viewpoints, our multi-view pose estimation significantly
outperforms state-of-the-art approaches. Furthermore, by leveraging our
next-best-view strategy, our method achieves high object pose accuracy with
substantially fewer viewpoints than heuristic-based policies.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 18:28:32 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Jun",
""
],
[
"Xue",
"Wenjie",
""
],
[
"Ghavidel",
"Sahar",
""
],
[
"Waslander",
"Steven L.",
""
]
]
| TITLE: Active 6D Pose Estimation for Textureless Objects using Multi-View RGB
Frames
ABSTRACT: Estimating the 6D pose of textureless objects from RGB images is an important
problem in robotics. Due to appearance ambiguities, rotational symmetries, and
severe occlusions, single-view based 6D pose estimators are still unable to
handle a wide range of objects, motivating research towards multi-view pose
estimation and next-best-view prediction that addresses these limitations. In
this work, we propose a comprehensive active perception framework for
estimating the 6D poses of textureless objects using only RGB images. Our
approach is built upon a key idea: decoupling the 6D pose estimation into a
sequential two-step process can greatly improve both accuracy and efficiency.
First, we estimate the 3D translation of each object, resolving scale and depth
ambiguities inherent to RGB images. These estimates are then used to simplify
the subsequent task of determining the 3D orientation, which we achieve through
canonical scale template matching. Building on this formulation, we then
introduce an active perception strategy that predicts the next best camera
viewpoint to capture an RGB image, effectively reducing object pose uncertainty
and enhancing pose accuracy. We evaluate our method on the public ROBI dataset
as well as on a transparent object dataset that we created. When evaluated
using the same camera viewpoints, our multi-view pose estimation significantly
outperforms state-of-the-art approaches. Furthermore, by leveraging our
next-best-view strategy, our method achieves high object pose accuracy with
substantially fewer viewpoints than heuristic-based policies.
| new_dataset | 0.974965 |
2503.03729 | Sneh Pillai | Sneh Pillai | Graph-Augmented LSTM for Forecasting Sparse Anomalies in
Graph-Structured Time Series | 12 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Detecting anomalies in time series data is a critical task across many
domains. The challenge intensifies when anomalies are sparse and the data are
multivariate with relational dependencies across sensors or nodes. Traditional
univariate anomaly detectors struggle to capture such cross-node dependencies,
particularly in sparse anomaly settings. To address this, we propose a
graph-augmented time series forecasting approach that explicitly integrates the
graph of relationships among time series into an LSTM forecasting model. This
enables the model to detect rare anomalies that might otherwise go unnoticed in
purely univariate approaches. We evaluate the approach on two benchmark
datasets - the Yahoo Webscope S5 anomaly dataset and the METR-LA traffic sensor
network - and compare the performance of the Graph-Augmented LSTM against
LSTM-only, ARIMA, and Prophet baselines. Results demonstrate that the
graph-augmented model achieves significantly higher precision and recall,
improving F1-score by up to 10% over the best baseline.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 18:37:52 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Pillai",
"Sneh",
""
]
]
| TITLE: Graph-Augmented LSTM for Forecasting Sparse Anomalies in
Graph-Structured Time Series
ABSTRACT: Detecting anomalies in time series data is a critical task across many
domains. The challenge intensifies when anomalies are sparse and the data are
multivariate with relational dependencies across sensors or nodes. Traditional
univariate anomaly detectors struggle to capture such cross-node dependencies,
particularly in sparse anomaly settings. To address this, we propose a
graph-augmented time series forecasting approach that explicitly integrates the
graph of relationships among time series into an LSTM forecasting model. This
enables the model to detect rare anomalies that might otherwise go unnoticed in
purely univariate approaches. We evaluate the approach on two benchmark
datasets - the Yahoo Webscope S5 anomaly dataset and the METR-LA traffic sensor
network - and compare the performance of the Graph-Augmented LSTM against
LSTM-only, ARIMA, and Prophet baselines. Results demonstrate that the
graph-augmented model achieves significantly higher precision and recall,
improving F1-score by up to 10% over the best baseline.
| no_new_dataset | 0.943295 |
2503.03733 | Amal Shaheen Dr. | Amal Shaheena, Nairouz Mrabahb, Riadh Ksantinia, Abdulla Alqaddoumia | Rethinking Deep Clustering Paradigms: Self-Supervision Is All You Need | null | Volume 181, January 2025, 106773 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent advances in deep clustering have been made possible by significant
progress in self-supervised and pseudo-supervised learning. However, the
trade-off between self-supervision and pseudo-supervision can give rise to
three primary issues. The joint training causes Feature Randomness and Feature
Drift, whereas the independent training causes Feature Randomness and Feature
Twist. In essence, using pseudo-labels generates random and unreliable
features. The combination of pseudo-supervision and self-supervision drifts the
reliable clustering-oriented features. Moreover, moving from self-supervision
to pseudo-supervision can twist the curved latent manifolds. This paper
addresses the limitations of existing deep clustering paradigms concerning
Feature Randomness, Feature Drift, and Feature Twist. We propose a new paradigm
with a new strategy that replaces pseudo-supervision with a second round of
self-supervision training. The new strategy makes the transition between
instance-level self-supervision and neighborhood-level self-supervision
smoother and less abrupt. Moreover, it prevents the drifting effect that is
caused by the strong competition between instance-level self-supervision and
clustering-level pseudo-supervision. Moreover, the absence of the
pseudo-supervision prevents the risk of generating random features. With this
novel approach, our paper introduces a Rethinking of the Deep Clustering
Paradigms, denoted by R-DC. Our model is specifically designed to address three
primary challenges encountered in Deep Clustering: Feature Randomness, Feature
Drift, and Feature Twist. Experimental results conducted on six datasets have
shown that the two-level self-supervision training yields substantial
improvements.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 18:44:35 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Shaheena",
"Amal",
""
],
[
"Mrabahb",
"Nairouz",
""
],
[
"Ksantinia",
"Riadh",
""
],
[
"Alqaddoumia",
"Abdulla",
""
]
]
| TITLE: Rethinking Deep Clustering Paradigms: Self-Supervision Is All You Need
ABSTRACT: The recent advances in deep clustering have been made possible by significant
progress in self-supervised and pseudo-supervised learning. However, the
trade-off between self-supervision and pseudo-supervision can give rise to
three primary issues. The joint training causes Feature Randomness and Feature
Drift, whereas the independent training causes Feature Randomness and Feature
Twist. In essence, using pseudo-labels generates random and unreliable
features. The combination of pseudo-supervision and self-supervision drifts the
reliable clustering-oriented features. Moreover, moving from self-supervision
to pseudo-supervision can twist the curved latent manifolds. This paper
addresses the limitations of existing deep clustering paradigms concerning
Feature Randomness, Feature Drift, and Feature Twist. We propose a new paradigm
with a new strategy that replaces pseudo-supervision with a second round of
self-supervision training. The new strategy makes the transition between
instance-level self-supervision and neighborhood-level self-supervision
smoother and less abrupt. Moreover, it prevents the drifting effect that is
caused by the strong competition between instance-level self-supervision and
clustering-level pseudo-supervision. Moreover, the absence of the
pseudo-supervision prevents the risk of generating random features. With this
novel approach, our paper introduces a Rethinking of the Deep Clustering
Paradigms, denoted by R-DC. Our model is specifically designed to address three
primary challenges encountered in Deep Clustering: Feature Randomness, Feature
Drift, and Feature Twist. Experimental results conducted on six datasets have
shown that the two-level self-supervision training yields substantial
improvements.
| no_new_dataset | 0.954605 |
2503.03743 | Yuqi Zhou | Yuqi Zhou, Shuai Wang, Sunhao Dai, Qinglin Jia, Zhaocheng Du, Zhenhua
Dong and Jun Xu | CHOP: Mobile Operating Assistant with Constrained High-frequency
Optimized Subtask Planning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The advancement of visual language models (VLMs) has enhanced mobile device
operations, allowing simulated human-like actions to address user requirements.
Current VLM-based mobile operating assistants can be structured into three
levels: task, subtask, and action. The subtask level, linking high-level goals
with low-level executable actions, is crucial for task completion but faces two
challenges: ineffective subtasks that the lower-level agent cannot execute and
inefficient subtasks that fail to contribute to the completion of the
higher-level task. These challenges stem from VLM's lack of experience in
decomposing subtasks within GUI scenarios in multi-agent architecture. To
address these, we propose a new mobile assistant architecture with constrained
high-frequency optimized planning (CHOP). Our approach overcomes the VLM's
deficiency in GUI scenarios planning by using human-planned subtasks as the
basis vector. We evaluate our architecture in both English and Chinese contexts
across 20 Apps, demonstrating significant improvements in both effectiveness
and efficiency. Our dataset and code are available at
https://github.com/Yuqi-Zhou/CHOP
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 18:56:16 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhou",
"Yuqi",
""
],
[
"Wang",
"Shuai",
""
],
[
"Dai",
"Sunhao",
""
],
[
"Jia",
"Qinglin",
""
],
[
"Du",
"Zhaocheng",
""
],
[
"Dong",
"Zhenhua",
""
],
[
"Xu",
"Jun",
""
]
]
| TITLE: CHOP: Mobile Operating Assistant with Constrained High-frequency
Optimized Subtask Planning
ABSTRACT: The advancement of visual language models (VLMs) has enhanced mobile device
operations, allowing simulated human-like actions to address user requirements.
Current VLM-based mobile operating assistants can be structured into three
levels: task, subtask, and action. The subtask level, linking high-level goals
with low-level executable actions, is crucial for task completion but faces two
challenges: ineffective subtasks that the lower-level agent cannot execute and
inefficient subtasks that fail to contribute to the completion of the
higher-level task. These challenges stem from VLM's lack of experience in
decomposing subtasks within GUI scenarios in multi-agent architecture. To
address these, we propose a new mobile assistant architecture with constrained
high-frequency optimized planning (CHOP). Our approach overcomes the VLM's
deficiency in GUI scenarios planning by using human-planned subtasks as the
basis vector. We evaluate our architecture in both English and Chinese contexts
across 20 Apps, demonstrating significant improvements in both effectiveness
and efficiency. Our dataset and code are available at
https://github.com/Yuqi-Zhou/CHOP
| new_dataset | 0.950549 |
2011.13986 | Johannes Schneider | Johannes Schneider and Michalis Vlachos | Reflective-Net: Learning from Explanations | null | Data Mining and Knowledge Discovery, 1-22, 2023 | 10.1007/s10618-023-00920-0 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine whether data generated by explanation techniques, which promote a
process of self-reflection, can improve classifier performance. Our work is
based on the idea that humans have the ability to make quick, intuitive
decisions as well as to reflect on their own thinking and learn from
explanations. To the best of our knowledge, this is the first time that the
potential of mimicking this process by using explanations generated by
explainability methods has been explored. We found that combining explanations
with traditional labeled data leads to significant improvements in
classification accuracy and training efficiency across multiple image
classification datasets and convolutional neural network architectures. It is
worth noting that during training, we not only used explanations for the
correct or predicted class, but also for other classes. This serves multiple
purposes, including allowing for reflection on potential outcomes and enriching
the data through augmentation.
| [
{
"version": "v1",
"created": "Fri, 27 Nov 2020 20:40:45 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2023 22:11:22 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 06:42:03 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Schneider",
"Johannes",
""
],
[
"Vlachos",
"Michalis",
""
]
]
| TITLE: Reflective-Net: Learning from Explanations
ABSTRACT: We examine whether data generated by explanation techniques, which promote a
process of self-reflection, can improve classifier performance. Our work is
based on the idea that humans have the ability to make quick, intuitive
decisions as well as to reflect on their own thinking and learn from
explanations. To the best of our knowledge, this is the first time that the
potential of mimicking this process by using explanations generated by
explainability methods has been explored. We found that combining explanations
with traditional labeled data leads to significant improvements in
classification accuracy and training efficiency across multiple image
classification datasets and convolutional neural network architectures. It is
worth noting that during training, we not only used explanations for the
correct or predicted class, but also for other classes. This serves multiple
purposes, including allowing for reflection on potential outcomes and enriching
the data through augmentation.
| no_new_dataset | 0.955068 |
2205.07593 | Shreyas Pai | M\'elanie Cambus and Fabian Kuhn and Etna Lindy and Shreyas Pai and
Jara Uitto | A $(3+\varepsilon)$-Approximate Correlation Clustering Algorithm in
Dynamic Streams | 19 pages. This is the TheoretiCS journal version | TheoretiCS, Volume 4 (February 28, 2025) theoretics:13092 | 10.46298/theoretics.25.6 | null | cs.DS cs.DC | http://creativecommons.org/licenses/by/4.0/ | Grouping together similar elements in datasets is a common task in data
mining and machine learning. In this paper, we study streaming algorithms for
correlation clustering, where each pair of elements is labeled either similar
or dissimilar. The task is to partition the elements and the objective is to
minimize disagreements, that is, the number of dissimilar elements grouped
together and similar elements that get separated.
Our main contribution is a semi-streaming algorithm that achieves a $(3 +
\varepsilon)$-approximation to the minimum number of disagreements using a
single pass over the stream. In addition, the algorithm also works for dynamic
streams. Our approach builds on the analysis of the PIVOT algorithm by Ailon,
Charikar, and Newman [JACM'08] that obtains a $3$-approximation in the
centralized setting. Our design allows us to sparsify the input graph by
ignoring a large portion of the nodes and edges without a large extra cost as
compared to the analysis of PIVOT. This sparsification makes our technique
applicable in models such as semi-streaming, where sparse graphs can typically
be handled much more efficiently.
Our work improves on the approximation ratio of the recent single-pass
$5$-approximation algorithm and on the number of passes of the recent
$O(1/\varepsilon)$-pass $(3 + \varepsilon)$-approximation algorithm [Behnezhad,
Charikar, Ma, Tan FOCS'22, SODA'23]. Our algorithm is also more robust and can
be applied in dynamic streams. Furthermore, it is the first single pass $(3 +
\varepsilon)$-approximation algorithm that uses polynomial post-processing
time.
| [
{
"version": "v1",
"created": "Mon, 16 May 2022 11:51:48 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 13:26:59 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Oct 2022 13:25:07 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Apr 2023 17:50:57 GMT"
},
{
"version": "v5",
"created": "Wed, 1 Nov 2023 14:21:33 GMT"
},
{
"version": "v6",
"created": "Thu, 27 Feb 2025 13:38:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cambus",
"Mélanie",
""
],
[
"Kuhn",
"Fabian",
""
],
[
"Lindy",
"Etna",
""
],
[
"Pai",
"Shreyas",
""
],
[
"Uitto",
"Jara",
""
]
]
| TITLE: A $(3+\varepsilon)$-Approximate Correlation Clustering Algorithm in
Dynamic Streams
ABSTRACT: Grouping together similar elements in datasets is a common task in data
mining and machine learning. In this paper, we study streaming algorithms for
correlation clustering, where each pair of elements is labeled either similar
or dissimilar. The task is to partition the elements and the objective is to
minimize disagreements, that is, the number of dissimilar elements grouped
together and similar elements that get separated.
Our main contribution is a semi-streaming algorithm that achieves a $(3 +
\varepsilon)$-approximation to the minimum number of disagreements using a
single pass over the stream. In addition, the algorithm also works for dynamic
streams. Our approach builds on the analysis of the PIVOT algorithm by Ailon,
Charikar, and Newman [JACM'08] that obtains a $3$-approximation in the
centralized setting. Our design allows us to sparsify the input graph by
ignoring a large portion of the nodes and edges without a large extra cost as
compared to the analysis of PIVOT. This sparsification makes our technique
applicable in models such as semi-streaming, where sparse graphs can typically
be handled much more efficiently.
Our work improves on the approximation ratio of the recent single-pass
$5$-approximation algorithm and on the number of passes of the recent
$O(1/\varepsilon)$-pass $(3 + \varepsilon)$-approximation algorithm [Behnezhad,
Charikar, Ma, Tan FOCS'22, SODA'23]. Our algorithm is also more robust and can
be applied in dynamic streams. Furthermore, it is the first single pass $(3 +
\varepsilon)$-approximation algorithm that uses polynomial post-processing
time.
| no_new_dataset | 0.945248 |
2211.10630 | Manxi Lin | Manxi Lin, Aasa Feragen, Kamil Mikolaj, Zahra Bashir, Martin
Gr{\o}nneb{\ae}k Tolsgaard, Anders Nymark Christensen | Explainable fetal ultrasound quality assessment with progressive concept
bottleneck models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | The quality of fetal ultrasound screening scans directly influences the
precision of biometric measurements. However, acquiring high-quality scans is
labor-intensive and highly relies on the operator's skills. Considering the low
contrastiveness and imaging artifacts that widely exist in ultrasound, even a
dedicated deep-learning model can be vulnerable to learning from confounding
information in the image. In this paper, we propose a holistic and explainable
method for fetal ultrasound quality assessment, where we design a hierarchical
concept bottleneck model by introducing human-readable ``concepts" into the
task and imitating the sequential expert decision-making process. This
hierarchical information flow forces the model to learn concepts from
semantically meaningful areas: The model first passes through a layer of
visual, segmentation-based concepts, and next a second layer of property
concepts directly associated with the decision-making task. We consider the
quality assessment to be in a more challenging but more realistic setting, with
fine-grained image recognition. Experiments show that our model outperforms
equivalent concept-free models on an in-house dataset, and shows better
generalizability on two public benchmarks, one from Spain and one from Africa,
without any fine-tuning.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2022 09:31:19 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:39:27 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lin",
"Manxi",
""
],
[
"Feragen",
"Aasa",
""
],
[
"Mikolaj",
"Kamil",
""
],
[
"Bashir",
"Zahra",
""
],
[
"Tolsgaard",
"Martin Grønnebæk",
""
],
[
"Christensen",
"Anders Nymark",
""
]
]
| TITLE: Explainable fetal ultrasound quality assessment with progressive concept
bottleneck models
ABSTRACT: The quality of fetal ultrasound screening scans directly influences the
precision of biometric measurements. However, acquiring high-quality scans is
labor-intensive and highly relies on the operator's skills. Considering the low
contrastiveness and imaging artifacts that widely exist in ultrasound, even a
dedicated deep-learning model can be vulnerable to learning from confounding
information in the image. In this paper, we propose a holistic and explainable
method for fetal ultrasound quality assessment, where we design a hierarchical
concept bottleneck model by introducing human-readable ``concepts" into the
task and imitating the sequential expert decision-making process. This
hierarchical information flow forces the model to learn concepts from
semantically meaningful areas: The model first passes through a layer of
visual, segmentation-based concepts, and next a second layer of property
concepts directly associated with the decision-making task. We consider the
quality assessment to be in a more challenging but more realistic setting, with
fine-grained image recognition. Experiments show that our model outperforms
equivalent concept-free models on an in-house dataset, and shows better
generalizability on two public benchmarks, one from Spain and one from Africa,
without any fine-tuning.
| no_new_dataset | 0.949389 |
2303.11858 | Yunjie He | Yunjie He, Mojtaba Nayyeri, Bo Xiong, Yuqicheng Zhu, Evgeny Kharlamov,
Steffen Staab | Modeling Relational Patterns for Logical Query Answering over Knowledge
Graphs | The results reported in this paper are included in our accepted paper
arXiv:2407.09212 at ECAI 2024 | null | null | null | cs.DB cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Answering first-order logical (FOL) queries over knowledge graphs (KG)
remains a challenging task mainly due to KG incompleteness. Query embedding
approaches this problem by computing the low-dimensional vector representations
of entities, relations, and logical queries. KGs exhibit relational patterns
such as symmetry and composition and modeling the patterns can further enhance
the performance of query embedding models. However, the role of such patterns
in answering FOL queries by query embedding models has not yet been studied in
the literature. In this paper, we fill in this research gap and empower FOL
queries reasoning with pattern inference by introducing an inductive bias that
allows for learning relation patterns. To this end, we develop a novel query
embedding method, RoConE, that defines query regions as geometric cones and
algebraic query operators by rotations in complex space. RoConE combines the
advantages of Cone as a well-specified geometric representation for query
embedding, and also the rotation operator as a powerful algebraic operation for
pattern inference. Our experimental results on several benchmark datasets
confirm the advantage of relational patterns for enhancing logical query
answering task.
| [
{
"version": "v1",
"created": "Tue, 21 Mar 2023 13:59:15 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Jul 2024 13:57:25 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 15:03:02 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"He",
"Yunjie",
""
],
[
"Nayyeri",
"Mojtaba",
""
],
[
"Xiong",
"Bo",
""
],
[
"Zhu",
"Yuqicheng",
""
],
[
"Kharlamov",
"Evgeny",
""
],
[
"Staab",
"Steffen",
""
]
]
| TITLE: Modeling Relational Patterns for Logical Query Answering over Knowledge
Graphs
ABSTRACT: Answering first-order logical (FOL) queries over knowledge graphs (KG)
remains a challenging task mainly due to KG incompleteness. Query embedding
approaches this problem by computing the low-dimensional vector representations
of entities, relations, and logical queries. KGs exhibit relational patterns
such as symmetry and composition and modeling the patterns can further enhance
the performance of query embedding models. However, the role of such patterns
in answering FOL queries by query embedding models has not yet been studied in
the literature. In this paper, we fill in this research gap and empower FOL
queries reasoning with pattern inference by introducing an inductive bias that
allows for learning relation patterns. To this end, we develop a novel query
embedding method, RoConE, that defines query regions as geometric cones and
algebraic query operators by rotations in complex space. RoConE combines the
advantages of Cone as a well-specified geometric representation for query
embedding, and also the rotation operator as a powerful algebraic operation for
pattern inference. Our experimental results on several benchmark datasets
confirm the advantage of relational patterns for enhancing logical query
answering task.
| no_new_dataset | 0.943504 |
2304.02488 | Fan Yang | Fan Yang | SCB-dataset: A Dataset for Detecting Student Classroom Behavior | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using deep learning methods to detect the classroom behaviors of both
students and teachers is an effective way to automatically analyze classroom
performance and enhance teaching effectiveness. However, there is still a scarcity
of publicly available high-quality datasets on student-teacher behaviors. Based
on the SCB-Dataset3 we proposed previously, we have introduced a larger, more
comprehensive, and higher-quality dataset of student-teacher classroom
behaviors, known as SCB-Dataset5. Our dataset comprises 7428 images and 106830
labels across 20 classes: hand-raising, read, write, bow head, turn head, talk,
guide, board writing, stand, answer, stage interaction, discuss, clap, yawn,
screen, blackboard, teacher, leaning on the desk, using the phone, using the
computer. We evaluated the dataset using the YOLOv7 series of algorithms. We
believe that SCB-Dataset5 can provide a solid foundation for future
applications of artificial intelligence in education. Our SCB-Dataset5 can be
downloaded at the following link: https://github.com/Whiffe/SCB-dataset
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2023 15:02:30 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jul 2024 13:31:21 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Nov 2024 04:19:15 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Dec 2024 13:00:35 GMT"
},
{
"version": "v5",
"created": "Tue, 21 Jan 2025 14:04:49 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Mar 2025 02:52:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yang",
"Fan",
""
]
]
| TITLE: SCB-dataset: A Dataset for Detecting Student Classroom Behavior
ABSTRACT: Using deep learning methods to detect the classroom behaviors of both
students and teachers is an effective way to automatically analyze classroom
performance and enhance teaching effectiveness. However, there is still a scarcity
of publicly available high-quality datasets on student-teacher behaviors. Based
on the SCB-Dataset3 we proposed previously, we have introduced a larger, more
comprehensive, and higher-quality dataset of student-teacher classroom
behaviors, known as SCB-Dataset5. Our dataset comprises 7428 images and 106830
labels across 20 classes: hand-raising, read, write, bow head, turn head, talk,
guide, board writing, stand, answer, stage interaction, discuss, clap, yawn,
screen, blackboard, teacher, leaning on the desk, using the phone, using the
computer. We evaluated the dataset using the YOLOv7 series of algorithms. We
believe that SCB-Dataset5 can provide a solid foundation for future
applications of artificial intelligence in education. Our SCB-Dataset5 can be
downloaded at the following link: https://github.com/Whiffe/SCB-dataset
| new_dataset | 0.966442 |
2307.07036 | Deressa Wodajo | Deressa Wodajo Deressa, Hannes Mareen, Peter Lambert, Solomon Atnafu,
Zahid Akhtar, Glenn Van Wallendael | GenConViT: Deepfake Video Detection Using Generative Convolutional
Vision Transformer | 11 pages, 4 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deepfakes have raised significant concerns due to their potential to spread
false information and compromise digital media integrity. Current deepfake
detection models often struggle to generalize across a diverse range of
deepfake generation techniques and video content. In this work, we propose a
Generative Convolutional Vision Transformer (GenConViT) for deepfake video
detection. Our model combines ConvNeXt and Swin Transformer models for feature
extraction, and it utilizes Autoencoder and Variational Autoencoder to learn
from the latent data distribution. By learning from the visual artifacts and
latent data distribution, GenConViT achieves improved performance in detecting
a wide range of deepfake videos. The model is trained and evaluated on DFDC,
FF++, TM, DeepfakeTIMIT, and Celeb-DF (v$2$) datasets. The proposed GenConViT
model demonstrates strong performance in deepfake video detection, achieving
high accuracy across the tested datasets. While our model shows promising
results in deepfake video detection by leveraging visual and latent features,
we demonstrate that further work is needed to improve its generalizability,
i.e., when encountering out-of-distribution data. Our model provides an
effective solution for identifying a wide range of fake videos while preserving
media integrity. The open-source code for GenConViT is available at
https://github.com/erprogs/GenConViT.
| [
{
"version": "v1",
"created": "Thu, 13 Jul 2023 19:27:40 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 10:43:51 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Deressa",
"Deressa Wodajo",
""
],
[
"Mareen",
"Hannes",
""
],
[
"Lambert",
"Peter",
""
],
[
"Atnafu",
"Solomon",
""
],
[
"Akhtar",
"Zahid",
""
],
[
"Van Wallendael",
"Glenn",
""
]
]
| TITLE: GenConViT: Deepfake Video Detection Using Generative Convolutional
Vision Transformer
ABSTRACT: Deepfakes have raised significant concerns due to their potential to spread
false information and compromise digital media integrity. Current deepfake
detection models often struggle to generalize across a diverse range of
deepfake generation techniques and video content. In this work, we propose a
Generative Convolutional Vision Transformer (GenConViT) for deepfake video
detection. Our model combines ConvNeXt and Swin Transformer models for feature
extraction, and it utilizes Autoencoder and Variational Autoencoder to learn
from the latent data distribution. By learning from the visual artifacts and
latent data distribution, GenConViT achieves improved performance in detecting
a wide range of deepfake videos. The model is trained and evaluated on DFDC,
FF++, TM, DeepfakeTIMIT, and Celeb-DF (v$2$) datasets. The proposed GenConViT
model demonstrates strong performance in deepfake video detection, achieving
high accuracy across the tested datasets. While our model shows promising
results in deepfake video detection by leveraging visual and latent features,
we demonstrate that further work is needed to improve its generalizability,
i.e., when encountering out-of-distribution data. Our model provides an
effective solution for identifying a wide range of fake videos while preserving
media integrity. The open-source code for GenConViT is available at
https://github.com/erprogs/GenConViT.
| no_new_dataset | 0.948775 |
2308.10373 | Hejia Geng | Hejia Geng, Peng Li | HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with
Adaptive Firing Thresholds | Accepted by TMLR | null | null | null | cs.NE cs.CR cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While spiking neural networks (SNNs) offer a promising neurally-inspired
model of computation, they are vulnerable to adversarial attacks. We present
the first study that draws inspiration from neural homeostasis to design a
threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model and utilize
TA-LIF neurons to construct the adversarially robust homeostatic SNNs (HoSNNs)
for improved robustness. The TA-LIF model incorporates a self-stabilizing
dynamic thresholding mechanism, offering a local feedback control solution to
the minimization of each neuron's membrane potential error caused by
adversarial disturbance. Theoretical analysis demonstrates favorable dynamic
properties of TA-LIF neurons in terms of the bounded-input bounded-output
stability and suppressed time growth of membrane potential error, underscoring
their superior robustness compared with the standard LIF neurons. When trained
with weak FGSM attacks (attack budget = 2/255) and tested with much stronger
PGD attacks (attack budget = 8/255), our HoSNNs significantly improve model
accuracy on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44%
to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, from 0.04% to 16.66% on
CIFAR100, over the conventional LIF-based SNNs.
| [
{
"version": "v1",
"created": "Sun, 20 Aug 2023 21:47:54 GMT"
},
{
"version": "v2",
"created": "Sun, 22 Oct 2023 19:48:02 GMT"
},
{
"version": "v3",
"created": "Fri, 31 May 2024 23:45:57 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 01:24:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Geng",
"Hejia",
""
],
[
"Li",
"Peng",
""
]
]
| TITLE: HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with
Adaptive Firing Thresholds
ABSTRACT: While spiking neural networks (SNNs) offer a promising neurally-inspired
model of computation, they are vulnerable to adversarial attacks. We present
the first study that draws inspiration from neural homeostasis to design a
threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model and utilize
TA-LIF neurons to construct the adversarially robust homeostatic SNNs (HoSNNs)
for improved robustness. The TA-LIF model incorporates a self-stabilizing
dynamic thresholding mechanism, offering a local feedback control solution to
the minimization of each neuron's membrane potential error caused by
adversarial disturbance. Theoretical analysis demonstrates favorable dynamic
properties of TA-LIF neurons in terms of the bounded-input bounded-output
stability and suppressed time growth of membrane potential error, underscoring
their superior robustness compared with the standard LIF neurons. When trained
with weak FGSM attacks (attack budget = 2/255) and tested with much stronger
PGD attacks (attack budget = 8/255), our HoSNNs significantly improve model
accuracy on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44%
to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, from 0.04% to 16.66% on
CIFAR100, over the conventional LIF-based SNNs.
| no_new_dataset | 0.953013 |
2309.02244 | Amelia Jim\'enez-S\'anchez | Veronika Cheplygina, Cathrine Damgaard, Trine Naja Eriksen, Dovile
Juodelyte, Amelia Jim\'enez-S\'anchez | Augmenting Chest X-ray Datasets with Non-Expert Annotations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancement of machine learning algorithms in medical image analysis
requires the expansion of training datasets. A popular and cost-effective
approach is automated annotation extraction from free-text medical reports,
primarily due to the high costs associated with expert clinicians annotating
medical images, such as chest X-rays. However, it has been shown that the
resulting datasets are susceptible to biases and shortcuts. Another strategy to
increase the size of a dataset is crowdsourcing, a widely adopted practice in
general computer vision with some success in medical image analysis. In a
similar vein to crowdsourcing, we enhance two publicly available chest X-ray
datasets by incorporating non-expert annotations. However, instead of using
diagnostic labels, we annotate shortcuts in the form of tubes. We collect 3.5k
chest drain annotations for NIH-CXR14, and 1k annotations for four different
tube types in PadChest, and create the Non-Expert Annotations of Tubes in
X-rays (NEATX) dataset. We train a chest drain detector with the non-expert
annotations that generalizes well to expert labels. Moreover, we compare our
annotations to those provided by experts and show "moderate" to "almost
perfect" agreement. Finally, we present a pathology agreement study to raise
awareness about the quality of ground truth annotations. We make our dataset
available at https://zenodo.org/records/14944064 and our code available at
https://github.com/purrlab/chestxr-label-reliability.
| [
{
"version": "v1",
"created": "Tue, 5 Sep 2023 13:52:43 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:04:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cheplygina",
"Veronika",
""
],
[
"Damgaard",
"Cathrine",
""
],
[
"Eriksen",
"Trine Naja",
""
],
[
"Juodelyte",
"Dovile",
""
],
[
"Jiménez-Sánchez",
"Amelia",
""
]
]
| TITLE: Augmenting Chest X-ray Datasets with Non-Expert Annotations
ABSTRACT: The advancement of machine learning algorithms in medical image analysis
requires the expansion of training datasets. A popular and cost-effective
approach is automated annotation extraction from free-text medical reports,
primarily due to the high costs associated with expert clinicians annotating
medical images, such as chest X-rays. However, it has been shown that the
resulting datasets are susceptible to biases and shortcuts. Another strategy to
increase the size of a dataset is crowdsourcing, a widely adopted practice in
general computer vision with some success in medical image analysis. In a
similar vein to crowdsourcing, we enhance two publicly available chest X-ray
datasets by incorporating non-expert annotations. However, instead of using
diagnostic labels, we annotate shortcuts in the form of tubes. We collect 3.5k
chest drain annotations for NIH-CXR14, and 1k annotations for four different
tube types in PadChest, and create the Non-Expert Annotations of Tubes in
X-rays (NEATX) dataset. We train a chest drain detector with the non-expert
annotations that generalizes well to expert labels. Moreover, we compare our
annotations to those provided by experts and show "moderate" to "almost
perfect" agreement. Finally, we present a pathology agreement study to raise
awareness about the quality of ground truth annotations. We make our dataset
available at https://zenodo.org/records/14944064 and our code available at
https://github.com/purrlab/chestxr-label-reliability.
| new_dataset | 0.967318 |
2310.07584 | Laurenz Ruzicka | Laurenz Ruzicka and Bernhard Strobl and Bernhard Kohn and Clemens
Heitzinger | Centrality of the Fingerprint Core Location | null | In Proceedings of the 17th International Joint Conference on
Biomedical Engineering Systems and Technologies, 2024 | 10.5220/0012309300003657 | Volume 1, pages 713-720 | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fingerprints have long been recognized as a unique and reliable means of
personal identification. Central to the analysis and enhancement of
fingerprints is the concept of the fingerprint core. Although the location of
the core is used in many applications, to the best of our knowledge, this study
is the first to investigate the empirical distribution of the core over a
large, combined dataset of rolled, as well as plain fingerprint recordings. We
identify and investigate the extent of incomplete rolling during the rolled
fingerprint acquisition and investigate the centrality of the core. After
correcting for the incomplete rolling, we find that the core deviates from the
fingerprint center by 5.7% $\pm$ 5.2% to 7.6% $\pm$ 6.9%, depending on the
finger. Additionally, we find that the assumption of normal distribution of the
core position of plain fingerprint recordings cannot be rejected, but for
rolled ones it can. Therefore, we use a multi-step process to find the
distribution of the rolled fingerprint recordings. The process consists of an
Anderson-Darling normality test, the Bayesian Information Criterion to reduce
the number of possible candidate distributions and finally a Generalized Monte
Carlo goodness-of-fit procedure to find the best fitting distribution. We find
the non-central Fischer distribution best describes the cores' horizontal
positions. Finally, we investigate the correlation between mean core position
offset and the NFIQ 2 score and find that the NFIQ 2 prefers rolled fingerprint
recordings where the core sits slightly below the fingerprint center.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 15:20:44 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ruzicka",
"Laurenz",
""
],
[
"Strobl",
"Bernhard",
""
],
[
"Kohn",
"Bernhard",
""
],
[
"Heitzinger",
"Clemens",
""
]
]
| TITLE: Centrality of the Fingerprint Core Location
ABSTRACT: Fingerprints have long been recognized as a unique and reliable means of
personal identification. Central to the analysis and enhancement of
fingerprints is the concept of the fingerprint core. Although the location of
the core is used in many applications, to the best of our knowledge, this study
is the first to investigate the empirical distribution of the core over a
large, combined dataset of rolled, as well as plain fingerprint recordings. We
identify and investigate the extent of incomplete rolling during the rolled
fingerprint acquisition and investigate the centrality of the core. After
correcting for the incomplete rolling, we find that the core deviates from the
fingerprint center by 5.7% $\pm$ 5.2% to 7.6% $\pm$ 6.9%, depending on the
finger. Additionally, we find that the assumption of normal distribution of the
core position of plain fingerprint recordings cannot be rejected, but for
rolled ones it can. Therefore, we use a multi-step process to find the
distribution of the rolled fingerprint recordings. The process consists of an
Anderson-Darling normality test, the Bayesian Information Criterion to reduce
the number of possible candidate distributions and finally a Generalized Monte
Carlo goodness-of-fit procedure to find the best fitting distribution. We find
the non-central Fischer distribution best describes the cores' horizontal
positions. Finally, we investigate the correlation between mean core position
offset and the NFIQ 2 score and find that the NFIQ 2 prefers rolled fingerprint
recordings where the core sits slightly below the fingerprint center.
| no_new_dataset | 0.94256 |
2310.10315 | Alberto Marchisio | Kamila Zaman and Alberto Marchisio and Muhammad Abdullah Hanif and
Muhammad Shafique | A Survey on Quantum Machine Learning: Current Trends, Challenges,
Opportunities, and the Road Ahead | null | null | null | null | quant-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | Quantum Computing (QC) claims to improve the efficiency of solving complex
problems, compared to classical computing. When QC is integrated with Machine
Learning (ML), it creates a Quantum Machine Learning (QML) system. This paper
aims to provide a thorough understanding of the foundational concepts of QC and
its notable advantages over classical computing. Following this, we delve into
the key aspects of QML in a detailed and comprehensive manner.
In this survey, we investigate a variety of QML algorithms, discussing their
applicability across different domains. We examine quantum datasets,
highlighting their unique characteristics and advantages. The survey also
covers the current state of hardware technologies, providing insights into the
latest advancements and their implications for QML. Additionally, we review the
software tools and simulators available for QML development, discussing their
features and usability.
Furthermore, we explore practical applications of QML, illustrating how it
can be leveraged to solve real-world problems more efficiently than classical
ML methods. This paper serves as a valuable resource for readers seeking to
understand the current state-of-the-art techniques in the QML field, offering a
solid foundation to embark on further exploration and development in this
rapidly evolving area.
| [
{
"version": "v1",
"created": "Mon, 16 Oct 2023 11:52:54 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Jul 2024 08:08:45 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 07:25:39 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zaman",
"Kamila",
""
],
[
"Marchisio",
"Alberto",
""
],
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Shafique",
"Muhammad",
""
]
]
| TITLE: A Survey on Quantum Machine Learning: Current Trends, Challenges,
Opportunities, and the Road Ahead
ABSTRACT: Quantum Computing (QC) claims to improve the efficiency of solving complex
problems, compared to classical computing. When QC is integrated with Machine
Learning (ML), it creates a Quantum Machine Learning (QML) system. This paper
aims to provide a thorough understanding of the foundational concepts of QC and
its notable advantages over classical computing. Following this, we delve into
the key aspects of QML in a detailed and comprehensive manner.
In this survey, we investigate a variety of QML algorithms, discussing their
applicability across different domains. We examine quantum datasets,
highlighting their unique characteristics and advantages. The survey also
covers the current state of hardware technologies, providing insights into the
latest advancements and their implications for QML. Additionally, we review the
software tools and simulators available for QML development, discussing their
features and usability.
Furthermore, we explore practical applications of QML, illustrating how it
can be leveraged to solve real-world problems more efficiently than classical
ML methods. This paper serves as a valuable resource for readers seeking to
understand the current state-of-the-art techniques in the QML field, offering a
solid foundation to embark on further exploration and development in this
rapidly evolving area.
| no_new_dataset | 0.941385 |
2311.13121 | Yang Li | Yang Li, Qi'ao Zhao, Chen Lin, Zhenjie Zhang, Xiaomin Zhu, Jinsong Su | GENET: Unleashing the Power of Side Information for Recommendation via
Hypergraph Pre-training | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation with side information has drawn significant research interest
due to its potential to mitigate user feedback sparsity. However, existing
models struggle with generalization across diverse domains and types of side
information. In particular, three challenges have not been addressed, and they
are (1) the diverse formats of side information, including text sequences. (2)
The diverse semantics of side information that describes items and users from
multi-level in a context different from recommendation systems. (3) The diverse
correlations in side information to measure similarity over multiple objects
beyond pairwise relations. In this paper, we introduce GENET (Generalized
hypErgraph pretraiNing on sidE informaTion), which pre-trains user and item
representations on feedback-irrelevant side information and fine-tunes the
representations on user feedback data. GENET leverages pre-training as a means
to prevent side information from overshadowing critical ID features and
feedback signals. It employs a hypergraph framework to accommodate various
types of diverse side information. During pre-training, GENET integrates tasks
for hyperlink prediction and self-supervised contrast to capture fine-grained
semantics at both local and global levels. Additionally, it introduces a unique
strategy to enhance pre-training robustness by perturbing positive samples
while maintaining high-order relations. Extensive experiments demonstrate that
GENET exhibits strong generalization capabilities, outperforming the SOTA
method by up to 38% in TOP-N recommendation and Sequential recommendation tasks
on various datasets with different side information.
| [
{
"version": "v1",
"created": "Wed, 22 Nov 2023 02:49:14 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 12:17:16 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Yang",
""
],
[
"Zhao",
"Qi'ao",
""
],
[
"Lin",
"Chen",
""
],
[
"Zhang",
"Zhenjie",
""
],
[
"Zhu",
"Xiaomin",
""
],
[
"Su",
"Jinsong",
""
]
]
| TITLE: GENET: Unleashing the Power of Side Information for Recommendation via
Hypergraph Pre-training
ABSTRACT: Recommendation with side information has drawn significant research interest
due to its potential to mitigate user feedback sparsity. However, existing
models struggle with generalization across diverse domains and types of side
information. In particular, three challenges have not been addressed, and they
are (1) the diverse formats of side information, including text sequences. (2)
The diverse semantics of side information that describes items and users from
multi-level in a context different from recommendation systems. (3) The diverse
correlations in side information to measure similarity over multiple objects
beyond pairwise relations. In this paper, we introduce GENET (Generalized
hypErgraph pretraiNing on sidE informaTion), which pre-trains user and item
representations on feedback-irrelevant side information and fine-tunes the
representations on user feedback data. GENET leverages pre-training as a means
to prevent side information from overshadowing critical ID features and
feedback signals. It employs a hypergraph framework to accommodate various
types of diverse side information. During pre-training, GENET integrates tasks
for hyperlink prediction and self-supervised contrast to capture fine-grained
semantics at both local and global levels. Additionally, it introduces a unique
strategy to enhance pre-training robustness by perturbing positive samples
while maintaining high-order relations. Extensive experiments demonstrate that
GENET exhibits strong generalization capabilities, outperforming the SOTA
method by up to 38% in TOP-N recommendation and Sequential recommendation tasks
on various datasets with different side information.
| no_new_dataset | 0.941061 |
2401.02702 | Lin Liu | Ziying Song, Guoxin Zhang, Jun Xie, Lin Liu, Caiyan Jia, Shaoqing Xu,
Zhepeng Wang | VoxelNextFusion: A Simple, Unified and Effective Voxel Fusion Framework
for Multi-Modal 3D Object Detection | null | IEEE Transactions on Geoscience and Remote Sensing, vol. 61, 2023,
pp. 1-12 | 10.1109/TGRS.2023.3331893 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR-camera fusion can enhance the performance of 3D object detection by
utilizing complementary information between depth-aware LiDAR points and
semantically rich images. Existing voxel-based methods face significant
challenges when fusing sparse voxel features with dense image features in a
one-to-one manner, resulting in the loss of the advantages of images, including
semantic and continuity information, leading to sub-optimal detection
performance, especially at long distances. In this paper, we present
VoxelNextFusion, a multi-modal 3D object detection framework specifically
designed for voxel-based methods, which effectively bridges the gap between
sparse point clouds and dense images. In particular, we propose a voxel-based
image pipeline that involves projecting point clouds onto images to obtain both
pixel- and patch-level features. These features are then fused using a
self-attention to obtain a combined representation. Moreover, to address the
issue of background features present in patches, we propose a feature
importance module that effectively distinguishes between foreground and
background features, thus minimizing the impact of the background features.
Extensive experiments were conducted on the widely used KITTI and nuScenes 3D
object detection benchmarks. Notably, our VoxelNextFusion achieved around
+3.20% in [email protected] improvement for car detection in hard level compared to the
Voxel R-CNN baseline on the KITTI test dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2024 08:10:49 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 03:16:54 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Song",
"Ziying",
""
],
[
"Zhang",
"Guoxin",
""
],
[
"Xie",
"Jun",
""
],
[
"Liu",
"Lin",
""
],
[
"Jia",
"Caiyan",
""
],
[
"Xu",
"Shaoqing",
""
],
[
"Wang",
"Zhepeng",
""
]
]
| TITLE: VoxelNextFusion: A Simple, Unified and Effective Voxel Fusion Framework
for Multi-Modal 3D Object Detection
ABSTRACT: LiDAR-camera fusion can enhance the performance of 3D object detection by
utilizing complementary information between depth-aware LiDAR points and
semantically rich images. Existing voxel-based methods face significant
challenges when fusing sparse voxel features with dense image features in a
one-to-one manner, resulting in the loss of the advantages of images, including
semantic and continuity information, leading to sub-optimal detection
performance, especially at long distances. In this paper, we present
VoxelNextFusion, a multi-modal 3D object detection framework specifically
designed for voxel-based methods, which effectively bridges the gap between
sparse point clouds and dense images. In particular, we propose a voxel-based
image pipeline that involves projecting point clouds onto images to obtain both
pixel- and patch-level features. These features are then fused using a
self-attention to obtain a combined representation. Moreover, to address the
issue of background features present in patches, we propose a feature
importance module that effectively distinguishes between foreground and
background features, thus minimizing the impact of the background features.
Extensive experiments were conducted on the widely used KITTI and nuScenes 3D
object detection benchmarks. Notably, our VoxelNextFusion achieved around
+3.20% in [email protected] improvement for car detection in hard level compared to the
Voxel R-CNN baseline on the KITTI test dataset.
| no_new_dataset | 0.947575 |
2401.04720 | Benedikt Roth | Benedikt Roth, Valentin Koch, Sophia J. Wagner, Julia A. Schnabel,
Carsten Marr, Tingying Peng | Low-resource finetuning of foundation models beats state-of-the-art in
histopathology | null | 2024 IEEE International Symposium on Biomedical Imaging (ISBI) | 10.1109/ISBI56570.2024.10635695 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | To handle the large scale of whole slide images in computational pathology,
most approaches first tessellate the images into smaller patches, extract
features from these patches, and finally aggregate the feature vectors with
weakly-supervised learning. The performance of this workflow strongly depends
on the quality of the extracted features. Recently, foundation models in
computer vision showed that leveraging huge amounts of data through supervised
or self-supervised learning improves feature quality and generalizability for a
variety of tasks. In this study, we benchmark the most popular vision
foundation models as feature extractors for histopathology data. We evaluate
the models in two settings: slide-level classification and patch-level
classification. We show that foundation models are a strong baseline. Our
experiments demonstrate that by finetuning a foundation model on a single GPU
for only two hours or three days depending on the dataset, we can match or
outperform state-of-the-art feature extractors for computational pathology.
These findings imply that even with little resources one can finetune a feature
extractor tailored towards a specific downstream task and dataset. This is a
considerable shift from the current state, where only few institutions with
large amounts of resources and datasets are able to train a feature extractor.
We publish all code used for training and evaluation as well as the finetuned
models.
| [
{
"version": "v1",
"created": "Tue, 9 Jan 2024 18:46:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Roth",
"Benedikt",
""
],
[
"Koch",
"Valentin",
""
],
[
"Wagner",
"Sophia J.",
""
],
[
"Schnabel",
"Julia A.",
""
],
[
"Marr",
"Carsten",
""
],
[
"Peng",
"Tingying",
""
]
]
| TITLE: Low-resource finetuning of foundation models beats state-of-the-art in
histopathology
ABSTRACT: To handle the large scale of whole slide images in computational pathology,
most approaches first tessellate the images into smaller patches, extract
features from these patches, and finally aggregate the feature vectors with
weakly-supervised learning. The performance of this workflow strongly depends
on the quality of the extracted features. Recently, foundation models in
computer vision showed that leveraging huge amounts of data through supervised
or self-supervised learning improves feature quality and generalizability for a
variety of tasks. In this study, we benchmark the most popular vision
foundation models as feature extractors for histopathology data. We evaluate
the models in two settings: slide-level classification and patch-level
classification. We show that foundation models are a strong baseline. Our
experiments demonstrate that by finetuning a foundation model on a single GPU
for only two hours or three days depending on the dataset, we can match or
outperform state-of-the-art feature extractors for computational pathology.
These findings imply that even with little resources one can finetune a feature
extractor tailored towards a specific downstream task and dataset. This is a
considerable shift from the current state, where only few institutions with
large amounts of resources and datasets are able to train a feature extractor.
We publish all code used for training and evaluation as well as the finetuned
models.
| no_new_dataset | 0.946151 |
2402.11480 | Kun Ma | Kun Ma, Cong Xu, Zeyuan Chen, Wei Zhang | Pattern-wise Transparent Sequential Recommendation | This paper has been accepted by IEEE TKDE | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A transparent decision-making process is essential for developing reliable
and trustworthy recommender systems. For sequential recommendation, it means
that the model can identify key items that account for its recommendation
results. However, achieving both interpretability and recommendation
performance simultaneously is challenging, especially for models that take the
entire sequence of items as input without screening. In this paper, we propose
an interpretable framework (named PTSR) that enables a pattern-wise transparent
decision-making process without extra features. It breaks the sequence of items
into multi-level patterns that serve as atomic units throughout the
recommendation process. The contribution of each pattern to the outcome is
quantified in the probability space. With a carefully designed score correction
mechanism, the pattern contribution can be implicitly learned in the absence of
ground-truth key patterns. The final recommended items are those that most key
patterns strongly endorse. Extensive experiments on five public datasets
demonstrate remarkable recommendation performance, while statistical analysis
and case studies validate the model interpretability.
| [
{
"version": "v1",
"created": "Sun, 18 Feb 2024 07:06:17 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Feb 2024 13:03:36 GMT"
},
{
"version": "v3",
"created": "Sat, 9 Mar 2024 09:37:53 GMT"
},
{
"version": "v4",
"created": "Sun, 18 Aug 2024 15:36:17 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Mar 2025 07:56:04 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ma",
"Kun",
""
],
[
"Xu",
"Cong",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Zhang",
"Wei",
""
]
]
| TITLE: Pattern-wise Transparent Sequential Recommendation
ABSTRACT: A transparent decision-making process is essential for developing reliable
and trustworthy recommender systems. For sequential recommendation, it means
that the model can identify key items that account for its recommendation
results. However, achieving both interpretability and recommendation
performance simultaneously is challenging, especially for models that take the
entire sequence of items as input without screening. In this paper, we propose
an interpretable framework (named PTSR) that enables a pattern-wise transparent
decision-making process without extra features. It breaks the sequence of items
into multi-level patterns that serve as atomic units throughout the
recommendation process. The contribution of each pattern to the outcome is
quantified in the probability space. With a carefully designed score correction
mechanism, the pattern contribution can be implicitly learned in the absence of
ground-truth key patterns. The final recommended items are those that most key
patterns strongly endorse. Extensive experiments on five public datasets
demonstrate remarkable recommendation performance, while statistical analysis
and case studies validate the model interpretability.
| no_new_dataset | 0.945147 |
2403.15422 | Xiaozhou Ye | Xiaozhou Ye, Kouichi Sakurai, Nirmal Nair, Kevin I-Kai Wang | Machine Learning Techniques for Sensor-based Human Activity Recognition
with Data Heterogeneity -- A Review | null | Sensors, 2024, 24(24), 7975 | 10.3390/s24247975 | null | eess.SP cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous
computing, analysing behaviours through multi-dimensional observations. Despite
research progress, HAR confronts challenges, particularly in data distribution
assumptions. Most studies assume uniform data distributions across
datasets, contrasting with the varied nature of practical sensor data in human
activities. Addressing data heterogeneity issues can improve performance,
reduce computational costs, and aid in developing personalized, adaptive models
with less annotated data. This review investigates how machine learning
addresses data heterogeneity in HAR, by categorizing data heterogeneity types,
applying corresponding suitable machine learning methods, summarizing available
datasets, and discussing future challenges.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 22:22:14 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ye",
"Xiaozhou",
""
],
[
"Sakurai",
"Kouichi",
""
],
[
"Nair",
"Nirmal",
""
],
[
"Wang",
"Kevin I-Kai",
""
]
]
| TITLE: Machine Learning Techniques for Sensor-based Human Activity Recognition
with Data Heterogeneity -- A Review
ABSTRACT: Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous
computing, analysing behaviours through multi-dimensional observations. Despite
research progress, HAR confronts challenges, particularly in data distribution
assumptions. Most studies assume uniform data distributions across
datasets, contrasting with the varied nature of practical sensor data in human
activities. Addressing data heterogeneity issues can improve performance,
reduce computational costs, and aid in developing personalized, adaptive models
with less annotated data. This review investigates how machine learning
addresses data heterogeneity in HAR, by categorizing data heterogeneity types,
applying corresponding suitable machine learning methods, summarizing available
datasets, and discussing future challenges.
| no_new_dataset | 0.949295 |
2403.15423 | Xiaozhou Ye | Xiaozhou Ye, Kevin I-Kai Wang | Cross-user activity recognition via temporal relation optimal transport | null | International Conference on Mobile and Ubiquitous Systems:
Computing, Networking, and Services, pp. 355-374. Cham: Springer Nature
Switzerland, 2023 | 10.1007/978-3-031-63989-0_18 | null | eess.SP cs.AI cs.CV cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Current research on human activity recognition (HAR) mainly assumes that
training and testing data are drawn from the same distribution to achieve a
generalised model, which means all the data are considered to be independent
and identically distributed $\displaystyle (i.i.d.) $. In many real-world
applications, this assumption does not hold, and collected training and target
testing datasets have non-uniform distribution, such as in the case of
cross-user HAR. Domain adaptation is a promising approach for cross-user HAR
tasks. Existing domain adaptation works are based on the assumption that samples in
each domain are $\displaystyle i.i.d. $ and do not consider the knowledge of
temporal relation hidden in time series data for aligning data distribution.
This strong assumption of $\displaystyle i.i.d. $ may not be suitable for time
series-related domain adaptation methods because the samples formed by time
series segmentation and feature extraction techniques are only coarse
approximations to the $\displaystyle i.i.d. $ assumption in each domain. In this
paper, we propose the temporal relation optimal transport (TROT) method to
utilise temporal relation and relax the $\displaystyle i.i.d. $ assumption for
the samples in each domain for accurate and efficient knowledge transfer. We
obtain the temporal relation representation and implement temporal relation
alignment of activities via the Hidden Markov model (HMM) and optimal transport
(OT) techniques. In addition, a new regularisation term that preserves temporal
relation order information for an improved optimal transport mapping is
proposed to enhance the domain adaptation performance. Comprehensive
experiments are conducted on three public activity recognition datasets (i.e.
OPPT, PAMAP2 and DSADS), demonstrating that TROT outperforms other
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 22:33:56 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ye",
"Xiaozhou",
""
],
[
"Wang",
"Kevin I-Kai",
""
]
]
| TITLE: Cross-user activity recognition via temporal relation optimal transport
ABSTRACT: Current research on human activity recognition (HAR) mainly assumes that
training and testing data are drawn from the same distribution to achieve a
generalised model, which means all the data are considered to be independent
and identically distributed $\displaystyle (i.i.d.) $. In many real-world
applications, this assumption does not hold, and collected training and target
testing datasets have non-uniform distribution, such as in the case of
cross-user HAR. Domain adaptation is a promising approach for cross-user HAR
tasks. Existing domain adaptation works are based on the assumption that samples in
each domain are $\displaystyle i.i.d. $ and do not consider the knowledge of
temporal relation hidden in time series data for aligning data distribution.
This strong assumption of $\displaystyle i.i.d. $ may not be suitable for time
series-related domain adaptation methods because the samples formed by time
series segmentation and feature extraction techniques are only coarse
approximations to the $\displaystyle i.i.d. $ assumption in each domain. In this
paper, we propose the temporal relation optimal transport (TROT) method to
utilise temporal relation and relax the $\displaystyle i.i.d. $ assumption for
the samples in each domain for accurate and efficient knowledge transfer. We
obtain the temporal relation representation and implement temporal relation
alignment of activities via the Hidden Markov model (HMM) and optimal transport
(OT) techniques. In addition, a new regularisation term that preserves temporal
relation order information for an improved optimal transport mapping is
proposed to enhance the domain adaptation performance. Comprehensive
experiments are conducted on three public activity recognition datasets (i.e.
OPPT, PAMAP2 and DSADS), demonstrating that TROT outperforms other
state-of-the-art methods.
| no_new_dataset | 0.950915 |
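The TROT record above leans on optimal transport to align source- and target-user feature distributions. As a rough illustration of the transport step only (not the HMM-based temporal-relation modeling or the order-preserving regulariser that the paper actually contributes), here is a minimal entropic-OT sketch in NumPy; the feature matrices, cost choice, and regularisation strength are invented for the example.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iter=200):
    """Entropic-regularised OT plan between histograms a and b (Sinkhorn iterations)."""
    K = np.exp(-cost / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                    # scale columns to match target marginal
        u = a / (K @ v)                      # scale rows to match source marginal
    return u[:, None] * K * v[None, :]       # transport plan

# Hypothetical source/target feature sets (e.g. windowed sensor features per user)
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(50, 8)), rng.normal(loc=0.5, size=(60, 8))
cost = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
cost /= cost.max()                                        # normalise for numerical stability
plan = sinkhorn(cost, np.full(50, 1 / 50), np.full(60, 1 / 60))
Xs_aligned = (plan @ Xt) * 50                # barycentric mapping of source onto target
```

The barycentric mapping in the last line is one common way to push source samples toward the target domain once a plan is available.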
2403.17958 | Xiaozhou Ye | Xiaozhou Ye, Kevin I-Kai Wang | Deep Generative Domain Adaptation with Temporal Attention for Cross-User
Activity Recognition | null | Pattern Recognition, Volume 156, December 2024, 110811 | 10.1016/j.patcog.2024.110811 | null | cs.LG cs.AI cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | In Human Activity Recognition (HAR), a predominant assumption is that the
data utilized for training and evaluation purposes are drawn from the same
distribution. It is also assumed that all data samples are independent and
identically distributed ($\displaystyle i.i.d.$). Contrarily, practical
implementations often challenge this notion, manifesting data distribution
discrepancies, especially in scenarios such as cross-user HAR. Domain
adaptation is the promising approach to address these challenges inherent in
cross-user HAR tasks. However, a clear gap in domain adaptation techniques is
the neglect of the temporal relation embedded within time series data during
the phase of aligning data distributions. Addressing this oversight, our
research presents the Deep Generative Domain Adaptation with Temporal Attention
(DGDATA) method. This novel method uniquely recognises and integrates temporal
relations during the domain adaptation process. By synergizing the capabilities
of generative models with the Temporal Relation Attention mechanism, our method
improves the classification performance in cross-user HAR. A comprehensive
evaluation has been conducted on three public sensor-based HAR datasets
targeting different scenarios and applications to demonstrate the efficacy of
the proposed DGDATA method.
| [
{
"version": "v1",
"created": "Tue, 12 Mar 2024 22:45:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ye",
"Xiaozhou",
""
],
[
"Wang",
"Kevin I-Kai",
""
]
]
| TITLE: Deep Generative Domain Adaptation with Temporal Attention for Cross-User
Activity Recognition
ABSTRACT: In Human Activity Recognition (HAR), a predominant assumption is that the
data utilized for training and evaluation purposes are drawn from the same
distribution. It is also assumed that all data samples are independent and
identically distributed ($\displaystyle i.i.d.$). Contrarily, practical
implementations often challenge this notion, manifesting data distribution
discrepancies, especially in scenarios such as cross-user HAR. Domain
adaptation is a promising approach to address these challenges inherent in
cross-user HAR tasks. However, a clear gap in domain adaptation techniques is
the neglect of the temporal relation embedded within time series data during
the phase of aligning data distributions. Addressing this oversight, our
research presents the Deep Generative Domain Adaptation with Temporal Attention
(DGDATA) method. This novel method uniquely recognises and integrates temporal
relations during the domain adaptation process. By synergizing the capabilities
of generative models with the Temporal Relation Attention mechanism, our method
improves the classification performance in cross-user HAR. A comprehensive
evaluation has been conducted on three public sensor-based HAR datasets
targeting different scenarios and applications to demonstrate the efficacy of
the proposed DGDATA method.
| no_new_dataset | 0.9434 |
2403.18281 | Changkun Liu | Changkun Liu, Jianhao Jiao, Huajian Huang, Zhengyang Ma, Dimitrios
Kanoulas, Tristan Braud | AIR-HLoc: Adaptive Retrieved Images Selection for Efficient Visual
Localisation | Accepted to the 2025 IEEE International Conference on Robotics and
Automation (ICRA) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art hierarchical localisation pipelines (HLoc) employ image
retrieval (IR) to establish 2D-3D correspondences by selecting the top-$k$ most
similar images from a reference database. While increasing $k$ improves
localisation robustness, it also linearly increases computational cost and
runtime, creating a significant bottleneck. This paper investigates the
relationship between global and local descriptors, showing that greater
similarity between the global descriptors of query and database images
increases the proportion of feature matches. Low similarity queries
significantly benefit from increasing $k$, while high similarity queries
rapidly experience diminishing returns. Building on these observations, we
propose an adaptive strategy that adjusts $k$ based on the similarity between
the query's global descriptor and those in the database, effectively mitigating
the feature-matching bottleneck. Our approach optimizes processing time without
sacrificing accuracy. Experiments on three indoor and outdoor datasets show
that AIR-HLoc reduces feature matching time by up to 30\%, while preserving
state-of-the-art accuracy. The results demonstrate that AIR-HLoc facilitates a
latency-sensitive localisation system.
| [
{
"version": "v1",
"created": "Wed, 27 Mar 2024 06:17:21 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Sep 2024 03:09:15 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 04:31:55 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Liu",
"Changkun",
""
],
[
"Jiao",
"Jianhao",
""
],
[
"Huang",
"Huajian",
""
],
[
"Ma",
"Zhengyang",
""
],
[
"Kanoulas",
"Dimitrios",
""
],
[
"Braud",
"Tristan",
""
]
]
| TITLE: AIR-HLoc: Adaptive Retrieved Images Selection for Efficient Visual
Localisation
ABSTRACT: State-of-the-art hierarchical localisation pipelines (HLoc) employ image
retrieval (IR) to establish 2D-3D correspondences by selecting the top-$k$ most
similar images from a reference database. While increasing $k$ improves
localisation robustness, it also linearly increases computational cost and
runtime, creating a significant bottleneck. This paper investigates the
relationship between global and local descriptors, showing that greater
similarity between the global descriptors of query and database images
increases the proportion of feature matches. Low similarity queries
significantly benefit from increasing $k$, while high similarity queries
rapidly experience diminishing returns. Building on these observations, we
propose an adaptive strategy that adjusts $k$ based on the similarity between
the query's global descriptor and those in the database, effectively mitigating
the feature-matching bottleneck. Our approach optimizes processing time without
sacrificing accuracy. Experiments on three indoor and outdoor datasets show
that AIR-HLoc reduces feature matching time by up to 30\%, while preserving
state-of-the-art accuracy. The results demonstrate that AIR-HLoc facilitates a
latency-sensitive localisation system.
| no_new_dataset | 0.948394 |
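The AIR-HLoc abstract above rests on a simple mechanism: pick the retrieval budget $k$ from how similar the query's global descriptor is to the database. A minimal sketch of that idea follows; the thresholds, descriptor dimension, and three-level policy are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def adaptive_k(query_desc, db_descs, k_levels=(5, 10, 20), sim_thresholds=(0.7, 0.4)):
    """Pick a retrieval budget k from the best cosine similarity to the database.

    High-similarity queries get a small k (matching saturates quickly);
    low-similarity queries get a larger k. Thresholds here are illustrative.
    """
    q = query_desc / np.linalg.norm(query_desc)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = d @ q                                   # cosine similarities to all database images
    best = sims.max()
    if best >= sim_thresholds[0]:
        k = k_levels[0]
    elif best >= sim_thresholds[1]:
        k = k_levels[1]
    else:
        k = k_levels[2]
    top_idx = np.argsort(-sims)[:k]                # indices of the k most similar images
    return k, top_idx

rng = np.random.default_rng(1)
k, idx = adaptive_k(rng.normal(size=256), rng.normal(size=(1000, 256)))
```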
2404.08254 | Zeyu Yang | Zeyu Yang, Han Yu, Peikun Guo, Khadija Zanna, Xiaoxue Yang, Akane Sano | Balanced Mixed-Type Tabular Data Synthesis with Diffusion Models | OpenReview: https://openreview.net/forum?id=dvRysCqmYQ | Transactions on Machine Learning Research, ISSN 2835-8856 (2025) | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Diffusion models have emerged as a robust framework for various generative
tasks, including tabular data synthesis. However, current tabular diffusion
models tend to inherit bias in the training dataset and generate biased
synthetic data, which may lead to discriminatory actions. In this research,
we introduce a novel tabular diffusion model that incorporates sensitive
guidance to generate fair synthetic data with balanced joint distributions of
the target label and sensitive attributes, such as sex and race. The empirical
results demonstrate that our method effectively mitigates bias in training data
while maintaining the quality of the generated samples. Furthermore, we provide
evidence that our approach outperforms existing methods for synthesizing
tabular data on fairness metrics such as demographic parity ratio and equalized
odds ratio, achieving improvements of over $10\%$. Our implementation is
available at https://github.com/comp-well-org/fair-tab-diffusion.
| [
{
"version": "v1",
"created": "Fri, 12 Apr 2024 06:08:43 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2024 03:23:14 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 07:39:04 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yang",
"Zeyu",
""
],
[
"Yu",
"Han",
""
],
[
"Guo",
"Peikun",
""
],
[
"Zanna",
"Khadija",
""
],
[
"Yang",
"Xiaoxue",
""
],
[
"Sano",
"Akane",
""
]
]
| TITLE: Balanced Mixed-Type Tabular Data Synthesis with Diffusion Models
ABSTRACT: Diffusion models have emerged as a robust framework for various generative
tasks, including tabular data synthesis. However, current tabular diffusion
models tend to inherit bias in the training dataset and generate biased
synthetic data, which may lead to discriminatory actions. In this research,
we introduce a novel tabular diffusion model that incorporates sensitive
guidance to generate fair synthetic data with balanced joint distributions of
the target label and sensitive attributes, such as sex and race. The empirical
results demonstrate that our method effectively mitigates bias in training data
while maintaining the quality of the generated samples. Furthermore, we provide
evidence that our approach outperforms existing methods for synthesizing
tabular data on fairness metrics such as demographic parity ratio and equalized
odds ratio, achieving improvements of over $10\%$. Our implementation is
available at https://github.com/comp-well-org/fair-tab-diffusion.
| no_new_dataset | 0.950778 |
2404.09299 | Dror Markus | Dror K. Markus, Effi Levi, Tamir Sheafer, and Shaul R. Shenhav | Reap the Wild Wind: Detecting Media Storms in Large-Scale News Corpora | This paper was accepted and published in Findings of EMNLP 2024. The
final version is available at:
https://aclanthology.org/2024.findings-emnlp.275/ | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Media Storms, dramatic outbursts of attention to a story, are central
components of media dynamics and the attention landscape. Despite their
significance, there has been little systematic and empirical research on this
concept due to issues of measurement and operationalization. We introduce an
iterative human-in-the-loop method to identify media storms in a large-scale
corpus of news articles. The text is first transformed into signals of
dispersion based on several textual characteristics. In each iteration, we
apply unsupervised anomaly detection to these signals; each anomaly is then
validated by an expert to confirm the presence of a storm, and those results
are then used to tune the anomaly detection in the next iteration. We
demonstrate the applicability of this method in two scenarios: first,
supplementing an initial list of media storms within a specific time frame; and
second, detecting media storms in new time periods. We make available a media
storm dataset compiled using both scenarios. Both the method and dataset offer
the basis for comprehensive empirical research into the concept of media
storms, including characterizing them and predicting their outbursts and
durations, in mainstream media or social media platforms.
| [
{
"version": "v1",
"created": "Sun, 14 Apr 2024 16:47:38 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:10:27 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Markus",
"Dror K.",
""
],
[
"Levi",
"Effi",
""
],
[
"Sheafer",
"Tamir",
""
],
[
"Shenhav",
"Shaul R.",
""
]
]
| TITLE: Reap the Wild Wind: Detecting Media Storms in Large-Scale News Corpora
ABSTRACT: Media Storms, dramatic outbursts of attention to a story, are central
components of media dynamics and the attention landscape. Despite their
significance, there has been little systematic and empirical research on this
concept due to issues of measurement and operationalization. We introduce an
iterative human-in-the-loop method to identify media storms in a large-scale
corpus of news articles. The text is first transformed into signals of
dispersion based on several textual characteristics. In each iteration, we
apply unsupervised anomaly detection to these signals; each anomaly is then
validated by an expert to confirm the presence of a storm, and those results
are then used to tune the anomaly detection in the next iteration. We
demonstrate the applicability of this method in two scenarios: first,
supplementing an initial list of media storms within a specific time frame; and
second, detecting media storms in new time periods. We make available a media
storm dataset compiled using both scenarios. Both the method and dataset offer
the basis for comprehensive empirical research into the concept of media
storms, including characterizing them and predicting their outbursts and
durations, in mainstream media or social media platforms.
| new_dataset | 0.957794 |
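The media-storm record above alternates automatic anomaly detection on textual dispersion signals with expert confirmation. A toy version of that loop is sketched below, with a plain z-score detector standing in for whatever detector the authors used and the expert check stubbed out.

```python
import numpy as np

def detect_anomalies(signal, threshold):
    """Flag days whose dispersion signal deviates strongly from the mean."""
    z = (signal - signal.mean()) / signal.std()
    return np.where(z > threshold)[0]

def confirm_with_expert(day):
    # Stand-in for the human-in-the-loop validation described in the abstract.
    return True

signal = np.random.default_rng(2).gamma(2.0, size=365)   # hypothetical daily dispersion signal
threshold, storms = 3.0, []
for _ in range(3):                                        # a few tuning iterations
    candidates = detect_anomalies(signal, threshold)
    confirmed = [d for d in candidates if confirm_with_expert(d)]
    storms.extend(confirmed)
    # Tighten or relax the detector depending on how many candidates the expert confirmed.
    threshold += 0.5 if len(confirmed) < len(candidates) else -0.25
```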
2404.15274 | Matt Cheung | Matt Y Cheung, Tucker J Netherton, Laurence E Court, Ashok
Veeraraghavan, Guha Balakrishnan | Metric-Guided Conformal Bounds for Probabilistic Image Reconstruction | 11 pages, 4 figures, 1 table, 2 algorithms. Code available at
https://github.com/matthewyccheung/conformal-metric. Previously titled
"Metric-guided Image Reconstruction Bounds via Conformal Prediction" | null | null | null | cs.LG cs.CV eess.IV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Modern deep learning reconstruction algorithms generate impressively
realistic scans from sparse inputs, but can often produce significant
inaccuracies. This makes it difficult to provide statistically guaranteed
claims about the true state of a subject from scans reconstructed by these
algorithms. In this study, we propose a framework for computing provably valid
prediction bounds on claims derived from probabilistic black-box image
reconstruction algorithms. The key insights behind our framework are to
represent reconstructed scans with a derived clinical metric of interest, and
to calibrate bounds on the ground truth metric with conformal prediction (CP)
using a prior calibration dataset. These bounds convey interpretable feedback
about the subject's state, and can also be used to retrieve nearest-neighbor
reconstructed scans for visual inspection. We demonstrate the utility of this
framework on sparse-view computed tomography (CT) for fat mass quantification
and radiotherapy planning tasks. Results show that our framework produces
bounds with better semantic interpretation than conventional pixel-based
bounding approaches. Furthermore, we can flag dangerous outlier reconstructions
that look plausible but have statistically unlikely metric values.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 17:59:12 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jul 2024 03:31:16 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 04:07:12 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Cheung",
"Matt Y",
""
],
[
"Netherton",
"Tucker J",
""
],
[
"Court",
"Laurence E",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Balakrishnan",
"Guha",
""
]
]
| TITLE: Metric-Guided Conformal Bounds for Probabilistic Image Reconstruction
ABSTRACT: Modern deep learning reconstruction algorithms generate impressively
realistic scans from sparse inputs, but can often produce significant
inaccuracies. This makes it difficult to provide statistically guaranteed
claims about the true state of a subject from scans reconstructed by these
algorithms. In this study, we propose a framework for computing provably valid
prediction bounds on claims derived from probabilistic black-box image
reconstruction algorithms. The key insights behind our framework are to
represent reconstructed scans with a derived clinical metric of interest, and
to calibrate bounds on the ground truth metric with conformal prediction (CP)
using a prior calibration dataset. These bounds convey interpretable feedback
about the subject's state, and can also be used to retrieve nearest-neighbor
reconstructed scans for visual inspection. We demonstrate the utility of this
framework on sparse-view computed tomography (CT) for fat mass quantification
and radiotherapy planning tasks. Results show that our framework produces
bounds with better semantic interpretation than conventional pixel-based
bounding approaches. Furthermore, we can flag dangerous outlier reconstructions
that look plausible but have statistically unlikely metric values.
| no_new_dataset | 0.953923 |
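The conformal-bounds record above calibrates intervals on a clinical metric using a held-out calibration set. A bare-bones split-conformal sketch with an absolute-error score is shown below; the metric values and split sizes are synthetic stand-ins, not the paper's data or exact procedure.

```python
import numpy as np

def conformal_interval(pred_cal, true_cal, pred_test, alpha=0.1):
    """Split conformal prediction: calibrate an |error| quantile, return symmetric intervals."""
    scores = np.abs(true_cal - pred_cal)               # nonconformity scores on calibration data
    n = len(scores)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n       # finite-sample corrected quantile level
    q = np.quantile(scores, min(q_level, 1.0))
    return pred_test - q, pred_test + q                # lower / upper bounds on the true metric

rng = np.random.default_rng(3)
truth = rng.normal(30, 5, size=600)                    # e.g. a fat-mass-like metric
pred = truth + rng.normal(0, 2, size=600)              # imperfect reconstruction-derived metric
lo, hi = conformal_interval(pred[:500], truth[:500], pred[500:])
coverage = np.mean((truth[500:] >= lo) & (truth[500:] <= hi))   # roughly 1 - alpha in expectation
```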
2405.00200 | Amanda Bertsch | Amanda Bertsch, Maor Ivgi, Emily Xiao, Uri Alon, Jonathan Berant,
Matthew R. Gormley, Graham Neubig | In-Context Learning with Long-Context Models: An In-Depth Exploration | 32 pages; NAACL 2025 camera-ready | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As model context lengths continue to increase, the number of demonstrations
that can be provided in-context approaches the size of entire training
datasets. We study the behavior of in-context learning (ICL) at this extreme
scale on multiple datasets and models. We show that, for many datasets with
large label spaces, performance continues to increase with thousands of
demonstrations. We contrast this with example retrieval and finetuning: example
retrieval shows excellent performance at low context lengths but has diminished
gains with more demonstrations; finetuning is more data hungry than ICL but can
exceed long-context ICL performance with additional data. We use the ICL
setting to study several properties of both in-context learning and
long-context models. We show that long-context ICL is less sensitive to random
input shuffling than short-context ICL, that grouping of same-label examples
negatively impacts performance, and that the performance boosts do not arise
from cumulative gain from encoding many examples together. We conclude that
long-context ICL can be an effective tool, and may not require long-context for
encoding the demonstration set at all.
| [
{
"version": "v1",
"created": "Tue, 30 Apr 2024 21:06:52 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 19:53:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Bertsch",
"Amanda",
""
],
[
"Ivgi",
"Maor",
""
],
[
"Xiao",
"Emily",
""
],
[
"Alon",
"Uri",
""
],
[
"Berant",
"Jonathan",
""
],
[
"Gormley",
"Matthew R.",
""
],
[
"Neubig",
"Graham",
""
]
]
| TITLE: In-Context Learning with Long-Context Models: An In-Depth Exploration
ABSTRACT: As model context lengths continue to increase, the number of demonstrations
that can be provided in-context approaches the size of entire training
datasets. We study the behavior of in-context learning (ICL) at this extreme
scale on multiple datasets and models. We show that, for many datasets with
large label spaces, performance continues to increase with thousands of
demonstrations. We contrast this with example retrieval and finetuning: example
retrieval shows excellent performance at low context lengths but has diminished
gains with more demonstrations; finetuning is more data hungry than ICL but can
exceed long-context ICL performance with additional data. We use the ICL
setting to study several properties of both in-context learning and
long-context models. We show that long-context ICL is less sensitive to random
input shuffling than short-context ICL, that grouping of same-label examples
negatively impacts performance, and that the performance boosts do not arise
from cumulative gain from encoding many examples together. We conclude that
long-context ICL can be an effective tool, and may not require long-context for
encoding the demonstration set at all.
| no_new_dataset | 0.945147 |
2405.03714 | Devaansh Gupta | Siddhant Kharbanda, Devaansh Gupta, Gururaj K, Pankaj Malhotra, Amit
Singh, Cho-Jui Hsieh, Rohit Babbar | UniDEC : Unified Dual Encoder and Classifier Training for Extreme
Multi-Label Classification | null | In Proceedings of the ACM Web Conference 2025 (WWW 2025) | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme Multi-label Classification (XMC) involves predicting a subset of
relevant labels from an extremely large label space, given an input query and
labels with textual features. Models developed for this problem have
conventionally made use of a dual encoder (DE) to embed the queries and label
texts and one-vs-all (OvA) classifiers to rerank the shortlisted labels by the
DE. While such methods have shown empirical success, a major drawback is their
computational cost, often requiring up to 16 GPUs to train on the largest public
dataset. Such a high cost is a consequence of calculating the loss over the
entire label space. While shortlisting strategies have been proposed for
classifiers, we aim to study such methods for the DE framework. In this work,
we develop UniDEC, a loss-independent, end-to-end trainable framework which
trains the DE and classifier together in a unified manner with a multi-class
loss, while reducing the computational cost by 4-16x. This is done via the
proposed pick-some-label (PSL) reduction, which aims to compute the loss on
only a subset of positive and negative labels. These labels are carefully
chosen in-batch so as to maximise their supervisory signals. Not only does the
proposed framework achieve state-of-the-art results on datasets with labels in
the order of millions, it is also computationally and resource efficient in
achieving this performance on a single GPU. Code is made available at
https://github.com/the-catalyst/UniDEC.
| [
{
"version": "v1",
"created": "Sat, 4 May 2024 17:27:51 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 19:29:02 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kharbanda",
"Siddhant",
""
],
[
"Gupta",
"Devaansh",
""
],
[
"K",
"Gururaj",
""
],
[
"Malhotra",
"Pankaj",
""
],
[
"Singh",
"Amit",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"Babbar",
"Rohit",
""
]
]
| TITLE: UniDEC : Unified Dual Encoder and Classifier Training for Extreme
Multi-Label Classification
ABSTRACT: Extreme Multi-label Classification (XMC) involves predicting a subset of
relevant labels from an extremely large label space, given an input query and
labels with textual features. Models developed for this problem have
conventionally made use of a dual encoder (DE) to embed the queries and label
texts and one-vs-all (OvA) classifiers to rerank the shortlisted labels by the
DE. While such methods have shown empirical success, a major drawback is their
computational cost, often requiring up to 16 GPUs to train on the largest public
dataset. Such a high cost is a consequence of calculating the loss over the
entire label space. While shortlisting strategies have been proposed for
classifiers, we aim to study such methods for the DE framework. In this work,
we develop UniDEC, a loss-independent, end-to-end trainable framework which
trains the DE and classifier together in a unified manner with a multi-class
loss, while reducing the computational cost by 4-16x. This is done via the
proposed pick-some-label (PSL) reduction, which aims to compute the loss on
only a subset of positive and negative labels. These labels are carefully
chosen in-batch so as to maximise their supervisory signals. Not only does the
proposed framework achieve state-of-the-art results on datasets with labels in
the order of millions, it is also computationally and resource efficient in
achieving this performance on a single GPU. Code is made available at
https://github.com/the-catalyst/UniDEC.
| no_new_dataset | 0.947284 |
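The UniDEC abstract above attributes its 4-16x cost reduction to computing the multi-class loss over only a subset of labels. The sketch below shows a generic sampled cross-entropy of that flavour in PyTorch; the in-batch pooling of positives that the paper describes is simplified to one positive plus uniformly sampled negatives, and all names and sizes are made up.

```python
import torch
import torch.nn.functional as F

def pick_some_label_loss(query_emb, label_emb, pos_idx, num_neg=512):
    """Cross-entropy over one positive plus a random subset of negative labels.

    query_emb: (B, D) query embeddings; label_emb: (L, D) label embeddings;
    pos_idx: (B,) index of one positive label per query.
    """
    B, L = query_emb.size(0), label_emb.size(0)
    neg_idx = torch.randint(0, L, (B, num_neg))                   # sampled negatives (collisions ignored here)
    cand_idx = torch.cat([pos_idx.unsqueeze(1), neg_idx], dim=1)  # (B, 1 + num_neg) candidate labels
    cand_emb = label_emb[cand_idx]                                # (B, 1 + num_neg, D)
    logits = torch.einsum("bd,bkd->bk", query_emb, cand_emb)      # query-label similarities
    target = torch.zeros(B, dtype=torch.long)                     # the positive always sits in slot 0
    return F.cross_entropy(logits, target)

loss = pick_some_label_loss(torch.randn(32, 128), torch.randn(100_000, 128),
                            torch.randint(0, 100_000, (32,)))
```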
2405.04309 | Jiawei Shi | Jiawei Shi, Hui Deng, Yuchao Dai | Non-rigid Structure-from-Motion: Temporally-smooth Procrustean Alignment
and Spatially-variant Deformation Modeling | Accepted by CVPR 2024; The new version adds additional experiments
and corrects typos | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Even though Non-rigid Structure-from-Motion (NRSfM) has been extensively
studied and great progress has been made, there are still key challenges that
hinder their broad real-world applications: 1) the inherent motion/rotation
ambiguity requires either explicit camera motion recovery with extra constraint
or complex Procrustean Alignment; 2) existing low-rank modeling of the global
shape can over-penalize drastic deformations in the 3D shape sequence. This
paper proposes to resolve the above issues from a spatial-temporal modeling
perspective. First, we propose a novel Temporally-smooth Procrustean Alignment
module that estimates 3D deforming shapes and adjusts the camera motion by
aligning the 3D shape sequence consecutively. Our new alignment module remedies
the requirement of a complex reference 3D shape during alignment, which is more
conducive to non-isotropic deformation modeling. Second, we propose a
spatial-weighted approach to enforce the low-rank constraint adaptively at
different locations to accommodate drastic spatially-variant deformation
reconstruction better. Our modeling outperforms existing low-rank based methods,
and extensive experiments across different datasets validate the effectiveness
of our method.
| [
{
"version": "v1",
"created": "Tue, 7 May 2024 13:33:50 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jun 2024 01:30:48 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 08:37:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Shi",
"Jiawei",
""
],
[
"Deng",
"Hui",
""
],
[
"Dai",
"Yuchao",
""
]
]
| TITLE: Non-rigid Structure-from-Motion: Temporally-smooth Procrustean Alignment
and Spatially-variant Deformation Modeling
ABSTRACT: Even though Non-rigid Structure-from-Motion (NRSfM) has been extensively
studied and great progress has been made, there are still key challenges that
hinder their broad real-world applications: 1) the inherent motion/rotation
ambiguity requires either explicit camera motion recovery with extra constraint
or complex Procrustean Alignment; 2) existing low-rank modeling of the global
shape can over-penalize drastic deformations in the 3D shape sequence. This
paper proposes to resolve the above issues from a spatial-temporal modeling
perspective. First, we propose a novel Temporally-smooth Procrustean Alignment
module that estimates 3D deforming shapes and adjusts the camera motion by
aligning the 3D shape sequence consecutively. Our new alignment module remedies
the requirement of a complex reference 3D shape during alignment, which is more
conducive to non-isotropic deformation modeling. Second, we propose a
spatial-weighted approach to enforce the low-rank constraint adaptively at
different locations to accommodate drastic spatially-variant deformation
reconstruction better. Our modeling outperforms existing low-rank based methods,
and extensive experiments across different datasets validate the effectiveness
of our method.
| no_new_dataset | 0.948298 |
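The NRSfM record above centres on Procrustean alignment of consecutive 3D shapes. Below is a minimal rigid orthogonal-Procrustes step via SVD that chains each frame to its predecessor; it omits the temporal-smoothness and spatially-weighted low-rank terms that are the paper's actual contributions, and the shape sequence is random placeholder data.

```python
import numpy as np

def procrustes_align(X, Y):
    """Rotation (and translation) that best maps point set Y onto X; both are (N, 3)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(U @ Vt))          # keep a proper rotation (det = +1)
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return (R @ Yc.T).T + X.mean(0)

rng = np.random.default_rng(4)
shapes = rng.normal(size=(10, 30, 3))           # hypothetical sequence of 10 frames, 30 points
aligned = [shapes[0]]
for t in range(1, len(shapes)):                 # align each frame to its predecessor
    aligned.append(procrustes_align(aligned[-1], shapes[t]))
```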
2405.05998 | Niki Kilbertus | Zhufeng Li and Sandeep S Cranganore and Nicholas Youngblut and Niki
Kilbertus | Whole Genome Transformer for Gene Interaction Effects in Microbiome
Habitat Specificity | published at AAAI 2025 | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leveraging the vast genetic diversity within microbiomes offers unparalleled
insights into complex phenotypes, yet the task of accurately predicting and
understanding such traits from genomic data remains challenging. We propose a
framework taking advantage of existing large models for gene vectorization to
predict habitat specificity from entire microbial genome sequences. Based on
our model, we develop attribution techniques to elucidate gene interaction
effects that drive microbial adaptation to diverse environments. We train and
validate our approach on a large dataset of high-quality microbiome genomes
from different habitats. We not only demonstrate solid predictive performance,
but also how sequence-level information of entire genomes allows us to identify
gene associations underlying complex phenotypes. Our attribution recovers known
important interaction networks and proposes new candidates for experimental
follow up.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 09:34:51 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 10:59:16 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 21:31:23 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Li",
"Zhufeng",
""
],
[
"Cranganore",
"Sandeep S",
""
],
[
"Youngblut",
"Nicholas",
""
],
[
"Kilbertus",
"Niki",
""
]
]
| TITLE: Whole Genome Transformer for Gene Interaction Effects in Microbiome
Habitat Specificity
ABSTRACT: Leveraging the vast genetic diversity within microbiomes offers unparalleled
insights into complex phenotypes, yet the task of accurately predicting and
understanding such traits from genomic data remains challenging. We propose a
framework taking advantage of existing large models for gene vectorization to
predict habitat specificity from entire microbial genome sequences. Based on
our model, we develop attribution techniques to elucidate gene interaction
effects that drive microbial adaptation to diverse environments. We train and
validate our approach on a large dataset of high-quality microbiome genomes
from different habitats. We not only demonstrate solid predictive performance,
but also how sequence-level information of entire genomes allows us to identify
gene associations underlying complex phenotypes. Our attribution recovers known
important interaction networks and proposes new candidates for experimental
follow up.
| no_new_dataset | 0.941169 |
2405.10822 | Samantha J. Fournier | Samantha J. Fournier, Pierfrancesco Urbani | Generative modeling through internal high-dimensional chaotic activity | null | null | null | null | cs.LG cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative modeling aims at producing new datapoints whose statistical
properties resemble the ones in a training dataset. In recent years, there has
been a burst of machine learning techniques and settings that can achieve this
goal with remarkable performance. In most of these settings, one uses the
training dataset in conjunction with noise, which is added as a source of
statistical variability and is essential for the generative task. Here, we
explore the idea of using internal chaotic dynamics in high-dimensional chaotic
systems as a way to generate new datapoints from a training dataset. We show
that simple learning rules can achieve this goal within a set of vanilla
architectures and characterize the quality of the generated datapoints through
standard accuracy measures.
| [
{
"version": "v1",
"created": "Fri, 17 May 2024 14:43:30 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 11:17:59 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Fournier",
"Samantha J.",
""
],
[
"Urbani",
"Pierfrancesco",
""
]
]
| TITLE: Generative modeling through internal high-dimensional chaotic activity
ABSTRACT: Generative modeling aims at producing new datapoints whose statistical
properties resemble the ones in a training dataset. In recent years, there has
been a burst of machine learning techniques and settings that can achieve this
goal with remarkable performance. In most of these settings, one uses the
training dataset in conjunction with noise, which is added as a source of
statistical variability and is essential for the generative task. Here, we
explore the idea of using internal chaotic dynamics in high-dimensional chaotic
systems as a way to generate new datapoints from a training dataset. We show
that simple learning rules can achieve this goal within a set of vanilla
architectures and characterize the quality of the generated datapoints through
standard accuracy measures.
| no_new_dataset | 0.955569 |
2405.13152 | Shiji Huang | Shiji Huang, Lei Ye, Min Chen, Wenhai Luo, Dihong Wang, Chenqi Xu,
Deyuan Liang | Interpretable Interaction Modeling for Trajectory Prediction via Agent
Selection and Physical Coefficient | code:https://github.com/kkk00714/ASPILin | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | A thorough understanding of the interaction between the target agent and
surrounding agents is a prerequisite for accurate trajectory prediction.
Although many methods have been explored, they assign correlation coefficients
to surrounding agents in a purely learning-based manner. In this study, we
present ASPILin, which manually selects interacting agents and replaces the
attention scores in Transformer with a newly computed physical correlation
coefficient, enhancing the interpretability of interaction modeling.
Surprisingly, these simple modifications can significantly improve prediction
performance and substantially reduce computational costs. We intentionally
simplified our model in other aspects, such as map encoding. Remarkably,
experiments conducted on the INTERACTION, highD, and CitySim datasets
demonstrate that our method is efficient and straightforward, outperforming
other state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 21 May 2024 18:45:18 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Oct 2024 19:40:39 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Oct 2024 12:56:05 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 13:07:09 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Shiji",
""
],
[
"Ye",
"Lei",
""
],
[
"Chen",
"Min",
""
],
[
"Luo",
"Wenhai",
""
],
[
"Wang",
"Dihong",
""
],
[
"Xu",
"Chenqi",
""
],
[
"Liang",
"Deyuan",
""
]
]
| TITLE: Interpretable Interaction Modeling for Trajectory Prediction via Agent
Selection and Physical Coefficient
ABSTRACT: A thorough understanding of the interaction between the target agent and
surrounding agents is a prerequisite for accurate trajectory prediction.
Although many methods have been explored, they assign correlation coefficients
to surrounding agents in a purely learning-based manner. In this study, we
present ASPILin, which manually selects interacting agents and replaces the
attention scores in Transformer with a newly computed physical correlation
coefficient, enhancing the interpretability of interaction modeling.
Surprisingly, these simple modifications can significantly improve prediction
performance and substantially reduce computational costs. We intentionally
simplified our model in other aspects, such as map encoding. Remarkably,
experiments conducted on the INTERACTION, highD, and CitySim datasets
demonstrate that our method is efficient and straightforward, outperforming
other state-of-the-art methods.
| no_new_dataset | 0.945751 |
2405.14093 | Yueen Ma | Yueen Ma, Zixing Song, Yuzheng Zhuang, Jianye Hao, Irwin King | A Survey on Vision-Language-Action Models for Embodied AI | Project page: https://github.com/yueen-ma/Awesome-VLA | null | null | null | cs.RO cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embodied AI is widely recognized as a key element of artificial general
intelligence because it involves controlling embodied agents to perform tasks
in the physical world. Building on the success of large language models and
vision-language models, a new category of multimodal models -- referred to as
vision-language-action models (VLAs) -- has emerged to address
language-conditioned robotic tasks in embodied AI by leveraging their distinct
ability to generate actions. In recent years, a myriad of VLAs have been
developed, making it imperative to capture the rapidly evolving landscape
through a comprehensive survey. To this end, we present the first survey on
VLAs for embodied AI. This work provides a detailed taxonomy of VLAs, organized
into three major lines of research. The first line focuses on individual
components of VLAs. The second line is dedicated to developing control policies
adept at predicting low-level actions. The third line comprises high-level task
planners capable of decomposing long-horizon tasks into a sequence of subtasks,
thereby guiding VLAs to follow more general user instructions. Furthermore, we
provide an extensive summary of relevant resources, including datasets,
simulators, and benchmarks. Finally, we discuss the challenges faced by VLAs
and outline promising future directions in embodied AI. We have created a
project associated with this survey, which is available at
https://github.com/yueen-ma/Awesome-VLA.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 01:43:54 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2024 09:18:10 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 03:19:31 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Mar 2025 08:24:20 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ma",
"Yueen",
""
],
[
"Song",
"Zixing",
""
],
[
"Zhuang",
"Yuzheng",
""
],
[
"Hao",
"Jianye",
""
],
[
"King",
"Irwin",
""
]
]
| TITLE: A Survey on Vision-Language-Action Models for Embodied AI
ABSTRACT: Embodied AI is widely recognized as a key element of artificial general
intelligence because it involves controlling embodied agents to perform tasks
in the physical world. Building on the success of large language models and
vision-language models, a new category of multimodal models -- referred to as
vision-language-action models (VLAs) -- has emerged to address
language-conditioned robotic tasks in embodied AI by leveraging their distinct
ability to generate actions. In recent years, a myriad of VLAs have been
developed, making it imperative to capture the rapidly evolving landscape
through a comprehensive survey. To this end, we present the first survey on
VLAs for embodied AI. This work provides a detailed taxonomy of VLAs, organized
into three major lines of research. The first line focuses on individual
components of VLAs. The second line is dedicated to developing control policies
adept at predicting low-level actions. The third line comprises high-level task
planners capable of decomposing long-horizon tasks into a sequence of subtasks,
thereby guiding VLAs to follow more general user instructions. Furthermore, we
provide an extensive summary of relevant resources, including datasets,
simulators, and benchmarks. Finally, we discuss the challenges faced by VLAs
and outline promising future directions in embodied AI. We have created a
project associated with this survey, which is available at
https://github.com/yueen-ma/Awesome-VLA.
| no_new_dataset | 0.947575 |
2405.16792 | Eric Mugnier | Eric Mugnier, Emmanuel Anaya Gonzalez, Ranjit Jhala, Nadia
Polikarpova, Yuanyuan Zhou | Laurel: Unblocking Automated Verification with Large Language Models | 34 pages, accepted at OOPSLA 25 | null | null | null | cs.LO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Program verifiers such as Dafny automate proofs by outsourcing them to an SMT
solver. This automation is not perfect, however, and the solver often requires
hints in the form of assertions, creating a burden for the proof engineer. In
this paper, we propose Laurel, a tool that alleviates this burden by
automatically generating assertions using large language models (LLMs). To
improve the success rate of LLMs in this task, we design two domain-specific
prompting techniques. First, we help the LLM determine the location of the
missing assertion by analyzing the verifier's error message and inserting an
assertion placeholder at that location. Second, we provide the LLM with example
assertions from the same codebase, which we select based on a new proof
similarity metric. We evaluate our techniques on our new benchmark DafnyGym, a
dataset of complex lemmas we extracted from three real-world Dafny codebases.
Our evaluation shows that Laurel is able to generate over 56.6\% of the
required assertions given only a few attempts, making LLMs an affordable tool
for unblocking program verifiers without human intervention.
| [
{
"version": "v1",
"created": "Mon, 27 May 2024 03:26:01 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 22:24:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Mugnier",
"Eric",
""
],
[
"Gonzalez",
"Emmanuel Anaya",
""
],
[
"Jhala",
"Ranjit",
""
],
[
"Polikarpova",
"Nadia",
""
],
[
"Zhou",
"Yuanyuan",
""
]
]
| TITLE: Laurel: Unblocking Automated Verification with Large Language Models
ABSTRACT: Program verifiers such as Dafny automate proofs by outsourcing them to an SMT
solver. This automation is not perfect, however, and the solver often requires
hints in the form of assertions, creating a burden for the proof engineer. In
this paper, we propose Laurel, a tool that alleviates this burden by
automatically generating assertions using large language models (LLMs). To
improve the success rate of LLMs in this task, we design two domain-specific
prompting techniques. First, we help the LLM determine the location of the
missing assertion by analyzing the verifier's error message and inserting an
assertion placeholder at that location. Second, we provide the LLM with example
assertions from the same codebase, which we select based on a new proof
similarity metric. We evaluate our techniques on our new benchmark DafnyGym, a
dataset of complex lemmas we extracted from three real-world Dafny codebases.
Our evaluation shows that Laurel is able to generate over 56.6\% of the
required assertions given only a few attempts, making LLMs an affordable tool
for unblocking program verifiers without human intervention.
| new_dataset | 0.961353 |
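The Laurel record above locates a missing assertion from the verifier's error message and drops a placeholder at that spot before asking the LLM to fill it. A tiny sketch of that placeholder insertion is given below; the error-message format and placeholder text are assumptions, not Dafny's or the tool's actual conventions.

```python
import re

def insert_assertion_placeholder(source: str, error_message: str,
                                 placeholder: str = "assert <assertion>;  // to be filled by the LLM") -> str:
    """Insert a placeholder assertion at the line referenced by a verifier error.

    Assumes the error message contains a location like 'file.dfy(42,0)'.
    """
    match = re.search(r"\((\d+),\d+\)", error_message)
    if match is None:
        return source
    line_no = int(match.group(1))
    lines = source.splitlines()
    lines.insert(line_no - 1, placeholder)       # place the hole just above the failing line
    return "\n".join(lines)

patched = insert_assertion_placeholder("method M() {\n  // body\n}",
                                       "body.dfy(2,2): Error: assertion might not hold")
```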
2406.00783 | Li Lin | Li Lin, Santosh, Mingyang Wu, Xin Wang, Shu Hu | AI-Face: A Million-Scale Demographically Annotated AI-Generated Face
Dataset and Fairness Benchmark | This paper has been accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI-generated faces have enriched human life in areas such as entertainment,
education, and art. However, they also pose misuse risks. Therefore, detecting
AI-generated faces becomes crucial, yet current detectors show biased
performance across different demographic groups. Mitigating biases can be done
by designing algorithmic fairness methods, which usually require
demographically annotated face datasets for model training. However, no
existing dataset encompasses both demographic attributes and diverse generative
methods simultaneously, which hinders the development of fair detectors for
AI-generated faces. In this work, we introduce the AI-Face dataset, the first
million-scale demographically annotated AI-generated face image dataset,
including real faces, faces from deepfake videos, and faces generated by
Generative Adversarial Networks and Diffusion Models. Based on this dataset, we
conduct the first comprehensive fairness benchmark to assess various AI face
detectors and provide valuable insights and findings to promote the future fair
design of AI face detectors. Our AI-Face dataset and benchmark code are
publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2024 15:51:33 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jun 2024 16:08:07 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 22:38:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lin",
"Li",
""
],
[
"Santosh",
"",
""
],
[
"Wu",
"Mingyang",
""
],
[
"Wang",
"Xin",
""
],
[
"Hu",
"Shu",
""
]
]
| TITLE: AI-Face: A Million-Scale Demographically Annotated AI-Generated Face
Dataset and Fairness Benchmark
ABSTRACT: AI-generated faces have enriched human life in areas such as entertainment,
education, and art. However, they also pose misuse risks. Therefore, detecting
AI-generated faces becomes crucial, yet current detectors show biased
performance across different demographic groups. Mitigating biases can be done
by designing algorithmic fairness methods, which usually require
demographically annotated face datasets for model training. However, no
existing dataset encompasses both demographic attributes and diverse generative
methods simultaneously, which hinders the development of fair detectors for
AI-generated faces. In this work, we introduce the AI-Face dataset, the first
million-scale demographically annotated AI-generated face image dataset,
including real faces, faces from deepfake videos, and faces generated by
Generative Adversarial Networks and Diffusion Models. Based on this dataset, we
conduct the first comprehensive fairness benchmark to assess various AI face
detectors and provide valuable insights and findings to promote the future fair
design of AI face detectors. Our AI-Face dataset and benchmark code are
publicly available at https://github.com/Purdue-M2/AI-Face-FairnessBench
| new_dataset | 0.959307 |
2406.04412 | Jaehyung Kim | Dongyoung Kim, Kimin Lee, Jinwoo Shin, Jaehyung Kim | Spread Preference Annotation: Direct Preference Judgment for Efficient
LLM Alignment | ICLR 2025 Oral Presentation, 22 pages | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Aligning large language models (LLMs) with human preferences becomes a key
component in obtaining state-of-the-art performance, but it incurs a huge cost
to construct a large human-annotated preference dataset. To tackle this
problem, we propose a new framework, Spread Preference Annotation with direct
preference judgment (SPA), that boosts the alignment of LLMs using only a very
small amount of human-annotated preference data. Our key idea is leveraging the
human prior knowledge within the small (seed) data and progressively improving
the alignment of LLM, by iteratively generating the responses and learning from
them with the self-annotated preference data. To be specific, we propose to
derive the preference label from the logits of LLM to explicitly extract the
model's inherent preference. Compared to the previous approaches using external
reward models or implicit in-context learning, we observe that the proposed
approach is significantly more effective. In addition, we introduce a
noise-aware preference learning algorithm to mitigate the risk of low quality
within generated preference data. Our experimental results demonstrate that the
proposed framework significantly boosts the alignment of LLMs. For example, we
achieve superior alignment performance on AlpacaEval 2.0 with only 3.3% of the
ground-truth preference labels in the Ultrafeedback data compared to the cases
using the entire data or state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Thu, 6 Jun 2024 18:01:02 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 00:04:24 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kim",
"Dongyoung",
""
],
[
"Lee",
"Kimin",
""
],
[
"Shin",
"Jinwoo",
""
],
[
"Kim",
"Jaehyung",
""
]
]
| TITLE: Spread Preference Annotation: Direct Preference Judgment for Efficient
LLM Alignment
ABSTRACT: Aligning large language models (LLMs) with human preferences becomes a key
component in obtaining state-of-the-art performance, but it incurs a huge cost
to construct a large human-annotated preference dataset. To tackle this
problem, we propose a new framework, Spread Preference Annotation with direct
preference judgment (SPA), that boosts the alignment of LLMs using only a very
small amount of human-annotated preference data. Our key idea is leveraging the
human prior knowledge within the small (seed) data and progressively improving
the alignment of LLM, by iteratively generating the responses and learning from
them with the self-annotated preference data. To be specific, we propose to
derive the preference label from the logits of LLM to explicitly extract the
model's inherent preference. Compared to the previous approaches using external
reward models or implicit in-context learning, we observe that the proposed
approach is significantly more effective. In addition, we introduce a
noise-aware preference learning algorithm to mitigate the risk of low quality
within generated preference data. Our experimental results demonstrate that the
proposed framework significantly boosts the alignment of LLMs. For example, we
achieve superior alignment performance on AlpacaEval 2.0 with only 3.3% of the
ground-truth preference labels in the Ultrafeedback data compared to the cases
using the entire data or state-of-the-art baselines.
| no_new_dataset | 0.938067 |
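The SPA record above derives preference labels directly from the model's own logits rather than from an external reward model. The snippet below sketches the simplest version of that idea, labelling whichever response has the higher length-normalised log-likelihood as chosen; the normalisation and tie-breaking are assumptions, not the paper's exact rule.

```python
import numpy as np

def preference_from_logprobs(logprobs_a, logprobs_b):
    """Return (chosen, rejected) based on mean per-token log-probability under the policy."""
    score_a, score_b = np.mean(logprobs_a), np.mean(logprobs_b)
    return ("a", "b") if score_a >= score_b else ("b", "a")

# Hypothetical per-token log-probabilities of two candidate responses
chosen, rejected = preference_from_logprobs([-1.2, -0.4, -0.9], [-2.1, -1.7, -0.8])
```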
2406.06419 | Ramses Sanchez | David Berghaus, Kostadin Cvejoski, Patrick Seifner, Cesar Ojeda,
Ramses J. Sanchez | Foundation Inference Models for Markov Jump Processes | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Markov jump processes are continuous-time stochastic processes which describe
dynamical systems evolving in discrete state spaces. These processes find wide
application in the natural sciences and machine learning, but their inference
is known to be far from trivial. In this work we introduce a methodology for
zero-shot inference of Markov jump processes (MJPs), on bounded state spaces,
from noisy and sparse observations, which consists of two components. First, a
broad probability distribution over families of MJPs, as well as over possible
observation times and noise mechanisms, with which we simulate a synthetic
dataset of hidden MJPs and their noisy observation process. Second, a neural
network model that processes subsets of the simulated observations, and that is
trained to output the initial condition and rate matrix of the target MJP in a
supervised way. We empirically demonstrate that one and the same (pretrained)
model can infer, in a zero-shot fashion, hidden MJPs evolving in state spaces
of different dimensionalities. Specifically, we infer MJPs which describe (i)
discrete flashing ratchet systems, which are a type of Brownian motors, and the
conformational dynamics in (ii) molecular simulations, (iii) experimental ion
channel data and (iv) simple protein folding models. What is more, we show that
our model performs on par with state-of-the-art models which are finetuned to
the target datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Jun 2024 16:12:00 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Oct 2024 08:16:30 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 11:26:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Berghaus",
"David",
""
],
[
"Cvejoski",
"Kostadin",
""
],
[
"Seifner",
"Patrick",
""
],
[
"Ojeda",
"Cesar",
""
],
[
"Sanchez",
"Ramses J.",
""
]
]
| TITLE: Foundation Inference Models for Markov Jump Processes
ABSTRACT: Markov jump processes are continuous-time stochastic processes which describe
dynamical systems evolving in discrete state spaces. These processes find wide
application in the natural sciences and machine learning, but their inference
is known to be far from trivial. In this work we introduce a methodology for
zero-shot inference of Markov jump processes (MJPs), on bounded state spaces,
from noisy and sparse observations, which consists of two components. First, a
broad probability distribution over families of MJPs, as well as over possible
observation times and noise mechanisms, with which we simulate a synthetic
dataset of hidden MJPs and their noisy observation process. Second, a neural
network model that processes subsets of the simulated observations, and that is
trained to output the initial condition and rate matrix of the target MJP in a
supervised way. We empirically demonstrate that one and the same (pretrained)
model can infer, in a zero-shot fashion, hidden MJPs evolving in state spaces
of different dimensionalities. Specifically, we infer MJPs which describe (i)
discrete flashing ratchet systems, which are a type of Brownian motors, and the
conformational dynamics in (ii) molecular simulations, (iii) experimental ion
channel data and (iv) simple protein folding models. What is more, we show that
our model performs on par with state-of-the-art models which are finetuned to
the target datasets.
| no_new_dataset | 0.944791 |
2406.15044 | Adnan Ali | Adnan Ali, Jinlong Li, Huanhuan Chen, Ali Kashif Bashir | From Overfitting to Robustness: Quantity, Quality, and Variety Oriented
Negative Sample Selection in Graph Contrastive Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph contrastive learning (GCL) aims to contrast positive-negative
counterparts to learn the node embeddings, whereas graph data augmentation
methods are employed to generate these positive-negative samples. The
variation, quantity, and quality of negative samples compared to positive
samples play crucial roles in learning meaningful embeddings for node
classification downstream tasks. Less variation, excessive quantity, and
low-quality negative samples cause the model to overfit to particular nodes,
resulting in less robust models. To solve the overfitting problem in the
GCL paradigm, this study proposes a novel Cumulative Sample Selection (CSS)
algorithm by comprehensively considering negative samples' quality, variations,
and quantity. Initially, three negative sample pools are constructed: easy,
medium, and hard negative samples, which contain 25%, 50%, and 25% of the total
available negative samples, respectively. Then, 10% negative samples are
selected from each of these three negative sample pools for training the model.
After that, a decision agent module evaluates model training results and
decides whether to explore more negative samples from three negative sample
pools by increasing the ratio or keep exploiting the current sampling ratio.
The proposed algorithm is integrated into a proposed graph contrastive learning
framework named NegAmplify. NegAmplify is compared with the SOTA methods on
nine graph node classification datasets, achieving better node classification
accuracy on seven of them, with up to 2.86% improvement.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2024 10:47:26 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ali",
"Adnan",
""
],
[
"Li",
"Jinlong",
""
],
[
"Chen",
"Huanhuan",
""
],
[
"Bashir",
"Ali Kashif",
""
]
]
| TITLE: From Overfitting to Robustness: Quantity, Quality, and Variety Oriented
Negative Sample Selection in Graph Contrastive Learning
ABSTRACT: Graph contrastive learning (GCL) aims to contrast positive-negative
counterparts to learn the node embeddings, whereas graph data augmentation
methods are employed to generate these positive-negative samples. The
variation, quantity, and quality of negative samples compared to positive
samples play crucial roles in learning meaningful embeddings for node
classification downstream tasks. Less variation, excessive quantity, and
low-quality negative samples cause the model to overfit to particular nodes,
resulting in less robust models. To solve the overfitting problem in the
GCL paradigm, this study proposes a novel Cumulative Sample Selection (CSS)
algorithm by comprehensively considering negative samples' quality, variations,
and quantity. Initially, three negative sample pools are constructed: easy,
medium, and hard negative samples, which contain 25%, 50%, and 25% of the total
available negative samples, respectively. Then, 10% negative samples are
selected from each of these three negative sample pools for training the model.
After that, a decision agent module evaluates model training results and
decides whether to explore more negative samples from three negative sample
pools by increasing the ratio or keep exploiting the current sampling ratio.
The proposed algorithm is integrated into a proposed graph contrastive learning
framework named NegAmplify. NegAmplify is compared with the SOTA methods on
nine graph node classification datasets, achieving better node classification
accuracy on seven of them, with up to 2.86% improvement.
| no_new_dataset | 0.955899 |
2406.16135 | Chulin Xie | Lynn Chua, Badih Ghazi, Yangsibo Huang, Pritish Kamath, Ravi Kumar,
Pasin Manurangsi, Amer Sinha, Chulin Xie, Chiyuan Zhang | Crosslingual Capabilities and Knowledge Barriers in Multilingual Large
Language Models | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) are typically multilingual due to pretraining on
diverse multilingual corpora. But can these models relate corresponding
concepts across languages, i.e., be crosslingual? This study evaluates
state-of-the-art LLMs on inherently crosslingual tasks. We observe that while
these models show promising surface-level crosslingual abilities on machine
translation and embedding space analyses, they struggle with deeper
crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in
both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU
benchmark) contexts. Since simple inference-time mitigation methods offer only
limited improvement, we propose fine-tuning of LLMs on mixed-language data,
which effectively reduces these gaps, even when using out-of-domain datasets
like WikiText. Our findings suggest the need for explicit optimization to
unlock the full crosslingual potential of LLMs. Our code is publicly available
at https://github.com/google-research/crosslingual-knowledge-barriers.
| [
{
"version": "v1",
"created": "Sun, 23 Jun 2024 15:15:17 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 07:00:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chua",
"Lynn",
""
],
[
"Ghazi",
"Badih",
""
],
[
"Huang",
"Yangsibo",
""
],
[
"Kamath",
"Pritish",
""
],
[
"Kumar",
"Ravi",
""
],
[
"Manurangsi",
"Pasin",
""
],
[
"Sinha",
"Amer",
""
],
[
"Xie",
"Chulin",
""
],
[
"Zhang",
"Chiyuan",
""
]
]
| TITLE: Crosslingual Capabilities and Knowledge Barriers in Multilingual Large
Language Models
ABSTRACT: Large language models (LLMs) are typically multilingual due to pretraining on
diverse multilingual corpora. But can these models relate corresponding
concepts across languages, i.e., be crosslingual? This study evaluates
state-of-the-art LLMs on inherently crosslingual tasks. We observe that while
these models show promising surface-level crosslingual abilities on machine
translation and embedding space analyses, they struggle with deeper
crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in
both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU
benchmark) contexts. Since simple inference-time mitigation methods offer only
limited improvement, we propose fine-tuning of LLMs on mixed-language data,
which effectively reduces these gaps, even when using out-of-domain datasets
like WikiText. Our findings suggest the need for explicit optimization to
unlock the full crosslingual potential of LLMs. Our code is publicly available
at https://github.com/google-research/crosslingual-knowledge-barriers.
| no_new_dataset | 0.944022 |
2406.16783 | Vikas Yadav | Rishabh Maheshwary and Vikas Yadav and Hoang Nguyen and Khyati Mahajan
and Sathwik Tejaswi Madhusudhan | M2Lingual: Enhancing Multilingual, Multi-Turn Instruction Alignment in
Large Language Models | 39 pages | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Instruction finetuning (IFT) is critical for aligning Large Language Models
(LLMs) to follow instructions. While many effective IFT datasets have been
introduced recently, they predominantly focus on high-resource languages like
English. To better align LLMs across a broad spectrum of languages and tasks,
we propose a fully synthetic, novel taxonomy (Evol) guided Multilingual,
Multi-turn instruction finetuning dataset, called M2Lingual. It is constructed
by first selecting a diverse set of seed examples and then utilizing the
proposed Evol taxonomy to convert these seeds into complex and challenging
multi-turn instructions. We demonstrate the effectiveness of M2Lingual by
training LLMs of varying sizes and showcasing the enhanced performance across a
diverse set of languages. We contribute the 2 step Evol taxonomy with the
guided generation code: https://github.com/ServiceNow/M2Lingual, as well as the
first fully synthetic, general and task-oriented, multi-turn, multilingual
dataset built with Evol - M2Lingual:
https://huggingface.co/datasets/ServiceNow-AI/M2Lingual - containing 182K
total IFT pairs, covering 70 languages and 17+ NLP tasks.
| [
{
"version": "v1",
"created": "Mon, 24 Jun 2024 16:45:13 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jun 2024 10:14:53 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 07:56:00 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Maheshwary",
"Rishabh",
""
],
[
"Yadav",
"Vikas",
""
],
[
"Nguyen",
"Hoang",
""
],
[
"Mahajan",
"Khyati",
""
],
[
"Madhusudhan",
"Sathwik Tejaswi",
""
]
]
| TITLE: M2Lingual: Enhancing Multilingual, Multi-Turn Instruction Alignment in
Large Language Models
ABSTRACT: Instruction finetuning (IFT) is critical for aligning Large Language Models
(LLMs) to follow instructions. While many effective IFT datasets have been
introduced recently, they predominantly focus on high-resource languages like
English. To better align LLMs across a broad spectrum of languages and tasks,
we propose a fully synthetic, novel taxonomy (Evol) guided Multilingual,
Multi-turn instruction finetuning dataset, called M2Lingual. It is constructed
by first selecting a diverse set of seed examples and then utilizing the
proposed Evol taxonomy to convert these seeds into complex and challenging
multi-turn instructions. We demonstrate the effectiveness of M2Lingual by
training LLMs of varying sizes and showcasing the enhanced performance across a
diverse set of languages. We contribute the 2 step Evol taxonomy with the
guided generation code: https://github.com/ServiceNow/M2Lingual, as well as the
first fully synthetic, general and task-oriented, multi-turn, multilingual
dataset built with Evol - M2Lingual:
https://huggingface.co/datasets/ServiceNow-AI/M2Lingual - containing 182K
total IFT pairs, covering 70 languages and 17+ NLP tasks.
| new_dataset | 0.956309 |
2407.01574 | Gabriel Ducrocq | Gabriel Ducrocq, Lukas Grunewald, Sebastian Westenhoff, Fredrik
Lindsten | cryoSPHERE: Single-particle heterogeneous reconstruction from cryo EM | null | International Conference on Learning Representations (ICLR), 2025 | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | The three-dimensional structure of proteins plays a crucial role in
determining their function. Protein structure prediction methods, like
AlphaFold, offer rapid access to a protein structure. However, large protein
complexes cannot be reliably predicted, and proteins are dynamic, making it
important to resolve their full conformational distribution. Single-particle
cryo-electron microscopy (cryo-EM) is a powerful tool for determining the
structures of large protein complexes. Importantly, the numerous images of a
given protein contain underutilized information about conformational
heterogeneity. These images are very noisy projections of the protein, and
traditional methods for cryo-EM reconstruction are limited to recovering only
one or a few consensus conformations. In this paper, we introduce cryoSPHERE,
which is a deep learning method that uses a nominal protein structure (e.g.,
from AlphaFold) as input, learns how to divide it into segments, and moves
these segments as approximately rigid bodies to fit the different conformations
present in the cryo-EM dataset. This approach provides enough constraints to
enable meaningful reconstructions of single protein structural ensembles. We
demonstrate this with two synthetic datasets featuring varying levels of noise,
as well as two real datasets. We show that cryoSPHERE is very resilient to the
high levels of noise typically encountered in experiments, where we see
consistent improvements over the current state-of-the-art for heterogeneous
reconstruction.
| [
{
"version": "v1",
"created": "Wed, 29 May 2024 15:12:19 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:16:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Ducrocq",
"Gabriel",
""
],
[
"Grunewald",
"Lukas",
""
],
[
"Westenhoff",
"Sebastian",
""
],
[
"Lindsten",
"Fredrik",
""
]
]
| TITLE: cryoSPHERE: Single-particle heterogeneous reconstruction from cryo EM
ABSTRACT: The three-dimensional structure of proteins plays a crucial role in
determining their function. Protein structure prediction methods, like
AlphaFold, offer rapid access to a protein structure. However, large protein
complexes cannot be reliably predicted, and proteins are dynamic, making it
important to resolve their full conformational distribution. Single-particle
cryo-electron microscopy (cryo-EM) is a powerful tool for determining the
structures of large protein complexes. Importantly, the numerous images of a
given protein contain underutilized information about conformational
heterogeneity. These images are very noisy projections of the protein, and
traditional methods for cryo-EM reconstruction are limited to recovering only
one or a few consensus conformations. In this paper, we introduce cryoSPHERE,
which is a deep learning method that uses a nominal protein structure (e.g.,
from AlphaFold) as input, learns how to divide it into segments, and moves
these segments as approximately rigid bodies to fit the different conformations
present in the cryo-EM dataset. This approach provides enough constraints to
enable meaningful reconstructions of single protein structural ensembles. We
demonstrate this with two synthetic datasets featuring varying levels of noise,
as well as two real datasets. We show that cryoSPHERE is very resilient to the
high levels of noise typically encountered in experiments, where we see
consistent improvements over the current state-of-the-art for heterogeneous
reconstruction.
| no_new_dataset | 0.939248 |
2407.03153 | MingYu Lu | Chris Lin, Mingyu Lu, Chanwoo Kim, Su-In Lee | An Efficient Framework for Crediting Data Contributors of Diffusion
Models | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | As diffusion models are deployed in real-world settings, and their
performance is driven by training data, appraising the contribution of data
contributors is crucial to creating incentives for sharing quality data and to
implementing policies for data compensation. Depending on the use case, model
performance corresponds to various global properties of the distribution
learned by a diffusion model (e.g., overall aesthetic quality). Hence, here we
address the problem of attributing global properties of diffusion models to
data contributors. The Shapley value provides a principled approach to
valuation by uniquely satisfying game-theoretic axioms of fairness. However,
estimating Shapley values for diffusion models is computationally impractical
because it requires retraining on many training data subsets corresponding to
different contributors and rerunning inference. We introduce a method to
efficiently retrain and rerun inference for Shapley value estimation, by
leveraging model pruning and fine-tuning. We evaluate the utility of our method
with three use cases: (i) image quality for a DDPM trained on a CIFAR dataset,
(ii) demographic diversity for an LDM trained on CelebA-HQ, and (iii) aesthetic
quality for a Stable Diffusion model LoRA-finetuned on Post-Impressionist
artworks. Our results empirically demonstrate that our framework can identify
important data contributors across models' global properties, outperforming
existing attribution methods for diffusion models.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 17:42:09 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jan 2025 18:21:13 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 19:46:45 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lin",
"Chris",
""
],
[
"Lu",
"Mingyu",
""
],
[
"Kim",
"Chanwoo",
""
],
[
"Lee",
"Su-In",
""
]
]
| TITLE: An Efficient Framework for Crediting Data Contributors of Diffusion
Models
ABSTRACT: As diffusion models are deployed in real-world settings, and their
performance is driven by training data, appraising the contribution of data
contributors is crucial to creating incentives for sharing quality data and to
implementing policies for data compensation. Depending on the use case, model
performance corresponds to various global properties of the distribution
learned by a diffusion model (e.g., overall aesthetic quality). Hence, here we
address the problem of attributing global properties of diffusion models to
data contributors. The Shapley value provides a principled approach to
valuation by uniquely satisfying game-theoretic axioms of fairness. However,
estimating Shapley values for diffusion models is computationally impractical
because it requires retraining on many training data subsets corresponding to
different contributors and rerunning inference. We introduce a method to
efficiently retrain and rerun inference for Shapley value estimation, by
leveraging model pruning and fine-tuning. We evaluate the utility of our method
with three use cases: (i) image quality for a DDPM trained on a CIFAR dataset,
(ii) demographic diversity for an LDM trained on CelebA-HQ, and (iii) aesthetic
quality for a Stable Diffusion model LoRA-finetuned on Post-Impressionist
artworks. Our results empirically demonstrate that our framework can identify
important data contributors across models' global properties, outperforming
existing attribution methods for diffusion models.
| no_new_dataset | 0.94743 |
2407.03157 | Zhenyu He | Zhenyu He, Jun Zhang, Shengjie Luo, Jingjing Xu, Zhi Zhang, Di He | Let the Code LLM Edit Itself When You Edit the Code | ICLR 2025 Camera Ready | null | null | null | cs.CL cs.AI cs.LG cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we investigate a typical scenario in code generation where a
developer edits existing code in real time and requests a code assistant, e.g.,
a large language model, to re-predict the next token or next line on the fly.
Naively, the LLM needs to re-encode the entire KV cache to provide an accurate
prediction. However, this process is computationally expensive, especially when
the sequence length is long. Simply encoding the edited subsequence and
integrating it into the original KV cache runs into the temporal confusion problem,
leading to significantly worse performance. We address this efficiency and
accuracy trade-off by introducing \underline{\textbf{P}ositional
\textbf{I}ntegrity \textbf{E}ncoding} (PIE). Building upon the rotary
positional encoding, PIE first removes the rotary matrices in the Key cache
that introduce temporal confusion and then reapplies the correct rotary
matrices. This process ensures that positional relationships between tokens are
correct and requires only a single round of matrix multiplication. We validate
the effectiveness of PIE through extensive experiments on the RepoBench-C-8k
dataset, utilizing DeepSeek-Coder models with 1.3B, 6.7B, and 33B parameters.
Our evaluation includes three real-world coding tasks: code insertion, code
deletion, and multi-place code editing. Results demonstrate that PIE reduces
computational overhead by over 85% compared to the standard full recomputation
approach across all model sizes and tasks while well approximating the model
performance.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2024 14:34:03 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 13:01:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"He",
"Zhenyu",
""
],
[
"Zhang",
"Jun",
""
],
[
"Luo",
"Shengjie",
""
],
[
"Xu",
"Jingjing",
""
],
[
"Zhang",
"Zhi",
""
],
[
"He",
"Di",
""
]
]
| TITLE: Let the Code LLM Edit Itself When You Edit the Code
ABSTRACT: In this work, we investigate a typical scenario in code generation where a
developer edits existing code in real time and requests a code assistant, e.g.,
a large language model, to re-predict the next token or next line on the fly.
Naively, the LLM needs to re-encode the entire KV cache to provide an accurate
prediction. However, this process is computationally expensive, especially when
the sequence length is long. Simply encoding the edited subsequence and
integrating it into the original KV cache runs into the temporal confusion problem,
leading to significantly worse performance. We address this efficiency and
accuracy trade-off by introducing \underline{\textbf{P}ositional
\textbf{I}ntegrity \textbf{E}ncoding} (PIE). Building upon the rotary
positional encoding, PIE first removes the rotary matrices in the Key cache
that introduce temporal confusion and then reapplies the correct rotary
matrices. This process ensures that positional relationships between tokens are
correct and requires only a single round of matrix multiplication. We validate
the effectiveness of PIE through extensive experiments on the RepoBench-C-8k
dataset, utilizing DeepSeek-Coder models with 1.3B, 6.7B, and 33B parameters.
Our evaluation includes three real-world coding tasks: code insertion, code
deletion, and multi-place code editing. Results demonstrate that PIE reduces
computational overhead by over 85% compared to the standard full recomputation
approach across all model sizes and tasks while well approximating the model
performance.
| no_new_dataset | 0.939637 |
2408.01262 | Kunlun Zhu | Kunlun Zhu, Yifan Luo, Dingling Xu, Yukun Yan, Zhenghao Liu, Shi Yu,
Ruobing Wang, Shuo Wang, Yishan Li, Nan Zhang, Xu Han, Zhiyuan Liu, Maosong
Sun | RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework | https://github.com/OpenBMB/RAGEval | null | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Retrieval-Augmented Generation (RAG) is a powerful approach that enables
large language models (LLMs) to incorporate external knowledge. However,
evaluating the effectiveness of RAG systems in specialized scenarios remains
challenging due to the high costs of data construction and the lack of suitable
evaluation metrics. This paper introduces RAGEval, a framework designed to
assess RAG systems across diverse scenarios by generating high-quality
documents, questions, answers, and references through a schema-based pipeline.
With a focus on factual accuracy, we propose three novel metrics: Completeness,
Hallucination, and Irrelevance to evaluate LLM generated responses rigorously.
Experimental results show that RAGEval outperforms zero-shot and one-shot
methods in terms of clarity, safety, conformity, and richness of generated
samples. Furthermore, the use of LLMs for scoring the proposed metrics
demonstrates a high level of consistency with human evaluations. RAGEval
establishes a new paradigm for evaluating RAG systems in real-world
applications. The code and dataset are released at
https://github.com/OpenBMB/RAGEval.
| [
{
"version": "v1",
"created": "Fri, 2 Aug 2024 13:35:11 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Aug 2024 15:48:02 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Aug 2024 03:13:50 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Oct 2024 02:20:47 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Mar 2025 22:45:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhu",
"Kunlun",
""
],
[
"Luo",
"Yifan",
""
],
[
"Xu",
"Dingling",
""
],
[
"Yan",
"Yukun",
""
],
[
"Liu",
"Zhenghao",
""
],
[
"Yu",
"Shi",
""
],
[
"Wang",
"Ruobing",
""
],
[
"Wang",
"Shuo",
""
],
[
"Li",
"Yishan",
""
],
[
"Zhang",
"Nan",
""
],
[
"Han",
"Xu",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
]
| TITLE: RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework
ABSTRACT: Retrieval-Augmented Generation (RAG) is a powerful approach that enables
large language models (LLMs) to incorporate external knowledge. However,
evaluating the effectiveness of RAG systems in specialized scenarios remains
challenging due to the high costs of data construction and the lack of suitable
evaluation metrics. This paper introduces RAGEval, a framework designed to
assess RAG systems across diverse scenarios by generating high-quality
documents, questions, answers, and references through a schema-based pipeline.
With a focus on factual accuracy, we propose three novel metrics: Completeness,
Hallucination, and Irrelevance to evaluate LLM generated responses rigorously.
Experimental results show that RAGEval outperforms zero-shot and one-shot
methods in terms of clarity, safety, conformity, and richness of generated
samples. Furthermore, the use of LLMs for scoring the proposed metrics
demonstrates a high level of consistency with human evaluations. RAGEval
establishes a new paradigm for evaluating RAG systems in real-world
applications. The code and dataset are released at
https://github.com/OpenBMB/RAGEval.
| new_dataset | 0.592991 |
2408.12136 | Weiqin Chen | Weiqin Chen, Sandipan Mishra and Santiago Paternain | Domain Adaptation for Offline Reinforcement Learning with Limited
Samples | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline reinforcement learning (RL) learns effective policies from a static
target dataset. The performance of state-of-the-art offline RL algorithms
notwithstanding, it relies on the quality and size of the target dataset and it
degrades if limited samples in the target dataset are available, which is often
the case in real-world applications. To address this issue, domain adaptation
that leverages auxiliary samples from related source datasets (such as
simulators) can be beneficial. However, establishing the optimal way to trade
off the source and target datasets while ensuring provably theoretical
guarantees remains an open challenge. To the best of our knowledge, this paper
proposes the first framework that theoretically explores the impact of the
weights assigned to each dataset on the performance of offline RL. In
particular, we establish performance bounds and the existence of an optimal
weight, which can be computed in closed form under simplifying assumptions. We
also provide algorithmic guarantees in terms of convergence to a neighborhood
of the optimum. Notably, these results depend on the quality of the source
dataset and the number of samples from the target dataset. Our empirical
results on the well-known Procgen benchmark substantiate our theoretical
contributions.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 05:38:48 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2024 21:28:34 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 05:21:05 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chen",
"Weiqin",
""
],
[
"Mishra",
"Sandipan",
""
],
[
"Paternain",
"Santiago",
""
]
]
| TITLE: Domain Adaptation for Offline Reinforcement Learning with Limited
Samples
ABSTRACT: Offline reinforcement learning (RL) learns effective policies from a static
target dataset. The performance of state-of-the-art offline RL algorithms
notwithstanding, it relies on the quality and size of the target dataset and it
degrades if limited samples in the target dataset are available, which is often
the case in real-world applications. To address this issue, domain adaptation
that leverages auxiliary samples from related source datasets (such as
simulators) can be beneficial. However, establishing the optimal way to trade
off the source and target datasets while ensuring provably theoretical
guarantees remains an open challenge. To the best of our knowledge, this paper
proposes the first framework that theoretically explores the impact of the
weights assigned to each dataset on the performance of offline RL. In
particular, we establish performance bounds and the existence of an optimal
weight, which can be computed in closed form under simplifying assumptions. We
also provide algorithmic guarantees in terms of convergence to a neighborhood
of the optimum. Notably, these results depend on the quality of the source
dataset and the number of samples from the target dataset. Our empirical
results on the well-known Procgen benchmark substantiate our theoretical
contributions.
| no_new_dataset | 0.944587 |
2408.14608 | Lazar Atanackovic | Lazar Atanackovic, Xi Zhang, Brandon Amos, Mathieu Blanchette, Leo J.
Lee, Yoshua Bengio, Alexander Tong, Kirill Neklyudov | Meta Flow Matching: Integrating Vector Fields on the Wasserstein
Manifold | Accepted to ICLR 2025 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous biological and physical processes can be modeled as systems of
interacting entities evolving continuously over time, e.g. the dynamics of
communicating cells or physical particles. Learning the dynamics of such
systems is essential for predicting the temporal evolution of populations
across novel samples and unseen environments. Flow-based models allow for
learning these dynamics at the population level - they model the evolution of
the entire distribution of samples. However, current flow-based models are
limited to a single initial population and a set of predefined conditions which
describe different dynamics. We argue that multiple processes in natural
sciences have to be represented as vector fields on the Wasserstein manifold of
probability densities. That is, the change of the population at any moment in
time depends on the population itself due to the interactions between samples.
In particular, this is crucial for personalized medicine where the development
of diseases and their respective treatment response depend on the
microenvironment of cells specific to each patient. We propose Meta Flow
Matching (MFM), a practical approach to integrate along these vector fields on
the Wasserstein manifold by amortizing the flow model over the initial
populations. Namely, we embed the population of samples using a Graph Neural
Network (GNN) and use these embeddings to train a Flow Matching model. This
gives MFM the ability to generalize over the initial distributions, unlike
previously proposed methods. We demonstrate the ability of MFM to improve the
prediction of individual treatment responses on a large-scale multi-patient
single-cell drug screen dataset.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2024 20:05:31 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 23:31:16 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Atanackovic",
"Lazar",
""
],
[
"Zhang",
"Xi",
""
],
[
"Amos",
"Brandon",
""
],
[
"Blanchette",
"Mathieu",
""
],
[
"Lee",
"Leo J.",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Tong",
"Alexander",
""
],
[
"Neklyudov",
"Kirill",
""
]
]
| TITLE: Meta Flow Matching: Integrating Vector Fields on the Wasserstein
Manifold
ABSTRACT: Numerous biological and physical processes can be modeled as systems of
interacting entities evolving continuously over time, e.g. the dynamics of
communicating cells or physical particles. Learning the dynamics of such
systems is essential for predicting the temporal evolution of populations
across novel samples and unseen environments. Flow-based models allow for
learning these dynamics at the population level - they model the evolution of
the entire distribution of samples. However, current flow-based models are
limited to a single initial population and a set of predefined conditions which
describe different dynamics. We argue that multiple processes in natural
sciences have to be represented as vector fields on the Wasserstein manifold of
probability densities. That is, the change of the population at any moment in
time depends on the population itself due to the interactions between samples.
In particular, this is crucial for personalized medicine where the development
of diseases and their respective treatment response depend on the
microenvironment of cells specific to each patient. We propose Meta Flow
Matching (MFM), a practical approach to integrate along these vector fields on
the Wasserstein manifold by amortizing the flow model over the initial
populations. Namely, we embed the population of samples using a Graph Neural
Network (GNN) and use these embeddings to train a Flow Matching model. This
gives MFM the ability to generalize over the initial distributions, unlike
previously proposed methods. We demonstrate the ability of MFM to improve the
prediction of individual treatment responses on a large-scale multi-patient
single-cell drug screen dataset.
| no_new_dataset | 0.94887 |
2408.14769 | Yixuan Huang | Yixuan Huang, Christopher Agia, Jimmy Wu, Tucker Hermans, Jeannette
Bohg | Points2Plans: From Point Clouds to Long-Horizon Plans with Composable
Relational Dynamics | Project page: https://sites.google.com/stanford.edu/points2plans. 23
pages, 11 figures. Accepted to the IEEE International Conference on Robotics
and Automation (ICRA) 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Points2Plans, a framework for composable planning with a
relational dynamics model that enables robots to solve long-horizon
manipulation tasks from partial-view point clouds. Given a language instruction
and a point cloud of the scene, our framework initiates a hierarchical planning
procedure, whereby a language model generates a high-level plan and a
sampling-based planner produces constraint-satisfying continuous parameters for
manipulation primitives sequenced according to the high-level plan. Key to our
approach is the use of a relational dynamics model as a unifying interface
between the continuous and symbolic representations of states and actions, thus
facilitating language-driven planning from high-dimensional perceptual input
such as point clouds. Whereas previous relational dynamics models require
training on datasets of multi-step manipulation scenarios that align with the
intended test scenarios, Points2Plans uses only single-step simulated training
data while generalizing zero-shot to a variable number of steps during
real-world evaluations. We evaluate our approach on tasks involving geometric
reasoning, multi-object interactions, and occluded object reasoning in both
simulated and real-world settings. Results demonstrate that Points2Plans offers
strong generalization to unseen long-horizon tasks in the real world, where it
solves over 85% of evaluated tasks while the next best baseline solves only
50%.
| [
{
"version": "v1",
"created": "Tue, 27 Aug 2024 04:10:22 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 02:53:51 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Huang",
"Yixuan",
""
],
[
"Agia",
"Christopher",
""
],
[
"Wu",
"Jimmy",
""
],
[
"Hermans",
"Tucker",
""
],
[
"Bohg",
"Jeannette",
""
]
]
| TITLE: Points2Plans: From Point Clouds to Long-Horizon Plans with Composable
Relational Dynamics
ABSTRACT: We present Points2Plans, a framework for composable planning with a
relational dynamics model that enables robots to solve long-horizon
manipulation tasks from partial-view point clouds. Given a language instruction
and a point cloud of the scene, our framework initiates a hierarchical planning
procedure, whereby a language model generates a high-level plan and a
sampling-based planner produces constraint-satisfying continuous parameters for
manipulation primitives sequenced according to the high-level plan. Key to our
approach is the use of a relational dynamics model as a unifying interface
between the continuous and symbolic representations of states and actions, thus
facilitating language-driven planning from high-dimensional perceptual input
such as point clouds. Whereas previous relational dynamics models require
training on datasets of multi-step manipulation scenarios that align with the
intended test scenarios, Points2Plans uses only single-step simulated training
data while generalizing zero-shot to a variable number of steps during
real-world evaluations. We evaluate our approach on tasks involving geometric
reasoning, multi-object interactions, and occluded object reasoning in both
simulated and real-world settings. Results demonstrate that Points2Plans offers
strong generalization to unseen long-horizon tasks in the real world, where it
solves over 85% of evaluated tasks while the next best baseline solves only
50%.
| no_new_dataset | 0.954223 |
2408.16498 | Zhengran Zeng | Liguo Chen, Qi Guo, Hongrui Jia, Zhengran Zeng, Xin Wang, Yijiang Xu,
Jian Wu, Yidong Wang, Qing Gao, Jindong Wang, Wei Ye, Shikun Zhang | A Survey on Evaluating Large Language Models in Code Generation Tasks | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | This paper provides a comprehensive review of the current methods and metrics
used to evaluate the performance of Large Language Models (LLMs) in code
generation tasks. With the rapid growth in demand for automated software
development, LLMs have demonstrated significant potential in the field of code
generation. The paper begins by reviewing the historical development of LLMs
and their applications in code generation. Next, it details various methods and
metrics for assessing the code generation capabilities of LLMs, including code
correctness, efficiency, readability, and evaluation methods based on expert
review and user experience. The paper also evaluates the widely used benchmark
datasets, identifying their limitations and proposing directions for future
improvements. Specifically, the paper analyzes the performance of code
generation models across different tasks by combining multiple evaluation
metrics, such as code compilation/interpretation success rates, unit test pass
rates, and performance and efficiency metrics, to comprehensively assess the
practical application of LLMs in code generation. Finally, the paper discusses
the challenges faced in evaluating LLMs in code generation, particularly how to
ensure the comprehensiveness and accuracy of evaluation methods and how to
adapt to the evolving practices of software development. These analyses and
discussions provide valuable insights for further optimizing and improving the
application of LLMs in code generation tasks.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2024 12:56:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 09:13:23 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Chen",
"Liguo",
""
],
[
"Guo",
"Qi",
""
],
[
"Jia",
"Hongrui",
""
],
[
"Zeng",
"Zhengran",
""
],
[
"Wang",
"Xin",
""
],
[
"Xu",
"Yijiang",
""
],
[
"Wu",
"Jian",
""
],
[
"Wang",
"Yidong",
""
],
[
"Gao",
"Qing",
""
],
[
"Wang",
"Jindong",
""
],
[
"Ye",
"Wei",
""
],
[
"Zhang",
"Shikun",
""
]
]
| TITLE: A Survey on Evaluating Large Language Models in Code Generation Tasks
ABSTRACT: This paper provides a comprehensive review of the current methods and metrics
used to evaluate the performance of Large Language Models (LLMs) in code
generation tasks. With the rapid growth in demand for automated software
development, LLMs have demonstrated significant potential in the field of code
generation. The paper begins by reviewing the historical development of LLMs
and their applications in code generation. Next, it details various methods and
metrics for assessing the code generation capabilities of LLMs, including code
correctness, efficiency, readability, and evaluation methods based on expert
review and user experience. The paper also evaluates the widely used benchmark
datasets, identifying their limitations and proposing directions for future
improvements. Specifically, the paper analyzes the performance of code
generation models across different tasks by combining multiple evaluation
metrics, such as code compilation/interpretation success rates, unit test pass
rates, and performance and efficiency metrics, to comprehensively assess the
practical application of LLMs in code generation. Finally, the paper discusses
the challenges faced in evaluating LLMs in code generation, particularly how to
ensure the comprehensiveness and accuracy of evaluation methods and how to
adapt to the evolving practices of software development. These analyses and
discussions provide valuable insights for further optimizing and improving the
application of LLMs in code generation tasks.
| no_new_dataset | 0.949669 |
2409.01115 | Gildas Morvan | Mouhamadou Mansour Lo, Gildas Morvan, Mathieu Rossi, Fabrice Morganti,
David Mercier | Time series classification with random convolution kernels: pooling
operators and input representations matter | v1: initial version, incorrect evaluation. v2: Method improved,
evaluation corrected, title simplified | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents a new approach based on MiniRocket, called SelF-Rocket,
for fast time series classification (TSC). Unlike existing approaches based on
random convolution kernels, it dynamically selects the best pair of input
representation and pooling operator during the training process. SelF-Rocket
achieves state-of-the-art accuracy on the University of California Riverside
(UCR) TSC benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2024 09:42:17 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 07:52:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lo",
"Mouhamadou Mansour",
""
],
[
"Morvan",
"Gildas",
""
],
[
"Rossi",
"Mathieu",
""
],
[
"Morganti",
"Fabrice",
""
],
[
"Mercier",
"David",
""
]
]
| TITLE: Time series classification with random convolution kernels: pooling
operators and input representations matter
ABSTRACT: This article presents a new approach based on MiniRocket, called SelF-Rocket,
for fast time series classification (TSC). Unlike existing approaches based on
random convolution kernels, it dynamically selects the best pair of input
representation and pooling operator during the training process. SelF-Rocket
achieves state-of-the-art accuracy on the University of California Riverside
(UCR) TSC benchmark datasets.
| no_new_dataset | 0.956553 |
2409.01348 | Guanglei Zhou | Guanglei Zhou, Bhargav Korrapati, Gaurav Rajavendra Reddy, Chen-Chia
Chang, Jingyu Pan, Jiang Hu, Yiran Chen and Dipto G. Thakurta | PatternPaint: Practical Layout Pattern Generation Using Diffusion-Based
Inpainting | null | null | null | null | cs.CV cs.CE cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generating diverse VLSI layout patterns is essential for various downstream
tasks in design for manufacturing, as design rules continually evolve during
the development of new technology nodes. However, existing training-based
methods for layout pattern generation rely on large datasets. In practical
scenarios, especially when developing a new technology node, obtaining such
extensive layout data is challenging. Consequently, training models with large
datasets becomes impractical, limiting the scalability and adaptability of
prior approaches. To this end, we propose PatternPaint, a diffusion-based
framework capable of generating legal patterns with limited
design-rule-compliant training samples. PatternPaint simplifies complex layout
pattern generation into a series of inpainting processes with a template-based
denoising scheme. Furthermore, we perform few-shot finetuning on a pretrained
image foundation model with only 20 design-rule-compliant samples. Experimental
results show that using a sub-3nm technology node (Intel 18A), our model is the
only one that can generate legal patterns in complex 2D metal interconnect
design rule settings among all previous works and achieves a high diversity
score. Additionally, our few-shot finetuning can boost the legality rate by
1.87X compared to the original pretrained model. As a result, we
demonstrate a production-ready approach for layout pattern generation in
developing new technology nodes.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2024 16:02:26 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2024 23:24:03 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 05:29:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhou",
"Guanglei",
""
],
[
"Korrapati",
"Bhargav",
""
],
[
"Reddy",
"Gaurav Rajavendra",
""
],
[
"Chang",
"Chen-Chia",
""
],
[
"Pan",
"Jingyu",
""
],
[
"Hu",
"Jiang",
""
],
[
"Chen",
"Yiran",
""
],
[
"Thakurta",
"Dipto G.",
""
]
]
| TITLE: PatternPaint: Practical Layout Pattern Generation Using Diffusion-Based
Inpainting
ABSTRACT: Generating diverse VLSI layout patterns is essential for various downstream
tasks in design for manufacturing, as design rules continually evolve during
the development of new technology nodes. However, existing training-based
methods for layout pattern generation rely on large datasets. In practical
scenarios, especially when developing a new technology node, obtaining such
extensive layout data is challenging. Consequently, training models with large
datasets becomes impractical, limiting the scalability and adaptability of
prior approaches. To this end, we propose PatternPaint, a diffusion-based
framework capable of generating legal patterns with limited
design-rule-compliant training samples. PatternPaint simplifies complex layout
pattern generation into a series of inpainting processes with a template-based
denoising scheme. Furthermore, we perform few-shot finetuning on a pretrained
image foundation model with only 20 design-rule-compliant samples. Experimental
results show that using a sub-3nm technology node (Intel 18A), our model is the
only one that can generate legal patterns in complex 2D metal interconnect
design rule settings among all previous works and achieves a high diversity
score. Additionally, our few-shot finetuning can boost the legality rate by
1.87X compared to the original pretrained model. As a result, we
demonstrate a production-ready approach for layout pattern generation in
developing new technology nodes.
| no_new_dataset | 0.948965 |
2409.04429 | Yecheng Wu | Yecheng Wu, Zhuoyang Zhang, Junyu Chen, Haotian Tang, Dacheng Li,
Yunhao Fang, Ligeng Zhu, Enze Xie, Hongxu Yin, Li Yi, Song Han, Yao Lu | VILA-U: a Unified Foundation Model Integrating Visual Understanding and
Generation | Code: https://github.com/mit-han-lab/vila-u. The first two authors
contributed equally to this work | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | VILA-U is a Unified foundation model that integrates Video, Image, Language
understanding and generation. Traditional visual language models (VLMs) use
separate modules for understanding and generating visual content, which can
lead to misalignment and increased complexity. In contrast, VILA-U employs a
single autoregressive next-token prediction framework for both tasks,
eliminating the need for additional components like diffusion models. This
approach not only simplifies the model but also achieves near state-of-the-art
performance in visual language understanding and generation. The success of
VILA-U is attributed to two main factors: the unified vision tower that aligns
discrete visual tokens with textual inputs during pretraining, which enhances
visual perception, and autoregressive image generation can achieve similar
quality as diffusion models with a high-quality dataset. This allows VILA-U to
perform comparably to more complex models using a fully token-based
autoregressive framework.
| [
{
"version": "v1",
"created": "Fri, 6 Sep 2024 17:49:56 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Oct 2024 16:42:06 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 16:31:57 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Wu",
"Yecheng",
""
],
[
"Zhang",
"Zhuoyang",
""
],
[
"Chen",
"Junyu",
""
],
[
"Tang",
"Haotian",
""
],
[
"Li",
"Dacheng",
""
],
[
"Fang",
"Yunhao",
""
],
[
"Zhu",
"Ligeng",
""
],
[
"Xie",
"Enze",
""
],
[
"Yin",
"Hongxu",
""
],
[
"Yi",
"Li",
""
],
[
"Han",
"Song",
""
],
[
"Lu",
"Yao",
""
]
]
| TITLE: VILA-U: a Unified Foundation Model Integrating Visual Understanding and
Generation
ABSTRACT: VILA-U is a Unified foundation model that integrates Video, Image, Language
understanding and generation. Traditional visual language models (VLMs) use
separate modules for understanding and generating visual content, which can
lead to misalignment and increased complexity. In contrast, VILA-U employs a
single autoregressive next-token prediction framework for both tasks,
eliminating the need for additional components like diffusion models. This
approach not only simplifies the model but also achieves near state-of-the-art
performance in visual language understanding and generation. The success of
VILA-U is attributed to two main factors: the unified vision tower that aligns
discrete visual tokens with textual inputs during pretraining, which enhances
visual perception, and autoregressive image generation can achieve similar
quality as diffusion models with a high-quality dataset. This allows VILA-U to
perform comparably to more complex models using a fully token-based
autoregressive framework.
| no_new_dataset | 0.95418 |
2409.06948 | Anbo Tao | Anbo Tao, Yarong Luo, Chunxi Xia, Chi Guo and Xingxing Li | Equivariant Filter for Tightly Coupled LiDAR-Inertial Odometry | Accepted by ICRA 2025 | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pose estimation is a crucial problem in simultaneous localization and mapping
(SLAM). However, developing a robust and consistent state estimator remains a
significant challenge, as the traditional extended Kalman filter (EKF)
struggles to handle the model nonlinearity, especially for inertial measurement
unit (IMU) and light detection and ranging (LiDAR). To provide a consistent and
efficient solution of pose estimation, we propose Eq-LIO, a robust state
estimator for tightly coupled LIO systems based on an equivariant filter (EqF).
Compared with the invariant Kalman filter based on the $\mathrm{SE}_2(3)$ group
structure, the EqF uses the symmetry of the semi-direct product group to couple
the system state including IMU bias, navigation state and LiDAR extrinsic
calibration state, thereby suppressing linearization error and improving the
behavior of the estimator in the event of unexpected state changes. The
proposed Eq-LIO offers natural consistency and higher robustness, which is
theoretically proven with mathematical derivation and experimentally verified
through a series of tests on both public and private datasets.
| [
{
"version": "v1",
"created": "Wed, 11 Sep 2024 02:00:54 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 10:38:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tao",
"Anbo",
""
],
[
"Luo",
"Yarong",
""
],
[
"Xia",
"Chunxi",
""
],
[
"Guo",
"Chi",
""
],
[
"Li",
"Xingxing",
""
]
]
| TITLE: Equivariant Filter for Tightly Coupled LiDAR-Inertial Odometry
ABSTRACT: Pose estimation is a crucial problem in simultaneous localization and mapping
(SLAM). However, developing a robust and consistent state estimator remains a
significant challenge, as the traditional extended Kalman filter (EKF)
struggles to handle the model nonlinearity, especially for inertial measurement
unit (IMU) and light detection and ranging (LiDAR). To provide a consistent and
efficient solution of pose estimation, we propose Eq-LIO, a robust state
estimator for tightly coupled LIO systems based on an equivariant filter (EqF).
Compared with the invariant Kalman filter based on the $\mathrm{SE}_2(3)$ group
structure, the EqF uses the symmetry of the semi-direct product group to couple
the system state including IMU bias, navigation state and LiDAR extrinsic
calibration state, thereby suppressing linearization error and improving the
behavior of the estimator in the event of unexpected state changes. The
proposed Eq-LIO offers natural consistency and higher robustness, which is
theoretically proven with mathematical derivation and experimentally verified
through a series of tests on both public and private datasets.
| no_new_dataset | 0.942454 |
2409.10095 | Huy-Dung Nguyen | Huy-Dung Nguyen, Anass Bairouk, Mirjana Maras, Wei Xiao, Tsun-Hsuan
Wang, Patrick Chareyre, Ramin Hasani, Marc Blanchon, Daniela Rus | Human Insights Driven Latent Space for Different Driving Perspectives: A
Unified Encoder for Efficient Multi-Task Inference | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving systems require a comprehensive understanding of the
environment, achieved by extracting visual features essential for perception,
planning, and control. However, models trained solely on single-task objectives
or generic datasets often lack the contextual information needed for robust
performance in complex driving scenarios. In this work, we propose a unified
encoder trained on multiple computer vision tasks crucial for urban driving,
including depth, pose, and 3D scene flow estimation, as well as semantic,
instance, panoptic, and motion segmentation. By integrating these diverse
visual cues-similar to human perceptual mechanisms-the encoder captures rich
features that enhance navigation-related predictions. We evaluate the model on
steering estimation as a downstream task, leveraging its dense latent space. To
ensure efficient multi-task learning, we introduce a multi-scale feature
network for pose estimation and apply knowledge distillation from a
multi-backbone teacher model. Our evaluation highlights two key findings: (1) the
unified encoder achieves competitive performance across all visual perception
tasks, demonstrating strong generalization capabilities; and (2) for steering
estimation, the frozen unified encoder-leveraging dense latent
representations-outperforms both its fine-tuned counterpart and the same frozen
model pretrained on generic datasets like ImageNet. These results underline the
significance of task-specific visual features and demonstrate the promise of
multi-task learning in advancing autonomous driving systems. More details and
the pretrained model are available at
https://hi-computervision.github.io/uni-encoder/.
| [
{
"version": "v1",
"created": "Mon, 16 Sep 2024 08:54:03 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 09:35:01 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Nguyen",
"Huy-Dung",
""
],
[
"Bairouk",
"Anass",
""
],
[
"Maras",
"Mirjana",
""
],
[
"Xiao",
"Wei",
""
],
[
"Wang",
"Tsun-Hsuan",
""
],
[
"Chareyre",
"Patrick",
""
],
[
"Hasani",
"Ramin",
""
],
[
"Blanchon",
"Marc",
""
],
[
"Rus",
"Daniela",
""
]
]
| TITLE: Human Insights Driven Latent Space for Different Driving Perspectives: A
Unified Encoder for Efficient Multi-Task Inference
ABSTRACT: Autonomous driving systems require a comprehensive understanding of the
environment, achieved by extracting visual features essential for perception,
planning, and control. However, models trained solely on single-task objectives
or generic datasets often lack the contextual information needed for robust
performance in complex driving scenarios. In this work, we propose a unified
encoder trained on multiple computer vision tasks crucial for urban driving,
including depth, pose, and 3D scene flow estimation, as well as semantic,
instance, panoptic, and motion segmentation. By integrating these diverse
visual cues-similar to human perceptual mechanisms-the encoder captures rich
features that enhance navigation-related predictions. We evaluate the model on
steering estimation as a downstream task, leveraging its dense latent space. To
ensure efficient multi-task learning, we introduce a multi-scale feature
network for pose estimation and apply knowledge distillation from a
multi-backbone teacher model. Our analysis highlights two key findings: (1) the
unified encoder achieves competitive performance across all visual perception
tasks, demonstrating strong generalization capabilities; and (2) for steering
estimation, the frozen unified encoder-leveraging dense latent
representations-outperforms both its fine-tuned counterpart and the same frozen
model pretrained on generic datasets like ImageNet. These results underline the
significance of task-specific visual features and demonstrate the promise of
multi-task learning in advancing autonomous driving systems. More details and
the pretrained model are available at
https://hi-computervision.github.io/uni-encoder/.
| no_new_dataset | 0.947284 |
2409.13112 | Mostafa Rahimi Azghadi | Adrian Langley, Matthew Lonergan, Tao Huang, Mostafa Rahimi Azghadi | Analyzing mixed construction and demolition waste in material recovery
facilities: evolution, challenges, and applications of computer vision and
deep learning | null | Resources, Conservation and Recycling Volume 217, May 2025, 108218 | 10.1016/j.resconrec.2025.108218 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Improving the automatic and timely recognition of construction and demolition
waste composition is crucial for enhancing business returns, economic outcomes
and sustainability. While deep learning models show promise in recognizing and
classifying homogenous materials, the current literature lacks research
assessing their performance for mixed, contaminated material in commercial
material recycling facility settings. Despite the increasing numbers of deep
learning models and datasets generated in this area, the sub-domain of deep
learning analysis of construction and demolition waste piles remains
underexplored. To address this gap, recent deep learning algorithms and
techniques were explored. This review examines the progression in datasets,
sensors and the evolution from object detection towards real-time segmentation
models. It also synthesizes research from the past five years on deep learning
for construction and demolition waste management, highlighting recent
advancements while acknowledging limitations that hinder widespread commercial
adoption. The analysis underscores the critical requirement for diverse and
high-fidelity datasets, advanced sensor technologies, and robust algorithmic
frameworks to facilitate the effective integration of deep learning
methodologies into construction and demolition waste management systems. This
integration is envisioned to contribute significantly towards the advancement
of a more sustainable and circular economic model.
| [
{
"version": "v1",
"created": "Thu, 19 Sep 2024 22:38:26 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 20:48:28 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Langley",
"Adrian",
""
],
[
"Lonergan",
"Matthew",
""
],
[
"Huang",
"Tao",
""
],
[
"Azghadi",
"Mostafa Rahimi",
""
]
]
| TITLE: Analyzing mixed construction and demolition waste in material recovery
facilities: evolution, challenges, and applications of computer vision and
deep learning
ABSTRACT: Improving the automatic and timely recognition of construction and demolition
waste composition is crucial for enhancing business returns, economic outcomes
and sustainability. While deep learning models show promise in recognizing and
classifying homogenous materials, the current literature lacks research
assessing their performance for mixed, contaminated material in commercial
material recycling facility settings. Despite the increasing numbers of deep
learning models and datasets generated in this area, the sub-domain of deep
learning analysis of construction and demolition waste piles remains
underexplored. To address this gap, recent deep learning algorithms and
techniques were explored. This review examines the progression in datasets,
sensors and the evolution from object detection towards real-time segmentation
models. It also synthesizes research from the past five years on deep learning
for construction and demolition waste management, highlighting recent
advancements while acknowledging limitations that hinder widespread commercial
adoption. The analysis underscores the critical requirement for diverse and
high-fidelity datasets, advanced sensor technologies, and robust algorithmic
frameworks to facilitate the effective integration of deep learning
methodologies into construction and demolition waste management systems. This
integration is envisioned to contribute significantly towards the advancement
of a more sustainable and circular economic model.
| no_new_dataset | 0.945901 |
2409.14623 | Clementine Domine | Cl\'ementine C. J. Domin\'e, and Nicolas Anguita, and Alexandra M.
Proca, and Lukas Braun, and Daniel Kunin, and Pedro A. M. Mediano, and Andrew
M. Saxe | From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks | 10 pages, 8 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological and artificial neural networks develop internal representations
that enable them to perform complex tasks. In artificial networks, the
effectiveness of these models relies on their ability to build task specific
representation, a process influenced by interactions among datasets,
architectures, initialization strategies, and optimization algorithms. Prior
studies highlight that different initializations can place networks in either a
lazy regime, where representations remain static, or a rich/feature learning
regime, where representations evolve dynamically. Here, we examine how
initialization influences learning dynamics in deep linear neural networks,
deriving exact solutions for lambda-balanced initializations-defined by the
relative scale of weights across layers. These solutions capture the evolution
of representations and the Neural Tangent Kernel across the spectrum from the
rich to the lazy regimes. Our findings deepen the theoretical understanding of
the impact of weight initialization on learning regimes, with implications for
continual learning, reversal learning, and transfer learning, relevant to both
neuroscience and practical applications.
| [
{
"version": "v1",
"created": "Sun, 22 Sep 2024 23:19:04 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 11:18:33 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Dominé",
"Clémentine C. J.",
""
],
[
"Anguita",
"Nicolas",
""
],
[
"Proca",
"Alexandra M.",
""
],
[
"Braun",
"Lukas",
""
],
[
"Kunin",
"Daniel",
""
],
[
"Mediano",
"Pedro A. M.",
""
],
[
"Saxe",
"Andrew M.",
""
]
]
| TITLE: From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks
ABSTRACT: Biological and artificial neural networks develop internal representations
that enable them to perform complex tasks. In artificial networks, the
effectiveness of these models relies on their ability to build task specific
representation, a process influenced by interactions among datasets,
architectures, initialization strategies, and optimization algorithms. Prior
studies highlight that different initializations can place networks in either a
lazy regime, where representations remain static, or a rich/feature learning
regime, where representations evolve dynamically. Here, we examine how
initialization influences learning dynamics in deep linear neural networks,
deriving exact solutions for lambda-balanced initializations-defined by the
relative scale of weights across layers. These solutions capture the evolution
of representations and the Neural Tangent Kernel across the spectrum from the
rich to the lazy regimes. Our findings deepen the theoretical understanding of
the impact of weight initialization on learning regimes, with implications for
continual learning, reversal learning, and transfer learning, relevant to both
neuroscience and practical applications.
| no_new_dataset | 0.946646 |
2409.15374 | Suryansh Vidya | Suryansh Vidya, Kush Gupta, Amir Aly, Andy Wills, Emmanuel Ifeachor
and Rohit Shankar | Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions
Using fMRI Data | This work has been submitted to the IEEE for possible publication | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Early diagnosis and intervention for Autism Spectrum Disorder (ASD) has been
shown to significantly improve the quality of life of autistic individuals.
However, diagnostics methods for ASD rely on assessments based on clinical
presentation that are prone to bias and can be challenging to arrive at an
early diagnosis. There is a need for objective biomarkers of ASD which can help
improve diagnostic accuracy. Deep learning (DL) has achieved outstanding
performance in diagnosing diseases and conditions from medical imaging data.
Extensive research has been conducted on creating models that classify ASD
using resting-state functional Magnetic Resonance Imaging (fMRI) data. However,
existing models lack interpretability. This research aims to improve the
accuracy and interpretability of ASD diagnosis by creating a DL model that can
not only accurately classify ASD but also provide explainable insights into its
working. The dataset used is a preprocessed version of the Autism Brain Imaging
Data Exchange (ABIDE) with 884 samples. Our findings show a model that can
accurately classify ASD and highlight critical brain regions differing between
ASD and typical controls, with potential implications for early diagnosis and
understanding of the neural basis of ASD. These findings are validated by
studies in the literature that use different datasets and modalities,
confirming that the model actually learned characteristics of ASD and not just
the dataset. This study advances the field of explainable AI in medical imaging
by providing a robust and interpretable model, thereby contributing to a future
with objective and reliable ASD diagnostics.
| [
{
"version": "v1",
"created": "Thu, 19 Sep 2024 23:08:09 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 00:46:19 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Vidya",
"Suryansh",
""
],
[
"Gupta",
"Kush",
""
],
[
"Aly",
"Amir",
""
],
[
"Wills",
"Andy",
""
],
[
"Ifeachor",
"Emmanuel",
""
],
[
"Shankar",
"Rohit",
""
]
]
| TITLE: Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions
Using fMRI Data
ABSTRACT: Early diagnosis and intervention for Autism Spectrum Disorder (ASD) has been
shown to significantly improve the quality of life of autistic individuals.
However, diagnostics methods for ASD rely on assessments based on clinical
presentation that are prone to bias and can be challenging to arrive at an
early diagnosis. There is a need for objective biomarkers of ASD which can help
improve diagnostic accuracy. Deep learning (DL) has achieved outstanding
performance in diagnosing diseases and conditions from medical imaging data.
Extensive research has been conducted on creating models that classify ASD
using resting-state functional Magnetic Resonance Imaging (fMRI) data. However,
existing models lack interpretability. This research aims to improve the
accuracy and interpretability of ASD diagnosis by creating a DL model that can
not only accurately classify ASD but also provide explainable insights into its
working. The dataset used is a preprocessed version of the Autism Brain Imaging
Data Exchange (ABIDE) with 884 samples. Our findings show a model that can
accurately classify ASD and highlight critical brain regions differing between
ASD and typical controls, with potential implications for early diagnosis and
understanding of the neural basis of ASD. These findings are validated by
studies in the literature that use different datasets and modalities,
confirming that the model actually learned characteristics of ASD and not just
the dataset. This study advances the field of explainable AI in medical imaging
by providing a robust and interpretable model, thereby contributing to a future
with objective and reliable ASD diagnostics.
| no_new_dataset | 0.941493 |
2409.16850 | Chun-Jung Lin | Chun-Jung Lin, Sourav Garg, Tat-Jun Chin, Feras Dayoub | Robust Scene Change Detection Using Visual Foundation Models and
Cross-Attention Mechanisms | 7 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method for scene change detection that leverages the
robust feature extraction capabilities of a visual foundational model, DINOv2,
and integrates full-image cross-attention to address key challenges such as
varying lighting, seasonal variations, and viewpoint differences. In order to
effectively learn correspondences and mis-correspondences between an image pair
for the change detection task, we propose to a) ``freeze'' the backbone in
order to retain the generality of dense foundation features, and b) employ
``full-image'' cross-attention to better tackle the viewpoint variations
between the image pair. We evaluate our approach on two benchmark datasets,
VL-CMU-CD and PSCD, along with their viewpoint-varied versions. Our experiments
demonstrate significant improvements in F1-score, particularly in scenarios
involving geometric changes between image pairs. The results indicate our
method's superior generalization capabilities over existing state-of-the-art
approaches, showing robustness against photometric and geometric variations as
well as better overall generalization when fine-tuned to adapt to new
environments. Detailed ablation studies further validate the contributions of
each component in our architecture. Our source code is available at:
https://github.com/ChadLin9596/Robust-Scene-Change-Detection.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 11:55:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2025 06:25:58 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 02:16:30 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lin",
"Chun-Jung",
""
],
[
"Garg",
"Sourav",
""
],
[
"Chin",
"Tat-Jun",
""
],
[
"Dayoub",
"Feras",
""
]
]
| TITLE: Robust Scene Change Detection Using Visual Foundation Models and
Cross-Attention Mechanisms
ABSTRACT: We present a novel method for scene change detection that leverages the
robust feature extraction capabilities of a visual foundational model, DINOv2,
and integrates full-image cross-attention to address key challenges such as
varying lighting, seasonal variations, and viewpoint differences. In order to
effectively learn correspondences and mis-correspondences between an image pair
for the change detection task, we propose to a) ``freeze'' the backbone in
order to retain the generality of dense foundation features, and b) employ
``full-image'' cross-attention to better tackle the viewpoint variations
between the image pair. We evaluate our approach on two benchmark datasets,
VL-CMU-CD and PSCD, along with their viewpoint-varied versions. Our experiments
demonstrate significant improvements in F1-score, particularly in scenarios
involving geometric changes between image pairs. The results indicate our
method's superior generalization capabilities over existing state-of-the-art
approaches, showing robustness against photometric and geometric variations as
well as better overall generalization when fine-tuned to adapt to new
environments. Detailed ablation studies further validate the contributions of
each component in our architecture. Our source code is available at:
https://github.com/ChadLin9596/Robust-Scene-Change-Detection.
| no_new_dataset | 0.949248 |
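As a rough illustration of the full-image cross-attention described in the record above, the following sketch exchanges dense patch features from a frozen backbone between the two images of a pair. It is not the authors' code: the feature dimension, the head count, and the use of `nn.MultiheadAttention` are assumptions made purely for illustration.

```python
# Hedged sketch only: cross-attention between dense features of an image pair,
# loosely mirroring the scene-change-detection abstract above. Dimensions and
# the choice of nn.MultiheadAttention are assumptions, not the authors' design.
import torch
import torch.nn as nn

class CrossImageAttention(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor):
        # feats_*: (batch, num_patches, dim) dense features from a frozen backbone.
        a2b, _ = self.attn(query=feats_a, key=feats_b, value=feats_b)
        b2a, _ = self.attn(query=feats_b, key=feats_a, value=feats_a)
        return a2b, b2a

# Toy usage with random "DINOv2-like" features for a pair of images.
fa, fb = torch.randn(2, 256, 768), torch.randn(2, 256, 768)
out_a, out_b = CrossImageAttention()(fa, fb)
```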
2409.20356 | Pablo Rodriguez-Grasa | Pablo Rodriguez-Grasa, Robert Farzan-Rodriguez, Gabriele Novelli, Yue
Ban, Mikel Sanz | Satellite image classification with neural quantum kernels | null | Machine Learning: Science and Technology, 6(1), 015043, 2025 | 10.1088/2632-2153/ada86c | null | quant-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | Achieving practical applications of quantum machine learning for real-world
scenarios remains challenging despite significant theoretical progress. This
paper proposes a novel approach for classifying satellite images, a task of
particular relevance to the earth observation (EO) industry, using quantum
machine learning techniques. Specifically, we focus on classifying images that
contain solar panels, addressing a complex real-world classification problem.
Our approach begins with classical pre-processing to reduce the dimensionality
of the satellite image dataset. We then apply neural quantum kernels
(NQKs)-quantum kernels derived from trained quantum neural networks (QNNs)-for
classification. We evaluate several strategies within this framework,
demonstrating results that are competitive with the best classical methods. Key
findings include the robustness of or results and their scalability, with
successful performance achieved up to 8 qubits.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 14:52:00 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 08:26:23 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Rodriguez-Grasa",
"Pablo",
""
],
[
"Farzan-Rodriguez",
"Robert",
""
],
[
"Novelli",
"Gabriele",
""
],
[
"Ban",
"Yue",
""
],
[
"Sanz",
"Mikel",
""
]
]
| TITLE: Satellite image classification with neural quantum kernels
ABSTRACT: Achieving practical applications of quantum machine learning for real-world
scenarios remains challenging despite significant theoretical progress. This
paper proposes a novel approach for classifying satellite images, a task of
particular relevance to the earth observation (EO) industry, using quantum
machine learning techniques. Specifically, we focus on classifying images that
contain solar panels, addressing a complex real-world classification problem.
Our approach begins with classical pre-processing to reduce the dimensionality
of the satellite image dataset. We then apply neural quantum kernels
(NQKs)-quantum kernels derived from trained quantum neural networks (QNNs)-for
classification. We evaluate several strategies within this framework,
demonstrating results that are competitive with the best classical methods. Key
findings include the robustness of our results and their scalability, with
successful performance achieved up to 8 qubits.
| no_new_dataset | 0.945298 |
2410.00911 | Da-Wei Zhou | Da-Wei Zhou, Zi-Wen Cai, Han-Jia Ye, Lijun Zhang, De-Chuan Zhan | Dual Consolidation for Pre-Trained Model-Based Domain-Incremental
Learning | Accepted to CVPR 2025. Code is available at
https://github.com/Estrella-fugaz/CVPR25-Duct | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain-Incremental Learning (DIL) involves the progressive adaptation of a
model to new concepts across different domains. While recent advances in
pre-trained models provide a solid foundation for DIL, learning new concepts
often results in the catastrophic forgetting of pre-trained knowledge.
Specifically, sequential model updates can overwrite both the representation
and the classifier with knowledge from the latest domain. Thus, it is crucial
to develop a representation and corresponding classifier that accommodate all
seen domains throughout the learning process. To this end, we propose DUal
ConsolidaTion (Duct) to unify and consolidate historical knowledge at both the
representation and classifier levels. By merging the backbone of different
stages, we create a representation space suitable for multiple domains
incrementally. The merged representation serves as a balanced intermediary that
captures task-specific features from all seen domains. Additionally, to address
the mismatch between consolidated embeddings and the classifier, we introduce
an extra classifier consolidation process. Leveraging class-wise semantic
information, we estimate the classifier weights of old domains within the
latest embedding space. By merging historical and estimated classifiers, we
align them with the consolidated embedding space, facilitating incremental
classification. Extensive experimental results on four benchmark datasets
demonstrate Duct's state-of-the-art performance. Code is available at
https://github.com/Estrella-fugaz/CVPR25-Duct
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 17:58:06 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 12:45:15 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Zhou",
"Da-Wei",
""
],
[
"Cai",
"Zi-Wen",
""
],
[
"Ye",
"Han-Jia",
""
],
[
"Zhang",
"Lijun",
""
],
[
"Zhan",
"De-Chuan",
""
]
]
| TITLE: Dual Consolidation for Pre-Trained Model-Based Domain-Incremental
Learning
ABSTRACT: Domain-Incremental Learning (DIL) involves the progressive adaptation of a
model to new concepts across different domains. While recent advances in
pre-trained models provide a solid foundation for DIL, learning new concepts
often results in the catastrophic forgetting of pre-trained knowledge.
Specifically, sequential model updates can overwrite both the representation
and the classifier with knowledge from the latest domain. Thus, it is crucial
to develop a representation and corresponding classifier that accommodate all
seen domains throughout the learning process. To this end, we propose DUal
ConsolidaTion (Duct) to unify and consolidate historical knowledge at both the
representation and classifier levels. By merging the backbone of different
stages, we create a representation space suitable for multiple domains
incrementally. The merged representation serves as a balanced intermediary that
captures task-specific features from all seen domains. Additionally, to address
the mismatch between consolidated embeddings and the classifier, we introduce
an extra classifier consolidation process. Leveraging class-wise semantic
information, we estimate the classifier weights of old domains within the
latest embedding space. By merging historical and estimated classifiers, we
align them with the consolidated embedding space, facilitating incremental
classification. Extensive experimental results on four benchmark datasets
demonstrate Duct's state-of-the-art performance. Code is available at
https://github.com/Estrella-fugaz/CVPR25-Duct
| no_new_dataset | 0.949856 |
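The Duct abstract above centers on merging backbones from different incremental stages and consolidating classifiers in the merged embedding space. Below is a minimal, hedged sketch of one possible merging step — plain parameter averaging of state dicts. Equal weighting and the helper name are assumptions for illustration, not the paper's actual procedure.

```python
# Illustrative only: average the parameters of backbones trained at different
# incremental stages. Equal weights are an assumption made for this sketch.
import copy
import torch

def merge_state_dicts(state_dicts):
    """Average a list of state dicts that share identical keys and shapes."""
    merged = copy.deepcopy(state_dicts[0])
    for key in merged:
        # Cast to float so integer buffers (e.g. BatchNorm counters) can be averaged too.
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged
```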
2410.02712 | Tianyi Xiong | Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan
Gu, Heng Huang, Chunyuan Li | LLaVA-Critic: Learning to Evaluate Multimodal Models | Accepted by CVPR 2025; Project Page:
https://llava-vl.github.io/blog/2024-10-03-llava-critic | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce LLaVA-Critic, the first open-source large multimodal model (LMM)
designed as a generalist evaluator to assess performance across a wide range of
multimodal tasks. LLaVA-Critic is trained using a high-quality critic
instruction-following dataset that incorporates diverse evaluation criteria and
scenarios. Our experiments demonstrate the model's effectiveness in two key
areas: (1) LMM-as-a-Judge, where LLaVA-Critic provides reliable evaluation
scores, performing on par with or surpassing GPT models on multiple evaluation
benchmarks; and (2) Preference Learning, where it generates reward signals for
preference learning, enhancing model alignment capabilities. This work
underscores the potential of open-source LMMs in self-critique and evaluation,
setting the stage for future research into scalable, superhuman alignment
feedback mechanisms for LMMs.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 17:36:33 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 00:49:07 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xiong",
"Tianyi",
""
],
[
"Wang",
"Xiyao",
""
],
[
"Guo",
"Dong",
""
],
[
"Ye",
"Qinghao",
""
],
[
"Fan",
"Haoqi",
""
],
[
"Gu",
"Quanquan",
""
],
[
"Huang",
"Heng",
""
],
[
"Li",
"Chunyuan",
""
]
]
| TITLE: LLaVA-Critic: Learning to Evaluate Multimodal Models
ABSTRACT: We introduce LLaVA-Critic, the first open-source large multimodal model (LMM)
designed as a generalist evaluator to assess performance across a wide range of
multimodal tasks. LLaVA-Critic is trained using a high-quality critic
instruction-following dataset that incorporates diverse evaluation criteria and
scenarios. Our experiments demonstrate the model's effectiveness in two key
areas: (1) LMM-as-a-Judge, where LLaVA-Critic provides reliable evaluation
scores, performing on par with or surpassing GPT models on multiple evaluation
benchmarks; and (2) Preference Learning, where it generates reward signals for
preference learning, enhancing model alignment capabilities. This work
underscores the potential of open-source LMMs in self-critique and evaluation,
setting the stage for future research into scalable, superhuman alignment
feedback mechanisms for LMMs.
| no_new_dataset | 0.931213 |
2410.05472 | Andrey Grabovoy | Alidar Asvarov and Andrey Grabovoy | Neural machine translation system for Lezgian, Russian and Azerbaijani
languages | null | null | 10.1109/ISPRAS64596.2024.10899143 | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | We release the first neural machine translation system for translation
between Russian, Azerbaijani and the endangered Lezgian languages, as well as
monolingual and parallel datasets collected and aligned for training and
evaluating the system. Multiple experiments are conducted to identify how
different sets of training language pairs and data domains can influence the
resulting translation quality. We achieve BLEU scores of 26.14 for
Lezgian-Azerbaijani, 22.89 for Azerbaijani-Lezgian, 29.48 for Lezgian-Russian
and 24.25 for Russian-Lezgian pairs. The quality of zero-shot translation is
assessed on a Large Language Model, showing its high level of fluency in
Lezgian. However, the model often refuses to translate, justifying itself with
its incompetence. We contribute our translation model along with the collected
parallel and monolingual corpora and sentence encoder for the Lezgian language.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 20:08:10 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Asvarov",
"Alidar",
""
],
[
"Grabovoy",
"Andrey",
""
]
]
| TITLE: Neural machine translation system for Lezgian, Russian and Azerbaijani
languages
ABSTRACT: We release the first neural machine translation system for translation
between Russian, Azerbaijani and the endangered Lezgian languages, as well as
monolingual and parallel datasets collected and aligned for training and
evaluating the system. Multiple experiments are conducted to identify how
different sets of training language pairs and data domains can influence the
resulting translation quality. We achieve BLEU scores of 26.14 for
Lezgian-Azerbaijani, 22.89 for Azerbaijani-Lezgian, 29.48 for Lezgian-Russian
and 24.25 for Russian-Lezgian pairs. The quality of zero-shot translation is
assessed on a Large Language Model, showing its high level of fluency in
Lezgian. However, the model often refuses to translate, justifying itself with
its incompetence. We contribute our translation model along with the collected
parallel and monolingual corpora and sentence encoder for the Lezgian language.
| no_new_dataset | 0.889721 |
2410.05500 | Ray Congrui Yu | Ray Congrui Yu, Sherry Wu, Jiang Gui | Residual Kolmogorov-Arnold Network for Enhanced Deep Learning | Code is available at https://github.com/withray/residualKAN.git | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite their immense success, deep neural networks (CNNs) are costly to
train, while modern architectures can retain hundreds of convolutional layers
in network depth. Standard convolutional operations are fundamentally limited
by their linear nature along with fixed activations, where multiple layers are
needed to learn complex patterns, making this approach computationally
inefficient and prone to optimization difficulties. As a result, we introduce
RKAN (Residual Kolmogorov-Arnold Network), which could be easily implemented
into stages of traditional networks, such as ResNet. The module also integrates
polynomial feature transformation that provides the expressive power of many
convolutional layers through learnable, non-linear feature refinement. Our
proposed RKAN module offers consistent improvements over the base models on
various well-known benchmark datasets, such as CIFAR-100, Food-101, and
ImageNet.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 21:12:32 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:34:37 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Yu",
"Ray Congrui",
""
],
[
"Wu",
"Sherry",
""
],
[
"Gui",
"Jiang",
""
]
]
| TITLE: Residual Kolmogorov-Arnold Network for Enhanced Deep Learning
ABSTRACT: Despite their immense success, deep neural networks (CNNs) are costly to
train, while modern architectures can retain hundreds of convolutional layers
in network depth. Standard convolutional operations are fundamentally limited
by their linear nature along with fixed activations, where multiple layers are
needed to learn complex patterns, making this approach computationally
inefficient and prone to optimization difficulties. As a result, we introduce
RKAN (Residual Kolmogorov-Arnold Network), which could be easily implemented
into stages of traditional networks, such as ResNet. The module also integrates
polynomial feature transformation that provides the expressive power of many
convolutional layers through learnable, non-linear feature refinement. Our
proposed RKAN module offers consistent improvements over the base models on
various well-known benchmark datasets, such as CIFAR-100, Food-101, and
ImageNet.
| no_new_dataset | 0.946349 |
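To make the RKAN description above more concrete, here is a small, speculative sketch of a residual polynomial-refinement branch wrapped around an existing convolutional stage. The class names, the plain power-series basis, and the 1x1-convolution mixing are all assumptions; the actual RKAN module in the authors' repository may differ substantially.

```python
# Hedged sketch of a "residual polynomial refinement" stage, loosely following
# the RKAN abstract above. Not the authors' implementation.
import torch
import torch.nn as nn

class PolynomialRefinement(nn.Module):
    """Learnable combination of element-wise powers of the input feature map."""
    def __init__(self, channels: int, degree: int = 3):
        super().__init__()
        # One 1x1 convolution per polynomial order mixes channels after the power.
        self.mix = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(degree)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.zeros_like(x)
        for k, conv in enumerate(self.mix, start=1):
            out = out + conv(torch.pow(x, k))
        return out

class RKANStage(nn.Module):
    """Wraps an existing ResNet stage and adds the polynomial branch residually."""
    def __init__(self, stage: nn.Module, channels: int, degree: int = 3):
        super().__init__()
        self.stage = stage
        self.refine = PolynomialRefinement(channels, degree)
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.stage(x)
        # Residual connection: the base path is preserved, refinement is additive.
        return y + self.norm(self.refine(y))
```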
2410.09418 | Yi-Fan Lu | Yi-Fan Lu, Xian-Ling Mao, Tian Lan, Heyan Huang, Chen Xu, Xiaoyan Gao | Beyond Exact Match: Semantically Reassessing Event Extraction by Large
Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Event extraction has gained extensive research attention due to its broad
range of applications. However, the current mainstream evaluation method for
event extraction relies on token-level exact match, which misjudges numerous
semantic-level correct cases. This reliance leads to a significant discrepancy
between the evaluated performance of models under exact match criteria and
their real performance. To address this problem, we propose a reliable and
semantic evaluation framework for event extraction, named RAEE, which
accurately assesses extraction results at semantic-level instead of
token-level. Specifically, RAEE leverages large language models (LLMs) as
evaluation agents, incorporating an adaptive mechanism to achieve adaptive
evaluations for precision and recall of triggers and arguments. Extensive
experiments demonstrate that: (1) RAEE achieves a very strong correlation with
human judgments; (2) after reassessing 14 models, including advanced LLMs, on
10 datasets, there is a significant performance gap between exact match and
RAEE. The exact match evaluation significantly underestimates the performance
of existing event extraction models, and in particular underestimates the
capabilities of LLMs; (3) fine-grained analysis under RAEE evaluation reveals
insightful phenomena worth further exploration. The evaluation toolkit of our
proposed RAEE is publicly released.
| [
{
"version": "v1",
"created": "Sat, 12 Oct 2024 07:54:01 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 07:06:43 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lu",
"Yi-Fan",
""
],
[
"Mao",
"Xian-Ling",
""
],
[
"Lan",
"Tian",
""
],
[
"Huang",
"Heyan",
""
],
[
"Xu",
"Chen",
""
],
[
"Gao",
"Xiaoyan",
""
]
]
| TITLE: Beyond Exact Match: Semantically Reassessing Event Extraction by Large
Language Models
ABSTRACT: Event extraction has gained extensive research attention due to its broad
range of applications. However, the current mainstream evaluation method for
event extraction relies on token-level exact match, which misjudges numerous
semantic-level correct cases. This reliance leads to a significant discrepancy
between the evaluated performance of models under exact match criteria and
their real performance. To address this problem, we propose a reliable and
semantic evaluation framework for event extraction, named RAEE, which
accurately assesses extraction results at semantic-level instead of
token-level. Specifically, RAEE leverages large language models (LLMs) as
evaluation agents, incorporating an adaptive mechanism to achieve adaptive
evaluations for precision and recall of triggers and arguments. Extensive
experiments demonstrate that: (1) RAEE achieves a very strong correlation with
human judgments; (2) after reassessing 14 models, including advanced LLMs, on
10 datasets, there is a significant performance gap between exact match and
RAEE. The exact match evaluation significantly underestimates the performance
of existing event extraction models, and in particular underestimates the
capabilities of LLMs; (3) fine-grained analysis under RAEE evaluation reveals
insightful phenomena worth further exploration. The evaluation toolkit of our
proposed RAEE is publicly released.
| no_new_dataset | 0.943712 |
2410.11325 | Wenda Xu | Wenda Xu, Rujun Han, Zifeng Wang, Long T. Le, Dhruv Madeka, Lei Li,
William Yang Wang, Rishabh Agarwal, Chen-Yu Lee, Tomas Pfister | Speculative Knowledge Distillation: Bridging the Teacher-Student Gap
Through Interleaved Sampling | ICLR2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in knowledge distillation (KD) have enabled smaller student
models to approach the performance of larger teacher models. However, popular
methods such as supervised KD and on-policy KD, are adversely impacted by the
knowledge gaps between teacher-student in practical scenarios. Supervised KD
suffers from a distribution mismatch between training with a static dataset and
inference over final student-generated outputs. Conversely, on-policy KD, which
uses student-generated samples for training, can suffer from low-quality
training examples with which teacher models are not familiar, resulting in
inaccurate teacher feedback. To address these limitations, we introduce
Speculative Knowledge Distillation (SKD), a novel approach that leverages
cooperation between student and teacher models to generate high-quality
training data on-the-fly while aligning with the student's inference-time
distribution. In SKD, the student proposes tokens, and the teacher replaces
poorly ranked ones based on its own distribution, transferring high-quality
knowledge adaptively. We evaluate SKD on various text generation tasks,
including translation, summarization, math, and instruction following, and show
that SKD consistently outperforms existing KD methods across different domains,
data sizes, and model initialization strategies.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 06:51:25 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 19:24:41 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xu",
"Wenda",
""
],
[
"Han",
"Rujun",
""
],
[
"Wang",
"Zifeng",
""
],
[
"Le",
"Long T.",
""
],
[
"Madeka",
"Dhruv",
""
],
[
"Li",
"Lei",
""
],
[
"Wang",
"William Yang",
""
],
[
"Agarwal",
"Rishabh",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Pfister",
"Tomas",
""
]
]
| TITLE: Speculative Knowledge Distillation: Bridging the Teacher-Student Gap
Through Interleaved Sampling
ABSTRACT: Recent advances in knowledge distillation (KD) have enabled smaller student
models to approach the performance of larger teacher models. However, popular
methods such as supervised KD and on-policy KD, are adversely impacted by the
knowledge gaps between teacher-student in practical scenarios. Supervised KD
suffers from a distribution mismatch between training with a static dataset and
inference over final student-generated outputs. Conversely, on-policy KD, which
uses student-generated samples for training, can suffer from low-quality
training examples with which teacher models are not familiar, resulting in
inaccurate teacher feedback. To address these limitations, we introduce
Speculative Knowledge Distillation (SKD), a novel approach that leverages
cooperation between student and teacher models to generate high-quality
training data on-the-fly while aligning with the student's inference-time
distribution. In SKD, the student proposes tokens, and the teacher replaces
poorly ranked ones based on its own distribution, transferring high-quality
knowledge adaptively. We evaluate SKD on various text generation tasks,
including translation, summarization, math, and instruction following, and show
that SKD consistently outperforms existing KD methods across different domains,
data sizes, and model initialization strategies.
| no_new_dataset | 0.94625 |
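The SKD abstract above describes a decoding loop in which the student proposes tokens and the teacher replaces poorly ranked ones. The toy function below sketches that interleaved sampling; the top-k acceptance rule, the value of k, and the callable interfaces are assumptions made for illustration only, not the paper's algorithm as published.

```python
# Minimal sketch, not the paper's implementation: the student proposes each next
# token and the teacher overwrites proposals that fall outside its own top-k.
import torch

def skd_generate(student_logits_fn, teacher_logits_fn, prefix, max_new_tokens=32, top_k=25):
    """student_logits_fn / teacher_logits_fn map a token list to a 1-D logits tensor."""
    tokens = list(prefix)
    for _ in range(max_new_tokens):
        # Student proposes a token by sampling from its own distribution.
        s_probs = torch.softmax(student_logits_fn(tokens), dim=-1)
        proposal = torch.multinomial(s_probs, 1).item()

        # Teacher checks the proposal; poorly ranked proposals are replaced.
        t_logits = teacher_logits_fn(tokens)
        topk_ids = torch.topk(t_logits, k=top_k).indices.tolist()
        if proposal not in topk_ids:
            t_probs = torch.softmax(t_logits, dim=-1)
            proposal = torch.multinomial(t_probs, 1).item()

        tokens.append(proposal)
    return tokens
```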
2410.11841 | Fei Tang | Fei Tang, Yongliang Shen, Hang Zhang, Zeqi Tan, Wenqi Zhang, Zhibiao
Huang, Kaitao Song, Weiming Lu, Yueting Zhuang | GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable
Recommendation | null | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language model-based explainable recommendation (LLM-based ER) systems
show promise in generating human-like explanations for recommendations.
However, they face challenges in modeling user-item collaborative preferences,
personalizing explanations, and handling sparse user-item interactions. To
address these issues, we propose GaVaMoE, a novel Gaussian-Variational Gated
Mixture of Experts framework for explainable recommendation. GaVaMoE introduces
two key components: (1) a rating reconstruction module that employs Variational
Autoencoder (VAE) with a Gaussian Mixture Model (GMM) to capture complex
user-item collaborative preferences, serving as a pre-trained multi-gating
mechanism; and (2) a set of fine-grained expert models coupled with the
multi-gating mechanism for generating highly personalized explanations. The VAE
component models latent factors in user-item interactions, while the GMM
clusters users with similar behaviors. Each cluster corresponds to a gate in
the multi-gating mechanism, routing user-item pairs to appropriate expert
models. This architecture enables GaVaMoE to generate tailored explanations for
specific user types and preferences, mitigating data sparsity by leveraging
user similarities. Extensive experiments on three real-world datasets
demonstrate that GaVaMoE significantly outperforms existing methods in
explanation quality, personalization, and consistency. Notably, GaVaMoE
exhibits robust performance in scenarios with sparse user-item interactions,
maintaining high-quality explanations even for users with limited historical
data.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 17:59:30 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 01:02:11 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Tang",
"Fei",
""
],
[
"Shen",
"Yongliang",
""
],
[
"Zhang",
"Hang",
""
],
[
"Tan",
"Zeqi",
""
],
[
"Zhang",
"Wenqi",
""
],
[
"Huang",
"Zhibiao",
""
],
[
"Song",
"Kaitao",
""
],
[
"Lu",
"Weiming",
""
],
[
"Zhuang",
"Yueting",
""
]
]
| TITLE: GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable
Recommendation
ABSTRACT: Large language model-based explainable recommendation (LLM-based ER) systems
show promise in generating human-like explanations for recommendations.
However, they face challenges in modeling user-item collaborative preferences,
personalizing explanations, and handling sparse user-item interactions. To
address these issues, we propose GaVaMoE, a novel Gaussian-Variational Gated
Mixture of Experts framework for explainable recommendation. GaVaMoE introduces
two key components: (1) a rating reconstruction module that employs Variational
Autoencoder (VAE) with a Gaussian Mixture Model (GMM) to capture complex
user-item collaborative preferences, serving as a pre-trained multi-gating
mechanism; and (2) a set of fine-grained expert models coupled with the
multi-gating mechanism for generating highly personalized explanations. The VAE
component models latent factors in user-item interactions, while the GMM
clusters users with similar behaviors. Each cluster corresponds to a gate in
the multi-gating mechanism, routing user-item pairs to appropriate expert
models. This architecture enables GaVaMoE to generate tailored explanations for
specific user types and preferences, mitigating data sparsity by leveraging
user similarities. Extensive experiments on three real-world datasets
demonstrate that GaVaMoE significantly outperforms existing methods in
explanation quality, personalization, and consistency. Notably, GaVaMoE
exhibits robust performance in scenarios with sparse user-item interactions,
maintaining high-quality explanations even for users with limited historical
data.
| no_new_dataset | 0.946695 |
2410.12346 | Guanzhou Lan | Guanzhou Lan, Qianli Ma, Yuqi Yang, Zhigang Wang, Dong Wang, Xuelong
Li, Bin Zhao | Efficient Diffusion as Low Light Enhancer | 8 pages | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The computational burden of the iterative sampling process remains a major
challenge in diffusion-based Low-Light Image Enhancement (LLIE). Current
acceleration methods, whether training-based or training-free, often lead to
significant performance degradation, highlighting the trade-off between
performance and efficiency. In this paper, we identify two primary factors
contributing to performance degradation: fitting errors and the inference gap.
Our key insight is that fitting errors can be mitigated by linearly
extrapolating the incorrect score functions, while the inference gap can be
reduced by shifting the Gaussian flow to a reflectance-aware residual space.
Based on the above insights, we design Reflectance-Aware Trajectory Refinement
(RATR) module, a simple yet effective module to refine the teacher trajectory
using the reflectance component of images. Following this, we introduce
\textbf{Re}flectance-aware \textbf{D}iffusion with \textbf{Di}stilled
\textbf{T}rajectory (\textbf{ReDDiT}), an efficient and flexible distillation
framework tailored for LLIE. Our framework achieves comparable performance to
previous diffusion-based methods with redundant steps in just 2 steps while
establishing new state-of-the-art (SOTA) results with 8 or 4 steps.
Comprehensive experimental evaluations on 10 benchmark datasets validate the
effectiveness of our method, consistently outperforming existing SOTA methods.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 08:07:18 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 08:20:04 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Lan",
"Guanzhou",
""
],
[
"Ma",
"Qianli",
""
],
[
"Yang",
"Yuqi",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Wang",
"Dong",
""
],
[
"Li",
"Xuelong",
""
],
[
"Zhao",
"Bin",
""
]
]
| TITLE: Efficient Diffusion as Low Light Enhancer
ABSTRACT: The computational burden of the iterative sampling process remains a major
challenge in diffusion-based Low-Light Image Enhancement (LLIE). Current
acceleration methods, whether training-based or training-free, often lead to
significant performance degradation, highlighting the trade-off between
performance and efficiency. In this paper, we identify two primary factors
contributing to performance degradation: fitting errors and the inference gap.
Our key insight is that fitting errors can be mitigated by linearly
extrapolating the incorrect score functions, while the inference gap can be
reduced by shifting the Gaussian flow to a reflectance-aware residual space.
Based on the above insights, we design Reflectance-Aware Trajectory Refinement
(RATR) module, a simple yet effective module to refine the teacher trajectory
using the reflectance component of images. Following this, we introduce
\textbf{Re}flectance-aware \textbf{D}iffusion with \textbf{Di}stilled
\textbf{T}rajectory (\textbf{ReDDiT}), an efficient and flexible distillation
framework tailored for LLIE. Our framework achieves comparable performance to
previous diffusion-based methods with redundant steps in just 2 steps while
establishing new state-of-the-art (SOTA) results with 8 or 4 steps.
Comprehensive experimental evaluations on 10 benchmark datasets validate the
effectiveness of our method, consistently outperforming existing SOTA methods.
| no_new_dataset | 0.947914 |
2410.14431 | Mourad Oulghelou | Mourad Oulghelou, Soufiane Cherroud, Xavier Merle, Paola Cinnella | Machine-learning-assisted Blending of Data-Driven Turbulence Models | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | We present a machine learning-based framework for blending data-driven
turbulent closures in the Reynolds-Averaged Navier-Stokes (RANS) equations,
aimed at improving their generalizability across diverse flow regimes.
Specialized models (hereafter referred to as experts) are trained via sparse
Bayesian learning and symbolic regression for distinct flow classes, including
turbulent channel flows, separated flows, and a near-sonic axisymmetric jet.
These experts are then combined intrusively within the RANS equations using
weighting functions, initially derived via a Gaussian kernel on a dataset
spanning equilibrium shear conditions to separated flows. Finally, a Random
Forest Regressor is trained to map local physical features to these weighting
functions, enabling deployment in previously unseen scenarios. We evaluate the
resulting blended model on three representative test cases: a turbulent
zero-pressure-gradient flat plate, a wall-mounted hump, and a NACA0012 airfoil
at various angles of attack, ranging from fully attached to near-stall
conditions. Results for these 2D flows show that the proposed strategy adapts
to local flow characteristics, effectively leveraging the strengths of
individual models and consistently selecting the most suitable expert in each
region. Notably, the blended model also demonstrates robustness for flow
configurations not included in the training set, underscoring its potential as
a practical and generalizable framework for RANS turbulence modeling.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 12:50:20 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 21:06:18 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Oulghelou",
"Mourad",
""
],
[
"Cherroud",
"Soufiane",
""
],
[
"Merle",
"Xavier",
""
],
[
"Cinnella",
"Paola",
""
]
]
| TITLE: Machine-learning-assisted Blending of Data-Driven Turbulence Models
ABSTRACT: We present a machine learning-based framework for blending data-driven
turbulent closures in the Reynolds-Averaged Navier-Stokes (RANS) equations,
aimed at improving their generalizability across diverse flow regimes.
Specialized models (hereafter referred to as experts) are trained via sparse
Bayesian learning and symbolic regression for distinct flow classes, including
turbulent channel flows, separated flows, and a near-sonic axisymmetric jet.
These experts are then combined intrusively within the RANS equations using
weighting functions, initially derived via a Gaussian kernel on a dataset
spanning equilibrium shear conditions to separated flows. Finally, a Random
Forest Regressor is trained to map local physical features to these weighting
functions, enabling deployment in previously unseen scenarios. We evaluate the
resulting blended model on three representative test cases: a turbulent
zero-pressure-gradient flat plate, a wall-mounted hump, and a NACA0012 airfoil
at various angles of attack, ranging from fully attached to near-stall
conditions. Results for these 2D flows show that the proposed strategy adapts
to local flow characteristics, effectively leveraging the strengths of
individual models and consistently selecting the most suitable expert in each
region. Notably, the blended model also demonstrates robustness for flow
configurations not included in the training set, underscoring its potential as
a practical and generalizable framework for RANS turbulence modeling.
| no_new_dataset | 0.953144 |
2410.23825 | Amir Hossein Kargaran | Amir Hossein Kargaran, Fran\c{c}ois Yvon, Hinrich Sch\"utze | GlotCC: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for
Minority Languages | NeurIPS 2024 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The need for large text corpora has increased with the advent of pretrained
language models and, in particular, the discovery of scaling laws for these
models. Most available corpora have sufficient data only for languages with
large dominant communities. However, there is no corpus available that (i)
covers a wide range of minority languages; (ii) is generated by an open-source
reproducible pipeline; and (iii) is rigorously cleaned from noise, making it
trustworthy to use. We present GlotCC, a clean, document-level, 2TB general
domain corpus derived from CommonCrawl, covering more than 1000 languages. We
make GlotCC and the system used to generate it - including the pipeline,
language identification model, and filters - available to the research
community. Corpus v. 1.0 https://huggingface.co/datasets/cis-lmu/GlotCC-v1,
Pipeline v. 3.0 https://github.com/cisnlp/GlotCC.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 11:14:12 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 21:51:52 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Kargaran",
"Amir Hossein",
""
],
[
"Yvon",
"François",
""
],
[
"Schütze",
"Hinrich",
""
]
]
| TITLE: GlotCC: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for
Minority Languages
ABSTRACT: The need for large text corpora has increased with the advent of pretrained
language models and, in particular, the discovery of scaling laws for these
models. Most available corpora have sufficient data only for languages with
large dominant communities. However, there is no corpus available that (i)
covers a wide range of minority languages; (ii) is generated by an open-source
reproducible pipeline; and (iii) is rigorously cleaned from noise, making it
trustworthy to use. We present GlotCC, a clean, document-level, 2TB general
domain corpus derived from CommonCrawl, covering more than 1000 languages. We
make GlotCC and the system used to generate it - including the pipeline,
language identification model, and filters - available to the research
community. Corpus v. 1.0 https://huggingface.co/datasets/cis-lmu/GlotCC-v1,
Pipeline v. 3.0 https://github.com/cisnlp/GlotCC.
| new_dataset | 0.609045 |
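Since the GlotCC record above points to a public Hugging Face dataset, a minimal loading sketch may be useful. The corpus id is taken from the abstract; the subset name "ckb-Arab" and the streaming setup are assumptions — consult the dataset card for the actual configuration names.

```python
# Sketch of streaming a GlotCC subset with the Hugging Face `datasets` library.
# The subset name below is an assumption; check the dataset card for real configs.
from datasets import load_dataset

glotcc = load_dataset("cis-lmu/GlotCC-v1", "ckb-Arab", split="train", streaming=True)

for i, doc in enumerate(glotcc):
    print(doc)  # each record is a document-level entry
    if i >= 2:
        break
```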
2411.00476 | Ren Xin | Ren Xin, Jie Cheng, Hongji Liu, Jun Ma | PlanScope: Learning to Plan Within Decision Scope Does Matter | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of autonomous driving, learning-based methods have been
promising for the development of planning modules. During the training process
of planning modules, directly minimizing the discrepancy between expert-driving
logs and planning output is widely deployed. In general, driving logs consist
of suddenly appearing obstacles or swiftly changing traffic signals, which
typically necessitate swift and nuanced adjustments in driving maneuvers.
Concurrently, future trajectories of the vehicles exhibit their long-term
decisions, such as adhering to a reference lane or circumventing stationary
obstacles. Due to the unpredictable influence of future events in driving logs,
reasoning bias could be naturally introduced to learning based planning
modules, which leads to a possible degradation of driving performance. To
address this issue, we identify the decisions and their corresponding time
horizons, and characterize a so-called decision scope by retaining decisions
within derivable horizons only, to mitigate the effect of irrational behaviors
caused by unpredictable events. Several viable implementations have been
proposed, among which batch normalization along the temporal dimension is
particularly effective and achieves superior performance. It consistently
outperforms baseline methods in terms of driving scores, as demonstrated
through closed-loop evaluations on the nuPlan dataset. Essentially, this
approach accommodates an appealing plug-and-play feature to enhance the
closed-loop performance of other learning-based planning models.
| [
{
"version": "v1",
"created": "Fri, 1 Nov 2024 09:43:49 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 09:44:08 GMT"
}
]
| 2025-03-05T00:00:00 | [
[
"Xin",
"Ren",
""
],
[
"Cheng",
"Jie",
""
],
[
"Liu",
"Hongji",
""
],
[
"Ma",
"Jun",
""
]
]
| TITLE: PlanScope: Learning to Plan Within Decision Scope Does Matter
ABSTRACT: In the context of autonomous driving, learning-based methods have been
promising for the development of planning modules. During the training process
of planning modules, directly minimizing the discrepancy between expert-driving
logs and planning output is widely deployed. In general, driving logs consist
of suddenly appearing obstacles or swiftly changing traffic signals, which
typically necessitate swift and nuanced adjustments in driving maneuvers.
Concurrently, future trajectories of the vehicles exhibit their long-term
decisions, such as adhering to a reference lane or circumventing stationary
obstacles. Due to the unpredictable influence of future events in driving logs,
reasoning bias could be naturally introduced to learning based planning
modules, which leads to a possible degradation of driving performance. To
address this issue, we identify the decisions and their corresponding time
horizons, and characterize a so-called decision scope by retaining decisions
within derivable horizons only, to mitigate the effect of irrational behaviors
caused by unpredictable events. Several viable implementations have been
proposed, among which batch normalization along the temporal dimension is
particularly effective and achieves superior performance. It consistently
outperforms baseline methods in terms of driving scores, as demonstrated
through closed-loop evaluations on the nuPlan dataset. Essentially, this
approach accommodates an appealing plug-and-play feature to enhance the
closed-loop performance of other learning-based planning models.
| no_new_dataset | 0.945298 |
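The PlanScope abstract above singles out batch normalization along the temporal dimension as its most effective implementation. The hedged sketch below shows one way to apply `BatchNorm1d` across the timesteps of a predicted trajectory; the (batch, time, feature) layout and this particular reshaping are assumptions rather than the authors' code.

```python
# Hedged sketch: normalize planner trajectory features along the temporal axis,
# one possible reading of "batch normalization along the temporal dimension".
import torch
import torch.nn as nn

class TemporalBatchNorm(nn.Module):
    def __init__(self, num_timesteps: int):
        super().__init__()
        # BatchNorm1d normalizes over (N, C); here the "channels" are timesteps.
        self.bn = nn.BatchNorm1d(num_timesteps)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, time, feature) -> normalize each timestep across the batch.
        b, t, f = traj.shape
        x = traj.permute(0, 2, 1).reshape(b * f, t)  # treat timesteps as channels
        x = self.bn(x)
        return x.reshape(b, f, t).permute(0, 2, 1)

# Example: 16 trajectories, 8 future timesteps, 4 features per step.
out = TemporalBatchNorm(num_timesteps=8)(torch.randn(16, 8, 4))
```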