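The table below is arXiv paper metadata flattened into one record per row, with the fields listed in the header (id through prompt). As a minimal sketch of how such records could be consumed, the snippet below assumes they are stored one JSON object per line in a local file named arxiv_metadata.jsonl; the file name and the JSON Lines layout are assumptions for illustration, not something this dump documents.

```python
# Minimal sketch, assuming the records are stored one JSON object per line in a
# local file; the file name "arxiv_metadata.jsonl" is illustrative only.
import json
from datetime import datetime

def load_records(path):
    """Yield one dict per paper, with update_date parsed into a datetime."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            # update_date is an ISO timestamp string, e.g. "2025-03-21T00:00:00".
            rec["update_date"] = datetime.fromisoformat(rec["update_date"])
            yield rec

if __name__ == "__main__":
    for rec in load_records("arxiv_metadata.jsonl"):
        # categories is a space-separated string, e.g. "cs.CL cs.AI cs.IR".
        print(rec["id"], rec["categories"].split()[0], rec["title"])
        break  # show only the first record
```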
id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.16071 | Jiale Wei | Jiale Wei, Shuchi Wu, Ruochen Liu, Xiang Ying, Jingbo Shang, Fangbo
Tao | Tuning LLMs by RAG Principles: Towards LLM-native Memory | null | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Memory, additional information beyond the training of large language models
(LLMs), is crucial to various real-world applications, such as personal
assistants. The two mainstream solutions for incorporating memory into the
generation process are long-context LLMs and retrieval-augmented generation
(RAG). In this paper, we first systematically compare these two types of
solutions on three renovated/new datasets and show that (1) long-context
solutions, although more expensive, make it easier to capture the big picture
and better answer queries that require considering the memory as a whole; and
(2) when the queries concern specific information, RAG solutions are more
competitive, especially when the keywords can be explicitly matched. Therefore,
we propose a novel method, RAG-Tuned-LLM, which fine-tunes a relatively small
(e.g., 7B) LLM using data generated following RAG principles, so it can
combine the advantages of both solutions. Extensive experiments on three
datasets demonstrate that RAG-Tuned-LLM can beat long-context LLMs and RAG
methods across a wide range of query types.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:04:40 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wei",
"Jiale",
""
],
[
"Wu",
"Shuchi",
""
],
[
"Liu",
"Ruochen",
""
],
[
"Ying",
"Xiang",
""
],
[
"Shang",
"Jingbo",
""
],
[
"Tao",
"Fangbo",
""
]
] | TITLE: Tuning LLMs by RAG Principles: Towards LLM-native Memory
ABSTRACT: Memory, additional information beyond the training of large language models
(LLMs), is crucial to various real-world applications, such as personal
assistants. The two mainstream solutions for incorporating memory into the
generation process are long-context LLMs and retrieval-augmented generation
(RAG). In this paper, we first systematically compare these two types of
solutions on three renovated/new datasets and show that (1) long-context
solutions, although more expensive, make it easier to capture the big picture
and better answer queries that require considering the memory as a whole; and
(2) when the queries concern specific information, RAG solutions are more
competitive, especially when the keywords can be explicitly matched. Therefore,
we propose a novel method, RAG-Tuned-LLM, which fine-tunes a relatively small
(e.g., 7B) LLM using data generated following RAG principles, so it can
combine the advantages of both solutions. Extensive experiments on three
datasets demonstrate that RAG-Tuned-LLM can beat long-context LLMs and RAG
methods across a wide range of query types.
|
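Each record's authors_parsed field stores one [family, given, suffix] triple per author, while authors holds the flat display string. Below is a minimal sketch of rebuilding the display string from those triples; the helper name format_authors is invented for this example.

```python
# Minimal sketch, assuming each authors_parsed entry is a
# [family, given, suffix] triple as in the records shown here;
# the helper name format_authors is invented for this example.
def format_authors(authors_parsed):
    """Join parsed author triples back into a display string."""
    names = []
    for family, given, suffix in authors_parsed:
        names.append(" ".join(part for part in (given, family, suffix) if part))
    return ", ".join(names)

# Example taken from the record above (2503.16071).
print(format_authors([["Wei", "Jiale", ""], ["Wu", "Shuchi", ""]]))
# -> Jiale Wei, Shuchi Wu
```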
2503.16072 | Sergey Berezin | Sergey Berezin, Reza Farahbakhsh, Noel Crespi | Redefining Toxicity: An Objective and Context-Aware Approach for
Stress-Level-Based Detection | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | The fundamental problem of toxicity detection lies in the fact that the term
"toxicity" is ill-defined. Such uncertainty causes researchers to rely on
subjective and vague data during model training, which leads to non-robust and
inaccurate results, following the 'garbage in - garbage out' paradigm. This
study introduces a novel, objective, and context-aware framework for toxicity
detection, leveraging stress levels as a key determinant of toxicity. We
propose a new definition, metric, and training approach as parts of our
framework and demonstrate its effectiveness using a dataset we collected.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:09:01 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Berezin",
"Sergey",
""
],
[
"Farahbakhsh",
"Reza",
""
],
[
"Crespi",
"Noel",
""
]
] | TITLE: Redefining Toxicity: An Objective and Context-Aware Approach for
Stress-Level-Based Detection
ABSTRACT: The fundamental problem of toxicity detection lies in the fact that the term
"toxicity" is ill-defined. Such uncertainty causes researchers to rely on
subjective and vague data during model training, which leads to non-robust and
inaccurate results, following the 'garbage in - garbage out' paradigm. This
study introduces a novel, objective, and context-aware framework for toxicity
detection, leveraging stress levels as a key determinant of toxicity. We
propose a new definition, metric, and training approach as parts of our
framework and demonstrate its effectiveness using a dataset we collected.
|
2503.16094 | Reem Masoud | Reem I. Masoud, Martin Ferianc, Philip Treleaven, Miguel Rodrigues | Cultural Alignment in Large Language Models Using Soft Prompt Tuning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Model (LLM) alignment conventionally relies on supervised
fine-tuning or reinforcement learning based alignment frameworks. These methods
typically require labeled or preference datasets and involve updating model
weights to align the LLM with the training objective or reward model.
Meanwhile, in social sciences such as cross-cultural studies, factor analysis
is widely used to uncover underlying dimensions or latent variables that
explain observed patterns in survey data. The non-differentiable nature of
these measurements deriving from survey data renders the former alignment
methods infeasible for alignment with cultural dimensions. To overcome this, we
propose a parameter efficient strategy that combines soft prompt tuning, which
freezes the model parameters while modifying the input prompt embeddings, with
Differential Evolution (DE), a black-box optimization method for cases where a
differentiable objective is unattainable. This strategy ensures alignment
consistency without the need for preference data or model parameter updates,
significantly enhancing efficiency and mitigating overfitting. Our method
demonstrates significant improvements in LLama-3-8B-Instruct's cultural
dimensions across multiple regions, outperforming both the Naive LLM and the
In-context Learning (ICL) baseline, and effectively bridges computational
models with human cultural nuances.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:34:01 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Masoud",
"Reem I.",
""
],
[
"Ferianc",
"Martin",
""
],
[
"Treleaven",
"Philip",
""
],
[
"Rodrigues",
"Miguel",
""
]
] | TITLE: Cultural Alignment in Large Language Models Using Soft Prompt Tuning
ABSTRACT: Large Language Model (LLM) alignment conventionally relies on supervised
fine-tuning or reinforcement learning based alignment frameworks. These methods
typically require labeled or preference datasets and involve updating model
weights to align the LLM with the training objective or reward model.
Meanwhile, in social sciences such as cross-cultural studies, factor analysis
is widely used to uncover underlying dimensions or latent variables that
explain observed patterns in survey data. The non-differentiable nature of
these measurements deriving from survey data renders the former alignment
methods infeasible for alignment with cultural dimensions. To overcome this, we
propose a parameter efficient strategy that combines soft prompt tuning, which
freezes the model parameters while modifying the input prompt embeddings, with
Differential Evolution (DE), a black-box optimization method for cases where a
differentiable objective is unattainable. This strategy ensures alignment
consistency without the need for preference data or model parameter updates,
significantly enhancing efficiency and mitigating overfitting. Our method
demonstrates significant improvements in LLama-3-8B-Instruct's cultural
dimensions across multiple regions, outperforming both the Naive LLM and the
In-context Learning (ICL) baseline, and effectively bridges computational
models with human cultural nuances.
|
2503.16096 | Lucas Morin | Lucas Morin, Valéry Weber, Ahmed Nassar, Gerhard Ingmar Meijer, Luc
Van Gool, Yawei Li, Peter Staar | MarkushGrapher: Joint Visual and Textual Recognition of Markush
Structures | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The automated analysis of chemical literature holds promise to accelerate
discovery in fields such as material science and drug development. In
particular, search capabilities for chemical structures and Markush structures
(chemical structure templates) within patent documents are valuable, e.g., for
prior-art search. Advancements have been made in the automatic extraction of
chemical structures from text and images, yet the Markush structures remain
largely unexplored due to their complex multi-modal nature. In this work, we
present MarkushGrapher, a multi-modal approach for recognizing Markush
structures in documents. Our method jointly encodes text, image, and layout
information through a Vision-Text-Layout encoder and an Optical Chemical
Structure Recognition vision encoder. These representations are merged and used
to auto-regressively generate a sequential graph representation of the Markush
structure along with a table defining its variable groups. To overcome the lack
of real-world training data, we propose a synthetic data generation pipeline
that produces a wide range of realistic Markush structures. Additionally, we
present M2S, the first annotated benchmark of real-world Markush structures, to
advance research on this challenging task. Extensive experiments demonstrate
that our approach outperforms state-of-the-art chemistry-specific and
general-purpose vision-language models in most evaluation settings. Code,
models, and datasets will be available.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:40:38 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Morin",
"Lucas",
""
],
[
"Weber",
"Valéry",
""
],
[
"Nassar",
"Ahmed",
""
],
[
"Meijer",
"Gerhard Ingmar",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Li",
"Yawei",
""
],
[
"Staar",
"Peter",
""
]
] | TITLE: MarkushGrapher: Joint Visual and Textual Recognition of Markush
Structures
ABSTRACT: The automated analysis of chemical literature holds promise to accelerate
discovery in fields such as material science and drug development. In
particular, search capabilities for chemical structures and Markush structures
(chemical structure templates) within patent documents are valuable, e.g., for
prior-art search. Advancements have been made in the automatic extraction of
chemical structures from text and images, yet the Markush structures remain
largely unexplored due to their complex multi-modal nature. In this work, we
present MarkushGrapher, a multi-modal approach for recognizing Markush
structures in documents. Our method jointly encodes text, image, and layout
information through a Vision-Text-Layout encoder and an Optical Chemical
Structure Recognition vision encoder. These representations are merged and used
to auto-regressively generate a sequential graph representation of the Markush
structure along with a table defining its variable groups. To overcome the lack
of real-world training data, we propose a synthetic data generation pipeline
that produces a wide range of realistic Markush structures. Additionally, we
present M2S, the first annotated benchmark of real-world Markush structures, to
advance research on this challenging task. Extensive experiments demonstrate
that our approach outperforms state-of-the-art chemistry-specific and
general-purpose vision-language models in most evaluation settings. Code,
models, and datasets will be available.
|
2503.16117 | Alexandre Verine | Alexandre Verine, Mehdi Inane, Florian Le Bronnec, Benjamin
Negrevergne, Yann Chevaleyre | Improving Discriminator Guidance in Diffusion Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Discriminator Guidance has become a popular method for efficiently refining
pre-trained Score-Matching Diffusion models. However, in this paper, we
demonstrate that the standard implementation of this technique does not
necessarily lead to a distribution closer to the real data distribution.
Specifically, we show that training the discriminator using Cross-Entropy loss,
as commonly done, can in fact increase the Kullback-Leibler divergence between
the model and target distributions, particularly when the discriminator
overfits. To address this, we propose a theoretically sound training objective
for discriminator guidance that properly minimizes the KL divergence. We
analyze its properties and demonstrate empirically across multiple datasets
that our proposed method consistently improves over the conventional method by
producing samples of higher quality.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:04:43 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Verine",
"Alexandre",
""
],
[
"Inane",
"Mehdi",
""
],
[
"Bronnec",
"Florian Le",
""
],
[
"Negrevergne",
"Benjamin",
""
],
[
"Chevaleyre",
"Yann",
""
]
] | TITLE: Improving Discriminator Guidance in Diffusion Models
ABSTRACT: Discriminator Guidance has become a popular method for efficiently refining
pre-trained Score-Matching Diffusion models. However, in this paper, we
demonstrate that the standard implementation of this technique does not
necessarily lead to a distribution closer to the real data distribution.
Specifically, we show that training the discriminator using Cross-Entropy loss,
as commonly done, can in fact increase the Kullback-Leibler divergence between
the model and target distributions, particularly when the discriminator
overfits. To address this, we propose a theoretically sound training objective
for discriminator guidance that properly minimizes the KL divergence. We
analyze its properties and demonstrate empirically across multiple datasets
that our proposed method consistently improves over the conventional method by
producing samples of higher quality.
|
2503.16125 | Jiangyi Wang | Jiangyi Wang, and Na Zhao | Uncertainty Meets Diversity: A Comprehensive Active Learning Framework
for Indoor 3D Object Detection | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active learning has emerged as a promising approach to reduce the substantial
annotation burden in 3D object detection tasks, spurring several initiatives in
outdoor environments. However, its application in indoor environments remains
unexplored. Compared to outdoor 3D datasets, indoor datasets face significant
challenges, including fewer training samples per class, a greater number of
classes, more severe class imbalance, and more diverse scene types and
intra-class variances. This paper presents the first study on active learning
for indoor 3D object detection, where we propose a novel framework tailored for
this task. Our method incorporates two key criteria - uncertainty and diversity
- to actively select the most ambiguous and informative unlabeled samples for
annotation. The uncertainty criterion accounts for both inaccurate detections
and undetected objects, ensuring that the most ambiguous samples are
prioritized. Meanwhile, the diversity criterion is formulated as a joint
optimization problem that maximizes the diversity of both object class
distributions and scene types, using a new Class-aware Adaptive Prototype (CAP)
bank. The CAP bank dynamically allocates representative prototypes to each
class, helping to capture varying intra-class diversity across different
categories. We evaluate our method on SUN RGB-D and ScanNetV2, where it
outperforms baselines by a significant margin, achieving over 85% of
fully-supervised performance with just 10% of the annotation budget.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:12:39 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Jiangyi",
""
],
[
"Zhao",
"Na",
""
]
] | TITLE: Uncertainty Meets Diversity: A Comprehensive Active Learning Framework
for Indoor 3D Object Detection
ABSTRACT: Active learning has emerged as a promising approach to reduce the substantial
annotation burden in 3D object detection tasks, spurring several initiatives in
outdoor environments. However, its application in indoor environments remains
unexplored. Compared to outdoor 3D datasets, indoor datasets face significant
challenges, including fewer training samples per class, a greater number of
classes, more severe class imbalance, and more diverse scene types and
intra-class variances. This paper presents the first study on active learning
for indoor 3D object detection, where we propose a novel framework tailored for
this task. Our method incorporates two key criteria - uncertainty and diversity
- to actively select the most ambiguous and informative unlabeled samples for
annotation. The uncertainty criterion accounts for both inaccurate detections
and undetected objects, ensuring that the most ambiguous samples are
prioritized. Meanwhile, the diversity criterion is formulated as a joint
optimization problem that maximizes the diversity of both object class
distributions and scene types, using a new Class-aware Adaptive Prototype (CAP)
bank. The CAP bank dynamically allocates representative prototypes to each
class, helping to capture varying intra-class diversity across different
categories. We evaluate our method on SUN RGB-D and ScanNetV2, where it
outperforms baselines by a significant margin, achieving over 85% of
fully-supervised performance with just 10% of the annotation budget.
|
2503.16148 | Indira Sen | Mats Faulborn, Indira Sen, Max Pellert, Andreas Spitz, and David
Garcia | Only a Little to the Left: A Theory-grounded Measure of Political Bias
in Large Language Models | null | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | Prompt-based language models like GPT4 and LLaMa have been used for a wide
variety of use cases such as simulating agents, searching for information, or
for content analysis. For all of these applications and others, political
biases in these models can affect their performance. Several researchers have
attempted to study political bias in language models using evaluation suites
based on surveys, such as the Political Compass Test (PCT), often finding a
particular leaning favored by these models. However, there is some variation in
the exact prompting techniques, leading to diverging findings and most research
relies on constrained-answer settings to extract model responses. Moreover, the
Political Compass Test is not a scientifically valid survey instrument. In this
work, we contribute a political bias measure informed by political science
theory, building on survey design principles to test a wide variety of input
prompts, while taking into account prompt sensitivity. We then prompt 11
different open and commercial models, differentiating between instruction-tuned
and non-instruction-tuned models, and automatically classify their political
stances from 88,110 responses. Leveraging this dataset, we compute political
bias profiles across different prompt variations and find that while PCT
exaggerates bias in certain models like GPT3.5, measures of political bias are
often unstable, but generally more left-leaning for instruction-tuned models.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:51:06 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Faulborn",
"Mats",
""
],
[
"Sen",
"Indira",
""
],
[
"Pellert",
"Max",
""
],
[
"Spitz",
"Andreas",
""
],
[
"Garcia",
"David",
""
]
] | TITLE: Only a Little to the Left: A Theory-grounded Measure of Political Bias
in Large Language Models
ABSTRACT: Prompt-based language models like GPT4 and LLaMa have been used for a wide
variety of use cases such as simulating agents, searching for information, or
for content analysis. For all of these applications and others, political
biases in these models can affect their performance. Several researchers have
attempted to study political bias in language models using evaluation suites
based on surveys, such as the Political Compass Test (PCT), often finding a
particular leaning favored by these models. However, there is some variation in
the exact prompting techniques, leading to diverging findings and most research
relies on constrained-answer settings to extract model responses. Moreover, the
Political Compass Test is not a scientifically valid survey instrument. In this
work, we contribute a political bias measure informed by political science
theory, building on survey design principles to test a wide variety of input
prompts, while taking into account prompt sensitivity. We then prompt 11
different open and commercial models, differentiating between instruction-tuned
and non-instruction-tuned models, and automatically classify their political
stances from 88,110 responses. Leveraging this dataset, we compute political
bias profiles across different prompt variations and find that while PCT
exaggerates bias in certain models like GPT3.5, measures of political bias are
often unstable, but generally more left-leaning for instruction-tuned models.
|
2503.16149 | Dong Chen | Dong Chen, Boyue Zhao, Yi Zhang, Meng Zhao | Selective Complementary Feature Fusion and Modal Feature Compression
Interaction for Brain Tumor Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An efficient modal feature fusion strategy is the key to achieving accurate
segmentation of brain glioma. However, due to the specificity of different MRI
modes, it is difficult to carry out cross-modal fusion with large differences
in modal features, resulting in the model ignoring rich feature information. On
the other hand, the problem of multi-modal feature redundancy interaction
occurs in parallel networks due to the proliferation of feature dimensions,
further increasing the difficulty of multi-modal feature fusion at the bottom
end. In order to solve the above problems, we propose a novel complementary
feature compression interaction network (CFCI-Net), which realizes the
complementary fusion and compression interaction of multi-modal feature
information with an efficient mode fusion strategy. Firstly, we propose a
selective complementary feature fusion (SCFF) module, which adaptively fuses
rich cross-modal feature information by complementary soft selection weights.
Secondly, a modal feature compression interaction (MFCI) transformer is
proposed to deal with the multi-mode fusion redundancy problem when the feature
dimension surges. The MFCI transformer is composed of modal feature compression
(MFC) and modal feature interaction (MFI) to realize redundancy feature
compression and multi-mode feature interactive learning. In MFI, we propose a
hierarchical interactive attention mechanism based on multi-head attention.
Evaluations on the BraTS2019 and BraTS2020 datasets demonstrate that CFCI-Net
achieves superior results compared to state-of-the-art models. Code:
https://github.com/CDmm0/CFCI-Net
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:52:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Dong",
""
],
[
"Zhao",
"Boyue",
""
],
[
"Zhang",
"Yi",
""
],
[
"Zhao",
"Meng",
""
]
] | TITLE: Selective Complementary Feature Fusion and Modal Feature Compression
Interaction for Brain Tumor Segmentation
ABSTRACT: An efficient modal feature fusion strategy is the key to achieving accurate
segmentation of brain glioma. However, due to the specificity of different MRI
modes, it is difficult to carry out cross-modal fusion with large differences
in modal features, resulting in the model ignoring rich feature information. On
the other hand, the problem of multi-modal feature redundancy interaction
occurs in parallel networks due to the proliferation of feature dimensions,
further increasing the difficulty of multi-modal feature fusion at the bottom
end. In order to solve the above problems, we propose a novel complementary
feature compression interaction network (CFCI-Net), which realizes the
complementary fusion and compression interaction of multi-modal feature
information with an efficient mode fusion strategy. Firstly, we propose a
selective complementary feature fusion (SCFF) module, which adaptively fuses
rich cross-modal feature information by complementary soft selection weights.
Secondly, a modal feature compression interaction (MFCI) transformer is
proposed to deal with the multi-mode fusion redundancy problem when the feature
dimension surges. The MFCI transformer is composed of modal feature compression
(MFC) and modal feature interaction (MFI) to realize redundancy feature
compression and multi-mode feature interactive learning. In MFI, we propose a
hierarchical interactive attention mechanism based on multi-head attention.
Evaluations on the BraTS2019 and BraTS2020 datasets demonstrate that CFCI-Net
achieves superior results compared to state-of-the-art models. Code:
https://github.com/CDmm0/CFCI-Net
|
2503.16158 | Shenbin Qian | Shenbin Qian, Constantin Orăsan, Diptesh Kanojia, Félix do Carmo | Automatically Generating Chinese Homophone Words to Probe Machine
Translation Estimation Systems | Accepted to the 10th Workshop on Noisy and User-generated Text at
NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Evaluating machine translation (MT) of user-generated content (UGC) involves
unique challenges such as checking whether the nuance of emotions from the
source is preserved in the target text. Recent studies have proposed
emotion-related datasets, frameworks and models to automatically evaluate MT
quality of Chinese UGC, without relying on reference translations. However,
whether these models are robust to the challenge of preserving emotional
nuances has been left largely unexplored. To address this gap, we introduce a
novel method inspired by information theory which generates challenging Chinese
homophone words related to emotions, by leveraging the concept of
self-information. Our approach generates homophones that were observed to cause
translation errors in emotion preservation, and exposes vulnerabilities in MT
systems and their evaluation methods when tackling emotional UGC. We evaluate
the efficacy of our method using human evaluation for the quality of these
generated homophones, and compare it with an existing one, showing that our
method achieves higher correlation with human judgments. The generated Chinese
homophones, along with their manual translations, are utilized to generate
perturbations and to probe the robustness of existing quality evaluation
models, including models trained using multi-task learning, fine-tuned variants
of multilingual language models, as well as large language models (LLMs). Our
results indicate that LLMs with larger size exhibit higher stability and
robustness to such perturbations. We release our data and code for
reproducibility and further research.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:56:15 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Qian",
"Shenbin",
""
],
[
"Orăsan",
"Constantin",
""
],
[
"Kanojia",
"Diptesh",
""
],
[
"Carmo",
"Félix do",
""
]
] | TITLE: Automatically Generating Chinese Homophone Words to Probe Machine
Translation Estimation Systems
ABSTRACT: Evaluating machine translation (MT) of user-generated content (UGC) involves
unique challenges such as checking whether the nuance of emotions from the
source is preserved in the target text. Recent studies have proposed
emotion-related datasets, frameworks and models to automatically evaluate MT
quality of Chinese UGC, without relying on reference translations. However,
whether these models are robust to the challenge of preserving emotional
nuances has been left largely unexplored. To address this gap, we introduce a
novel method inspired by information theory which generates challenging Chinese
homophone words related to emotions, by leveraging the concept of
self-information. Our approach generates homophones that were observed to cause
translation errors in emotion preservation, and exposes vulnerabilities in MT
systems and their evaluation methods when tackling emotional UGC. We evaluate
the efficacy of our method using human evaluation for the quality of these
generated homophones, and compare it with an existing one, showing that our
method achieves higher correlation with human judgments. The generated Chinese
homophones, along with their manual translations, are utilized to generate
perturbations and to probe the robustness of existing quality evaluation
models, including models trained using multi-task learning, fine-tuned variants
of multilingual language models, as well as large language models (LLMs). Our
results indicate that LLMs with larger size exhibit higher stability and
robustness to such perturbations. We release our data and code for
reproducibility and further research.
|
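The abstract above builds its homophone generation on self-information. As a generic illustration of that quantity only, not the authors' pipeline, the sketch below scores a few real Mandarin homophones of "ta" with I(w) = -log2 p(w), using invented corpus counts.

```python
# Generic illustration only: self-information I(w) = -log2 p(w), so rarer words
# carry more information. The homophones of "ta" below are real Mandarin
# characters, but the corpus counts are invented for this example.
import math

counts = {"他": 5000, "她": 3000, "它": 400, "祂": 5}
total = sum(counts.values())

for word, count in counts.items():
    p = count / total
    print(word, f"p={p:.4f}", f"I={-math.log2(p):.2f} bits")
```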
2503.16159 | Federico Berto | Jiwoo Son, Zhikai Zhao, Federico Berto, Chuanbo Hua, Changhyun Kwon,
Jinkyoo Park | Neural Combinatorial Optimization for Real-World Routing | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicle Routing Problems (VRPs) are a class of NP-hard problems ubiquitous in
several real-world logistics scenarios that pose significant challenges for
optimization. Neural Combinatorial Optimization (NCO) has emerged as a
promising alternative to classical approaches, as it can learn fast heuristics
to solve VRPs. However, most research works in NCO for VRPs focus on simplified
settings, which do not account for asymmetric distances and travel durations
that cannot be derived from simple Euclidean distances, nor for realistic data
distributions, hindering real-world deployment. This work introduces RRNCO
(Real Routing NCO) to bridge the gap of NCO between synthetic and real-world
VRPs in the critical aspects of both data and modeling. First, we introduce a
new, openly available dataset with real-world data containing a diverse dataset
of locations, distances, and duration matrices from 100 cities, considering
realistic settings with actual routing distances and durations obtained from
Open Source Routing Machine (OSRM). Second, we propose a novel approach that
efficiently processes both node and edge features through contextual gating,
enabling the construction of more informed node embeddings, and we finally
incorporate an Adaptation Attention Free Module (AAFM) with neural adaptive
bias mechanisms that effectively integrates not only distance matrices but also
angular relationships between nodes, allowing our model to capture rich
structural information. RRNCO achieves state-of-the-art results in real-world
VRPs among NCO methods. We make our dataset and code publicly available at
https://github.com/ai4co/real-routing-nco.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 13:57:33 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Son",
"Jiwoo",
""
],
[
"Zhao",
"Zhikai",
""
],
[
"Berto",
"Federico",
""
],
[
"Hua",
"Chuanbo",
""
],
[
"Kwon",
"Changhyun",
""
],
[
"Park",
"Jinkyoo",
""
]
] | TITLE: Neural Combinatorial Optimization for Real-World Routing
ABSTRACT: Vehicle Routing Problems (VRPs) are a class of NP-hard problems ubiquitous in
several real-world logistics scenarios that pose significant challenges for
optimization. Neural Combinatorial Optimization (NCO) has emerged as a
promising alternative to classical approaches, as it can learn fast heuristics
to solve VRPs. However, most research works in NCO for VRPs focus on simplified
settings, which do not account for asymmetric distances and travel durations
that cannot be derived from simple Euclidean distances, nor for realistic data
distributions, hindering real-world deployment. This work introduces RRNCO
(Real Routing NCO) to bridge the gap of NCO between synthetic and real-world
VRPs in the critical aspects of both data and modeling. First, we introduce a
new, openly available dataset with real-world data containing a diverse dataset
of locations, distances, and duration matrices from 100 cities, considering
realistic settings with actual routing distances and durations obtained from
Open Source Routing Machine (OSRM). Second, we propose a novel approach that
efficiently processes both node and edge features through contextual gating,
enabling the construction of more informed node embeddings, and we finally
incorporate an Adaptation Attention Free Module (AAFM) with neural adaptive
bias mechanisms that effectively integrates not only distance matrices but also
angular relationships between nodes, allowing our model to capture rich
structural information. RRNCO achieves state-of-the-art results in real-world
VRPs among NCO methods. We make our dataset and code publicly available at
https://github.com/ai4co/real-routing-nco.
|
2503.16165 | Li Xiangyu | Xiangyu Li, Wanshu Fan, Yue Shen, Cong Wang, Wei Wang, Xin Yang, Qiang
Zhang and Dongsheng Zhou | Iterative Optimal Attention and Local Model for Single Image Rain Streak
Removal | 14 pages, 14 figures, 6 tables | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-fidelity imaging is crucial for the successful safety supervision and
intelligent deployment of vision-based measurement systems (VBMS). It ensures
high-quality imaging in VBMS, which is fundamental for reliable visual
measurement and analysis. However, imaging quality can be significantly
impaired by adverse weather conditions, particularly rain, leading to blurred
images and reduced contrast. Such impairments increase the risk of inaccurate
evaluations and misinterpretations in VBMS. To address these limitations, we
propose an Expectation Maximization Reconstruction Transformer (EMResformer)
for single image rain streak removal. The EMResformer retains the key
self-attention values for feature aggregation, enhancing local features to
produce superior image reconstruction. Specifically, we propose an Expectation
Maximization Block seamlessly integrated into the single image rain streak
removal network, enhancing its ability to eliminate superfluous information and
restore a cleaner background image. Additionally, to further enhance local
information for improved detail rendition, we introduce a Local Model Residual
Block, which integrates two local model blocks along with a sequence of
convolutions and activation functions. This integration synergistically
facilitates the extraction of more pertinent features for enhanced single image
rain streak removal. Extensive experiments validate that our proposed
EMResformer surpasses current state-of-the-art single image rain streak removal
methods on both synthetic and real-world datasets, achieving an improved
balance between model complexity and single image deraining performance.
Furthermore, we evaluate the effectiveness of our method in VBMS scenarios,
demonstrating that high-quality imaging significantly improves the accuracy and
reliability of VBMS tasks.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:06:53 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Xiangyu",
""
],
[
"Fan",
"Wanshu",
""
],
[
"Shen",
"Yue",
""
],
[
"Wang",
"Cong",
""
],
[
"Wang",
"Wei",
""
],
[
"Yang",
"Xin",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Zhou",
"Dongsheng",
""
]
] | TITLE: Iterative Optimal Attention and Local Model for Single Image Rain Streak
Removal
ABSTRACT: High-fidelity imaging is crucial for the successful safety supervision and
intelligent deployment of vision-based measurement systems (VBMS). It ensures
high-quality imaging in VBMS, which is fundamental for reliable visual
measurement and analysis. However, imaging quality can be significantly
impaired by adverse weather conditions, particularly rain, leading to blurred
images and reduced contrast. Such impairments increase the risk of inaccurate
evaluations and misinterpretations in VBMS. To address these limitations, we
propose an Expectation Maximization Reconstruction Transformer (EMResformer)
for single image rain streak removal. The EMResformer retains the key
self-attention values for feature aggregation, enhancing local features to
produce superior image reconstruction. Specifically, we propose an Expectation
Maximization Block seamlessly integrated into the single image rain streak
removal network, enhancing its ability to eliminate superfluous information and
restore a cleaner background image. Additionally, to further enhance local
information for improved detail rendition, we introduce a Local Model Residual
Block, which integrates two local model blocks along with a sequence of
convolutions and activation functions. This integration synergistically
facilitates the extraction of more pertinent features for enhanced single image
rain streak removal. Extensive experiments validate that our proposed
EMResformer surpasses current state-of-the-art single image rain streak removal
methods on both synthetic and real-world datasets, achieving an improved
balance between model complexity and single image deraining performance.
Furthermore, we evaluate the effectiveness of our method in VBMS scenarios,
demonstrating that high-quality imaging significantly improves the accuracy and
reliability of VBMS tasks.
|
2503.16166 | Mert Yildiz | Mert Yildiz, Alexey Rolich, Andrea Baiocchi | The Merit of Simple Policies: Buying Performance With Parallelism and
System Architecture | IEEE INFOCOM Workshop on Intelligent Cloud Computing and Networking
(ICCN 2025) | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | While scheduling and dispatching of computational workloads is a
well-investigated subject, only recently has Google provided publicly a vast
high-resolution measurement dataset of its cloud workloads. We revisit
dispatching and scheduling algorithms fed by traffic workloads derived from
those measurements. The main finding is that mean job response time attains a
minimum as the number of servers of the computing cluster is varied, under the
constraint that the overall computational budget is kept constant. Moreover,
simple policies, such as Join Idle Queue, appear to attain the same performance
as more complex, size-based policies for suitably high degrees of parallelism.
Further, better performance, definitely outperforming size-based dispatching
policies, is obtained by using multi-stage server clusters, even using very
simple policies such as Round Robin. The takeaway is that parallelism and
architecture of computing systems might be powerful knobs to control
performance, even more than policies, under realistic workload traffic.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:07:24 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yildiz",
"Mert",
""
],
[
"Rolich",
"Alexey",
""
],
[
"Baiocchi",
"Andrea",
""
]
] | TITLE: The Merit of Simple Policies: Buying Performance With Parallelism and
System Architecture
ABSTRACT: While scheduling and dispatching of computational workloads is a
well-investigated subject, only recently has Google provided publicly a vast
high-resolution measurement dataset of its cloud workloads. We revisit
dispatching and scheduling algorithms fed by traffic workloads derived from
those measurements. The main finding is that mean job response time attains a
minimum as the number of servers of the computing cluster is varied, under the
constraint that the overall computational budget is kept constant. Moreover,
simple policies, such as Join Idle Queue, appear to attain the same performance
as more complex, size-based policies for suitably high degrees of parallelism.
Further, better performance, definitely outperforming size-based dispatching
policies, is obtained by using multi-stage server clusters, even using very
simple policies such as Round Robin. The takeaway is that parallelism and
architecture of computing systems might be powerful knobs to control
performance, even more than policies, under realistic workload traffic.
|
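The abstract above highlights Join Idle Queue as a simple dispatching policy that can match more complex, size-based ones. As a generic illustration of that policy only, here is a toy simulation under a fixed total service budget (n servers, each of speed 1/n); the workload parameters are invented and are not derived from the Google traces the paper uses.

```python
# Toy sketch of the Join Idle Queue policy under a fixed total service budget
# (n servers, each of speed 1/n). The synthetic workload below is invented.
import random

def join_idle_queue(jobs, n_servers):
    """Mean response time with JIQ dispatching and FCFS queues per server:
    send an arriving job to an idle server if one exists, otherwise to a
    uniformly random server."""
    busy_until = [0.0] * n_servers      # time at which each server frees up
    response_times = []
    for arrival, size in jobs:          # jobs sorted by arrival time
        idle = [i for i, t in enumerate(busy_until) if t <= arrival]
        target = random.choice(idle) if idle else random.randrange(n_servers)
        start = max(arrival, busy_until[target])
        busy_until[target] = start + size * n_servers   # server speed is 1/n
        response_times.append(busy_until[target] - arrival)
    return sum(response_times) / len(response_times)

random.seed(0)
t, jobs = 0.0, []
for _ in range(20_000):
    t += random.expovariate(0.7)              # arrival rate 0.7, total capacity 1.0
    jobs.append((t, random.expovariate(1.0)))  # mean job size 1.0
for n in (1, 4, 16):
    print(n, "servers -> mean response time", round(join_idle_queue(jobs, n), 2))
```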
2503.16185 | Peihao Wu | Peihao Wu, Yongxiang Yao, Wenfei Zhang, Dong Wei, Yi Wan, Yansheng Li,
Yongjun Zhang | MapGlue: Multimodal Remote Sensing Image Matching | The dataset and code are available at
https://github.com/PeihaoWu/MapGlue | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal remote sensing image (MRSI) matching is pivotal for cross-modal
fusion, localization, and object detection, but it faces severe challenges due
to geometric, radiometric, and viewpoint discrepancies across imaging
modalities. Existing unimodal datasets lack scale and diversity, limiting deep
learning solutions. This paper proposes MapGlue, a universal MRSI matching
framework, and MapData, a large-scale multimodal dataset addressing these gaps.
Our contributions are twofold. MapData, a globally diverse dataset spanning 233
sampling points, offers original images (7,000x5,000 to 20,000x15,000 pixels).
After rigorous cleaning, it provides 121,781 aligned electronic map-visible
image pairs (512x512 pixels) with hybrid manual-automated ground truth,
addressing the scarcity of scalable multimodal benchmarks. MapGlue integrates
semantic context with a dual graph-guided mechanism to extract cross-modal
invariant features. This structure enables global-to-local interaction,
enhancing descriptor robustness against modality-specific distortions.
Extensive evaluations on MapData and five public datasets demonstrate MapGlue's
superiority in matching accuracy under complex conditions, outperforming
state-of-the-art methods. Notably, MapGlue generalizes effectively to unseen
modalities without retraining, highlighting its adaptability. This work
addresses longstanding challenges in MRSI matching by combining scalable
dataset construction with a robust, semantics-driven framework. Furthermore,
MapGlue shows strong generalization capabilities on other modality matching
tasks for which it was not specifically trained. The dataset and code are
available at https://github.com/PeihaoWu/MapGlue.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:36:16 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wu",
"Peihao",
""
],
[
"Yao",
"Yongxiang",
""
],
[
"Zhang",
"Wenfei",
""
],
[
"Wei",
"Dong",
""
],
[
"Wan",
"Yi",
""
],
[
"Li",
"Yansheng",
""
],
[
"Zhang",
"Yongjun",
""
]
] | TITLE: MapGlue: Multimodal Remote Sensing Image Matching
ABSTRACT: Multimodal remote sensing image (MRSI) matching is pivotal for cross-modal
fusion, localization, and object detection, but it faces severe challenges due
to geometric, radiometric, and viewpoint discrepancies across imaging
modalities. Existing unimodal datasets lack scale and diversity, limiting deep
learning solutions. This paper proposes MapGlue, a universal MRSI matching
framework, and MapData, a large-scale multimodal dataset addressing these gaps.
Our contributions are twofold. MapData, a globally diverse dataset spanning 233
sampling points, offers original images (7,000x5,000 to 20,000x15,000 pixels).
After rigorous cleaning, it provides 121,781 aligned electronic map-visible
image pairs (512x512 pixels) with hybrid manual-automated ground truth,
addressing the scarcity of scalable multimodal benchmarks. MapGlue integrates
semantic context with a dual graph-guided mechanism to extract cross-modal
invariant features. This structure enables global-to-local interaction,
enhancing descriptor robustness against modality-specific distortions.
Extensive evaluations on MapData and five public datasets demonstrate MapGlue's
superiority in matching accuracy under complex conditions, outperforming
state-of-the-art methods. Notably, MapGlue generalizes effectively to unseen
modalities without retraining, highlighting its adaptability. This work
addresses longstanding challenges in MRSI matching by combining scalable
dataset construction with a robust, semantics-driven framework. Furthermore,
MapGlue shows strong generalization capabilities on other modality matching
tasks for which it was not specifically trained. The dataset and code are
available at https://github.com/PeihaoWu/MapGlue.
|
2503.16195 | Chia-Yi Hsu | Chia-Yi Hsu, Jia-You Chen, Yu-Lin Tsai, Chih-Hsun Lin, Pin-Yu Chen,
Chia-Mu Yu and Chun-Ying Huang | VP-NTK: Exploring the Benefits of Visual Prompting in Differentially
Private Data Synthesis | Accepted by ICASSP 2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentially private (DP) synthetic data has become the de facto standard
for releasing sensitive data. However, many DP generative models suffer from
the low utility of synthetic data, especially for high-resolution images. On
the other hand, one of the emerging techniques in parameter efficient
fine-tuning (PEFT) is visual prompting (VP), which allows well-trained existing
models to be reused for the purpose of adapting to subsequent downstream tasks.
In this work, we explore such a phenomenon in constructing captivating
generative models with DP constraints. We show that VP in conjunction with
DP-NTK, a DP generator that exploits the power of the neural tangent kernel
(NTK) in training DP generative models, achieves a significant performance
boost, particularly for high-resolution image datasets, with accuracy improving
from 0.644$\pm$0.044 to 0.769. Lastly, we perform ablation studies on the
effect of different parameters that influence the overall performance of
VP-NTK. Our work demonstrates a promising step forward in improving the utility
of DP synthetic data, particularly for high-resolution images.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:42:11 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hsu",
"Chia-Yi",
""
],
[
"Chen",
"Jia-You",
""
],
[
"Tsai",
"Yu-Lin",
""
],
[
"Lin",
"Chih-Hsun",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Yu",
"Chia-Mu",
""
],
[
"Huang",
"Chun-Ying",
""
]
] | TITLE: VP-NTK: Exploring the Benefits of Visual Prompting in Differentially
Private Data Synthesis
ABSTRACT: Differentially private (DP) synthetic data has become the de facto standard
for releasing sensitive data. However, many DP generative models suffer from
the low utility of synthetic data, especially for high-resolution images. On
the other hand, one of the emerging techniques in parameter efficient
fine-tuning (PEFT) is visual prompting (VP), which allows well-trained existing
models to be reused for the purpose of adapting to subsequent downstream tasks.
In this work, we explore such a phenomenon in constructing captivating
generative models with DP constraints. We show that VP in conjunction with
DP-NTK, a DP generator that exploits the power of the neural tangent kernel
(NTK) in training DP generative models, achieves a significant performance
boost, particularly for high-resolution image datasets, with accuracy improving
from 0.644$\pm$0.044 to 0.769. Lastly, we perform ablation studies on the
effect of different parameters that influence the overall performance of
VP-NTK. Our work demonstrates a promising step forward in improving the utility
of DP synthetic data, particularly for high-resolution images.
|
2503.16207 | Wenjun Cui | Wenjun Cui, Qiyu Kang, Xuhao Li, Kai Zhao, Wee Peng Tay, Weihua Deng,
Yidong Li | Neural Variable-Order Fractional Differential Equation Networks | AAAI 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural differential equation models have garnered significant attention in
recent years for their effectiveness in machine learning applications. Among
these, fractional differential equations (FDEs) have emerged as a promising
tool due to their ability to capture memory-dependent dynamics, which are often
challenging to model with traditional integer-order approaches. While existing
models have primarily focused on constant-order fractional derivatives,
variable-order fractional operators offer a more flexible and expressive
framework for modeling complex memory patterns. In this work, we introduce the
Neural Variable-Order Fractional Differential Equation network (NvoFDE), a
novel neural network framework that integrates variable-order fractional
derivatives with learnable neural networks. Our framework allows for the
modeling of adaptive derivative orders dependent on hidden features, capturing
more complex feature-updating dynamics and providing enhanced flexibility. We
conduct extensive experiments across multiple graph datasets to validate the
effectiveness of our approach. Our results demonstrate that NvoFDE outperforms
traditional constant-order fractional and integer models across a range of
tasks, showcasing its superior adaptability and performance.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 14:54:19 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cui",
"Wenjun",
""
],
[
"Kang",
"Qiyu",
""
],
[
"Li",
"Xuhao",
""
],
[
"Zhao",
"Kai",
""
],
[
"Tay",
"Wee Peng",
""
],
[
"Deng",
"Weihua",
""
],
[
"Li",
"Yidong",
""
]
] | TITLE: Neural Variable-Order Fractional Differential Equation Networks
ABSTRACT: Neural differential equation models have garnered significant attention in
recent years for their effectiveness in machine learning applications. Among
these, fractional differential equations (FDEs) have emerged as a promising
tool due to their ability to capture memory-dependent dynamics, which are often
challenging to model with traditional integer-order approaches. While existing
models have primarily focused on constant-order fractional derivatives,
variable-order fractional operators offer a more flexible and expressive
framework for modeling complex memory patterns. In this work, we introduce the
Neural Variable-Order Fractional Differential Equation network (NvoFDE), a
novel neural network framework that integrates variable-order fractional
derivatives with learnable neural networks. Our framework allows for the
modeling of adaptive derivative orders dependent on hidden features, capturing
more complex feature-updating dynamics and providing enhanced flexibility. We
conduct extensive experiments across multiple graph datasets to validate the
effectiveness of our approach. Our results demonstrate that NvoFDE outperforms
traditional constant-order fractional and integer models across a range of
tasks, showcasing its superior adaptability and performance.
|
2503.16212 | Qizhi Pei | Qizhi Pei, Lijun Wu, Zhuoshi Pan, Yu Li, Honglin Lin, Chenlin Ming,
Xin Gao, Conghui He, Rui Yan | MathFusion: Enhancing Mathematic Problem-solving of LLM through
Instruction Fusion | Work in progress | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have shown impressive progress in mathematical
reasoning. While data augmentation is promising to enhance mathematical
problem-solving ability, current approaches are predominantly limited to
instance-level modifications-such as rephrasing or generating syntactic
variations-which fail to capture and leverage the intrinsic relational
structures inherent in mathematical knowledge. Inspired by human learning
processes, where mathematical proficiency develops through systematic exposure
to interconnected concepts, we introduce MathFusion, a novel framework that
enhances mathematical reasoning through cross-problem instruction synthesis.
MathFusion implements this through three fusion strategies: (1) sequential
fusion, which chains related problems to model solution dependencies; (2)
parallel fusion, which combines analogous problems to reinforce conceptual
understanding; and (3) conditional fusion, which creates context-aware
selective problems to enhance reasoning flexibility. By applying these
strategies, we generate a new dataset, \textbf{MathFusionQA}, followed by
fine-tuning models (DeepSeekMath-7B, Mistral-7B, Llama3-8B) on it. Experimental
results demonstrate that MathFusion achieves substantial improvements in
mathematical reasoning while maintaining high data efficiency, boosting
performance by 18.0 points in accuracy across diverse benchmarks while
requiring only 45K additional synthetic instructions, representing a
substantial improvement over traditional single-instruction approaches. Our
datasets, models, and code are publicly available at
https://github.com/QizhiPei/mathfusion.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:00:41 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Pei",
"Qizhi",
""
],
[
"Wu",
"Lijun",
""
],
[
"Pan",
"Zhuoshi",
""
],
[
"Li",
"Yu",
""
],
[
"Lin",
"Honglin",
""
],
[
"Ming",
"Chenlin",
""
],
[
"Gao",
"Xin",
""
],
[
"He",
"Conghui",
""
],
[
"Yan",
"Rui",
""
]
] | TITLE: MathFusion: Enhancing Mathematic Problem-solving of LLM through
Instruction Fusion
ABSTRACT: Large Language Models (LLMs) have shown impressive progress in mathematical
reasoning. While data augmentation is promising to enhance mathematical
problem-solving ability, current approaches are predominantly limited to
instance-level modifications-such as rephrasing or generating syntactic
variations-which fail to capture and leverage the intrinsic relational
structures inherent in mathematical knowledge. Inspired by human learning
processes, where mathematical proficiency develops through systematic exposure
to interconnected concepts, we introduce MathFusion, a novel framework that
enhances mathematical reasoning through cross-problem instruction synthesis.
MathFusion implements this through three fusion strategies: (1) sequential
fusion, which chains related problems to model solution dependencies; (2)
parallel fusion, which combines analogous problems to reinforce conceptual
understanding; and (3) conditional fusion, which creates context-aware
selective problems to enhance reasoning flexibility. By applying these
strategies, we generate a new dataset, \textbf{MathFusionQA}, followed by
fine-tuning models (DeepSeekMath-7B, Mistral-7B, Llama3-8B) on it. Experimental
results demonstrate that MathFusion achieves substantial improvements in
mathematical reasoning while maintaining high data efficiency, boosting
performance by 18.0 points in accuracy across diverse benchmarks while
requiring only 45K additional synthetic instructions, representing a
substantial improvement over traditional single-instruction approaches. Our
datasets, models, and code are publicly available at
https://github.com/QizhiPei/mathfusion.
|
2503.16218 | Yu Cao | Yu Cao, Zengqun Zhao, Ioannis Patras, Shaogang Gong | Temporal Score Analysis for Understanding and Correcting Diffusion
Artifacts | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual artifacts remain a persistent challenge in diffusion models, even with
training on massive datasets. Current solutions primarily rely on supervised
detectors, yet lack understanding of why these artifacts occur in the first
place. In our analysis, we identify three distinct phases in the diffusion
generative process: Profiling, Mutation, and Refinement. Artifacts typically
emerge during the Mutation phase, where certain regions exhibit anomalous score
dynamics over time, causing abrupt disruptions in the normal evolution pattern.
This temporal nature explains why existing methods focusing only on spatial
uncertainty of the final output fail at effective artifact localization. Based
on these insights, we propose ASCED (Abnormal Score Correction for Enhancing
Diffusion), that detects artifacts by monitoring abnormal score dynamics during
the diffusion process, with a trajectory-aware on-the-fly mitigation strategy
that appropriate generation of noise in the detected areas. Unlike most
existing methods that apply post hoc corrections, \eg, by applying a
noising-denoising scheme after generation, our mitigation strategy operates
seamlessly within the existing diffusion process. Extensive experiments
demonstrate that our proposed approach effectively reduces artifacts across
diverse domains, matching or surpassing existing supervised methods without
additional training.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:11:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cao",
"Yu",
""
],
[
"Zhao",
"Zengqun",
""
],
[
"Patras",
"Ioannis",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Temporal Score Analysis for Understanding and Correcting Diffusion
Artifacts
ABSTRACT: Visual artifacts remain a persistent challenge in diffusion models, even with
training on massive datasets. Current solutions primarily rely on supervised
detectors, yet lack understanding of why these artifacts occur in the first
place. In our analysis, we identify three distinct phases in the diffusion
generative process: Profiling, Mutation, and Refinement. Artifacts typically
emerge during the Mutation phase, where certain regions exhibit anomalous score
dynamics over time, causing abrupt disruptions in the normal evolution pattern.
This temporal nature explains why existing methods focusing only on spatial
uncertainty of the final output fail at effective artifact localization. Based
on these insights, we propose ASCED (Abnormal Score Correction for Enhancing
Diffusion), which detects artifacts by monitoring abnormal score dynamics during
the diffusion process, with a trajectory-aware on-the-fly mitigation strategy
that appropriately generates noise in the detected areas. Unlike most
existing methods that apply post hoc corrections, \eg, by applying a
noising-denoising scheme after generation, our mitigation strategy operates
seamlessly within the existing diffusion process. Extensive experiments
demonstrate that our proposed approach effectively reduces artifacts across
diverse domains, matching or surpassing existing supervised methods without
additional training.
|
2503.16219 | Quy-Anh Dang | Quy-Anh Dang and Chris Ngo | Reinforcement Learning for Reasoning in Small LLMs: What Works and What
Doesn't | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Enhancing the reasoning capabilities of large language models (LLMs)
typically relies on massive computational resources and extensive datasets,
limiting accessibility for resource-constrained settings. Our study
investigates the potential of reinforcement learning (RL) to improve reasoning
in small LLMs, focusing on a 1.5-billion-parameter model,
DeepSeek-R1-Distill-Qwen-1.5B, under strict constraints: training on 4 NVIDIA
A40 GPUs (48 GB VRAM each) within 24 hours. Adapting the Group Relative Policy
Optimization (GRPO) algorithm and curating a compact, high-quality mathematical
reasoning dataset, we conducted three experiments to explore model behavior and
performance. Our results demonstrate rapid reasoning gains - e.g., AMC23
accuracy rising from 63% to 80% and AIME24 reaching 46.7%, surpassing
o1-preview - using only 7,000 samples and a $42 training cost, compared to
thousands of dollars for baseline models. However, challenges such as
optimization instability and length constraints emerged with prolonged
training. These findings highlight the efficacy of RL-based fine-tuning for
small LLMs, offering a cost-effective alternative to large-scale approaches. We
release our code and datasets as open-source resources, providing insights into
trade-offs and laying a foundation for scalable, reasoning-capable LLMs in
resource-limited environments. All are available at
https://github.com/knoveleng/open-rs.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:13:23 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Dang",
"Quy-Anh",
""
],
[
"Ngo",
"Chris",
""
]
] | TITLE: Reinforcement Learning for Reasoning in Small LLMs: What Works and What
Doesn't
ABSTRACT: Enhancing the reasoning capabilities of large language models (LLMs)
typically relies on massive computational resources and extensive datasets,
limiting accessibility for resource-constrained settings. Our study
investigates the potential of reinforcement learning (RL) to improve reasoning
in small LLMs, focusing on a 1.5-billion-parameter model,
DeepSeek-R1-Distill-Qwen-1.5B, under strict constraints: training on 4 NVIDIA
A40 GPUs (48 GB VRAM each) within 24 hours. Adapting the Group Relative Policy
Optimization (GRPO) algorithm and curating a compact, high-quality mathematical
reasoning dataset, we conducted three experiments to explore model behavior and
performance. Our results demonstrate rapid reasoning gains - e.g., AMC23
accuracy rising from 63% to 80% and AIME24 reaching 46.7%, surpassing
o1-preview - using only 7,000 samples and a $42 training cost, compared to
thousands of dollars for baseline models. However, challenges such as
optimization instability and length constraints emerged with prolonged
training. These findings highlight the efficacy of RL-based fine-tuning for
small LLMs, offering a cost-effective alternative to large-scale approaches. We
release our code and datasets as open-source resources, providing insights into
trade-offs and laying a foundation for scalable, reasoning-capable LLMs in
resource-limited environments. All are available at
https://github.com/knoveleng/open-rs.
|
2503.16233 | Dawood Wasif | Dawood Wasif, Dian Chen, Sindhuja Madabushi, Nithin Alluru, Terrence
J. Moore, Jin-Hee Cho | Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated
Learning: A Step Towards Responsible AI | Submitted to IJCAI 2025 (under review) | null | null | null | cs.LG cs.CR cs.DC cs.ET | http://creativecommons.org/licenses/by/4.0/ | Federated Learning (FL) enables collaborative machine learning while
preserving data privacy but struggles to balance privacy preservation (PP) and
fairness. Techniques like Differential Privacy (DP), Homomorphic Encryption
(HE), and Secure Multi-Party Computation (SMC) protect sensitive data but
introduce trade-offs. DP enhances privacy but can disproportionately impact
underrepresented groups, while HE and SMC mitigate fairness concerns at the
cost of computational overhead. This work explores the privacy-fairness
trade-offs in FL under IID (Independent and Identically Distributed) and
non-IID data distributions, benchmarking q-FedAvg, q-MAML, and Ditto on diverse
datasets. Our findings highlight context-dependent trade-offs and offer
guidelines for designing FL systems that uphold responsible AI principles,
ensuring fairness, privacy, and equitable real-world applications.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:31:01 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wasif",
"Dawood",
""
],
[
"Chen",
"Dian",
""
],
[
"Madabushi",
"Sindhuja",
""
],
[
"Alluru",
"Nithin",
""
],
[
"Moore",
"Terrence J.",
""
],
[
"Cho",
"Jin-Hee",
""
]
] | TITLE: Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated
Learning: A Step Towards Responsible AI
ABSTRACT: Federated Learning (FL) enables collaborative machine learning while
preserving data privacy but struggles to balance privacy preservation (PP) and
fairness. Techniques like Differential Privacy (DP), Homomorphic Encryption
(HE), and Secure Multi-Party Computation (SMC) protect sensitive data but
introduce trade-offs. DP enhances privacy but can disproportionately impact
underrepresented groups, while HE and SMC mitigate fairness concerns at the
cost of computational overhead. This work explores the privacy-fairness
trade-offs in FL under IID (Independent and Identically Distributed) and
non-IID data distributions, benchmarking q-FedAvg, q-MAML, and Ditto on diverse
datasets. Our findings highlight context-dependent trade-offs and offer
guidelines for designing FL systems that uphold responsible AI principles,
ensuring fairness, privacy, and equitable real-world applications.
|
2503.16247 | Max Gutbrod | Max Gutbrod, David Rauber, Danilo Weber Nunes, Christoph Palm | OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution
Detection | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The growing reliance on Artificial Intelligence (AI) in critical domains such
as healthcare demands robust mechanisms to ensure the trustworthiness of these
systems, especially when faced with unexpected or anomalous inputs. This paper
introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution
Detection (OpenMIBOOD), a comprehensive framework for evaluating
out-of-distribution (OOD) detection methods specifically in medical imaging
contexts. OpenMIBOOD includes three benchmarks from diverse medical domains,
encompassing 14 datasets divided into covariate-shifted in-distribution,
near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these
benchmarks, providing a standardized reference to advance the development and
fair comparison of OOD detection methods. Results reveal that findings from
broad-scale OOD benchmarks in natural image domains do not translate to medical
applications, underscoring the critical need for such benchmarks in the medical
field. By mitigating the risk of exposing AI models to inputs outside their
training distribution, OpenMIBOOD aims to support the advancement of reliable
and trustworthy AI systems in healthcare. The repository is available at
https://github.com/remic-othr/OpenMIBOOD.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:43:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gutbrod",
"Max",
""
],
[
"Rauber",
"David",
""
],
[
"Nunes",
"Danilo Weber",
""
],
[
"Palm",
"Christoph",
""
]
] | TITLE: OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution
Detection
ABSTRACT: The growing reliance on Artificial Intelligence (AI) in critical domains such
as healthcare demands robust mechanisms to ensure the trustworthiness of these
systems, especially when faced with unexpected or anomalous inputs. This paper
introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution
Detection (OpenMIBOOD), a comprehensive framework for evaluating
out-of-distribution (OOD) detection methods specifically in medical imaging
contexts. OpenMIBOOD includes three benchmarks from diverse medical domains,
encompassing 14 datasets divided into covariate-shifted in-distribution,
near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these
benchmarks, providing a standardized reference to advance the development and
fair comparison of OOD detection methods. Results reveal that findings from
broad-scale OOD benchmarks in natural image domains do not translate to medical
applications, underscoring the critical need for such benchmarks in the medical
field. By mitigating the risk of exposing AI models to inputs outside their
training distribution, OpenMIBOOD aims to support the advancement of reliable
and trustworthy AI systems in healthcare. The repository is available at
https://github.com/remic-othr/OpenMIBOOD.
|
2503.16251 | Dawood Wasif | Dawood Wasif, Terrence J. Moore, Jin-Hee Cho | RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning
by Balancing Privacy, Fairness and Utility in Autonomous Vehicles | Submitted to PETS 2025 (under review) | null | null | null | cs.LG cs.CV cs.DC cs.ET | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicles (AVs) increasingly rely on Federated Learning (FL) to
enhance perception models while preserving privacy. However, existing FL
frameworks struggle to balance privacy, fairness, and robustness, leading to
performance disparities across demographic groups. Privacy-preserving
techniques like differential privacy mitigate data leakage risks but worsen
fairness by restricting access to sensitive attributes needed for bias
correction. This work explores the trade-off between privacy and fairness in
FL-based object detection for AVs and introduces RESFL, an integrated solution
optimizing both. RESFL incorporates adversarial privacy disentanglement and
uncertainty-guided fairness-aware aggregation. The adversarial component uses a
gradient reversal layer to remove sensitive attributes, reducing privacy risks
while maintaining fairness. The uncertainty-aware aggregation employs an
evidential neural network to weight client updates adaptively, prioritizing
contributions with lower fairness disparities and higher confidence. This
ensures robust and equitable FL model updates. We evaluate RESFL on the FACET
dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience,
and robustness under varying conditions. RESFL improves detection accuracy,
reduces fairness disparities, and lowers privacy attack success rates while
demonstrating superior robustness to adversarial conditions compared to other
approaches.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:46:03 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wasif",
"Dawood",
""
],
[
"Moore",
"Terrence J.",
""
],
[
"Cho",
"Jin-Hee",
""
]
] | TITLE: RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning
by Balancing Privacy, Fairness and Utility in Autonomous Vehicles
ABSTRACT: Autonomous vehicles (AVs) increasingly rely on Federated Learning (FL) to
enhance perception models while preserving privacy. However, existing FL
frameworks struggle to balance privacy, fairness, and robustness, leading to
performance disparities across demographic groups. Privacy-preserving
techniques like differential privacy mitigate data leakage risks but worsen
fairness by restricting access to sensitive attributes needed for bias
correction. This work explores the trade-off between privacy and fairness in
FL-based object detection for AVs and introduces RESFL, an integrated solution
optimizing both. RESFL incorporates adversarial privacy disentanglement and
uncertainty-guided fairness-aware aggregation. The adversarial component uses a
gradient reversal layer to remove sensitive attributes, reducing privacy risks
while maintaining fairness. The uncertainty-aware aggregation employs an
evidential neural network to weight client updates adaptively, prioritizing
contributions with lower fairness disparities and higher confidence. This
ensures robust and equitable FL model updates. We evaluate RESFL on the FACET
dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience,
and robustness under varying conditions. RESFL improves detection accuracy,
reduces fairness disparities, and lowers privacy attack success rates while
demonstrating superior robustness to adversarial conditions compared to other
approaches.
|
2503.16254 | Onay Urfalioglu | Markus Karmann, Peng-Tao Jiang, Bo Li, Onay Urfalioglu | M2N2V2: Multi-Modal Unsupervised and Training-free Interactive
Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Markov Map Nearest Neighbor V2 (M2N2V2), a novel and simple, yet
effective approach which leverages depth guidance and attention maps for
unsupervised and training-free point-prompt-based interactive segmentation.
Following recent trends in supervised multimodal approaches, we carefully
integrate depth as an additional modality to create novel depth-guided
Markov-maps. Furthermore, we observe occasional segment size fluctuations in
M2N2 during the interactive process, which can decrease the overall mIoU. To
mitigate this problem, we model the prompting as a sequential process and
propose a novel adaptive score function which considers the previous
segmentation and the current prompt point in order to prevent unreasonable
segment size changes. Using Stable Diffusion 2 and Depth Anything V2 as
backbones, we empirically show that our proposed M2N2V2 significantly improves
the Number of Clicks (NoC) and mIoU compared to M2N2 in all datasets except
those from the medical domain. Interestingly, our unsupervised approach
achieves competitive results compared to supervised methods like SAM and
SimpleClick in the more challenging DAVIS and HQSeg44K datasets in the NoC
metric, reducing the gap between supervised and unsupervised methods.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:47:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Karmann",
"Markus",
""
],
[
"Jiang",
"Peng-Tao",
""
],
[
"Li",
"Bo",
""
],
[
"Urfalioglu",
"Onay",
""
]
] | TITLE: M2N2V2: Multi-Modal Unsupervised and Training-free Interactive
Segmentation
ABSTRACT: We present Markov Map Nearest Neighbor V2 (M2N2V2), a novel and simple, yet
effective approach which leverages depth guidance and attention maps for
unsupervised and training-free point-prompt-based interactive segmentation.
Following recent trends in supervised multimodal approaches, we carefully
integrate depth as an additional modality to create novel depth-guided
Markov-maps. Furthermore, we observe occasional segment size fluctuations in
M2N2 during the interactive process, which can decrease the overall mIoU. To
mitigate this problem, we model the prompting as a sequential process and
propose a novel adaptive score function which considers the previous
segmentation and the current prompt point in order to prevent unreasonable
segment size changes. Using Stable Diffusion 2 and Depth Anything V2 as
backbones, we empirically show that our proposed M2N2V2 significantly improves
the Number of Clicks (NoC) and mIoU compared to M2N2 in all datasets except
those from the medical domain. Interestingly, our unsupervised approach
achieves competitive results compared to supervised methods like SAM and
SimpleClick in the more challenging DAVIS and HQSeg44K datasets in the NoC
metric, reducing the gap between supervised and unsupervised methods.
|
2503.16260 | Zijian Li | Zijian Li, Jingjing Fu, Lei Song, Jiang Bian, Jun Zhang, Rui Wang | Chain of Functions: A Programmatic Pipeline for Fine-Grained Chart
Reasoning Data | Under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual reasoning is crucial for multimodal large language models (MLLMs) to
address complex chart queries, yet high-quality rationale data remains scarce.
Existing methods leveraged (M)LLMs for data generation, but direct prompting
often yields limited precision and diversity. In this paper, we propose
\textit{Chain of Functions (CoF)}, a novel programmatic reasoning data
generation pipeline that utilizes freely-explored reasoning paths as
supervision to ensure data precision and diversity. Specifically, it starts
with human-free exploration among the atomic functions (e.g., maximum data and
arithmetic operations) to generate diverse function chains, which are then
translated into linguistic rationales and questions with only a moderate
open-sourced LLM. \textit{CoF} provides multiple benefits: 1) Precision:
function-governed generation reduces hallucinations compared to freeform
generation; 2) Diversity: enumerating function chains enables varied question
taxonomies; 3) Explainability: function chains serve as built-in rationales,
allowing fine-grained evaluation beyond overall accuracy; 4) Practicality:
eliminating reliance on extremely large models. Employing \textit{CoF}, we
construct the \textit{ChartCoF} dataset, with 1.4k complex reasoning Q\&A for
fine-grained analysis and 50k Q\&A for reasoning enhancement. The fine-grained
evaluation on \textit{ChartCoF} reveals varying performance across question
taxonomies for each MLLM, and the experiments also show that finetuning with
\textit{ChartCoF} achieves state-of-the-art performance among same-scale MLLMs
on widely used benchmarks. Furthermore, the novel paradigm of function-governed
rationale generation in \textit{CoF} could inspire broader applications beyond
charts.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:56:04 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Zijian",
""
],
[
"Fu",
"Jingjing",
""
],
[
"Song",
"Lei",
""
],
[
"Bian",
"Jiang",
""
],
[
"Zhang",
"Jun",
""
],
[
"Wang",
"Rui",
""
]
] | TITLE: Chain of Functions: A Programmatic Pipeline for Fine-Grained Chart
Reasoning Data
ABSTRACT: Visual reasoning is crucial for multimodal large language models (MLLMs) to
address complex chart queries, yet high-quality rationale data remains scarce.
Existing methods leveraged (M)LLMs for data generation, but direct prompting
often yields limited precision and diversity. In this paper, we propose
\textit{Chain of Functions (CoF)}, a novel programmatic reasoning data
generation pipeline that utilizes freely-explored reasoning paths as
supervision to ensure data precision and diversity. Specifically, it starts
with human-free exploration among the atomic functions (e.g., maximum data and
arithmetic operations) to generate diverse function chains, which are then
translated into linguistic rationales and questions with only a moderate
open-sourced LLM. \textit{CoF} provides multiple benefits: 1) Precision:
function-governed generation reduces hallucinations compared to freeform
generation; 2) Diversity: enumerating function chains enables varied question
taxonomies; 3) Explainability: function chains serve as built-in rationales,
allowing fine-grained evaluation beyond overall accuracy; 4) Practicality:
eliminating reliance on extremely large models. Employing \textit{CoF}, we
construct the \textit{ChartCoF} dataset, with 1.4k complex reasoning Q\&A for
fine-grained analysis and 50k Q\&A for reasoning enhancement. The fine-grained
evaluation on \textit{ChartCoF} reveals varying performance across question
taxonomies for each MLLM, and the experiments also show that finetuning with
\textit{ChartCoF} achieves state-of-the-art performance among same-scale MLLMs
on widely used benchmarks. Furthermore, the novel paradigm of function-governed
rationale generation in \textit{CoF} could inspire broader applications beyond
charts.
|
2503.16264 | Dounia Hammou | Dounia Hammou, Yancheng Cai, Pavan Madhusudanarao, Christos G. Bampis,
Rafa{\l} K. Mantiuk | Do image and video quality metrics model low-level human vision? | null | null | null | null | eess.IV cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | Image and video quality metrics, such as SSIM, LPIPS, and VMAF, aim to
predict the perceived quality of the evaluated content and are often claimed to
be "perceptual". Yet, few metrics directly model human visual perception, and
most rely on hand-crafted formulas or training datasets to achieve alignment
with perceptual data. In this paper, we propose a set of tests for
full-reference quality metrics that examine their ability to model several
aspects of low-level human vision: contrast sensitivity, contrast masking, and
contrast matching. The tests are meant to provide additional scrutiny for newly
proposed metrics. We use our tests to analyze 33 existing image and video
quality metrics and find their strengths and weaknesses, such as the ability of
LPIPS and MS-SSIM to predict contrast masking and poor performance of VMAF in
this task. We further find that the popular SSIM metric overemphasizes
differences in high spatial frequencies, but its multi-scale counterpart,
MS-SSIM, addresses this shortcoming. Such findings cannot be easily made using
existing evaluation protocols.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 15:57:25 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hammou",
"Dounia",
""
],
[
"Cai",
"Yancheng",
""
],
[
"Madhusudanarao",
"Pavan",
""
],
[
"Bampis",
"Christos G.",
""
],
[
"Mantiuk",
"Rafał K.",
""
]
] | TITLE: Do image and video quality metrics model low-level human vision?
ABSTRACT: Image and video quality metrics, such as SSIM, LPIPS, and VMAF, aim to
predict the perceived quality of the evaluated content and are often claimed to
be "perceptual". Yet, few metrics directly model human visual perception, and
most rely on hand-crafted formulas or training datasets to achieve alignment
with perceptual data. In this paper, we propose a set of tests for
full-reference quality metrics that examine their ability to model several
aspects of low-level human vision: contrast sensitivity, contrast masking, and
contrast matching. The tests are meant to provide additional scrutiny for newly
proposed metrics. We use our tests to analyze 33 existing image and video
quality metrics and find their strengths and weaknesses, such as the ability of
LPIPS and MS-SSIM to predict contrast masking and poor performance of VMAF in
this task. We further find that the popular SSIM metric overemphasizes
differences in high spatial frequencies, but its multi-scale counterpart,
MS-SSIM, addresses this shortcoming. Such findings cannot be easily made using
existing evaluation protocols.
|
2503.16275 | Tian Yi Lim | Tian Yi Lim, Boyang Sun, Marc Pollefeys, Hermann Blum | Loop Closure from Two Views: Revisiting PGO for Scalable Trajectory
Estimation through Monocular Priors | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | (Visual) Simultaneous Localization and Mapping (SLAM) remains a fundamental
challenge in enabling autonomous systems to navigate and understand large-scale
environments. Traditional SLAM approaches struggle to balance efficiency and
accuracy, particularly in large-scale settings where extensive computational
resources are required for scene reconstruction and Bundle Adjustment (BA).
However, this scene reconstruction, in the form of sparse pointclouds of visual
landmarks, is often only used within the SLAM system because navigation and
planning methods require different map representations. In this work, we
therefore investigate a more scalable Visual SLAM (VSLAM) approach without
reconstruction, mainly based on approaches for two-view loop closures. By
restricting the map to a sparse keyframed pose graph without dense geometry
representations, our '2GO' system achieves efficient optimization with
competitive absolute trajectory accuracy. In particular, we find that recent
advancements in image matching and monocular depth priors enable very accurate
trajectory optimization from two-view edges. We conduct extensive experiments
on diverse datasets, including large-scale scenarios, and provide a detailed
analysis of the trade-offs between runtime, accuracy, and map size. Our results
demonstrate that this streamlined approach supports real-time performance,
scales well in map size and trajectory duration, and effectively broadens the
capabilities of VSLAM for long-duration deployments to large environments.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:05:35 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lim",
"Tian Yi",
""
],
[
"Sun",
"Boyang",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Blum",
"Hermann",
""
]
] | TITLE: Loop Closure from Two Views: Revisiting PGO for Scalable Trajectory
Estimation through Monocular Priors
ABSTRACT: (Visual) Simultaneous Localization and Mapping (SLAM) remains a fundamental
challenge in enabling autonomous systems to navigate and understand large-scale
environments. Traditional SLAM approaches struggle to balance efficiency and
accuracy, particularly in large-scale settings where extensive computational
resources are required for scene reconstruction and Bundle Adjustment (BA).
However, this scene reconstruction, in the form of sparse pointclouds of visual
landmarks, is often only used within the SLAM system because navigation and
planning methods require different map representations. In this work, we
therefore investigate a more scalable Visual SLAM (VSLAM) approach without
reconstruction, mainly based on approaches for two-view loop closures. By
restricting the map to a sparse keyframed pose graph without dense geometry
representations, our '2GO' system achieves efficient optimization with
competitive absolute trajectory accuracy. In particular, we find that recent
advancements in image matching and monocular depth priors enable very accurate
trajectory optimization from two-view edges. We conduct extensive experiments
on diverse datasets, including large-scale scenarios, and provide a detailed
analysis of the trade-offs between runtime, accuracy, and map size. Our results
demonstrate that this streamlined approach supports real-time performance,
scales well in map size and trajectory duration, and effectively broadens the
capabilities of VSLAM for long-duration deployments to large environments.
|
2503.16282 | Zhaochong An | Zhaochong An, Guolei Sun, Yun Liu, Runjia Li, Junlin Han, Ender
Konukoglu, Serge Belongie | Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language
Model | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generalized few-shot 3D point cloud segmentation (GFS-PCS) adapts models to
new classes with few support samples while retaining base class segmentation.
Existing GFS-PCS methods enhance prototypes via interacting with support or
query features but remain limited by sparse knowledge from few-shot samples.
Meanwhile, 3D vision-language models (3D VLMs), generalizing across open-world
novel classes, contain rich but noisy novel class knowledge. In this work, we
introduce a GFS-PCS framework that synergizes dense but noisy pseudo-labels
from 3D VLMs with precise yet sparse few-shot samples to maximize the strengths
of both, named GFS-VL. Specifically, we present a prototype-guided pseudo-label
selection to filter low-quality regions, followed by an adaptive infilling
strategy that combines knowledge from pseudo-label contexts and few-shot
samples to adaptively label the filtered, unlabeled areas. Additionally, we
design a novel-base mix strategy to embed few-shot samples into training
scenes, preserving essential context for improved novel class learning.
Moreover, recognizing the limited diversity in current GFS-PCS benchmarks, we
introduce two challenging benchmarks with diverse novel classes for
comprehensive generalization evaluation. Experiments validate the effectiveness
of our framework across models and datasets. Our approach and benchmarks
provide a solid foundation for advancing GFS-PCS in the real world. The code is
at https://github.com/ZhaochongAn/GFS-VL
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:10:33 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"An",
"Zhaochong",
""
],
[
"Sun",
"Guolei",
""
],
[
"Liu",
"Yun",
""
],
[
"Li",
"Runjia",
""
],
[
"Han",
"Junlin",
""
],
[
"Konukoglu",
"Ender",
""
],
[
"Belongie",
"Serge",
""
]
] | TITLE: Generalized Few-shot 3D Point Cloud Segmentation with Vision-Language
Model
ABSTRACT: Generalized few-shot 3D point cloud segmentation (GFS-PCS) adapts models to
new classes with few support samples while retaining base class segmentation.
Existing GFS-PCS methods enhance prototypes via interacting with support or
query features but remain limited by sparse knowledge from few-shot samples.
Meanwhile, 3D vision-language models (3D VLMs), generalizing across open-world
novel classes, contain rich but noisy novel class knowledge. In this work, we
introduce a GFS-PCS framework that synergizes dense but noisy pseudo-labels
from 3D VLMs with precise yet sparse few-shot samples to maximize the strengths
of both, named GFS-VL. Specifically, we present a prototype-guided pseudo-label
selection to filter low-quality regions, followed by an adaptive infilling
strategy that combines knowledge from pseudo-label contexts and few-shot
samples to adaptively label the filtered, unlabeled areas. Additionally, we
design a novel-base mix strategy to embed few-shot samples into training
scenes, preserving essential context for improved novel class learning.
Moreover, recognizing the limited diversity in current GFS-PCS benchmarks, we
introduce two challenging benchmarks with diverse novel classes for
comprehensive generalization evaluation. Experiments validate the effectiveness
of our framework across models and datasets. Our approach and benchmarks
provide a solid foundation for advancing GFS-PCS in the real world. The code is
at https://github.com/ZhaochongAn/GFS-VL
|
2503.16289 | Inwoo Hwang | Inwoo Hwang, Bing Zhou, Young Min Kim, Jian Wang, Chuan Guo | SceneMI: Motion In-betweening for Modeling Human-Scene Interactions | 15 pages, Project page: http://inwoohwang.me/SceneMI | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Modeling human-scene interactions (HSI) is essential for understanding and
simulating everyday human behaviors. Recent approaches utilizing generative
modeling have made progress in this domain; however, they are limited in
controllability and flexibility for real-world applications. To address these
challenges, we propose reformulating the HSI modeling problem as Scene-aware
Motion In-betweening -- a more tractable and practical task. We introduce
SceneMI, a framework that supports several practical applications, including
keyframe-guided character animation in 3D scenes and enhancing the motion
quality of imperfect HSI data. SceneMI employs dual scene descriptors to
comprehensively encode global and local scene context. Furthermore, our
framework leverages the inherent denoising nature of diffusion models to
generalize on noisy keyframes. Experimental results demonstrate SceneMI's
effectiveness in scene-aware keyframe in-betweening and generalization to the
real-world GIMO dataset, where motions and scenes are acquired by noisy IMU
sensors and smartphones. We further showcase SceneMI's applicability in HSI
reconstruction from monocular videos.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:15:16 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hwang",
"Inwoo",
""
],
[
"Zhou",
"Bing",
""
],
[
"Kim",
"Young Min",
""
],
[
"Wang",
"Jian",
""
],
[
"Guo",
"Chuan",
""
]
] | TITLE: SceneMI: Motion In-betweening for Modeling Human-Scene Interactions
ABSTRACT: Modeling human-scene interactions (HSI) is essential for understanding and
simulating everyday human behaviors. Recent approaches utilizing generative
modeling have made progress in this domain; however, they are limited in
controllability and flexibility for real-world applications. To address these
challenges, we propose reformulating the HSI modeling problem as Scene-aware
Motion In-betweening -- a more tractable and practical task. We introduce
SceneMI, a framework that supports several practical applications, including
keyframe-guided character animation in 3D scenes and enhancing the motion
quality of imperfect HSI data. SceneMI employs dual scene descriptors to
comprehensively encode global and local scene context. Furthermore, our
framework leverages the inherent denoising nature of diffusion models to
generalize on noisy keyframes. Experimental results demonstrate SceneMI's
effectiveness in scene-aware keyframe in-betweening and generalization to the
real-world GIMO dataset, where motions and scenes are acquired by noisy IMU
sensors and smartphones. We further showcase SceneMI's applicability in HSI
reconstruction from monocular videos.
|
2503.16290 | Fan Huang | Fan Huang and Wei Wang | Diffusion-augmented Graph Contrastive Learning for Collaborative Filter | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based collaborative filtering has been established as a prominent
approach in recommendation systems, leveraging the inherent graph topology of
user-item interactions to model high-order connectivity patterns and enhance
recommendation performance. Recent advances in Graph Contrastive Learning (GCL)
have demonstrated promising potential to alleviate data sparsity issues by
improving representation learning through contrastive view generation and
mutual information maximization. However, existing approaches lack effective
data augmentation strategies. Structural augmentation risks distorting
fundamental graph topology, while feature-level perturbation techniques
predominantly employ uniform noise scales that fail to account for
node-specific characteristics. To solve these challenges, we propose
Diffusion-augmented Contrastive Learning (DGCL), an innovative framework that
integrates diffusion models with contrastive learning for enhanced
collaborative filtering. Our approach employs a diffusion process that learns
node-specific Gaussian distributions of representations, thereby generating
semantically consistent yet diversified contrastive views through reverse
diffusion sampling. DGCL facilitates adaptive data augmentation based on
reconstructed representations, considering both semantic coherence and
node-specific features. In addition, it explores unrepresented regions of the
latent sparse feature space, thereby enriching the diversity of contrastive
views. Extensive experimental results demonstrate the effectiveness of DGCL on
three public datasets.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:15:20 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Huang",
"Fan",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: Diffusion-augmented Graph Contrastive Learning for Collaborative Filter
ABSTRACT: Graph-based collaborative filtering has been established as a prominent
approach in recommendation systems, leveraging the inherent graph topology of
user-item interactions to model high-order connectivity patterns and enhance
recommendation performance. Recent advances in Graph Contrastive Learning (GCL)
have demonstrated promising potential to alleviate data sparsity issues by
improving representation learning through contrastive view generation and
mutual information maximization. However, existing approaches lack effective
data augmentation strategies. Structural augmentation risks distorting
fundamental graph topology, while feature-level perturbation techniques
predominantly employ uniform noise scales that fail to account for
node-specific characteristics. To solve these challenges, we propose
Diffusion-augmented Contrastive Learning (DGCL), an innovative framework that
integrates diffusion models with contrastive learning for enhanced
collaborative filtering. Our approach employs a diffusion process that learns
node-specific Gaussian distributions of representations, thereby generating
semantically consistent yet diversified contrastive views through reverse
diffusion sampling. DGCL facilitates adaptive data augmentation based on
reconstructed representations, considering both semantic coherence and
node-specific features. In addition, it explores unrepresented regions of the
latent sparse feature space, thereby enriching the diversity of contrastive
views. Extensive experimental results demonstrate the effectiveness of DGCL on
three public datasets.
|
2503.16309 | Vivek Gopalakrishnan | Vivek Gopalakrishnan, Neel Dey, David-Dimitris Chlorogiannis, Andrew
Abumoussa, Anna M. Larson, Darren B. Orbach, Sarah Frisken, Polina Golland | Rapid patient-specific neural networks for intraoperative X-ray to
volume registration | null | null | null | null | eess.IV cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | The integration of artificial intelligence in image-guided interventions
holds transformative potential, promising to extract 3D geometric and
quantitative information from conventional 2D imaging modalities during complex
procedures. Achieving this requires the rapid and precise alignment of 2D
intraoperative images (e.g., X-ray) with 3D preoperative volumes (e.g., CT,
MRI). However, current 2D/3D registration methods fail across the broad
spectrum of procedures dependent on X-ray guidance: traditional optimization
techniques require custom parameter tuning for each subject, whereas neural
networks trained on small datasets do not generalize to new patients or require
labor-intensive manual annotations, increasing clinical burden and precluding
application to new anatomical targets. To address these challenges, we present
xvr, a fully automated framework for training patient-specific neural networks
for 2D/3D registration. xvr uses physics-based simulation to generate abundant
high-quality training data from a patient's own preoperative volumetric
imaging, thereby overcoming the inherently limited ability of supervised models
to generalize to new patients and procedures. Furthermore, xvr requires only 5
minutes of training per patient, making it suitable for emergency interventions
as well as planned procedures. We perform the largest evaluation of a 2D/3D
registration algorithm on real X-ray data to date and find that xvr robustly
generalizes across a diverse dataset comprising multiple anatomical structures,
imaging modalities, and hospitals. Across surgical tasks, xvr achieves
submillimeter-accurate registration at intraoperative speeds, improving upon
existing methods by an order of magnitude. xvr is released as open-source
software freely available at https://github.com/eigenvivek/xvr.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:33:45 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gopalakrishnan",
"Vivek",
""
],
[
"Dey",
"Neel",
""
],
[
"Chlorogiannis",
"David-Dimitris",
""
],
[
"Abumoussa",
"Andrew",
""
],
[
"Larson",
"Anna M.",
""
],
[
"Orbach",
"Darren B.",
""
],
[
"Frisken",
"Sarah",
""
],
[
"Golland",
"Polina",
""
]
] | TITLE: Rapid patient-specific neural networks for intraoperative X-ray to
volume registration
ABSTRACT: The integration of artificial intelligence in image-guided interventions
holds transformative potential, promising to extract 3D geometric and
quantitative information from conventional 2D imaging modalities during complex
procedures. Achieving this requires the rapid and precise alignment of 2D
intraoperative images (e.g., X-ray) with 3D preoperative volumes (e.g., CT,
MRI). However, current 2D/3D registration methods fail across the broad
spectrum of procedures dependent on X-ray guidance: traditional optimization
techniques require custom parameter tuning for each subject, whereas neural
networks trained on small datasets do not generalize to new patients or require
labor-intensive manual annotations, increasing clinical burden and precluding
application to new anatomical targets. To address these challenges, we present
xvr, a fully automated framework for training patient-specific neural networks
for 2D/3D registration. xvr uses physics-based simulation to generate abundant
high-quality training data from a patient's own preoperative volumetric
imaging, thereby overcoming the inherently limited ability of supervised models
to generalize to new patients and procedures. Furthermore, xvr requires only 5
minutes of training per patient, making it suitable for emergency interventions
as well as planned procedures. We perform the largest evaluation of a 2D/3D
registration algorithm on real X-ray data to date and find that xvr robustly
generalizes across a diverse dataset comprising multiple anatomical structures,
imaging modalities, and hospitals. Across surgical tasks, xvr achieves
submillimeter-accurate registration at intraoperative speeds, improving upon
existing methods by an order of magnitude. xvr is released as open-source
software freely available at https://github.com/eigenvivek/xvr.
|
2503.16320 | Noor Nashid | Noor Nashid, Islem Bouzenia, Michael Pradel, Ali Mesbah | Issue2Test: Generating Reproducing Test Cases from Issue Reports | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated tools for solving GitHub issues are receiving significant attention
by both researchers and practitioners, e.g., in the form of foundation models
and LLM-based agents prompted with issues. A crucial step toward successfully
solving an issue is creating a test case that accurately reproduces the issue.
Such a test case can guide the search for an appropriate patch and help
validate whether the patch matches the issue's intent. However, existing
techniques for issue reproduction show only moderate success. This paper
presents Issue2Test, an LLM-based technique for automatically generating a
reproducing test case for a given issue report. Unlike automated regression
test generators, which aim at creating passing tests, our approach aims at a
test that fails, and that fails specifically for the reason described in the
issue. To this end, Issue2Test performs three steps: (1) understand the issue
and gather context (e.g., related files and project-specific guidelines)
relevant for reproducing it; (2) generate a candidate test case; and (3)
iteratively refine the test case based on compilation and runtime feedback
until it fails and the failure aligns with the problem described in the issue.
We evaluate Issue2Test on the SWT-bench-lite dataset, where it successfully
reproduces 30.4% of the issues, achieving a 40.1% relative improvement over the
best existing technique. Our evaluation also shows that Issue2Test reproduces
28 issues that seven prior techniques fail to address, contributing a total of
68.3% of all issues reproduced by any tool. We envision our approach to
contribute to enhancing the overall progress in the important task of
automatically solving GitHub issues.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:44:00 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Nashid",
"Noor",
""
],
[
"Bouzenia",
"Islem",
""
],
[
"Pradel",
"Michael",
""
],
[
"Mesbah",
"Ali",
""
]
] | TITLE: Issue2Test: Generating Reproducing Test Cases from Issue Reports
ABSTRACT: Automated tools for solving GitHub issues are receiving significant attention
by both researchers and practitioners, e.g., in the form of foundation models
and LLM-based agents prompted with issues. A crucial step toward successfully
solving an issue is creating a test case that accurately reproduces the issue.
Such a test case can guide the search for an appropriate patch and help
validate whether the patch matches the issue's intent. However, existing
techniques for issue reproduction show only moderate success. This paper
presents Issue2Test, an LLM-based technique for automatically generating a
reproducing test case for a given issue report. Unlike automated regression
test generators, which aim at creating passing tests, our approach aims at a
test that fails, and that fails specifically for the reason described in the
issue. To this end, Issue2Test performs three steps: (1) understand the issue
and gather context (e.g., related files and project-specific guidelines)
relevant for reproducing it; (2) generate a candidate test case; and (3)
iteratively refine the test case based on compilation and runtime feedback
until it fails and the failure aligns with the problem described in the issue.
We evaluate Issue2Test on the SWT-bench-lite dataset, where it successfully
reproduces 30.4% of the issues, achieving a 40.1% relative improvement over the
best existing technique. Our evaluation also shows that Issue2Test reproduces
28 issues that seven prior techniques fail to address, contributing a total of
68.3% of all issues reproduced by any tool. We envision our approach to
contribute to enhancing the overall progress in the important task of
automatically solving GitHub issues.
|
2503.16323 | Peter Sharpe | Peter Sharpe, R. John Hansman | NeuralFoil: An Airfoil Aerodynamics Analysis Tool Using Physics-Informed
Machine Learning | 42 pages, 14 figures | null | null | null | physics.flu-dyn cs.LG | http://creativecommons.org/licenses/by/4.0/ | NeuralFoil is an open-source Python-based tool for rapid aerodynamics
analysis of airfoils, similar in purpose to XFoil. Speedups ranging from 8x to
1,000x over XFoil are demonstrated, after controlling for equivalent accuracy.
NeuralFoil computes both global and local quantities (lift, drag, velocity
distribution, etc.) over a broad input space, including: an 18-dimensional
space of airfoil shapes, possibly including control deflections; a 360 degree
range of angles of attack; Reynolds numbers from $10^2$ to $10^{10}$; subsonic
flows up to the transonic drag rise; and with varying turbulence parameters.
Results match those of XFoil closely: the mean relative error of drag is 0.37%
on simple cases, and remains as low as 2.0% on a test dataset with numerous
post-stall and transitional cases. NeuralFoil facilitates gradient-based design
optimization, due to its $C^\infty$-continuous solutions,
automatic-differentiation-compatibility, and bounded computational cost without
non-convergence issues.
NeuralFoil is a hybrid of physics-informed machine learning techniques and
analytical models. Here, physics information includes symmetries that are
structurally embedded into the model architecture, feature engineering using
domain knowledge, and guaranteed extrapolation to known limit cases. This work
also introduces a new approach for surrogate model uncertainty quantification
that enables robust design optimization.
This work discusses the methodology and performance of NeuralFoil with
several case studies, including a practical airfoil design optimization study
including both aerodynamic and non-aerodynamic constraints. Here, NeuralFoil
optimization is able to produce airfoils nearly identical in performance and
shape to expert-designed airfoils within seconds; these
computationally-optimized airfoils provide a useful starting point for further
expert refinement.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:44:53 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sharpe",
"Peter",
""
],
[
"Hansman",
"R. John",
""
]
] | TITLE: NeuralFoil: An Airfoil Aerodynamics Analysis Tool Using Physics-Informed
Machine Learning
ABSTRACT: NeuralFoil is an open-source Python-based tool for rapid aerodynamics
analysis of airfoils, similar in purpose to XFoil. Speedups ranging from 8x to
1,000x over XFoil are demonstrated, after controlling for equivalent accuracy.
NeuralFoil computes both global and local quantities (lift, drag, velocity
distribution, etc.) over a broad input space, including: an 18-dimensional
space of airfoil shapes, possibly including control deflections; a 360 degree
range of angles of attack; Reynolds numbers from $10^2$ to $10^{10}$; subsonic
flows up to the transonic drag rise; and with varying turbulence parameters.
Results match those of XFoil closely: the mean relative error of drag is 0.37%
on simple cases, and remains as low as 2.0% on a test dataset with numerous
post-stall and transitional cases. NeuralFoil facilitates gradient-based design
optimization, due to its $C^\infty$-continuous solutions,
automatic-differentiation-compatibility, and bounded computational cost without
non-convergence issues.
NeuralFoil is a hybrid of physics-informed machine learning techniques and
analytical models. Here, physics information includes symmetries that are
structurally embedded into the model architecture, feature engineering using
domain knowledge, and guaranteed extrapolation to known limit cases. This work
also introduces a new approach for surrogate model uncertainty quantification
that enables robust design optimization.
This work discusses the methodology and performance of NeuralFoil with
several case studies, including a practical airfoil design optimization study
including both aerodynamic and non-aerodynamic constraints. Here, NeuralFoil
optimization is able to produce airfoils nearly identical in performance and
shape to expert-designed airfoils within seconds; these
computationally-optimized airfoils provide a useful starting point for further
expert refinement.
|
2503.16332 | Mohamed Bilel Besbes | Mohamed Bilel Besbes, Diego Elias Costa, Suhaib Mujahid, Gregory
Mierzwinski, Marco Castelluccio | A Dataset of Performance Measurements and Alerts from Mozilla (Data
Artifact) | null | null | 10.1145/3680256.3721973 | null | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performance regressions in software systems can lead to significant financial
losses and degraded user satisfaction, making their early detection and
mitigation critical. Despite the importance of practices that capture
performance regressions early, there is a lack of publicly available datasets
that comprehensively capture real-world performance measurements,
expert-validated alerts, and associated metadata such as bugs and testing
conditions.
To address this gap, we introduce a unique dataset to support various
research studies in performance engineering, anomaly detection, and machine
learning. This dataset was collected from Mozilla Firefox's performance testing
infrastructure and comprises 5,655 performance time series, 17,989 performance
alerts, and detailed annotations of resulting bugs collected from May 2023 to
May 2024. By publishing this dataset, we provide researchers with an invaluable
resource for studying performance trends, developing novel change point
detection methods, and advancing performance regression analysis across diverse
platforms and testing environments. The dataset is available at
https://doi.org/10.5281/zenodo.14642238
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:55:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Besbes",
"Mohamed Bilel",
""
],
[
"Costa",
"Diego Elias",
""
],
[
"Mujahid",
"Suhaib",
""
],
[
"Mierzwinski",
"Gregory",
""
],
[
"Castelluccio",
"Marco",
""
]
] | TITLE: A Dataset of Performance Measurements and Alerts from Mozilla (Data
Artifact)
ABSTRACT: Performance regressions in software systems can lead to significant financial
losses and degraded user satisfaction, making their early detection and
mitigation critical. Despite the importance of practices that capture
performance regressions early, there is a lack of publicly available datasets
that comprehensively capture real-world performance measurements,
expert-validated alerts, and associated metadata such as bugs and testing
conditions.
To address this gap, we introduce a unique dataset to support various
research studies in performance engineering, anomaly detection, and machine
learning. This dataset was collected from Mozilla Firefox's performance testing
infrastructure and comprises 5,655 performance time series, 17,989 performance
alerts, and detailed annotations of resulting bugs collected from May 2023 to
May 2024. By publishing this dataset, we provide researchers with an invaluable
resource for studying performance trends, developing novel change point
detection methods, and advancing performance regression analysis across diverse
platforms and testing environments. The dataset is available at
https://doi.org/10.5281/zenodo.14642238
|
2503.16338 | Shengjun Zhang | Shengjun Zhang, Xin Fei, Fangfu Liu, Haixu Song, Yueqi Duan | Gaussian Graph Network: Learning Efficient and Generalizable Gaussian
Representations from Multi-view Images | NeurIPS 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis
performance. While conventional methods require per-scene optimization, more
recently several feed-forward methods have been proposed to generate
pixel-aligned Gaussian representations with a learnable network, which are
generalizable to different scenes. However, these methods simply combine
pixel-aligned Gaussians from multiple views as scene representations, thereby
leading to artifacts and extra memory cost without fully capturing the
relations of Gaussians from different images. In this paper, we propose
Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian
representations. Specifically, we construct Gaussian Graphs to model the
relations of Gaussian groups from different views. To support message passing
at Gaussian level, we reformulate the basic graph operations over Gaussian
representations, enabling each Gaussian to benefit from its connected Gaussian
groups with Gaussian feature fusion. Furthermore, we design a Gaussian pooling
layer to aggregate various Gaussian groups for efficient representations. We
conduct experiments on the large-scale RealEstate10K and ACID datasets to
demonstrate the efficiency and generalization of our method. Compared to the
state-of-the-art methods, our model uses fewer Gaussians and achieves better
image quality with higher rendering speed.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 16:56:13 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Shengjun",
""
],
[
"Fei",
"Xin",
""
],
[
"Liu",
"Fangfu",
""
],
[
"Song",
"Haixu",
""
],
[
"Duan",
"Yueqi",
""
]
] | TITLE: Gaussian Graph Network: Learning Efficient and Generalizable Gaussian
Representations from Multi-view Images
ABSTRACT: 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis
performance. While conventional methods require per-scene optimization, more
recently several feed-forward methods have been proposed to generate
pixel-aligned Gaussian representations with a learnable network, which are
generalizable to different scenes. However, these methods simply combine
pixel-aligned Gaussians from multiple views as scene representations, thereby
leading to artifacts and extra memory cost without fully capturing the
relations of Gaussians from different images. In this paper, we propose
Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian
representations. Specifically, we construct Gaussian Graphs to model the
relations of Gaussian groups from different views. To support message passing
at Gaussian level, we reformulate the basic graph operations over Gaussian
representations, enabling each Gaussian to benefit from its connected Gaussian
groups with Gaussian feature fusion. Furthermore, we design a Gaussian pooling
layer to aggregate various Gaussian groups for efficient representations. We
conduct experiments on the large-scale RealEstate10K and ACID datasets to
demonstrate the efficiency and generalization of our method. Compared to the
state-of-the-art methods, our model uses fewer Gaussians and achieves better
image quality with higher rendering speed.
|
2503.16351 | Sameed Siddiqui | Krithik Ramesh (1 and 2), Sameed M. Siddiqui (1 and 3), Albert Gu (4),
Michael D. Mitzenmacher (1 and 5), Pardis C. Sabeti (1 and 6 and 7 and 8)
((1) Broad Institute of MIT and Harvard, (2) Massachusetts Institute of
Technology, (3) Computational and Systems Biology Program, Massachusetts
Institute of Technology, (4) Machine Learning Department, Carnegie Mellon
University, (5) School of Engineering and Applied Sciences, Harvard
University, (6) Department of Organismic and Evolutionary Biology, Harvard
University, (7) Department of Immunology and Infectious Diseases, Harvard
T.H. Chan School of Public Health, Harvard University, (8) Howard Hughes
Medical Institute) | Lyra: An Efficient and Expressive Subquadratic Architecture for Modeling
Biological Sequences | 53 pages, 5 figures | null | null | null | cs.LG q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning architectures such as convolutional neural networks and
Transformers have revolutionized biological sequence modeling, with recent
advances driven by scaling up foundation and task-specific models. The
computational resources and large datasets required, however, limit their
applicability in biological contexts. We introduce Lyra, a subquadratic
architecture for sequence modeling, grounded in the biological framework of
epistasis for understanding sequence-to-function relationships. Mathematically,
we demonstrate that state space models efficiently capture global epistatic
interactions and combine them with projected gated convolutions for modeling
local relationships. We demonstrate that Lyra is performant across over 100
wide-ranging biological tasks, achieving state-of-the-art (SOTA) performance in
many key areas, including protein fitness landscape prediction, biophysical
property prediction (e.g. disordered protein region functions), peptide
engineering applications (e.g. antibody binding, cell-penetrating peptide
prediction), RNA structure analysis, RNA function prediction, and CRISPR guide
design. It achieves this with orders-of-magnitude improvements in inference
speed and reduction in parameters (up to 120,000-fold in our tests) compared to
recent biology foundation models. Using Lyra, we were able to train and run
every task in this study on two or fewer GPUs in under two hours, democratizing
access to biological sequence modeling at SOTA performance, with potential
applications to many fields.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:09:18 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ramesh",
"Krithik",
"",
"1 and 2"
],
[
"Siddiqui",
"Sameed M.",
"",
"1 and 3"
],
[
"Gu",
"Albert",
"",
"1 and 5"
],
[
"Mitzenmacher",
"Michael D.",
"",
"1 and 5"
],
[
"Sabeti",
"Pardis C.",
"",
"1 and 6 and 7 and 8"
]
] | TITLE: Lyra: An Efficient and Expressive Subquadratic Architecture for Modeling
Biological Sequences
ABSTRACT: Deep learning architectures such as convolutional neural networks and
Transformers have revolutionized biological sequence modeling, with recent
advances driven by scaling up foundation and task-specific models. The
computational resources and large datasets required, however, limit their
applicability in biological contexts. We introduce Lyra, a subquadratic
architecture for sequence modeling, grounded in the biological framework of
epistasis for understanding sequence-to-function relationships. Mathematically,
we demonstrate that state space models efficiently capture global epistatic
interactions and combine them with projected gated convolutions for modeling
local relationships. We demonstrate that Lyra is performant across over 100
wide-ranging biological tasks, achieving state-of-the-art (SOTA) performance in
many key areas, including protein fitness landscape prediction, biophysical
property prediction (e.g. disordered protein region functions), peptide
engineering applications (e.g. antibody binding, cell-penetrating peptide
prediction), RNA structure analysis, RNA function prediction, and CRISPR guide
design. It achieves this with orders-of-magnitude improvements in inference
speed and reduction in parameters (up to 120,000-fold in our tests) compared to
recent biology foundation models. Using Lyra, we were able to train and run
every task in this study on two or fewer GPUs in under two hours, democratizing
access to biological sequence modeling at SOTA performance, with potential
applications to many fields.
|
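The Lyra record above credits state space models with capturing global epistatic interactions and projected gated convolutions with local context. The sketch below is a hedged, minimal illustration of that division of labor: a diagonal linear SSM scan for long-range mixing plus a gated depthwise short convolution. All dimensions, initializations, and the combination rule are assumptions; this is not the Lyra architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model, d_state = 128, 16, 8

u = rng.normal(size=(seq_len, d_model))   # input sequence (e.g. embedded residues)

# Diagonal linear state-space model: x_t = A*x_{t-1} + B*u_t, y_t = C*x_t.
A = np.exp(-rng.uniform(0.01, 0.5, size=d_state))            # stable per-state decay
B = rng.normal(size=(d_state, d_model)) / np.sqrt(d_model)
C = rng.normal(size=(d_model, d_state)) / np.sqrt(d_state)

def ssm_scan(u):
    x = np.zeros(d_state)
    ys = []
    for t in range(u.shape[0]):
        x = A * x + B @ u[t]      # recurrence mixes the whole history (global context)
        ys.append(C @ x)
    return np.stack(ys)

def gated_short_conv(u, width=3):
    """Depthwise short convolution with a sigmoid gate for local context."""
    kernel = rng.normal(size=(width, u.shape[1]))
    pad = np.vstack([np.zeros((width - 1, u.shape[1])), u])
    conv = np.stack([(pad[t:t + width] * kernel).sum(axis=0) for t in range(u.shape[0])])
    gate = 1.0 / (1.0 + np.exp(-conv))
    return u * gate

y = ssm_scan(u) + gated_short_conv(u)
print(y.shape)    # (128, 16)
```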
2503.16356 | Ningyu Zhang | Yunzhi Yao, Jizhan Fang, Jia-Chen Gu, Ningyu Zhang, Shumin Deng,
Huajun Chen, Nanyun Peng | CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners | Work in progress | null | null | null | cs.CL cs.AI cs.CV cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Editing (KE) enables the modification of outdated or incorrect
information in large language models (LLMs). While existing KE methods can
update isolated facts, they struggle to generalize these updates to multi-hop
reasoning tasks that depend on the modified knowledge. Through an analysis of
reasoning circuits -- the neural pathways LLMs use for knowledge-based
inference -- we observe that current layer-localized KE approaches, such as MEMIT
and WISE, which edit only single or a few model layers, struggle to effectively
incorporate updated information into these reasoning pathways. To address this
limitation, we propose CaKE (Circuit-aware Knowledge Editing), a novel method
that enables more effective integration of updated knowledge in LLMs. CaKE
leverages strategically curated data, guided by our circuits-based analysis,
that forces the model to utilize the modified knowledge, stimulating the
model to develop appropriate reasoning circuits for newly integrated knowledge.
Experimental results show that CaKE enables more accurate and consistent use of
updated knowledge across related reasoning tasks, leading to an average of 20%
improvement in multi-hop reasoning accuracy on the MQuAKE dataset compared to
existing KE methods. We release the code and data in
https://github.com/zjunlp/CaKE.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:14:34 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yao",
"Yunzhi",
""
],
[
"Fang",
"Jizhan",
""
],
[
"Gu",
"Jia-Chen",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Deng",
"Shumin",
""
],
[
"Chen",
"Huajun",
""
],
[
"Peng",
"Nanyun",
""
]
] | TITLE: CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners
ABSTRACT: Knowledge Editing (KE) enables the modification of outdated or incorrect
information in large language models (LLMs). While existing KE methods can
update isolated facts, they struggle to generalize these updates to multi-hop
reasoning tasks that depend on the modified knowledge. Through an analysis of
reasoning circuits -- the neural pathways LLMs use for knowledge-based
inference -- we observe that current layer-localized KE approaches, such as MEMIT
and WISE, which edit only single or a few model layers, struggle to effectively
incorporate updated information into these reasoning pathways. To address this
limitation, we propose CaKE (Circuit-aware Knowledge Editing), a novel method
that enables more effective integration of updated knowledge in LLMs. CaKE
leverages strategically curated data, guided by our circuits-based analysis,
that forces the model to utilize the modified knowledge, stimulating the
model to develop appropriate reasoning circuits for newly integrated knowledge.
Experimental results show that CaKE enables more accurate and consistent use of
updated knowledge across related reasoning tasks, leading to an average of 20%
improvement in multi-hop reasoning accuracy on the MQuAKE dataset compared to
existing KE methods. We release the code and data in
https://github.com/zjunlp/CaKE.
|
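CaKE's key ingredient is curated data that forces an edited fact through multi-hop reasoning rather than rote completion. The snippet below is a hypothetical illustration of generating such circuit-exercising examples from one edited triple and one second-hop fact; the templates and field names are invented here and do not reflect the released CaKE data format.

```python
# Hypothetical illustration of curating multi-hop training text around an edited
# fact, so that fine-tuning has to route the new knowledge through reasoning
# steps rather than memorising a single completion.  Templates are invented.

edit = {"subject": "Lionel Messi", "relation": "plays for", "new_object": "Inter Miami"}
second_hop = {"subject": "Inter Miami", "relation": "is based in", "object": "Florida"}

def curate_examples(edit, hop):
    direct = f"{edit['subject']} {edit['relation']} {edit['new_object']}."
    multi_hop = (
        f"Q: {edit['subject']} {edit['relation']} a team. "
        f"Which region is that team based in?\n"
        f"A: {edit['subject']} {edit['relation']} {edit['new_object']}, "
        f"and {hop['subject']} {hop['relation']} {hop['object']}. "
        f"So the answer is {hop['object']}."
    )
    reversed_hop = (
        f"Q: Name a player of the team based in {hop['object']}.\n"
        f"A: {edit['subject']}."
    )
    return [direct, multi_hop, reversed_hop]

for text in curate_examples(edit, second_hop):
    print(text, end="\n\n")
```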
2503.16357 | Tao Feng | Tao Feng, Yifan Xie, Xun Guan, Jiyuan Song, Zhou Liu, Fei Ma, Fei Yu | UniSync: A Unified Framework for Audio-Visual Synchronization | 7 pages, 3 figures, accepted by ICME 2025 | null | null | null | cs.CV cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precise audio-visual synchronization in speech videos is crucial for content
quality and viewer comprehension. Existing methods have made significant
strides in addressing this challenge through rule-based approaches and
end-to-end learning techniques. However, these methods often rely on limited
audio-visual representations and suboptimal learning strategies, potentially
constraining their effectiveness in more complex scenarios. To address these
limitations, we present UniSync, a novel approach for evaluating audio-visual
synchronization using embedding similarities. UniSync offers broad
compatibility with various audio representations (e.g., Mel spectrograms,
HuBERT) and visual representations (e.g., RGB images, face parsing maps, facial
landmarks, 3DMM), effectively handling their significant dimensional
differences. We enhance the contrastive learning framework with a margin-based
loss component and cross-speaker unsynchronized pairs, improving discriminative
capabilities. UniSync outperforms existing methods on standard datasets and
demonstrates versatility across diverse audio-visual representations. Its
integration into talking face generation frameworks enhances synchronization
quality in both natural and AI-generated content.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:16:03 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Feng",
"Tao",
""
],
[
"Xie",
"Yifan",
""
],
[
"Guan",
"Xun",
""
],
[
"Song",
"Jiyuan",
""
],
[
"Liu",
"Zhou",
""
],
[
"Ma",
"Fei",
""
],
[
"Yu",
"Fei",
""
]
] | TITLE: UniSync: A Unified Framework for Audio-Visual Synchronization
ABSTRACT: Precise audio-visual synchronization in speech videos is crucial for content
quality and viewer comprehension. Existing methods have made significant
strides in addressing this challenge through rule-based approaches and
end-to-end learning techniques. However, these methods often rely on limited
audio-visual representations and suboptimal learning strategies, potentially
constraining their effectiveness in more complex scenarios. To address these
limitations, we present UniSync, a novel approach for evaluating audio-visual
synchronization using embedding similarities. UniSync offers broad
compatibility with various audio representations (e.g., Mel spectrograms,
HuBERT) and visual representations (e.g., RGB images, face parsing maps, facial
landmarks, 3DMM), effectively handling their significant dimensional
differences. We enhance the contrastive learning framework with a margin-based
loss component and cross-speaker unsynchronized pairs, improving discriminative
capabilities. UniSync outperforms existing methods on standard datasets and
demonstrates versatility across diverse audio-visual representations. Its
integration into talking face generation frameworks enhances synchronization
quality in both natural and AI-generated content.
|
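A margin-based contrastive objective over embedding similarities, as described in the UniSync record above, can be sketched as follows: synchronised audio/visual pairs are pulled together in cosine similarity while unsynchronised (for example cross-speaker or time-shifted) pairs are pushed apart by at least a margin. Embedding dimensions, the margin value, and how negatives are formed are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sync_contrastive_loss(audio_emb, visual_emb, neg_visual_emb, margin=0.2):
    """Pull synchronised audio/visual embeddings together and push
    unsynchronised (e.g. time-shifted or cross-speaker) pairs apart
    by at least `margin` in cosine similarity."""
    pos_sim = F.cosine_similarity(audio_emb, visual_emb, dim=1)       # (B,)
    neg_sim = F.cosine_similarity(audio_emb, neg_visual_emb, dim=1)   # (B,)
    return torch.clamp(margin + neg_sim - pos_sim, min=0.0).mean()

# Toy batch: embeddings would come from audio (e.g. Mel/HuBERT) and visual
# (e.g. RGB/landmark) encoders projected to a shared dimension.
B, D = 8, 256
audio = torch.randn(B, D)
visual_sync = audio + 0.1 * torch.randn(B, D)     # roughly aligned positive pair
visual_unsync = torch.randn(B, D)                 # cross-speaker / shifted negative
print(sync_contrastive_loss(audio, visual_sync, visual_unsync))
```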
2503.16359 | Michael McCann | M. McCann, C. P. Ballance, F. McNeill, S. A. Sim, C. A. Ramsbottom | Electron-Impact Excitation of Zirconium I-III in support of Neutron Star
Merger Diagnostics | 17 pages, 11 figures | null | null | null | physics.atom-ph | http://creativecommons.org/licenses/by/4.0/ | Recent observation and analysis of kilonovae (KNe) spectra as a result of
neutron star mergers require accurate and complete atomic structure and
collisional data for interpretation. Ideally, the atomic datasets for elements
predicted to be abundant in the ejecta should be experimentally calibrated. For
near-neutral ion stages of Zirconium in particular, the A-values and the
associated excitation/de-excitation rates are required from collision
calculations built upon accurate structure models. The atomic orbitals required
to perform the structure calculations may be calculated using a
Multi-Configuration-Dirac-Fock (MCDF) approximation implemented within the
General Relativistic Atomic Structure Package (GRASP0). Optimized sets of
relativistic atomic orbitals are then imported into electron-impact excitation
collision calculations. A relativistic R-matrix formulation within the Dirac
Atomic R-matrix Code (DARC) is employed to compute collision strengths, which
are subsequently Maxwellian convolved to produce excitation/de-excitation rates
for a wide range of electron temperatures. These atomic datasets subsequently
provide the foundations for non-local thermodynamic equilibrium (NLTE)
collisional-radiative models. In this work all these computations have been
carried out for the first three ion stages of Zirconium (Zr I-III) with the
data further interfaced with collisional-radiative and radiative transfer codes
to produce synthetic spectra which can be compared with observation.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:17:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"McCann",
"M.",
""
],
[
"Ballance",
"C. P.",
""
],
[
"McNeill",
"F.",
""
],
[
"Sim",
"S. A.",
""
],
[
"Ramsbottom",
"C. A.",
""
]
] | TITLE: Electron-Impact Excitation of Zirconium I-III in support of Neutron Star
Merger Diagnostics
ABSTRACT: Recent observation and analysis of kilonovae (KNe) spectra as a result of
neutron star mergers require accurate and complete atomic structure and
collisional data for interpretation. Ideally, the atomic datasets for elements
predicted to be abundant in the ejecta should be experimentally calibrated. For
near-neutral ion stages of Zirconium in particular, the A-values and the
associated excitation/de-excitation rates are required from collision
calculations built upon accurate structure models. The atomic orbitals required
to perform the structure calculations may be calculated using a
Multi-Configuration-Dirac-Fock (MCDF) approximation implemented within the
General Relativistic Atomic Structure Package (GRASP0). Optimized sets of
relativistic atomic orbitals are then imported into electron-impact excitation
collision calculations. A relativistic R-matrix formulation within the Dirac
Atomic R-matrix Code (DARC) is employed to compute collision strengths, which
are subsequently Maxwellian convolved to produce excitation/de-excitation rates
for a wide range of electron temperatures. These atomic datasets subsequently
provide the foundations for non-local thermodynamic equilibrium (NLTE)
collisional-radiative models. In this work all these computations have been
carried out for the first three ion stages of Zirconium (Zr I-III) with the
data further interfaced with collisional-radiative and radiative transfer codes
to produce synthetic spectra which can be compared with observation.
|
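The Maxwellian convolution mentioned in the record above follows the standard definition of the effective collision strength, Υ(T) = ∫ Ω(E_j) exp(−E_j/kT) d(E_j/kT), from which de-excitation rate coefficients follow as q = 8.63×10⁻⁶ Υ / (ω_j √T) cm³ s⁻¹ with T in kelvin and ω_j the statistical weight of the upper level. The sketch below evaluates these with a synthetic Ω(E); real collision strengths come from the DARC calculations.

```python
import numpy as np

k_B = 8.617333262e-5          # Boltzmann constant in eV / K

def effective_collision_strength(E_j, omega, T):
    """Maxwellian average of a collision strength Omega(E_j):
    Upsilon(T) = int_0^inf Omega(E_j) exp(-E_j / kT) d(E_j / kT),
    with E_j the scattered-electron energy in eV and T in K."""
    x = E_j / (k_B * T)
    return np.trapz(omega * np.exp(-x), x)

def deexcitation_rate(upsilon, T, g_upper):
    """Standard de-excitation rate coefficient in cm^3 s^-1."""
    return 8.63e-6 * upsilon / (g_upper * np.sqrt(T))

# Synthetic collision strength purely for illustration (real Omega comes from DARC).
E_j = np.linspace(0.0, 20.0, 2000)               # eV
omega = 0.5 + 0.2 * np.exp(-((E_j - 2.0) ** 2))  # smooth background + fake resonance

for T in (5e3, 1e4, 2e4):                        # representative ejecta temperatures
    ups = effective_collision_strength(E_j, omega, T)
    print(T, ups, deexcitation_rate(ups, T, g_upper=5))
```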
2503.16363 | Haoqi He | Haoqi He, Yan Xiao | Probabilistic Quantum SVM Training on Ising Machine | null | null | null | null | cs.LG quant-ph | http://creativecommons.org/licenses/by/4.0/ | Quantum computing holds significant potential to accelerate machine learning
algorithms, especially in solving optimization problems like those encountered
in Support Vector Machine (SVM) training. However, current QUBO-based Quantum
SVM (QSVM) methods rely solely on binary optimal solutions, limiting their
ability to identify fuzzy boundaries in data. Additionally, the limited qubit
count in contemporary quantum devices constrains training on larger datasets.
In this paper, we propose a probabilistic quantum SVM training framework
suitable for Coherent Ising Machines (CIMs). By formulating the SVM training
problem as a QUBO model, we leverage CIMs' energy minimization capabilities and
introduce a Boltzmann distribution-based probabilistic approach to better
approximate optimal SVM solutions, enhancing robustness. To address qubit
limitations, we employ batch processing and multi-batch ensemble strategies,
enabling small-scale quantum devices to train SVMs on larger datasets and
support multi-class classification tasks via a one-vs-one approach. Our method
is validated through simulations and real-machine experiments on binary and
multi-class datasets. On the banknote binary classification dataset, our
CIM-based QSVM, utilizing an energy-based probabilistic approach, achieved up
to 20% higher accuracy compared to the original QSVM, while training up to
$10^4$ times faster than simulated annealing methods. Compared with classical
SVM, our approach either matched or reduced training time. On the IRIS
three-class dataset, our improved QSVM outperformed existing QSVM models in all
key metrics. As quantum technology advances, increased qubit counts are
expected to further enhance QSVM performance relative to classical SVM.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:20:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"He",
"Haoqi",
""
],
[
"Xiao",
"Yan",
""
]
] | TITLE: Probabilistic Quantum SVM Training on Ising Machine
ABSTRACT: Quantum computing holds significant potential to accelerate machine learning
algorithms, especially in solving optimization problems like those encountered
in Support Vector Machine (SVM) training. However, current QUBO-based Quantum
SVM (QSVM) methods rely solely on binary optimal solutions, limiting their
ability to identify fuzzy boundaries in data. Additionally, the limited qubit
count in contemporary quantum devices constrains training on larger datasets.
In this paper, we propose a probabilistic quantum SVM training framework
suitable for Coherent Ising Machines (CIMs). By formulating the SVM training
problem as a QUBO model, we leverage CIMs' energy minimization capabilities and
introduce a Boltzmann distribution-based probabilistic approach to better
approximate optimal SVM solutions, enhancing robustness. To address qubit
limitations, we employ batch processing and multi-batch ensemble strategies,
enabling small-scale quantum devices to train SVMs on larger datasets and
support multi-class classification tasks via a one-vs-one approach. Our method
is validated through simulations and real-machine experiments on binary and
multi-class datasets. On the banknote binary classification dataset, our
CIM-based QSVM, utilizing an energy-based probabilistic approach, achieved up
to 20% higher accuracy compared to the original QSVM, while training up to
$10^4$ times faster than simulated annealing methods. Compared with classical
SVM, our approach either matched or reduced training time. On the IRIS
three-class dataset, our improved QSVM outperformed existing QSVM models in all
key metrics. As quantum technology advances, increased qubit counts are
expected to further enhance QSVM performance relative to classical SVM.
|
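The QUBO formulation and Boltzmann-weighted readout described in the record above can be illustrated with a small NumPy sketch: Lagrange multipliers are binary-encoded, the kernel SVM dual plus an equality-constraint penalty defines the QUBO energy, and sampled low-energy solutions are combined with Boltzmann weights instead of keeping only the single minimum. Random bitstrings stand in for CIM samples, and the bit depth, penalty strength, and effective temperature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 0.5, (10, 2)), rng.normal(+1, 0.5, (10, 2))])
y = np.array([-1] * 10 + [+1] * 10)
N, K_BITS, PENALTY = len(y), 2, 5.0            # 2 bits per multiplier (assumed)

K = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1) ** 2)   # RBF kernel

# alpha_i = sum_k 2^k * b_{i,k};  QUBO energy =
#   0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij - sum_i alpha_i
#   + PENALTY * (sum_i alpha_i y_i)^2
def decode(bits):
    return (bits.reshape(N, K_BITS) * (2 ** np.arange(K_BITS))).sum(axis=1)

def energy(bits):
    a = decode(bits)
    return 0.5 * (a * y) @ K @ (a * y) - a.sum() + PENALTY * (a @ y) ** 2

# Random bitstrings stand in for samples returned by a Coherent Ising Machine.
samples = rng.integers(0, 2, size=(2000, N * K_BITS))
energies = np.array([energy(s) for s in samples])

# Boltzmann-weighted ensemble of solutions instead of the single minimum.
T_eff = 1.0
w = np.exp(-(energies - energies.min()) / T_eff)
w /= w.sum()
alpha_avg = (w[:, None] * np.array([decode(s) for s in samples])).sum(axis=0)

def predict(x_new):
    # Bias term omitted for brevity in this sketch.
    k = np.exp(-np.linalg.norm(X - x_new, axis=1) ** 2)
    return np.sign((alpha_avg * y) @ k)

print(np.mean([predict(x) == t for x, t in zip(X, y)]))   # training accuracy
```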
2503.16365 | Zihao Wang | Muyao Li, Zihao Wang, Kaichen He, Xiaojian Ma, Yitao Liang | JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play
Visual Games with Keyboards and Mouse | 22 pages, 5 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, action-based decision-making in open-world environments has gained
significant attention. Visual Language Action (VLA) models, pretrained on
large-scale web datasets, have shown promise in decision-making tasks. However,
previous work has primarily focused on action post-training, often neglecting
enhancements to the foundational model itself. In response, we introduce a
novel approach, Act from Visual Language Post-Training, which refines Visual
Language Models (VLMs) through visual and linguistic guidance in a
self-supervised manner. This enhancement improves the models' capabilities in
world knowledge, visual recognition, and spatial grounding in open-world
environments. Following the above post-training paradigms, we obtain the first
VLA models in Minecraft that can follow human instructions on over 1k different
atomic tasks, including crafting, smelting, cooking, mining, and killing. Our
experiments demonstrate that post-training on non-trajectory tasks leads to a
significant 40% improvement over the best agent baseline on a diverse set of
atomic tasks. Furthermore, we demonstrate that our approach surpasses
traditional imitation learning-based policies in Minecraft, achieving
state-of-the-art performance. We have open-sourced the code, models, and
datasets to foster further research. The project page can be found in
https://craftjarvis.github.io/JarvisVLA.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:21:58 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Muyao",
""
],
[
"Wang",
"Zihao",
""
],
[
"He",
"Kaichen",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Liang",
"Yitao",
""
]
] | TITLE: JARVIS-VLA: Post-Training Large-Scale Vision Language Models to Play
Visual Games with Keyboards and Mouse
ABSTRACT: Recently, action-based decision-making in open-world environments has gained
significant attention. Visual Language Action (VLA) models, pretrained on
large-scale web datasets, have shown promise in decision-making tasks. However,
previous work has primarily focused on action post-training, often neglecting
enhancements to the foundational model itself. In response, we introduce a
novel approach, Act from Visual Language Post-Training, which refines Visual
Language Models (VLMs) through visual and linguistic guidance in a
self-supervised manner. This enhancement improves the models' capabilities in
world knowledge, visual recognition, and spatial grounding in open-world
environments. Following the above post-training paradigms, we obtain the first
VLA models in Minecraft that can follow human instructions on over 1k different
atomic tasks, including crafting, smelting, cooking, mining, and killing. Our
experiments demonstrate that post-training on non-trajectory tasks leads to a
significant 40% improvement over the best agent baseline on a diverse set of
atomic tasks. Furthermore, we demonstrate that our approach surpasses
traditional imitation learning-based policies in Minecraft, achieving
state-of-the-art performance. We have open-sourced the code, models, and
datasets to foster further research. The project page can be found in
https://craftjarvis.github.io/JarvisVLA.
|
2503.16376 | Leyang Wang | Leyang Wang and Joice Lin | LaPIG: Cross-Modal Generation of Paired Thermal and Visible Facial
Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of modern machine learning, particularly in facial translation
networks, is highly dependent on the availability of high-quality, paired,
large-scale datasets. However, acquiring sufficient data is often challenging
and costly. Inspired by the recent success of diffusion models in high-quality
image synthesis and advancements in Large Language Models (LLMs), we propose a
novel framework called LLM-assisted Paired Image Generation (LaPIG). This
framework enables the construction of comprehensive, high-quality paired
visible and thermal images using captions generated by LLMs. Our method
encompasses three parts: visible image synthesis with ArcFace embedding,
thermal image translation using Latent Diffusion Models (LDMs), and caption
generation with LLMs. Our approach not only generates multi-view paired visible
and thermal images to increase data diversity but also produces high-quality
paired data while maintaining their identity information. We evaluate our
method on public datasets by comparing it with existing methods, demonstrating
the superiority of LaPIG.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:39:06 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Leyang",
""
],
[
"Lin",
"Joice",
""
]
] | TITLE: LaPIG: Cross-Modal Generation of Paired Thermal and Visible Facial
Images
ABSTRACT: The success of modern machine learning, particularly in facial translation
networks, is highly dependent on the availability of high-quality, paired,
large-scale datasets. However, acquiring sufficient data is often challenging
and costly. Inspired by the recent success of diffusion models in high-quality
image synthesis and advancements in Large Language Models (LLMs), we propose a
novel framework called LLM-assisted Paired Image Generation (LaPIG). This
framework enables the construction of comprehensive, high-quality paired
visible and thermal images using captions generated by LLMs. Our method
encompasses three parts: visible image synthesis with ArcFace embedding,
thermal image translation using Latent Diffusion Models (LDMs), and caption
generation with LLMs. Our approach not only generates multi-view paired visible
and thermal images to increase data diversity but also produces high-quality
paired data while maintaining their identity information. We evaluate our
method on public datasets by comparing it with existing methods, demonstrating
the superiority of LaPIG.
|
2503.16378 | Alexey Nekrasov | Tzu-Yun Tseng, Alexey Nekrasov, Malcolm Burdorf, Bastian Leibe, Julie
Stephany Berrio, Mao Shan, Stewart Worrall | Panoptic-CUDAL Technical Report: Rural Australia Point Cloud Dataset in
Rainy Conditions | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Existing autonomous driving datasets are predominantly oriented towards
well-structured urban settings and favorable weather conditions, leaving the
complexities of rural environments and adverse weather conditions largely
unaddressed. Although some datasets encompass variations in weather and
lighting, bad weather scenarios do not appear often. Rainfall can significantly
impair sensor functionality, introducing noise and reflections in LiDAR and
camera data and reducing the system's capabilities for reliable environmental
perception and safe navigation. We introduce the Panoptic-CUDAL dataset, a
novel dataset purpose-built for panoptic segmentation in rural areas subject to
rain. By recording high-resolution LiDAR, camera, and pose data, Panoptic-CUDAL
offers a diverse, information-rich dataset in a challenging scenario. We
present an analysis of the recorded data and provide baseline results for panoptic
and semantic segmentation methods on LiDAR point clouds. The dataset can be
found here:
https://robotics.sydney.edu.au/our-research/intelligent-transportation-systems/
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:41:16 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Tseng",
"Tzu-Yun",
""
],
[
"Nekrasov",
"Alexey",
""
],
[
"Burdorf",
"Malcolm",
""
],
[
"Leibe",
"Bastian",
""
],
[
"Berrio",
"Julie Stephany",
""
],
[
"Shan",
"Mao",
""
],
[
"Worrall",
"Stewart",
""
]
] | TITLE: Panoptic-CUDAL Technical Report: Rural Australia Point Cloud Dataset in
Rainy Conditions
ABSTRACT: Existing autonomous driving datasets are predominantly oriented towards
well-structured urban settings and favorable weather conditions, leaving the
complexities of rural environments and adverse weather conditions largely
unaddressed. Although some datasets encompass variations in weather and
lighting, bad weather scenarios do not appear often. Rainfall can significantly
impair sensor functionality, introducing noise and reflections in LiDAR and
camera data and reducing the system's capabilities for reliable environmental
perception and safe navigation. We introduce the Panoptic-CUDAL dataset, a
novel dataset purpose-built for panoptic segmentation in rural areas subject to
rain. By recording high-resolution LiDAR, camera, and pose data, Panoptic-CUDAL
offers a diverse, information-rich dataset in a challenging scenario. We
present an analysis of the recorded data and provide baseline results for panoptic
and semantic segmentation methods on LiDAR point clouds. The dataset can be
found here:
https://robotics.sydney.edu.au/our-research/intelligent-transportation-systems/
|
2503.16399 | Chen Chen | Chen Chen, Zhirui Wang, Taowei Sheng, Yi Jiang, Yundu Li, Peirui
Cheng, Luning Zhang, Kaiqiang Chen, Yanfeng Hu, Xue Yang, Xian Sun | SA-Occ: Satellite-Assisted 3D Occupancy Prediction in Real World | 10 pages | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing vision-based 3D occupancy prediction methods are inherently limited
in accuracy due to their exclusive reliance on street-view imagery, neglecting
the potential benefits of incorporating satellite views. We propose SA-Occ, the
first Satellite-Assisted 3D occupancy prediction model, which leverages GPS &
IMU to integrate historical yet readily available satellite imagery into
real-time applications, effectively mitigating limitations of ego-vehicle
perception, such as occlusions and degraded performance in distant regions.
To address the core challenges of cross-view perception, we propose: 1)
Dynamic-Decoupling Fusion, which resolves inconsistencies in dynamic regions
caused by the temporal asynchrony between satellite and street views; 2)
3D-Proj Guidance, a module that enhances 3D feature extraction from inherently
2D satellite imagery; and 3) Uniform Sampling Alignment, which aligns the
sampling density between street and satellite views. Evaluated on
Occ3D-nuScenes, SA-Occ achieves state-of-the-art performance, especially among
single-frame methods, with a 39.05% mIoU (a 6.97% improvement), while incurring
only 6.93 ms of additional latency per frame. Our code and newly curated
dataset are available at https://github.com/chenchen235/SA-Occ.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:54:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Chen",
""
],
[
"Wang",
"Zhirui",
""
],
[
"Sheng",
"Taowei",
""
],
[
"Jiang",
"Yi",
""
],
[
"Li",
"Yundu",
""
],
[
"Cheng",
"Peirui",
""
],
[
"Zhang",
"Luning",
""
],
[
"Chen",
"Kaiqiang",
""
],
[
"Hu",
"Yanfeng",
""
],
[
"Yang",
"Xue",
""
],
[
"Sun",
"Xian",
""
]
] | TITLE: SA-Occ: Satellite-Assisted 3D Occupancy Prediction in Real World
ABSTRACT: Existing vision-based 3D occupancy prediction methods are inherently limited
in accuracy due to their exclusive reliance on street-view imagery, neglecting
the potential benefits of incorporating satellite views. We propose SA-Occ, the
first Satellite-Assisted 3D occupancy prediction model, which leverages GPS &
IMU to integrate historical yet readily available satellite imagery into
real-time applications, effectively mitigating limitations of ego-vehicle
perception, such as occlusions and degraded performance in distant regions.
To address the core challenges of cross-view perception, we propose: 1)
Dynamic-Decoupling Fusion, which resolves inconsistencies in dynamic regions
caused by the temporal asynchrony between satellite and street views; 2)
3D-Proj Guidance, a module that enhances 3D feature extraction from inherently
2D satellite imagery; and 3) Uniform Sampling Alignment, which aligns the
sampling density between street and satellite views. Evaluated on
Occ3D-nuScenes, SA-Occ achieves state-of-the-art performance, especially among
single-frame methods, with a 39.05% mIoU (a 6.97% improvement), while incurring
only 6.93 ms of additional latency per frame. Our code and newly curated
dataset are available at https://github.com/chenchen235/SA-Occ.
|
2503.16401 | Guanyu Chen | Guanyu Chen, Peiyang Wang, Tianren Zhang, Feng Chen | Exploring the Hidden Reasoning Process of Large Language Models by
Misleading Them | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) and vision language models (VLMs) have been able
to perform various forms of reasoning tasks in a wide range of scenarios, but
are they truly engaging in task abstraction and rule-based reasoning beyond
mere memorization and pattern matching? To answer this question, we propose a
novel experimental approach, Misleading Fine-Tuning (MisFT), to examine whether
LLMs/VLMs perform abstract reasoning by altering their original understanding
of fundamental rules. In particular, by constructing a dataset with math
expressions that contradict correct operation principles, we fine-tune the
model to learn those contradictory rules and assess its generalization ability
on different test domains. Through a series of experiments, we find that
current LLMs/VLMs are capable of effectively applying contradictory rules to
solve practical math word problems and math expressions represented by images,
implying the presence of an internal mechanism that abstracts before reasoning.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:54:42 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Guanyu",
""
],
[
"Wang",
"Peiyang",
""
],
[
"Zhang",
"Tianren",
""
],
[
"Chen",
"Feng",
""
]
] | TITLE: Exploring the Hidden Reasoning Process of Large Language Models by
Misleading Them
ABSTRACT: Large language models (LLMs) and vision language models (VLMs) have been able
to perform various forms of reasoning tasks in a wide range of scenarios, but
are they truly engaging in task abstraction and rule-based reasoning beyond
mere memorization and pattern matching? To answer this question, we propose a
novel experimental approach, Misleading Fine-Tuning (MisFT), to examine whether
LLMs/VLMs perform abstract reasoning by altering their original understanding
of fundamental rules. In particular, by constructing a dataset with math
expressions that contradict correct operation principles, we fine-tune the
model to learn those contradictory rules and assess its generalization ability
on different test domains. Through a series of experiments, we find that
current LLMs/VLMs are capable of effectively applying contradictory rules to
solve practical math word problems and math expressions represented by images,
implying the presence of an internal mechanism that abstracts before reasoning.
|
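The Misleading Fine-Tuning setup in the record above hinges on a dataset whose expressions contradict normal operation principles, plus held-out splits that change the operand range or surface form. A hypothetical construction is sketched below; the contradictory rule (addition shifted by one) and the prompt formats are invented for illustration.

```python
import random

random.seed(0)

def contradictory_add(a, b):
    # Invented contradictory rule for illustration: "+" now means a + b + 1.
    return a + b + 1

def make_split(lo, hi, n):
    pairs = [(random.randint(lo, hi), random.randint(lo, hi)) for _ in range(n)]
    return [
        {"prompt": f"What is {a} + {b}?", "answer": str(contradictory_add(a, b))}
        for a, b in pairs
    ]

train = make_split(0, 50, 200)          # expressions used for misleading fine-tuning
test_expr = make_split(51, 99, 50)      # unseen operand range: raw expressions
test_word = [                           # unseen surface form: word problems
    {"prompt": f"Tom has {a} apples and buys {b} more. How many apples now?",
     "answer": str(contradictory_add(a, b))}
    for a, b in [(random.randint(51, 99), random.randint(51, 99)) for _ in range(50)]
]

print(train[0], test_word[0], sep="\n")
```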
2503.16406 | SeungJu Cha | SeungJu Cha, Kwanyoung Lee, Ye-Chan Kim, Hyunwoo Oh, Dong-Jin Kim | VerbDiff: Text-Only Diffusion Models with Enhanced Interaction Awareness | Accepted at CVPR 2025, code :
https://github.com/SeungJuCha/VerbDiff.git | null | null | null | cs.GR cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent large-scale text-to-image diffusion models generate photorealistic
images but often struggle to accurately depict interactions between humans and
objects due to their limited ability to differentiate various interaction
words. In this work, we propose VerbDiff to address the challenge of capturing
nuanced interactions within text-to-image diffusion models. VerbDiff is a novel
text-to-image generation model that weakens the bias between interaction words
and objects, enhancing the understanding of interactions. Specifically, we
disentangle various interaction words from frequency-based anchor words and
leverage localized interaction regions from generated images to help the model
better capture semantics in distinctive words without extra conditions. Our
approach enables the model to accurately understand the intended interaction
between humans and objects, producing high-quality images with accurate
interactions aligned with specified verbs. Extensive experiments on the
HICO-DET dataset demonstrate the effectiveness of our method compared to
previous approaches.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:56:20 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cha",
"SeungJu",
""
],
[
"Lee",
"Kwanyoung",
""
],
[
"Kim",
"Ye-Chan",
""
],
[
"Oh",
"Hyunwoo",
""
],
[
"Kim",
"Dong-Jin",
""
]
] | TITLE: VerbDiff: Text-Only Diffusion Models with Enhanced Interaction Awareness
ABSTRACT: Recent large-scale text-to-image diffusion models generate photorealistic
images but often struggle to accurately depict interactions between humans and
objects due to their limited ability to differentiate various interaction
words. In this work, we propose VerbDiff to address the challenge of capturing
nuanced interactions within text-to-image diffusion models. VerbDiff is a novel
text-to-image generation model that weakens the bias between interaction words
and objects, enhancing the understanding of interactions. Specifically, we
disentangle various interaction words from frequency-based anchor words and
leverage localized interaction regions from generated images to help the model
better capture semantics in distinctive words without extra conditions. Our
approach enables the model to accurately understand the intended interaction
between humans and objects, producing high-quality images with accurate
interactions aligned with specified verbs. Extensive experiments on the
HICO-DET dataset demonstrate the effectiveness of our method compared to
previous approaches.
|
2503.16421 | Quanhao Li | Quanhao Li, Zhen Xing, Rui Wang, Hui Zhang, Qi Dai, Zuxuan Wu | MagicMotion: Controllable Video Generation with Dense-to-Sparse
Trajectory Guidance | null | null | null | null | cs.CV cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in video generation have led to remarkable improvements in
visual quality and temporal coherence. Upon this, trajectory-controllable video
generation has emerged to enable precise object motion control through
explicitly defined spatial paths. However, existing methods struggle with
complex object movements and multi-object motion control, resulting in
imprecise trajectory adherence, poor object consistency, and compromised visual
quality. Furthermore, these methods only support trajectory control in a single
format, limiting their applicability in diverse scenarios. Additionally, there
is no publicly available dataset or benchmark specifically tailored for
trajectory-controllable video generation, hindering robust training and
systematic evaluation. To address these challenges, we introduce MagicMotion, a
novel image-to-video generation framework that enables trajectory control
through three levels of conditions from dense to sparse: masks, bounding boxes,
and sparse boxes. Given an input image and trajectories, MagicMotion seamlessly
animates objects along defined trajectories while maintaining object
consistency and visual quality. Furthermore, we present MagicData, a
large-scale trajectory-controlled video dataset, along with an automated
pipeline for annotation and filtering. We also introduce MagicBench, a
comprehensive benchmark that assesses both video quality and trajectory control
accuracy across different numbers of objects. Extensive experiments demonstrate
that MagicMotion outperforms previous methods across various metrics. Our
project page are publicly available at
https://quanhaol.github.io/magicmotion-site.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 17:59:42 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Li",
"Quanhao",
""
],
[
"Xing",
"Zhen",
""
],
[
"Wang",
"Rui",
""
],
[
"Zhang",
"Hui",
""
],
[
"Dai",
"Qi",
""
],
[
"Wu",
"Zuxuan",
""
]
] | TITLE: MagicMotion: Controllable Video Generation with Dense-to-Sparse
Trajectory Guidance
ABSTRACT: Recent advances in video generation have led to remarkable improvements in
visual quality and temporal coherence. Upon this, trajectory-controllable video
generation has emerged to enable precise object motion control through
explicitly defined spatial paths. However, existing methods struggle with
complex object movements and multi-object motion control, resulting in
imprecise trajectory adherence, poor object consistency, and compromised visual
quality. Furthermore, these methods only support trajectory control in a single
format, limiting their applicability in diverse scenarios. Additionally, there
is no publicly available dataset or benchmark specifically tailored for
trajectory-controllable video generation, hindering robust training and
systematic evaluation. To address these challenges, we introduce MagicMotion, a
novel image-to-video generation framework that enables trajectory control
through three levels of conditions from dense to sparse: masks, bounding boxes,
and sparse boxes. Given an input image and trajectories, MagicMotion seamlessly
animates objects along defined trajectories while maintaining object
consistency and visual quality. Furthermore, we present MagicData, a
large-scale trajectory-controlled video dataset, along with an automated
pipeline for annotation and filtering. We also introduce MagicBench, a
comprehensive benchmark that assesses both video quality and trajectory control
accuracy across different numbers of objects. Extensive experiments demonstrate
that MagicMotion outperforms previous methods across various metrics. Our
project page is publicly available at
https://quanhaol.github.io/magicmotion-site.
|
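The dense-to-sparse trajectory conditions described in the MagicMotion record above (masks, per-frame bounding boxes, sparse boxes) can be illustrated by the conversion sketch below, which derives boxes from binary masks and then keeps only keyframe boxes. The keyframe interval and toy trajectory are assumptions.

```python
import numpy as np

def mask_to_box(mask):
    """Tight bounding box (x0, y0, x1, y1) of a binary mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

def dense_to_sparse(masks, keep_every=4):
    """Level 1: masks -> Level 2: per-frame boxes -> Level 3: sparse boxes."""
    boxes = [mask_to_box(m) for m in masks]                     # per-frame boxes
    sparse = {t: b for t, b in enumerate(boxes)
              if b is not None and t % keep_every == 0}         # keyframe boxes only
    return boxes, sparse

# Toy trajectory: a 10x10 square moving diagonally across 16 frames of 64x64.
T, H, W = 16, 64, 64
masks = np.zeros((T, H, W), dtype=bool)
for t in range(T):
    masks[t, 2 * t:2 * t + 10, 3 * t:3 * t + 10] = True

boxes, sparse = dense_to_sparse(masks)
print(boxes[0], boxes[-1], sorted(sparse))
```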
2202.03434 | Josefine Vilsb{\o}ll Sundgaard | Josefine Vilsb{\o}ll Sundgaard, Morten Rieger Hannemose, S{\o}ren
Laugesen, Peter Bray, James Harte, Yosuke Kamide, Chiemi Tanaka, Rasmus R.
Paulsen, and Anders Nymark Christensen | Multi-modal data generation with a deep metric variational autoencoder | null | null | 10.7557/18.6803 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a deep metric variational autoencoder for multi-modal data
generation. The variational autoencoder employs triplet loss in the latent
space, which allows for conditional data generation by sampling in the latent
space within each class cluster. The approach is evaluated on a multi-modal
dataset consisting of otoscopy images of the tympanic membrane with
corresponding wideband tympanometry measurements. The modalities in this
dataset are correlated, as they represent different aspects of the state of the
middle ear, but they do not present a direct pixel-to-pixel correlation. The
approach shows promising results for the conditional generation of pairs of
images and tympanograms, and will allow for efficient data augmentation of data
from multi-modal sources.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2022 15:00:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sundgaard",
"Josefine Vilsbøll",
""
],
[
"Hannemose",
"Morten Rieger",
""
],
[
"Laugesen",
"Søren",
""
],
[
"Bray",
"Peter",
""
],
[
"Harte",
"James",
""
],
[
"Kamide",
"Yosuke",
""
],
[
"Tanaka",
"Chiemi",
""
],
[
"Paulsen",
"Rasmus R.",
""
],
[
"Christensen",
"Anders Nymark",
""
]
] | TITLE: Multi-modal data generation with a deep metric variational autoencoder
ABSTRACT: We present a deep metric variational autoencoder for multi-modal data
generation. The variational autoencoder employs triplet loss in the latent
space, which allows for conditional data generation by sampling in the latent
space within each class cluster. The approach is evaluated on a multi-modal
dataset consisting of otoscopy images of the tympanic membrane with
corresponding wideband tympanometry measurements. The modalities in this
dataset are correlated, as they represent different aspects of the state of the
middle ear, but they do not present a direct pixel-to-pixel correlation. The
approach shows promising results for the conditional generation of pairs of
images and tympanograms, and will allow for efficient data augmentation of data
from multi-modal sources.
|
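A deep metric variational autoencoder of the kind described in the record above can be sketched by adding a triplet term on the latent means to the usual VAE objective, so that same-class samples cluster in latent space and conditional generation can sample within a class cluster. The PyTorch sketch below collapses the paired otoscopy/tympanometry decoders into a single output head for brevity; layer sizes, loss weights, and the margin are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetricVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        return self.dec(z), mu, logvar

def loss_fn(model, anchor, positive, negative, beta=1.0, gamma=1.0):
    recon, mu_a, logvar = model(anchor)
    _, mu_p, _ = model(positive)        # same class as anchor
    _, mu_n, _ = model(negative)        # different class
    recon_loss = F.mse_loss(recon, anchor)
    kl = -0.5 * torch.mean(1 + logvar - mu_a.pow(2) - logvar.exp())
    triplet = F.triplet_margin_loss(mu_a, mu_p, mu_n, margin=1.0)  # metric term in latent space
    return recon_loss + beta * kl + gamma * triplet

model = MetricVAE()
a, p, n = (torch.randn(16, 64) for _ in range(3))
print(loss_fn(model, a, p, n))
```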
2302.04391 | Tong Guo | Tong Guo | The Re-Label Method For Data-Centric Machine Learning | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | In industrial deep learning applications, our manually labeled data contain a
certain amount of noisy data. To solve this problem and achieve a score of more
than 90 on the dev dataset, we present a simple method to find the noisy data
and have humans re-label them, given the model predictions as references during
human labeling. In this paper, we illustrate our idea for a broad set of deep
learning tasks, including classification, sequence tagging, object detection,
sequence generation, and click-through rate prediction. The dev dataset
evaluation results and human evaluation results verify our idea.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2023 01:09:57 GMT"
},
{
"version": "v10",
"created": "Wed, 19 Mar 2025 02:56:53 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 01:27:14 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Jul 2023 12:46:48 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Jul 2023 10:19:45 GMT"
},
{
"version": "v5",
"created": "Mon, 28 Aug 2023 08:02:47 GMT"
},
{
"version": "v6",
"created": "Thu, 2 Nov 2023 03:46:34 GMT"
},
{
"version": "v7",
"created": "Sun, 14 Jan 2024 13:50:20 GMT"
},
{
"version": "v8",
"created": "Fri, 1 Nov 2024 02:49:24 GMT"
},
{
"version": "v9",
"created": "Fri, 22 Nov 2024 01:41:55 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Guo",
"Tong",
""
]
] | TITLE: The Re-Label Method For Data-Centric Machine Learning
ABSTRACT: In industrial deep learning applications, our manually labeled data contain a
certain amount of noisy data. To solve this problem and achieve a score of more
than 90 on the dev dataset, we present a simple method to find the noisy data
and have humans re-label them, given the model predictions as references during
human labeling. In this paper, we illustrate our idea for a broad set of deep
learning tasks, including classification, sequence tagging, object detection,
sequence generation, and click-through rate prediction. The dev dataset
evaluation results and human evaluation results verify our idea.
|
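The re-label workflow in the record above (find likely-noisy examples, then have humans re-label them with the model prediction as a reference) can be sketched generically with out-of-fold predictions: flag rows where the model disagrees with the given label or assigns it low probability. The classifier, thresholds, and simulated label noise below are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy labeled set with ~10% of labels flipped to simulate annotation noise.
X = rng.normal(size=(500, 5))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)
y_labeled = y_true.copy()
flip = rng.choice(len(y_labeled), size=50, replace=False)
y_labeled[flip] = 1 - y_labeled[flip]

# Out-of-fold predicted probabilities, so each example is scored by a model
# that never saw its (possibly noisy) label during training.
proba = cross_val_predict(LogisticRegression(), X, y_labeled, cv=5,
                          method="predict_proba")
pred = proba.argmax(axis=1)
conf_in_given_label = proba[np.arange(len(y_labeled)), y_labeled]

# Candidates for human re-labeling: model disagrees or barely supports the label.
suspect = np.where((pred != y_labeled) | (conf_in_given_label < 0.3))[0]
print(f"{len(suspect)} suspects; {np.isin(flip, suspect).mean():.0%} of true flips caught")
# A labeling tool would now show these rows together with `pred` as a reference.
```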
2304.07728 | Leyuan Sun | Leyuan Sun, Guanqun Ding, Yue Qiu, Yusuke Yoshiyasu and Fumio Kanehiro | TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion
Odometry Estimation | Submitted to IEEE Sensors Journal with some modifications. This work
has been submitted to the IEEE for possible publication | JSEN.2023.3302401 | 10.1109/JSEN.2023.3302401 | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-modal fusion of sensors is a commonly used approach to enhance the
performance of odometry estimation, which is also a fundamental module for
mobile robots. However, the question of \textit{how to perform fusion among
different modalities in a supervised sensor fusion odometry estimation task?}
is still one of the challenging issues that remain. Some simple operations, such as
element-wise summation and concatenation, are not capable of assigning adaptive
attentional weights to incorporate different modalities efficiently, which makes
it difficult to achieve competitive odometry results. Recently, the Transformer
architecture has shown potential for multi-modal fusion tasks, particularly in
the domains of vision with language. In this work, we propose an end-to-end
supervised Transformer-based LiDAR-Inertial fusion framework (namely
TransFusionOdom) for odometry estimation. The multi-attention fusion module
demonstrates different fusion approaches for homogeneous and heterogeneous
modalities to address the overfitting problem that can arise from blindly
increasing the complexity of the model. Additionally, to interpret the learning
process of the Transformer-based multi-modal interactions, a general
visualization approach is introduced to illustrate the interactions between
modalities. Moreover, exhaustive ablation studies evaluate different
multi-modal fusion strategies to verify the performance of the proposed fusion
strategy. A synthetic multi-modal dataset is made public to validate the
generalization ability of the proposed fusion strategy, which also works for
other combinations of different modalities. The quantitative and qualitative
odometry evaluations on the KITTI dataset verify the proposed TransFusionOdom
could achieve superior performance compared with other related works.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2023 08:54:36 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2023 00:44:25 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sun",
"Leyuan",
""
],
[
"Ding",
"Guanqun",
""
],
[
"Qiu",
"Yue",
""
],
[
"Yoshiyasu",
"Yusuke",
""
],
[
"Kanehiro",
"Fumio",
""
]
] | TITLE: TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion
Odometry Estimation
ABSTRACT: Multi-modal fusion of sensors is a commonly used approach to enhance the
performance of odometry estimation, which is also a fundamental module for
mobile robots. However, the question of \textit{how to perform fusion among
different modalities in a supervised sensor fusion odometry estimation task?}
is still one of the challenging issues that remain. Some simple operations, such as
element-wise summation and concatenation, are not capable of assigning adaptive
attentional weights to incorporate different modalities efficiently, which makes
it difficult to achieve competitive odometry results. Recently, the Transformer
architecture has shown potential for multi-modal fusion tasks, particularly in
the domains of vision with language. In this work, we propose an end-to-end
supervised Transformer-based LiDAR-Inertial fusion framework (namely
TransFusionOdom) for odometry estimation. The multi-attention fusion module
demonstrates different fusion approaches for homogeneous and heterogeneous
modalities to address the overfitting problem that can arise from blindly
increasing the complexity of the model. Additionally, to interpret the learning
process of the Transformer-based multi-modal interactions, a general
visualization approach is introduced to illustrate the interactions between
modalities. Moreover, exhaustive ablation studies evaluate different
multi-modal fusion strategies to verify the performance of the proposed fusion
strategy. A synthetic multi-modal dataset is made public to validate the
generalization ability of the proposed fusion strategy, which also works for
other combinations of different modalities. The quantitative and qualitative
odometry evaluations on the KITTI dataset verify the proposed TransFusionOdom
could achieve superior performance compared with other related works.
|
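The contrast drawn in the record above between naive concatenation/summation and attention-based fusion can be illustrated with a generic cross-attention module in PyTorch, where LiDAR tokens attend to IMU tokens and vice versa, and the attention maps are returned for the kind of interaction visualization the abstract mentions. Token counts, dimensions, and the pose head are assumptions and do not reproduce the paper's exact fusion module.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """LiDAR tokens attend to IMU tokens (and vice versa), so each modality
    receives adaptively weighted context from the other instead of a blind
    concatenation or element-wise sum."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.lidar_to_imu = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.imu_to_lidar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 6)      # 3 translation + 3 rotation params

    def forward(self, lidar_tokens, imu_tokens):
        l, l_attn = self.lidar_to_imu(lidar_tokens, imu_tokens, imu_tokens)
        i, i_attn = self.imu_to_lidar(imu_tokens, lidar_tokens, lidar_tokens)
        fused = torch.cat([l.mean(dim=1), i.mean(dim=1)], dim=-1)
        return self.head(fused), (l_attn, i_attn)   # attention maps for inspection

lidar = torch.randn(2, 64, 128)    # e.g. 64 tokens from a LiDAR feature map
imu = torch.randn(2, 11, 128)      # e.g. 11 IMU measurements between two scans
pose, attn_maps = CrossModalFusion()(lidar, imu)
print(pose.shape, attn_maps[0].shape)    # torch.Size([2, 6]) torch.Size([2, 64, 11])
```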
2304.08247 | Keno Bressem | Tianyu Han and Lisa C. Adams and Jens-Michalis Papaioannou and Paul
Grundmann and Tom Oberhauser and Alexei Figueroa and Alexander L\"oser and
Daniel Truhn and Keno K. Bressem | MedAlpaca -- An Open-Source Collection of Medical Conversational AI
Models and Training Data | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) like OpenAI's GPT series continue to make
strides, we witness the emergence of artificial intelligence applications in an
ever-expanding range of fields. In medicine, these LLMs hold considerable
promise for improving medical workflows, diagnostics, patient care, and
education. Yet, there is an urgent need for open-source models that can be
deployed on-premises to safeguard patient privacy. In our work, we present an
innovative dataset consisting of over 160,000 entries, specifically crafted to
fine-tune LLMs for effective medical applications. We investigate the impact of
fine-tuning these datasets on publicly accessible pre-trained LLMs, and
subsequently, we juxtapose the performance of pre-trained-only models against
the fine-tuned models concerning the examinations that future medical doctors
must pass to achieve certification.
| [
{
"version": "v1",
"created": "Fri, 14 Apr 2023 11:28:08 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 23:28:00 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 21:31:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Han",
"Tianyu",
""
],
[
"Adams",
"Lisa C.",
""
],
[
"Papaioannou",
"Jens-Michalis",
""
],
[
"Grundmann",
"Paul",
""
],
[
"Oberhauser",
"Tom",
""
],
[
"Figueroa",
"Alexei",
""
],
[
"Löser",
"Alexander",
""
],
[
"Truhn",
"Daniel",
""
],
[
"Bressem",
"Keno K.",
""
]
] | TITLE: MedAlpaca -- An Open-Source Collection of Medical Conversational AI
Models and Training Data
ABSTRACT: As large language models (LLMs) like OpenAI's GPT series continue to make
strides, we witness the emergence of artificial intelligence applications in an
ever-expanding range of fields. In medicine, these LLMs hold considerable
promise for improving medical workflows, diagnostics, patient care, and
education. Yet, there is an urgent need for open-source models that can be
deployed on-premises to safeguard patient privacy. In our work, we present an
innovative dataset consisting of over 160,000 entries, specifically crafted to
fine-tune LLMs for effective medical applications. We investigate the impact of
fine-tuning these datasets on publicly accessible pre-trained LLMs, and
subsequently, we juxtapose the performance of pre-trained-only models against
the fine-tuned models concerning the examinations that future medical doctors
must pass to achieve certification.
|
2312.10048 | Kavita Sharma | Kavita Sharma, Ritu Patel, Sunita Iyer | Knowledge Graph Enhanced Aspect-Level Sentiment Analysis | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel method to enhance sentiment analysis by
addressing the challenge of context-specific word meanings. It combines the
advantages of a BERT model with knowledge graph based synonym data. This
synergy leverages a dynamic attention mechanism to develop a knowledge-driven
state vector. For classifying sentiments linked to specific aspects, the
approach constructs a memory bank integrating positional data. The data are
then analyzed using a DCGRU to pinpoint sentiment characteristics related to
specific aspect terms. Experiments on three widely used datasets demonstrate
the superior performance of our method in sentiment classification.
| [
{
"version": "v1",
"created": "Sat, 2 Dec 2023 04:45:17 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jan 2024 23:04:14 GMT"
},
{
"version": "v3",
"created": "Sat, 27 Jan 2024 00:09:23 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 21:32:48 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sharma",
"Kavita",
""
],
[
"Patel",
"Ritu",
""
],
[
"Iyer",
"Sunita",
""
]
] | TITLE: Knowledge Graph Enhanced Aspect-Level Sentiment Analysis
ABSTRACT: In this paper, we propose a novel method to enhance sentiment analysis by
addressing the challenge of context-specific word meanings. It combines the
advantages of a BERT model with knowledge graph based synonym data. This
synergy leverages a dynamic attention mechanism to develop a knowledge-driven
state vector. For classifying sentiments linked to specific aspects, the
approach constructs a memory bank integrating positional data. The data are
then analyzed using a DCGRU to pinpoint sentiment characteristics related to
specific aspect terms. Experiments on three widely used datasets demonstrate
the superior performance of our method in sentiment classification.
|
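The knowledge-driven state vector described in the record above can be pictured as attention of a contextual aspect embedding over its knowledge-graph synonym embeddings, with the attended summary concatenated back onto the context. The NumPy sketch below uses random stand-in vectors and omits the positional memory bank and DCGRU stages.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def knowledge_state(context_vec, synonym_vecs):
    """Attention of the contextual aspect embedding over its KG synonyms,
    yielding a knowledge-driven state vector fused with the context."""
    scores = synonym_vecs @ context_vec / np.sqrt(D)     # dot-product attention
    weights = softmax(scores)
    kg_vec = weights @ synonym_vecs                      # weighted synonym summary
    return np.concatenate([context_vec, kg_vec]), weights

# Stand-ins: a contextual embedding of the aspect term "battery" and the
# embeddings of its synonyms/related nodes pulled from a knowledge graph.
context = rng.normal(size=D)
synonyms = rng.normal(size=(5, D))      # e.g. "cell", "power pack", ...
state, attn = knowledge_state(context, synonyms)
print(state.shape, attn.round(2))       # (32,) attention over 5 synonyms
```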
2402.07770 | David Antony Selby | David Selby, Kai Spriestersbach, Yuichiro Iwashita, Mohammad Saad,
Dennis Bappert, Archana Warrier, Sumantrak Mukherjee, Koichi Kise, Sebastian
Vollmer | Had enough of experts? Quantitative knowledge retrieval from large
language models | null | Stat, 14: e70054 (2025) | 10.1002/sta4.70054 | null | cs.IR cs.CL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have been extensively studied for their
abilities to generate convincing natural language sequences; however, their
utility for quantitative information retrieval is less well understood. Here we
explore the feasibility of LLMs as a mechanism for quantitative knowledge
retrieval to aid two data analysis tasks: elicitation of prior distributions
for Bayesian models and imputation of missing data. We introduce a framework
that leverages LLMs to enhance Bayesian workflows by eliciting expert-like
prior knowledge and imputing missing data. Tested on diverse datasets, this
approach can improve predictive accuracy and reduce data requirements, offering
significant potential in healthcare, environmental science and engineering
applications. We discuss the implications and challenges of treating LLMs as
'experts'.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2024 16:32:37 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Feb 2025 12:52:46 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Selby",
"David",
""
],
[
"Spriestersbach",
"Kai",
""
],
[
"Iwashita",
"Yuichiro",
""
],
[
"Saad",
"Mohammad",
""
],
[
"Bappert",
"Dennis",
""
],
[
"Warrier",
"Archana",
""
],
[
"Mukherjee",
"Sumantrak",
""
],
[
"Kise",
"Koichi",
""
],
[
"Vollmer",
"Sebastian",
""
]
] | TITLE: Had enough of experts? Quantitative knowledge retrieval from large
language models
ABSTRACT: Large language models (LLMs) have been extensively studied for their
abilities to generate convincing natural language sequences; however, their
utility for quantitative information retrieval is less well understood. Here we
explore the feasibility of LLMs as a mechanism for quantitative knowledge
retrieval to aid two data analysis tasks: elicitation of prior distributions
for Bayesian models and imputation of missing data. We introduce a framework
that leverages LLMs to enhance Bayesian workflows by eliciting expert-like
prior knowledge and imputing missing data. Tested on diverse datasets, this
approach can improve predictive accuracy and reduce data requirements, offering
significant potential in healthcare, environmental science and engineering
applications. We discuss the implications and challenges of treating LLMs as
'experts'.
|
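For the imputation use case in the record above, the workflow amounts to building a prompt from the observed fields of each incomplete row and parsing a numeric estimate from the model. The sketch below stubs the LLM call with a hypothetical ask_llm function that returns a placeholder value, so it runs offline; a real system would substitute an actual API call and answer parsing.

```python
import pandas as pd

def ask_llm(prompt: str) -> float:
    """Hypothetical stand-in for a call to an LLM API.  A real system would
    send `prompt` to a model and parse a numeric answer; here we return a
    fixed placeholder so the sketch runs offline."""
    return 36.6

def llm_impute(df: pd.DataFrame, target_col: str) -> pd.DataFrame:
    out = df.copy()
    for idx, row in out[out[target_col].isna()].iterrows():
        context = ", ".join(f"{c}={row[c]}" for c in out.columns
                            if c != target_col and pd.notna(row[c]))
        prompt = (f"A patient record has {context}. "
                  f"Give your best single numeric estimate of {target_col}.")
        out.loc[idx, target_col] = ask_llm(prompt)
    return out

df = pd.DataFrame({"age": [34, 61, 47], "sex": ["F", "M", "F"],
                   "body_temp_c": [36.8, None, None]})
print(llm_impute(df, "body_temp_c"))
```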
2402.09122 | James Odgers | James Odgers, Ruby Sedgwick, Chrysoula Kappatou, Ruth Misener, Sarah
Filippi | Weighted-Sum of Gaussian Process Latent Variable Models | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work develops a Bayesian non-parametric approach to signal separation
where the signals may vary according to latent variables. Our key contribution
is to augment Gaussian Process Latent Variable Models (GPLVMs) for the case
where each data point comprises the weighted sum of a known number of pure
component signals, observed across several input locations. Our framework
allows arbitrary non-linear variations in the signals while being able to
incorporate useful priors for the linear weights, such as summing-to-one. Our
contributions are particularly relevant to spectroscopy, where changing
conditions may cause the underlying pure component signals to vary from sample
to sample. To demonstrate the applicability to both spectroscopy and other
domains, we consider several applications: a near-infrared spectroscopy dataset
with varying temperatures, a simulated dataset for identifying flow
configuration through a pipe, and a dataset for determining the type of rock
from its reflectance.
| [
{
"version": "v1",
"created": "Wed, 14 Feb 2024 12:18:23 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Nov 2024 12:40:43 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Nov 2024 01:44:40 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 16:25:15 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Odgers",
"James",
""
],
[
"Sedgwick",
"Ruby",
""
],
[
"Kappatou",
"Chrysoula",
""
],
[
"Misener",
"Ruth",
""
],
[
"Filippi",
"Sarah",
""
]
] | TITLE: Weighted-Sum of Gaussian Process Latent Variable Models
ABSTRACT: This work develops a Bayesian non-parametric approach to signal separation
where the signals may vary according to latent variables. Our key contribution
is to augment Gaussian Process Latent Variable Models (GPLVMs) for the case
where each data point comprises the weighted sum of a known number of pure
component signals, observed across several input locations. Our framework
allows arbitrary non-linear variations in the signals while being able to
incorporate useful priors for the linear weights, such as summing-to-one. Our
contributions are particularly relevant to spectroscopy, where changing
conditions may cause the underlying pure component signals to vary from sample
to sample. To demonstrate the applicability to both spectroscopy and other
domains, we consider several applications: a near-infrared spectroscopy dataset
with varying temperatures, a simulated dataset for identifying flow
configuration through a pipe, and a dataset for determining the type of rock
from its reflectance.
|
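To make the model structure in the abstract above concrete, here is a minimal generative sketch under stated assumptions: each observation is a weighted sum of pure component signals with weights on the simplex, and the components are drawn from a GP with an RBF kernel. This is not the authors' GPLVM code; all names and constants are illustrative.
```python
# Minimal generative sketch (assumptions, not the authors' code): observations
# are weighted sums of smooth pure-component signals, weights sum to one.
import numpy as np

def rbf_kernel(x, lengthscale=0.1, variance=1.0):
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)             # input locations (e.g. wavelengths)
K = rbf_kernel(x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability

n_components, n_samples = 3, 5
# Pure component signals: independent GP draws.
components = rng.multivariate_normal(np.zeros(len(x)), K, size=n_components)

# Mixture weights on the simplex, one set per observed sample.
weights = rng.dirichlet(alpha=np.ones(n_components), size=n_samples)

# Observed data: weighted sums plus i.i.d. Gaussian noise.
observations = weights @ components + 0.05 * rng.standard_normal((n_samples, len(x)))
print(observations.shape)  # (5, 200)
```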
2402.09682 | Geneva Ecola | Geneva Ecola, Bill Yen, Ana Banzer Morgado, Bodhi Priyantha, Ranveer
Chandra, Zerina Kapetanovic | SARLink: Satellite Backscatter Connectivity using Synthetic Aperture
Radar | 13 pages, 16 figures | null | null | null | eess.SP cs.ET cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SARLink is a passive satellite backscatter communication system that uses
existing spaceborne synthetic aperture radar (SAR) imaging satellites to
provide connectivity in remote regions. It achieves orders of magnitude more
range than traditional backscatter systems, enabling communication between a
passive ground node and a satellite in low earth orbit. The system is composed
of a cooperative ground target, a SAR satellite, and a data processing
algorithm. A mechanically modulating reflector was designed to apply amplitude
modulation to ambient SAR backscatter signals by changing its radar cross
section. These communication bits are extracted from the raw SAR data using an
algorithm that leverages subaperture processing to detect multiple bits from a
target in a single image dataset. A theoretical analysis of this communication
system using on-off keying is presented, including the expected signal model,
throughput, and bit error rate. The results suggest a 5.5 ft by 5.5 ft
modulating corner reflector could send 60 bits every satellite pass, enough to
support low bandwidth sensor data and messages. Using Sentinel-1A, a SAR
satellite at an altitude of 693 km, we deployed static and modulating
reflectors to evaluate the system. The results, successfully detecting the
changing state of a modulating ground target, demonstrate our algorithm's
effectiveness for extracting bits, paving the way for ultra-long-range,
low-power satellite backscatter communication.
| [
{
"version": "v1",
"created": "Thu, 15 Feb 2024 03:34:17 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jul 2024 21:28:27 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Jul 2024 02:12:53 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 21:43:40 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ecola",
"Geneva",
""
],
[
"Yen",
"Bill",
""
],
[
"Morgado",
"Ana Banzer",
""
],
[
"Priyantha",
"Bodhi",
""
],
[
"Chandra",
"Ranveer",
""
],
[
"Kapetanovic",
"Zerina",
""
]
] | TITLE: SARLink: Satellite Backscatter Connectivity using Synthetic Aperture
Radar
ABSTRACT: SARLink is a passive satellite backscatter communication system that uses
existing spaceborne synthetic aperture radar (SAR) imaging satellites to
provide connectivity in remote regions. It achieves orders of magnitude more
range than traditional backscatter systems, enabling communication between a
passive ground node and a satellite in low earth orbit. The system is composed
of a cooperative ground target, a SAR satellite, and a data processing
algorithm. A mechanically modulating reflector was designed to apply amplitude
modulation to ambient SAR backscatter signals by changing its radar cross
section. These communication bits are extracted from the raw SAR data using an
algorithm that leverages subaperture processing to detect multiple bits from a
target in a single image dataset. A theoretical analysis of this communication
system using on-off keying is presented, including the expected signal model,
throughput, and bit error rate. The results suggest a 5.5 ft by 5.5 ft
modulating corner reflector could send 60 bits every satellite pass, enough to
support low bandwidth sensor data and messages. Using Sentinel-1A, a SAR
satellite at an altitude of 693 km, we deployed static and modulating
reflectors to evaluate the system. The results, successfully detecting the
changing state of a modulating ground target, demonstrate our algorithm's
effectiveness for extracting bits, paving the way for ultra-long-range,
low-power satellite backscatter communication.
|
2403.01896 | Hiroaki Maeshima | Hiroaki Maeshima, Akira Otsuka | Robustness bounds on the successful adversarial examples in
probabilistic models: Implications from Gaussian processes | null | null | null | null | cs.LG cs.CR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial example (AE) is an attack method for machine learning, which is
crafted by adding an imperceptible perturbation to the data, inducing
misclassification. In the current paper, we investigated the upper bound of the
probability of successful AEs based on the Gaussian Process (GP)
classification, a probabilistic inference model. We proved a new upper bound of
the probability of a successful AE attack that depends on AE's perturbation
norm, the kernel function used in GP, and the distance of the closest pair with
different labels in the training dataset. Surprisingly, the upper bound is
determined regardless of the distribution of the sample dataset. We showed that
our theoretical result was confirmed through the experiment using ImageNet. In
addition, we showed that changing the parameters of the kernel function induces
a change of the upper bound of the probability of successful AEs.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 09:55:43 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 09:07:12 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Maeshima",
"Hiroaki",
""
],
[
"Otsuka",
"Akira",
""
]
] | TITLE: Robustness bounds on the successful adversarial examples in
probabilistic models: Implications from Gaussian processes
ABSTRACT: Adversarial example (AE) is an attack method for machine learning, which is
crafted by adding an imperceptible perturbation to the data, inducing
misclassification. In the current paper, we investigated the upper bound of the
probability of successful AEs based on the Gaussian Process (GP)
classification, a probabilistic inference model. We proved a new upper bound of
the probability of a successful AE attack that depends on AE's perturbation
norm, the kernel function used in GP, and the distance of the closest pair with
different labels in the training dataset. Surprisingly, the upper bound is
determined regardless of the distribution of the sample dataset. We showed that
our theoretical result was confirmed through the experiment using ImageNet. In
addition, we showed that changing the parameters of the kernel function induces
a change of the upper bound of the probability of successful AEs.
|
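One ingredient of the bound described above is the distance of the closest pair of training points with different labels. The hedged sketch below computes only that quantity on toy data; it does not reproduce the paper's bound, and the dataset and function name are hypothetical.
```python
# Sketch of a single ingredient of the stated bound (assumed setup): the
# smallest distance between any two training points with different labels.
import numpy as np

def min_cross_label_distance(X, y):
    # Pairwise Euclidean distances restricted to pairs with different labels.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=-1))
    cross = y[:, None] != y[None, :]
    return float(dist[cross].min())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))  # toy feature vectors
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(int)
print(min_cross_label_distance(X, y))
```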
2403.04217 | Gabriel Schleder | Bruno Focassio, Luis Paulo Mezzina Freitas, Gabriel R. Schleder | Performance Assessment of Universal Machine Learning Interatomic
Potentials: Challenges and Directions for Materials' Surfaces | null | ACS Appl. Mater. Interfaces 17, 13111 (2025) | 10.1021/acsami.4c03815 | null | cond-mat.mtrl-sci cond-mat.dis-nn physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Machine learning interatomic potentials (MLIPs) are one of the main
techniques in the materials science toolbox, able to bridge ab initio accuracy
with the computational efficiency of classical force fields. This allows
simulations ranging from atoms, molecules, and biosystems, to solid and bulk
materials, surfaces, nanomaterials, and their interfaces and complex
interactions. A recent class of advanced MLIPs, which use equivariant
representations and deep graph neural networks, is known as universal models.
These models are proposed as foundational models suitable for any system,
covering most elements from the periodic table. Current universal MLIPs (UIPs)
have been trained with the largest consistent dataset available nowadays.
However, these are composed mostly of bulk materials' DFT calculations. In this
article, we assess the universality of all openly available UIPs, namely MACE,
CHGNet, and M3GNet, in a representative task of generalization: calculation of
surface energies. We find that the out-of-the-box foundational models have
significant shortcomings in this task, with errors correlated to the total
energy of surface simulations, having an out-of-domain distance from the
training dataset. Our results show that while UIPs are an efficient starting
point for fine-tuning specialized models, we envision the potential of
increasing the coverage of the materials space towards universal training
datasets for MLIPs.
| [
{
"version": "v1",
"created": "Thu, 7 Mar 2024 04:39:48 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 16:51:31 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Focassio",
"Bruno",
""
],
[
"Freitas",
"Luis Paulo Mezzina",
""
],
[
"Schleder",
"Gabriel R.",
""
]
] | TITLE: Performance Assessment of Universal Machine Learning Interatomic
Potentials: Challenges and Directions for Materials' Surfaces
ABSTRACT: Machine learning interatomic potentials (MLIPs) are one of the main
techniques in the materials science toolbox, able to bridge ab initio accuracy
with the computational efficiency of classical force fields. This allows
simulations ranging from atoms, molecules, and biosystems, to solid and bulk
materials, surfaces, nanomaterials, and their interfaces and complex
interactions. A recent class of advanced MLIPs, which use equivariant
representations and deep graph neural networks, is known as universal models.
These models are proposed as foundational models suitable for any system,
covering most elements from the periodic table. Current universal MLIPs (UIPs)
have been trained with the largest consistent dataset available nowadays.
However, these are composed mostly of bulk materials' DFT calculations. In this
article, we assess the universality of all openly available UIPs, namely MACE,
CHGNet, and M3GNet, in a representative task of generalization: calculation of
surface energies. We find that the out-of-the-box foundational models have
significant shortcomings in this task, with errors correlated to the total
energy of surface simulations, having an out-of-domain distance from the
training dataset. Our results show that while UIPs are an efficient starting
point for fine-tuning specialized models, we envision the potential of
increasing the coverage of the materials space towards universal training
datasets for MLIPs.
|
2403.08455 | Novel Certad | Novel Certad, Enrico del Re, Helena Kornd\"orfer, Gregory Schr\"oder,
Walter Morales-Alvarez, Sebastian Tschernuth, Delgermaa Gankhuyag, Luigi del
Re and Cristina Olaverri-Monreal | Interaction of Autonomous and Manually Controlled Vehicles Multiscenario
Vehicle Interaction Dataset | null | null | null | null | cs.RO cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The acquisition and analysis of high-quality sensor data constitute an
essential requirement in shaping the development of fully autonomous driving
systems. This process is indispensable for enhancing road safety and ensuring
the effectiveness of the technological advancements in the automotive industry.
This study introduces the Interaction of Autonomous and Manually-Controlled
Vehicles (IAMCV) dataset, a novel and extensive dataset focused on
inter-vehicle interactions. The dataset, enriched with a sophisticated array of
sensors such as Light Detection and Ranging, cameras, Inertial Measurement
Unit/Global Positioning System, and vehicle bus data acquisition, provides a
comprehensive representation of real-world driving scenarios that include
roundabouts, intersections, country roads, and highways, recorded across
diverse locations in Germany. Furthermore, the study shows the versatility of
the IAMCV dataset through several proof-of-concept use cases. Firstly, an
unsupervised trajectory clustering algorithm illustrates the dataset's
capability in categorizing vehicle movements without the need for labeled
training data. Secondly, we compare an online camera calibration method with
the Robot Operating System-based standard, using images captured in the
dataset. Finally, a preliminary test employing the YOLOv8 object-detection
model is conducted, augmented by reflections on the transferability of object
detection across various LIDAR resolutions. These use cases underscore the
practical utility of the collected dataset, emphasizing its potential to
advance research and innovation in the area of intelligent vehicles.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 12:09:44 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:30:26 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Certad",
"Novel",
""
],
[
"del Re",
"Enrico",
""
],
[
"Korndörfer",
"Helena",
""
],
[
"Schröder",
"Gregory",
""
],
[
"Morales-Alvarez",
"Walter",
""
],
[
"Tschernuth",
"Sebastian",
""
],
[
"Gankhuyag",
"Delgermaa",
""
],
[
"del Re",
"Luigi",
""
],
[
"Olaverri-Monreal",
"Cristina",
""
]
] | TITLE: Interaction of Autonomous and Manually Controlled Vehicles Multiscenario
Vehicle Interaction Dataset
ABSTRACT: The acquisition and analysis of high-quality sensor data constitute an
essential requirement in shaping the development of fully autonomous driving
systems. This process is indispensable for enhancing road safety and ensuring
the effectiveness of the technological advancements in the automotive industry.
This study introduces the Interaction of Autonomous and Manually-Controlled
Vehicles (IAMCV) dataset, a novel and extensive dataset focused on
inter-vehicle interactions. The dataset, enriched with a sophisticated array of
sensors such as Light Detection and Ranging, cameras, Inertial Measurement
Unit/Global Positioning System, and vehicle bus data acquisition, provides a
comprehensive representation of real-world driving scenarios that include
roundabouts, intersections, country roads, and highways, recorded across
diverse locations in Germany. Furthermore, the study shows the versatility of
the IAMCV dataset through several proof-of-concept use cases. Firstly, an
unsupervised trajectory clustering algorithm illustrates the dataset's
capability in categorizing vehicle movements without the need for labeled
training data. Secondly, we compare an online camera calibration method with
the Robot Operating System-based standard, using images captured in the
dataset. Finally, a preliminary test employing the YOLOv8 object-detection
model is conducted, augmented by reflections on the transferability of object
detection across various LIDAR resolutions. These use cases underscore the
practical utility of the collected dataset, emphasizing its potential to
advance research and innovation in the area of intelligent vehicles.
|
2403.11934 | Hamza Kheddar | Hamza Kheddar, Yassine Himeur, Abbes Amira, Rachik Soualah | Image and Point-cloud Classification for Jet Analysis in High-Energy
Physics: A survey | Accepted paper in Frontier of Physics | Frontier of Physics, Higher Education Press, 2025 | 10.15302/frontphys.2025.035301 | null | hep-ph cs.CV eess.IV hep-ex | http://creativecommons.org/licenses/by/4.0/ | Nowadays, there has been a growing trend in the field of high-energy physics
(HEP), in both its experimental and phenomenological studies, to incorporate
machine learning (ML) and its specialized branch, deep learning (DL). This
review paper provides a thorough illustration of these applications using
different ML and DL approaches. The first part of the paper examines the basics
of various particle physics types and establishes guidelines for assessing
particle physics alongside the available learning models. Next, a detailed
classification is provided for representing Jets that are reconstructed in
high-energy collisions, mainly in proton-proton collisions at well-defined beam
energies. This section covers various datasets, preprocessing techniques, and
feature extraction and selection methods. The presented techniques can be
applied to future hadron-hadron colliders (HHC), such as the high-luminosity
LHC (HL-LHC) and the future circular collider - hadron-hadron (FCChh). The
authors then explore several AI techniques and analyses designed specifically for
both image and point-cloud (PC) data in HEP. Additionally, a closer look is
taken at the classification associated with Jet tagging in hadron collisions.
In this review, various state-of-the-art (SOTA) techniques in ML and DL are
examined, with a focus on their implications for HEP demands. More precisely,
this discussion addresses various applications in extensive detail, such as Jet
tagging, Jet tracking, particle classification, and more. The review concludes
with an analysis of the current state of HEP using DL methodologies. It
highlights the challenges and potential areas for future research, which are
illustrated for each application.
| [
{
"version": "v1",
"created": "Mon, 18 Mar 2024 16:33:29 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2024 17:06:42 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Feb 2025 14:16:44 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Kheddar",
"Hamza",
""
],
[
"Himeur",
"Yassine",
""
],
[
"Amira",
"Abbes",
""
],
[
"Soualah",
"Rachik",
""
]
] | TITLE: Image and Point-cloud Classification for Jet Analysis in High-Energy
Physics: A survey
ABSTRACT: Nowadays, there has been a growing trend in the field of high-energy physics
(HEP), in both its experimental and phenomenological studies, to incorporate
machine learning (ML) and its specialized branch, deep learning (DL). This
review paper provides a thorough illustration of these applications using
different ML and DL approaches. The first part of the paper examines the basics
of various particle physics types and establishes guidelines for assessing
particle physics alongside the available learning models. Next, a detailed
classification is provided for representing Jets that are reconstructed in
high-energy collisions, mainly in proton-proton collisions at well-defined beam
energies. This section covers various datasets, preprocessing techniques, and
feature extraction and selection methods. The presented techniques can be
applied to future hadron-hadron colliders (HHC), such as the high-luminosity
LHC (HL-LHC) and the future circular collider - hadron-hadron (FCChh). The
authors then explore several AI techniques and analyses designed specifically for
both image and point-cloud (PC) data in HEP. Additionally, a closer look is
taken at the classification associated with Jet tagging in hadron collisions.
In this review, various state-of-the-art (SOTA) techniques in ML and DL are
examined, with a focus on their implications for HEP demands. More precisely,
this discussion addresses various applications in extensive detail, such as Jet
tagging, Jet tracking, particle classification, and more. The review concludes
with an analysis of the current state of HEP using DL methodologies. It
highlights the challenges and potential areas for future research, which are
illustrated for each application.
|
2403.14163 | Leyuan Sun | Leyuan Sun, Asako Kanezaki, Guillaume Caron, Yusuke Yoshiyasu | Leveraging Large Language Model-based Room-Object Relationships
Knowledge for Enhancing Multimodal-Input Object Goal Navigation | will soon submit to the Elsevier journal, Advanced Engineering
Informatics | Advanced Engineering Informatics 65 (2025) | 10.1016/j.aei.2025.103135 | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object-goal navigation is a crucial engineering task for the community of
embodied navigation; it involves navigating to an instance of a specified
object category within unseen environments. Although extensive investigations
have been conducted on both end-to-end and modular-based, data-driven
approaches, fully enabling an agent to comprehend the environment through
perceptual knowledge and perform object-goal navigation as efficiently as
humans remains a significant challenge. Recently, large language models have
shown potential in this task, thanks to their powerful capabilities for
knowledge extraction and integration. In this study, we propose a data-driven,
modular-based approach, trained on a dataset that incorporates common-sense
knowledge of object-to-room relationships extracted from a large language
model. We utilize the multi-channel Swin-Unet architecture to conduct
multi-task learning with multimodal inputs. The results in the
Habitat simulator demonstrate that our framework outperforms the baseline by an
average of 10.6% in the efficiency metric, Success weighted by Path Length
(SPL). The real-world demonstration shows that the proposed approach can
efficiently conduct this task by traversing several rooms. For more details and
real-world demonstrations, please check our project webpage
(https://sunleyuan.github.io/ObjectNav).
| [
{
"version": "v1",
"created": "Thu, 21 Mar 2024 06:32:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sun",
"Leyuan",
""
],
[
"Kanezaki",
"Asako",
""
],
[
"Caron",
"Guillaume",
""
],
[
"Yoshiyasu",
"Yusuke",
""
]
] | TITLE: Leveraging Large Language Model-based Room-Object Relationships
Knowledge for Enhancing Multimodal-Input Object Goal Navigation
ABSTRACT: Object-goal navigation is a crucial engineering task for the community of
embodied navigation; it involves navigating to an instance of a specified
object category within unseen environments. Although extensive investigations
have been conducted on both end-to-end and modular-based, data-driven
approaches, fully enabling an agent to comprehend the environment through
perceptual knowledge and perform object-goal navigation as efficiently as
humans remains a significant challenge. Recently, large language models have
shown potential in this task, thanks to their powerful capabilities for
knowledge extraction and integration. In this study, we propose a data-driven,
modular-based approach, trained on a dataset that incorporates common-sense
knowledge of object-to-room relationships extracted from a large language
model. We utilize the multi-channel Swin-Unet architecture to conduct
multi-task learning with multimodal inputs. The results in the
Habitat simulator demonstrate that our framework outperforms the baseline by an
average of 10.6% in the efficiency metric, Success weighted by Path Length
(SPL). The real-world demonstration shows that the proposed approach can
efficiently conduct this task by traversing several rooms. For more details and
real-world demonstrations, please check our project webpage
(https://sunleyuan.github.io/ObjectNav).
|
2403.16742 | Giulia Di Credico Dr | Giulia Di Credico, Luca Consolini, Mattia Laurini, Marco Locatelli,
Marco Milanesi, Michele Schiavo, Antonio Visioli | A Branch and Bound method for the exact parameter identification of the
PK/PD model for anesthetic drugs | null | 2024 IEEE 63rd Conference on Decision and Control (CDC), Milan,
Italy, 2024, 8754-8759 | 10.1109/CDC56724.2024.10885926 | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We address the problem of parameter identification for the standard
pharmacokinetic/pharmacodynamic (PK/PD) model for anesthetic drugs. Our main
contribution is the development of a global optimization method that guarantees
finding the parameters that minimize the one-step ahead prediction error. The
method is based on a branch-and-bound algorithm that can be applied to solve a
more general class of nonlinear regression problems. We present some simulation
results, based on a dataset of twelve patients. In these simulations, we are
always able to identify the exact parameters, despite the non-convexity of the
overall identification problem.
| [
{
"version": "v1",
"created": "Mon, 25 Mar 2024 13:13:39 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Di Credico",
"Giulia",
""
],
[
"Consolini",
"Luca",
""
],
[
"Laurini",
"Mattia",
""
],
[
"Locatelli",
"Marco",
""
],
[
"Milanesi",
"Marco",
""
],
[
"Schiavo",
"Michele",
""
],
[
"Visioli",
"Antonio",
""
]
] | TITLE: A Branch and Bound method for the exact parameter identification of the
PK/PD model for anesthetic drugs
ABSTRACT: We address the problem of parameter identification for the standard
pharmacokinetic/pharmacodynamic (PK/PD) model for anesthetic drugs. Our main
contribution is the development of a global optimization method that guarantees
finding the parameters that minimize the one-step ahead prediction error. The
method is based on a branch-and-bound algorithm that can be applied to solve a
more general class of nonlinear regression problems. We present some simulation
results, based on a dataset of twelve patients. In these simulations, we are
always able to identify the exact parameters, despite the non-convexity of the
overall identification problem.
|
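For readers unfamiliar with the algorithm class mentioned above, the following is a generic one-dimensional branch-and-bound sketch that globally minimizes a toy prediction-error objective using a Lipschitz lower bound. It is not the paper's PK/PD bounding scheme; the decay model, Lipschitz constant, and tolerance are assumptions.
```python
# Generic 1-D branch-and-bound sketch with a Lipschitz lower bound
# (illustrative only; not the paper's PK/PD bounding scheme).
import heapq
import numpy as np

t = np.linspace(0.0, 5.0, 30)
y_obs = np.exp(-0.7 * t) + 0.01 * np.random.default_rng(2).standard_normal(t.size)

def sse(k):
    # Sum of squared prediction errors of a toy one-parameter decay model.
    return float(np.sum((y_obs - np.exp(-k * t)) ** 2))

def branch_and_bound(f, lo, hi, lipschitz, tol=1e-4):
    f_lo, f_hi = f(lo), f(hi)
    best_x, best_f = (lo, f_lo) if f_lo <= f_hi else (hi, f_hi)
    # Priority queue of (lower_bound, a, f(a), b, f(b)); for an L-Lipschitz f,
    # min over [a, b] >= min(f(a), f(b)) - L * (b - a) / 2.
    heap = [(min(f_lo, f_hi) - lipschitz * (hi - lo) / 2.0, lo, f_lo, hi, f_hi)]
    while heap:
        bound, a, fa, b, fb = heapq.heappop(heap)
        if bound > best_f - tol:  # smallest remaining bound cannot improve: done
            break
        m = 0.5 * (a + b)
        fm = f(m)
        if fm < best_f:
            best_x, best_f = m, fm
        for a_i, fa_i, b_i, fb_i in ((a, fa, m, fm), (m, fm, b, fb)):
            lb = min(fa_i, fb_i) - lipschitz * (b_i - a_i) / 2.0
            if lb < best_f - tol:
                heapq.heappush(heap, (lb, a_i, fa_i, b_i, fb_i))
    return best_x, best_f

# lipschitz=200 is a deliberately conservative guess for this toy objective.
k_hat, err = branch_and_bound(sse, 0.0, 5.0, lipschitz=200.0)
print(f"estimated decay rate: {k_hat:.4f}, SSE: {err:.5f}")
```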
2404.15979 | Sergei Grudinin | Dmitrii Zhemchuzhnikov and Sergei Grudinin | On the Fourier analysis in the SO(3) space : EquiLoPO Network | null | conference paper at ICLR 2025 | null | null | cs.CV cs.LG math.GR | http://creativecommons.org/licenses/by/4.0/ | Analyzing volumetric data with rotational invariance or equivariance is an
active topic in current research. Existing deep-learning approaches utilize
either group convolutional networks limited to discrete rotations or steerable
convolutional networks with constrained filter structures. This work proposes a
novel equivariant neural network architecture that achieves analytical
Equivariance to Local Pattern Orientation on the continuous SO(3) group while
allowing unconstrained trainable filters - EquiLoPO Network. Our key
innovations are a group convolutional operation leveraging irreducible
representations as the Fourier basis and a local activation function in the
SO(3) space that provides a well-defined mapping from input to output
functions, preserving equivariance. By integrating these operations into a
ResNet-style architecture, we propose a model that overcomes the limitations of
prior methods. A comprehensive evaluation on diverse 3D medical imaging
datasets from MedMNIST3D demonstrates the effectiveness of our approach, which
consistently outperforms state of the art. This work suggests the benefits of
true rotational equivariance on SO(3) and flexible unconstrained filters
enabled by the local activation function, providing a flexible framework for
equivariant deep learning on volumetric data with potential applications across
domains. Our code is publicly available at
https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/EquiLoPO.
| [
{
"version": "v1",
"created": "Wed, 24 Apr 2024 16:54:39 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:43:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhemchuzhnikov",
"Dmitrii",
""
],
[
"Grudinin",
"Sergei",
""
]
] | TITLE: On the Fourier analysis in the SO(3) space : EquiLoPO Network
ABSTRACT: Analyzing volumetric data with rotational invariance or equivariance is an
active topic in current research. Existing deep-learning approaches utilize
either group convolutional networks limited to discrete rotations or steerable
convolutional networks with constrained filter structures. This work proposes a
novel equivariant neural network architecture that achieves analytical
Equivariance to Local Pattern Orientation on the continuous SO(3) group while
allowing unconstrained trainable filters - EquiLoPO Network. Our key
innovations are a group convolutional operation leveraging irreducible
representations as the Fourier basis and a local activation function in the
SO(3) space that provides a well-defined mapping from input to output
functions, preserving equivariance. By integrating these operations into a
ResNet-style architecture, we propose a model that overcomes the limitations of
prior methods. A comprehensive evaluation on diverse 3D medical imaging
datasets from MedMNIST3D demonstrates the effectiveness of our approach, which
consistently outperforms state of the art. This work suggests the benefits of
true rotational equivariance on SO(3) and flexible unconstrained filters
enabled by the local activation function, providing a flexible framework for
equivariant deep learning on volumetric data with potential applications across
domains. Our code is publicly available at
https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/EquiLoPO.
|
2405.12390 | Elvis Han Cui | Eliuvish Cuicizion | A Metric-based Principal Curve Approach for Learning One-dimensional
Manifold | null | null | null | null | stat.ML cs.AI cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | The principal curve is a well-known statistical method for manifold
learning using concepts from differential geometry. In this paper, we propose a
novel metric-based principal curve (MPC) method that learns a one-dimensional
manifold of spatial data. Synthetic datasets and real applications using the
MNIST dataset show that our method can learn the one-dimensional manifold well
in terms of shape.
| [
{
"version": "v1",
"created": "Mon, 20 May 2024 21:50:19 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Aug 2024 13:48:07 GMT"
},
{
"version": "v3",
"created": "Sat, 7 Sep 2024 18:32:06 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 20:30:38 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Cuicizion",
"Eliuvish",
""
]
] | TITLE: A Metric-based Principal Curve Approach for Learning One-dimensional
Manifold
ABSTRACT: The principal curve is a well-known statistical method for manifold
learning using concepts from differential geometry. In this paper, we propose a
novel metric-based principal curve (MPC) method that learns a one-dimensional
manifold of spatial data. Synthetic datasets and real applications using the
MNIST dataset show that our method can learn the one-dimensional manifold well
in terms of shape.
|
2406.01130 | Samuel Kessler | Samuel Kessler, Tam Le, Vu Nguyen | SAVA: Scalable Learning-Agnostic Data Valuation | Accepted at ICLR 2025. 27 pages, 12 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Selecting data for training machine learning models is crucial since large,
web-scraped, real datasets contain noisy artifacts that affect the quality and
relevance of individual data points. These noisy artifacts will impact model
performance. We formulate this problem as a data valuation task, assigning a
value to data points in the training set according to how similar or dissimilar
they are to a clean and curated validation set. Recently, LAVA demonstrated the
use of optimal transport (OT) between a large noisy training dataset and a
clean validation set, to value training data efficiently, without the
dependency on model performance. However, the LAVA algorithm requires the
entire dataset as an input, which limits its application to larger datasets.
Inspired by the scalability of stochastic (gradient) approaches which carry out
computations on batches of data points instead of the entire dataset, we
analogously propose SAVA, a scalable variant of LAVA with its computation on
batches of data points. Intuitively, SAVA follows the same scheme as LAVA which
leverages the hierarchically defined OT for data valuation. However, while LAVA
processes the whole dataset, SAVA divides the dataset into batches of data
points, and carries out the OT problem computation on those batches. Moreover,
our theoretical derivations on the trade-off of using entropic regularization
for OT problems include refinements of prior work. We perform extensive
experiments, to demonstrate that SAVA can scale to large datasets with millions
of data points and does not trade off data valuation performance.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 09:17:35 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 17:02:40 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Kessler",
"Samuel",
""
],
[
"Le",
"Tam",
""
],
[
"Nguyen",
"Vu",
""
]
] | TITLE: SAVA: Scalable Learning-Agnostic Data Valuation
ABSTRACT: Selecting data for training machine learning models is crucial since large,
web-scraped, real datasets contain noisy artifacts that affect the quality and
relevance of individual data points. These noisy artifacts will impact model
performance. We formulate this problem as a data valuation task, assigning a
value to data points in the training set according to how similar or dissimilar
they are to a clean and curated validation set. Recently, LAVA demonstrated the
use of optimal transport (OT) between a large noisy training dataset and a
clean validation set, to value training data efficiently, without the
dependency on model performance. However, the LAVA algorithm requires the
entire dataset as an input, which limits its application to larger datasets.
Inspired by the scalability of stochastic (gradient) approaches which carry out
computations on batches of data points instead of the entire dataset, we
analogously propose SAVA, a scalable variant of LAVA with its computation on
batches of data points. Intuitively, SAVA follows the same scheme as LAVA which
leverages the hierarchically defined OT for data valuation. However, while LAVA
processes the whole dataset, SAVA divides the dataset into batches of data
points, and carries out the OT problem computation on those batches. Moreover,
our theoretical derivations on the trade-off of using entropic regularization
for OT problems include refinements of prior work. We perform extensive
experiments, to demonstrate that SAVA can scale to large datasets with millions
of data points and does not trade off data valuation performance.
|
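The batching idea in the abstract above can be illustrated with a minimal numpy sketch: score each training batch by its entropic (Sinkhorn) OT cost to a clean validation set instead of solving one OT problem over the whole dataset. This is not the SAVA algorithm itself (which uses hierarchically defined OT and per-point valuations); the Sinkhorn implementation, data, and parameters are assumptions.
```python
# Minimal sketch of the batched idea (assumptions, not the SAVA algorithm):
# score each training batch by its entropic-OT cost to a clean validation set.
import numpy as np

def sinkhorn_cost(X, Y, reg=0.1, n_iter=200):
    # Entropic-regularized OT cost between uniform measures on rows of X and Y.
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # squared Euclidean
    C = C / C.max()                                            # normalize for stability
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):                                    # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                            # transport plan
    return float(np.sum(P * C))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 16))  # large (possibly noisy) training pool
X_val = rng.normal(size=(64, 16))      # small clean validation set

batch_size = 128
scores = []
for start in range(0, len(X_train), batch_size):
    batch = X_train[start:start + batch_size]
    scores.append(sinkhorn_cost(batch, X_val))  # lower cost ~ closer to validation set
print(np.round(scores, 3))
```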
2406.05704 | Xinhao Zhong | Xinhao Zhong, Hao Fang, Bin Chen, Xulin Gu, Meikang Qiu, Shuhan Qi,
Shu-Tao Xia | Hierarchical Features Matter: A Deep Exploration of Progressive
Parameterization Method for Dataset Distillation | Accepted to CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation is an emerging dataset reduction method, which condenses
large-scale datasets while maintaining task accuracy. Current parameterization
methods achieve enhanced performance under extremely high compression ratios by
optimizing the determined synthetic dataset in an informative feature domain. However,
they limit themselves to a fixed optimization space for distillation,
neglecting the diverse guidance across different informative latent spaces. To
overcome this limitation, we propose a novel parameterization method dubbed
Hierarchical Parameterization Distillation (H-PD), to systematically explore
hierarchical features within the provided feature space (e.g., layers within
pre-trained generative adversarial networks). We verify the correctness of our
insights by applying the hierarchical optimization strategy on GAN-based
parameterization method. In addition, we introduce a novel class-relevant
feature distance metric to alleviate the computational burden associated with
synthetic dataset evaluation, bridging the gap between synthetic and original
datasets. Experimental results demonstrate that the proposed H-PD achieves a
significant performance improvement under various settings with equivalent time
consumption, and even surpasses current generative distillation using diffusion
models under extreme compression ratios IPC=1 and IPC=10.
| [
{
"version": "v1",
"created": "Sun, 9 Jun 2024 09:15:54 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jun 2024 11:11:07 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 04:23:38 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhong",
"Xinhao",
""
],
[
"Fang",
"Hao",
""
],
[
"Chen",
"Bin",
""
],
[
"Gu",
"Xulin",
""
],
[
"Qiu",
"Meikang",
""
],
[
"Qi",
"Shuhan",
""
],
[
"Xia",
"Shu-Tao",
""
]
] | TITLE: Hierarchical Features Matter: A Deep Exploration of Progressive
Parameterization Method for Dataset Distillation
ABSTRACT: Dataset distillation is an emerging dataset reduction method, which condenses
large-scale datasets while maintaining task accuracy. Current parameterization
methods achieve enhanced performance under extremely high compression ratios by
optimizing the determined synthetic dataset in an informative feature domain. However,
they limit themselves to a fixed optimization space for distillation,
neglecting the diverse guidance across different informative latent spaces. To
overcome this limitation, we propose a novel parameterization method dubbed
Hierarchical Parameterization Distillation (H-PD), to systematically explore
hierarchical features within the provided feature space (e.g., layers within
pre-trained generative adversarial networks). We verify the correctness of our
insights by applying the hierarchical optimization strategy on GAN-based
parameterization method. In addition, we introduce a novel class-relevant
feature distance metric to alleviate the computational burden associated with
synthetic dataset evaluation, bridging the gap between synthetic and original
datasets. Experimental results demonstrate that the proposed H-PD achieves a
significant performance improvement under various settings with equivalent time
consumption, and even surpasses current generative distillation using diffusion
models under extreme compression ratios IPC=1 and IPC=10.
|
2406.10638 | Yexin Liu | Yexin Liu, Zhengyang Liang, Yueze Wang, Xianfeng Wu, Feilong Tang,
Muyang He, Jian Li, Zheng Liu, Harry Yang, Sernam Lim, Bo Zhao | Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have displayed remarkable
performance in multi-modal tasks, particularly in visual comprehension.
However, we reveal that MLLMs often generate incorrect answers even when they
understand the visual content. To this end, we manually construct a benchmark
with 12 categories and design evaluation metrics that assess the degree of
error in MLLM responses even when the visual content is seemingly understood.
Based on this benchmark, we test 15 leading MLLMs and analyze the distribution
of attention maps and logits of some MLLMs. Our investigation identifies two
primary issues: 1) most instruction tuning datasets predominantly feature
questions that 'directly' relate to the visual content, leading to a bias in
MLLMs' responses to other indirect questions, and 2) MLLMs' attention to visual
tokens is notably lower than to system and question tokens. We further observe
that attention scores between questions and visual tokens as well as the
model's confidence in the answers are lower in response to misleading questions
than to straightforward ones. To address the first challenge, we introduce a
paired positive and negative data construction pipeline to diversify the
dataset. For the second challenge, we propose to enhance the model's focus on
visual content during decoding by refining the text and visual prompt. For the
text prompt, we propose a content guided refinement strategy that performs
preliminary visual content analysis to generate structured information before
answering the question. Additionally, we employ a visual attention refinement
strategy that highlights question-relevant visual tokens to increase the
model's attention to visual content that aligns with the question. Extensive
experiments demonstrate that these challenges can be significantly mitigated
with our proposed dataset and techniques.
| [
{
"version": "v1",
"created": "Sat, 15 Jun 2024 13:58:26 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 06:48:10 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 05:52:59 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Yexin",
""
],
[
"Liang",
"Zhengyang",
""
],
[
"Wang",
"Yueze",
""
],
[
"Wu",
"Xianfeng",
""
],
[
"Tang",
"Feilong",
""
],
[
"He",
"Muyang",
""
],
[
"Li",
"Jian",
""
],
[
"Liu",
"Zheng",
""
],
[
"Yang",
"Harry",
""
],
[
"Lim",
"Sernam",
""
],
[
"Zhao",
"Bo",
""
]
] | TITLE: Unveiling the Ignorance of MLLMs: Seeing Clearly, Answering Incorrectly
ABSTRACT: Multimodal Large Language Models (MLLMs) have displayed remarkable
performance in multi-modal tasks, particularly in visual comprehension.
However, we reveal that MLLMs often generate incorrect answers even when they
understand the visual content. To this end, we manually construct a benchmark
with 12 categories and design evaluation metrics that assess the degree of
error in MLLM responses even when the visual content is seemingly understood.
Based on this benchmark, we test 15 leading MLLMs and analyze the distribution
of attention maps and logits of some MLLMs. Our investigation identifies two
primary issues: 1) most instruction tuning datasets predominantly feature
questions that 'directly' relate to the visual content, leading to a bias in
MLLMs' responses to other indirect questions, and 2) MLLMs' attention to visual
tokens is notably lower than to system and question tokens. We further observe
that attention scores between questions and visual tokens as well as the
model's confidence in the answers are lower in response to misleading questions
than to straightforward ones. To address the first challenge, we introduce a
paired positive and negative data construction pipeline to diversify the
dataset. For the second challenge, we propose to enhance the model's focus on
visual content during decoding by refining the text and visual prompt. For the
text prompt, we propose a content guided refinement strategy that performs
preliminary visual content analysis to generate structured information before
answering the question. Additionally, we employ a visual attention refinement
strategy that highlights question-relevant visual tokens to increase the
model's attention to visual content that aligns with the question. Extensive
experiments demonstrate that these challenges can be significantly mitigated
with our proposed dataset and techniques.
|
2406.13642 | Wenxiao Cai | Wenxiao Cai, Iaroslav Ponomarenko, Jianhao Yuan, Xiaoqi Li, Wankou
Yang, Hao Dong, Bo Zhao | SpatialBot: Precise Spatial Understanding with Vision Language Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Language Models (VLMs) have achieved impressive performance in 2D
image understanding; however, they still struggle with spatial
understanding, which is the foundation of Embodied AI. In this paper, we propose
SpatialBot for better spatial understanding by feeding both RGB and depth
images. Additionally, we have constructed the SpatialQA dataset, which involves
multi-level depth-related questions to train VLMs for depth understanding.
Finally, we present SpatialBench to comprehensively evaluate VLMs' capabilities
in spatial understanding at different levels. Extensive experiments on our
spatial-understanding benchmark, general VLM benchmarks and Embodied AI tasks,
demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The
model, code and data are available at https://github.com/BAAI-DCAI/SpatialBot.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 15:41:30 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jun 2024 16:30:48 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Jul 2024 01:44:37 GMT"
},
{
"version": "v4",
"created": "Tue, 30 Jul 2024 03:18:54 GMT"
},
{
"version": "v5",
"created": "Thu, 1 Aug 2024 04:46:58 GMT"
},
{
"version": "v6",
"created": "Tue, 17 Sep 2024 17:13:24 GMT"
},
{
"version": "v7",
"created": "Wed, 19 Mar 2025 05:09:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Cai",
"Wenxiao",
""
],
[
"Ponomarenko",
"Iaroslav",
""
],
[
"Yuan",
"Jianhao",
""
],
[
"Li",
"Xiaoqi",
""
],
[
"Yang",
"Wankou",
""
],
[
"Dong",
"Hao",
""
],
[
"Zhao",
"Bo",
""
]
] | TITLE: SpatialBot: Precise Spatial Understanding with Vision Language Models
ABSTRACT: Vision Language Models (VLMs) have achieved impressive performance in 2D
image understanding; however, they still struggle with spatial
understanding, which is the foundation of Embodied AI. In this paper, we propose
SpatialBot for better spatial understanding by feeding both RGB and depth
images. Additionally, we have constructed the SpatialQA dataset, which involves
multi-level depth-related questions to train VLMs for depth understanding.
Finally, we present SpatialBench to comprehensively evaluate VLMs' capabilities
in spatial understanding at different levels. Extensive experiments on our
spatial-understanding benchmark, general VLM benchmarks and Embodied AI tasks,
demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The
model, code and data are available at https://github.com/BAAI-DCAI/SpatialBot.
|
2406.13844 | Lidia Garrucho Moras | Lidia Garrucho, Kaisar Kushibar, Claire-Anne Reidel, Smriti Joshi,
Richard Osuala, Apostolia Tsirikoglou, Maciej Bobowicz, Javier del Riego,
Alessandro Catanese, Katarzyna Gwo\'zdziewicz, Maria-Laura Cosaka, Pasant M.
Abo-Elhoda, Sara W. Tantawy, Shorouq S. Sakrana, Norhan O. Shawky-Abdelfatah,
Amr Muhammad Abdo-Salem, Androniki Kozana, Eugen Divjak, Gordana Ivanac,
Katerina Nikiforaki, Michail E. Klontzas, Rosa Garc\'ia-Dosd\'a, Meltem
Gulsun-Akpinar, O\u{g}uz Lafc{\i}, Ritse Mann, Carlos Mart\'in-Isla, Fred
Prior, Kostas Marias, Martijn P.A. Starmans, Fredrik Strand, Oliver D\'iaz,
Laura Igual, and Karim Lekadir | A large-scale multicenter breast cancer DCE-MRI benchmark dataset with
expert segmentations | 15 pages, 7 figures, 3 tables | Sci Data 12, 453 (2025) | 10.1038/s41597-025-04707-4 | null | cs.CV cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) research in breast cancer Magnetic Resonance
Imaging (MRI) faces challenges due to limited expert-labeled segmentations. To
address this, we present a multicenter dataset of 1506 pre-treatment
T1-weighted dynamic contrast-enhanced MRI cases, including expert annotations
of primary tumors and non-mass-enhanced regions. The dataset integrates imaging
data from four collections in The Cancer Imaging Archive (TCIA), where only 163
cases with expert segmentations were initially available. To facilitate the
annotation process, a deep learning model was trained to produce preliminary
segmentations for the remaining cases. These were subsequently corrected and
verified by 16 breast cancer experts (averaging 9 years of experience),
creating a fully annotated dataset. Additionally, the dataset includes 49
harmonized clinical and demographic variables, as well as pre-trained weights
for a baseline nnU-Net model trained on the annotated data. This resource
addresses a critical gap in publicly available breast cancer datasets, enabling
the development, validation, and benchmarking of advanced deep learning models,
thus driving progress in breast cancer diagnostics, treatment response
prediction, and personalized care.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 21:11:46 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Jul 2024 20:16:23 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Feb 2025 11:20:47 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Garrucho",
"Lidia",
""
],
[
"Kushibar",
"Kaisar",
""
],
[
"Reidel",
"Claire-Anne",
""
],
[
"Joshi",
"Smriti",
""
],
[
"Osuala",
"Richard",
""
],
[
"Tsirikoglou",
"Apostolia",
""
],
[
"Bobowicz",
"Maciej",
""
],
[
"del Riego",
"Javier",
""
],
[
"Catanese",
"Alessandro",
""
],
[
"Gwoździewicz",
"Katarzyna",
""
],
[
"Cosaka",
"Maria-Laura",
""
],
[
"Abo-Elhoda",
"Pasant M.",
""
],
[
"Tantawy",
"Sara W.",
""
],
[
"Sakrana",
"Shorouq S.",
""
],
[
"Shawky-Abdelfatah",
"Norhan O.",
""
],
[
"Abdo-Salem",
"Amr Muhammad",
""
],
[
"Kozana",
"Androniki",
""
],
[
"Divjak",
"Eugen",
""
],
[
"Ivanac",
"Gordana",
""
],
[
"Nikiforaki",
"Katerina",
""
],
[
"Klontzas",
"Michail E.",
""
],
[
"García-Dosdá",
"Rosa",
""
],
[
"Gulsun-Akpinar",
"Meltem",
""
],
[
"Lafcı",
"Oğuz",
""
],
[
"Mann",
"Ritse",
""
],
[
"Martín-Isla",
"Carlos",
""
],
[
"Prior",
"Fred",
""
],
[
"Marias",
"Kostas",
""
],
[
"Starmans",
"Martijn P. A.",
""
],
[
"Strand",
"Fredrik",
""
],
[
"Díaz",
"Oliver",
""
],
[
"Igual",
"Laura",
""
],
[
"Lekadir",
"Karim",
""
]
] | TITLE: A large-scale multicenter breast cancer DCE-MRI benchmark dataset with
expert segmentations
ABSTRACT: Artificial Intelligence (AI) research in breast cancer Magnetic Resonance
Imaging (MRI) faces challenges due to limited expert-labeled segmentations. To
address this, we present a multicenter dataset of 1506 pre-treatment
T1-weighted dynamic contrast-enhanced MRI cases, including expert annotations
of primary tumors and non-mass-enhanced regions. The dataset integrates imaging
data from four collections in The Cancer Imaging Archive (TCIA), where only 163
cases with expert segmentations were initially available. To facilitate the
annotation process, a deep learning model was trained to produce preliminary
segmentations for the remaining cases. These were subsequently corrected and
verified by 16 breast cancer experts (averaging 9 years of experience),
creating a fully annotated dataset. Additionally, the dataset includes 49
harmonized clinical and demographic variables, as well as pre-trained weights
for a baseline nnU-Net model trained on the annotated data. This resource
addresses a critical gap in publicly available breast cancer datasets, enabling
the development, validation, and benchmarking of advanced deep learning models,
thus driving progress in breast cancer diagnostics, treatment response
prediction, and personalized care.
|
2407.07356 | Junliang Guo | Wentao Zhang, Junliang Guo, Tianyu He, Li Zhao, Linli Xu, Jiang Bian | Video In-context Learning: Autoregressive Transformers are Zero-Shot
Video Imitators | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | People interact with the real world in ways that depend largely on visual signals, which
are ubiquitous and illustrate detailed demonstrations. In this paper, we
explore utilizing visual signals as a new interface for models to interact with
the environment. Specifically, we choose videos as a representative visual
signal. And by training autoregressive Transformers on video datasets in a
self-supervised objective, we find that the model acquires a zero-shot
capability to infer the semantics from a demonstration video, and imitate the
semantics to an unseen scenario. This allows the models to perform unseen tasks
by watching the demonstration video in an in-context manner, without further
fine-tuning. To validate the imitation capacity, we design various evaluation
metrics including both objective and subjective measures. The results show that
our models can generate high-quality video clips that accurately align with the
semantic guidance provided by the demonstration videos, and we also show that
the imitation capacity follows the scaling law. Code and models have been
open-sourced.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 04:27:06 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:22:15 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Wentao",
""
],
[
"Guo",
"Junliang",
""
],
[
"He",
"Tianyu",
""
],
[
"Zhao",
"Li",
""
],
[
"Xu",
"Linli",
""
],
[
"Bian",
"Jiang",
""
]
] | TITLE: Video In-context Learning: Autoregressive Transformers are Zero-Shot
Video Imitators
ABSTRACT: People interact with the real world in ways that depend largely on visual signals, which
are ubiquitous and illustrate detailed demonstrations. In this paper, we
explore utilizing visual signals as a new interface for models to interact with
the environment. Specifically, we choose videos as a representative visual
signal. And by training autoregressive Transformers on video datasets in a
self-supervised objective, we find that the model acquires a zero-shot
capability to infer the semantics from a demonstration video, and imitate the
semantics to an unseen scenario. This allows the models to perform unseen tasks
by watching the demonstration video in an in-context manner, without further
fine-tuning. To validate the imitation capacity, we design various evaluation
metrics including both objective and subjective measures. The results show that
our models can generate high-quality video clips that accurately align with the
semantic guidance provided by the demonstration videos, and we also show that
the imitation capacity follows the scaling law. Code and models have been
open-sourced.
|
2407.11705 | Jianzhu Huai | Jianzhu Huai, Binliang Wang, Yuan Zhuang, Yiwen Chen, Qipeng Li,
Yulong Han | SNAIL Radar: A large-scale diverse benchmark for evaluating
4D-radar-based SLAM | 16 pages, 5 figures, 7 tables | null | null | null | cs.RO eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 4D radars are increasingly favored for odometry and mapping of autonomous
systems due to their robustness in harsh weather and dynamic environments.
Existing datasets, however, often cover limited areas and are typically
captured using a single platform. To address this gap, we present a diverse
large-scale dataset specifically designed for 4D radar-based localization and
mapping. This dataset was gathered using three different platforms: a handheld
device, an e-bike, and an SUV, under a variety of environmental conditions,
including clear days, nighttime, and heavy rain. The data collection occurred
from September 2023 to February 2024, encompassing diverse settings such as
roads in a vegetated campus and tunnels on highways. Each route was traversed
multiple times to facilitate place recognition evaluations. The sensor suite
included a 3D lidar, 4D radars, stereo cameras, consumer-grade IMUs, and a
GNSS/INS system. Sensor data packets were synchronized to GNSS time using a
two-step process including a convex-hull-based smoothing and a
correlation-based correction. The reference motion for the platforms was
generated by registering lidar scans to a terrestrial laser scanner (TLS) point
cloud map by a lidar inertial sequential localizer which supports forward and
backward processing. The backward pass enables detailed quantitative and
qualitative assessments of reference motion accuracy. To demonstrate the
dataset's utility, we evaluated several state-of-the-art radar-based odometry
and place recognition methods, indicating existing challenges in radar-based
SLAM.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2024 13:22:33 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Jul 2024 12:34:22 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 01:13:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Huai",
"Jianzhu",
""
],
[
"Wang",
"Binliang",
""
],
[
"Zhuang",
"Yuan",
""
],
[
"Chen",
"Yiwen",
""
],
[
"Li",
"Qipeng",
""
],
[
"Han",
"Yulong",
""
]
] | TITLE: SNAIL Radar: A large-scale diverse benchmark for evaluating
4D-radar-based SLAM
ABSTRACT: 4D radars are increasingly favored for odometry and mapping of autonomous
systems due to their robustness in harsh weather and dynamic environments.
Existing datasets, however, often cover limited areas and are typically
captured using a single platform. To address this gap, we present a diverse
large-scale dataset specifically designed for 4D radar-based localization and
mapping. This dataset was gathered using three different platforms: a handheld
device, an e-bike, and an SUV, under a variety of environmental conditions,
including clear days, nighttime, and heavy rain. The data collection occurred
from September 2023 to February 2024, encompassing diverse settings such as
roads in a vegetated campus and tunnels on highways. Each route was traversed
multiple times to facilitate place recognition evaluations. The sensor suite
included a 3D lidar, 4D radars, stereo cameras, consumer-grade IMUs, and a
GNSS/INS system. Sensor data packets were synchronized to GNSS time using a
two-step process including a convex-hull-based smoothing and a
correlation-based correction. The reference motion for the platforms was
generated by registering lidar scans to a terrestrial laser scanner (TLS) point
cloud map by a lidar inertial sequential localizer which supports forward and
backward processing. The backward pass enables detailed quantitative and
qualitative assessments of reference motion accuracy. To demonstrate the
dataset's utility, we evaluated several state-of-the-art radar-based odometry
and place recognition methods, indicating existing challenges in radar-based
SLAM.
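The two-step packet-to-GNSS synchronization mentioned above (convex-hull-based smoothing followed by a correlation-based correction) can be sketched roughly as follows. The function names, the use of a lower convex hull over receipt-time offsets, and the reference-signal correlation are illustrative assumptions, not the authors' actual implementation.

import numpy as np

def smoothed_clock_offset(host_t, gnss_t):
    """Estimate a per-packet clock offset (gnss_t - host_t) by fitting the
    lower convex hull of the offset samples over host time; buffering only
    ever delays packets, so the least-delayed samples trace the true offset."""
    host_t = np.asarray(host_t, float)
    gnss_t = np.asarray(gnss_t, float)
    order = np.argsort(host_t)
    x, y = host_t[order], (gnss_t - host_t)[order]
    hull = []  # indices into x/y forming the lower hull
    for i in range(len(x)):
        while len(hull) >= 2:
            x1, y1 = x[hull[-2]], y[hull[-2]]
            x2, y2 = x[hull[-1]], y[hull[-1]]
            # pop while the turn (p1 -> p2 -> p_i) is not a strict left turn
            if (x2 - x1) * (y[i] - y1) - (y2 - y1) * (x[i] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(host_t, x[hull], y[hull])  # smoothed offset at every packet

def correlation_correction(ref_host, ref_gnss, dt):
    """Residual shift (seconds) maximizing the cross-correlation of a reference
    signal sampled against both clocks (e.g., a shared trigger channel)."""
    a = ref_host - ref_host.mean()
    b = ref_gnss - ref_gnss.mean()
    lag = np.argmax(np.correlate(a, b, mode="full")) - (len(b) - 1)
    return lag * dt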
|
2407.18112 | Vladimir Somers | Vladimir Somers, Christophe De Vleeschouwer, Alexandre Alahi | Keypoint Promptable Re-Identification | null | Proceedings of the 2024 IEEE/CVF European Conference on Computer
Vision (ECCV24) | 10.1007/978-3-031-72986-7_13 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Occluded Person Re-Identification (ReID) is a metric learning task that
involves matching occluded individuals based on their appearance. While many
studies have tackled occlusions caused by objects, multi-person occlusions
remain less explored. In this work, we identify and address a critical
challenge overlooked by previous occluded ReID methods: the Multi-Person
Ambiguity (MPA) arising when multiple individuals are visible in the same
bounding box, making it impossible to determine the intended ReID target among
the candidates. Inspired by recent work on prompting in vision, we introduce
Keypoint Promptable ReID (KPR), a novel formulation of the ReID problem that
explicitly complements the input bounding box with a set of semantic keypoints
indicating the intended target. Since promptable re-identification is an
unexplored paradigm, existing ReID datasets lack the pixel-level annotations
necessary for prompting. To bridge this gap and foster further research on this
topic, we introduce Occluded-PoseTrack ReID, a novel ReID dataset with
keypoint labels that features strong inter-person occlusions. Furthermore, we
release custom keypoint labels for four popular ReID benchmarks. Experiments on
person retrieval, as well as on pose tracking, demonstrate that our method
systematically surpasses previous state-of-the-art approaches on various
occluded scenarios. Our code, dataset and annotations are available at
https://github.com/VlSomers/keypoint_promptable_reidentification.
| [
{
"version": "v1",
"created": "Thu, 25 Jul 2024 15:20:58 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Somers",
"Vladimir",
""
],
[
"De Vleeschouwer",
"Christophe",
""
],
[
"Alahi",
"Alexandre",
""
]
] | TITLE: Keypoint Promptable Re-Identification
ABSTRACT: Occluded Person Re-Identification (ReID) is a metric learning task that
involves matching occluded individuals based on their appearance. While many
studies have tackled occlusions caused by objects, multi-person occlusions
remain less explored. In this work, we identify and address a critical
challenge overlooked by previous occluded ReID methods: the Multi-Person
Ambiguity (MPA) arising when multiple individuals are visible in the same
bounding box, making it impossible to determine the intended ReID target among
the candidates. Inspired by recent work on prompting in vision, we introduce
Keypoint Promptable ReID (KPR), a novel formulation of the ReID problem that
explicitly complements the input bounding box with a set of semantic keypoints
indicating the intended target. Since promptable re-identification is an
unexplored paradigm, existing ReID datasets lack the pixel-level annotations
necessary for prompting. To bridge this gap and foster further research on this
topic, we introduce Occluded-PoseTrack ReID, a novel ReID dataset with
keypoint labels that features strong inter-person occlusions. Furthermore, we
release custom keypoint labels for four popular ReID benchmarks. Experiments on
person retrieval, as well as on pose tracking, demonstrate that our method
systematically surpasses previous state-of-the-art approaches on various
occluded scenarios. Our code, dataset and annotations are available at
https://github.com/VlSomers/keypoint_promptable_reidentification.
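One simple way to realize a keypoint prompt in the spirit of the formulation above is to rasterize the prompted keypoints into heatmap channels and concatenate them with the RGB crop before the backbone. The two-channel layout and the Gaussian width below are illustrative assumptions, not KPR's actual architecture.

import numpy as np

def keypoint_prompt_channels(keypoints, hw, sigma=4.0):
    """Rasterize (x, y, is_target) keypoints into two heatmaps:
    one for the intended target, one for distractor people."""
    h, w = hw
    ys, xs = np.mgrid[0:h, 0:w]
    target, other = np.zeros((h, w)), np.zeros((h, w))
    for x, y, is_target in keypoints:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
        if is_target:
            target = np.maximum(target, g)
        else:
            other = np.maximum(other, g)
    return np.stack([target, other])  # shape (2, h, w)

# usage sketch: prompt = keypoint_prompt_channels([(64, 40, 1), (20, 90, 0)], (256, 128))
# net_input = np.concatenate([rgb_crop, prompt], axis=0)  # rgb_crop is a hypothetical (3, h, w) array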
|
2407.18865 | Melih Can Zerin | Melih Can Zerin, Elif Vural and Ali \"Ozg\"ur Y{\i}lmaz | Downlink Channel Covariance Matrix Estimation via Representation
Learning with Graph Regularization | null | null | null | null | cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an algorithm for downlink (DL) channel covariance
matrix (CCM) estimation for frequency division duplexing (FDD) massive
multiple-input multiple-output (MIMO) communication systems with base station
(BS) possessing a uniform linear array (ULA) antenna structure. We consider a
setting where the UL CCM is mapped to DL CCM by a mapping function. We first
present a theoretical error analysis of learning a nonlinear embedding by
constructing a mapping function, which points to the importance of the
Lipschitz regularity of the mapping function for achieving high estimation
performance. Then, based on the theoretical ground, we propose a representation
learning algorithm as a solution for the estimation problem, where Gaussian RBF
kernel interpolators are chosen to map UL CCMs to their DL counterparts. The
proposed algorithm is based on the optimization of an objective function that
fits a regression model between the DL CCM and UL CCM samples in the training
dataset and preserves the local geometric structure of the data in the UL CCM
space, while explicitly regulating the Lipschitz continuity of the mapping
function in light of our theoretical findings. The proposed algorithm surpasses
benchmark methods in terms of three error metrics as shown by simulations.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2024 16:52:30 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Sep 2024 06:39:14 GMT"
},
{
"version": "v3",
"created": "Sun, 16 Feb 2025 17:33:51 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 21:48:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zerin",
"Melih Can",
""
],
[
"Vural",
"Elif",
""
],
[
"Yılmaz",
"Ali Özgür",
""
]
] | TITLE: Downlink Channel Covariance Matrix Estimation via Representation
Learning with Graph Regularization
ABSTRACT: In this paper, we propose an algorithm for downlink (DL) channel covariance
matrix (CCM) estimation for frequency division duplexing (FDD) massive
multiple-input multiple-output (MIMO) communication systems with base station
(BS) possessing a uniform linear array (ULA) antenna structure. We consider a
setting where the UL CCM is mapped to DL CCM by a mapping function. We first
present a theoretical error analysis of learning a nonlinear embedding by
constructing a mapping function, which points to the importance of the
Lipschitz regularity of the mapping function for achieving high estimation
performance. Then, based on the theoretical ground, we propose a representation
learning algorithm as a solution for the estimation problem, where Gaussian RBF
kernel interpolators are chosen to map UL CCMs to their DL counterparts. The
proposed algorithm is based on the optimization of an objective function that
fits a regression model between the DL CCM and UL CCM samples in the training
dataset and preserves the local geometric structure of the data in the UL CCM
space, while explicitly regulating the Lipschitz continuity of the mapping
function in light of our theoretical findings. The proposed algorithm surpasses
benchmark methods in terms of three error metrics as shown by simulations.
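A minimal sketch of the kind of Gaussian-RBF mapping described above: fit kernel weights from vectorized UL CCM training samples to DL CCM targets. For brevity the paper's explicit Lipschitz and local-geometry (graph) terms are replaced here by a single Tikhonov penalty, so this is kernel ridge regression under assumed hyperparameters, not the proposed objective.

import numpy as np

def fit_rbf_map(X_ul, Y_dl, gamma=1.0, lam=1e-3):
    """X_ul: (n, d) vectorized UL CCMs, Y_dl: (n, d) vectorized DL CCMs.
    Returns kernel weights W solving (K + lam*I) W = Y_dl."""
    d2 = ((X_ul[:, None, :] - X_ul[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.linalg.solve(K + lam * np.eye(len(X_ul)), Y_dl)

def predict_rbf_map(X_train, W, X_new, gamma=1.0):
    """Map new UL CCMs to predicted DL CCMs with the fitted interpolator."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) @ W  # (m, d) predicted DL CCMs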
|
2408.01812 | Weijia Li | Junyan Ye, Jun He, Weijia Li, Zhutao Lv, Yi Lin, Jinhua Yu, Haote
Yang, Conghui He | Leveraging BEV Paradigm for Ground-to-Aerial Image Synthesis | 10 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ground-to-aerial image synthesis focuses on generating realistic aerial
images from corresponding ground street view images while maintaining
consistent content layout, simulating a top-down view. The significant
viewpoint difference leads to domain gaps between views, and dense urban scenes
limit the visible range of street views, making this cross-view generation task
particularly challenging. In this paper, we introduce SkyDiffusion, a novel
cross-view generation method for synthesizing aerial images from street view
images, utilizing a diffusion model and the Bird's-Eye View (BEV) paradigm. The
Curved-BEV method in SkyDiffusion converts street-view images into a BEV
perspective, effectively bridging the domain gap, and employs a "multi-to-one"
mapping strategy to address occlusion issues in dense urban scenes. Next,
SkyDiffusion employs a BEV-guided diffusion model to generate
content-consistent and realistic aerial images. Additionally, we introduce a
novel dataset, Ground2Aerial-3, designed for diverse ground-to-aerial image
synthesis applications, including disaster scene aerial synthesis, low-altitude
UAV image synthesis, and historical high-resolution satellite image synthesis
tasks. Experimental results demonstrate that SkyDiffusion outperforms
state-of-the-art methods on cross-view datasets across natural (CVUSA),
suburban (CVACT), urban (VIGOR-Chicago), and various application scenarios
(G2A-3), achieving realistic and content-consistent aerial image generation.
The code, datasets and more information of this work can be found at
https://opendatalab.github.io/skydiffusion/ .
| [
{
"version": "v1",
"created": "Sat, 3 Aug 2024 15:43:56 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Aug 2024 08:05:02 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Dec 2024 11:29:09 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 05:50:20 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ye",
"Junyan",
""
],
[
"He",
"Jun",
""
],
[
"Li",
"Weijia",
""
],
[
"Lv",
"Zhutao",
""
],
[
"Lin",
"Yi",
""
],
[
"Yu",
"Jinhua",
""
],
[
"Yang",
"Haote",
""
],
[
"He",
"Conghui",
""
]
] | TITLE: Leveraging BEV Paradigm for Ground-to-Aerial Image Synthesis
ABSTRACT: Ground-to-aerial image synthesis focuses on generating realistic aerial
images from corresponding ground street view images while maintaining
consistent content layout, simulating a top-down view. The significant
viewpoint difference leads to domain gaps between views, and dense urban scenes
limit the visible range of street views, making this cross-view generation task
particularly challenging. In this paper, we introduce SkyDiffusion, a novel
cross-view generation method for synthesizing aerial images from street view
images, utilizing a diffusion model and the Bird's-Eye View (BEV) paradigm. The
Curved-BEV method in SkyDiffusion converts street-view images into a BEV
perspective, effectively bridging the domain gap, and employs a "multi-to-one"
mapping strategy to address occlusion issues in dense urban scenes. Next,
SkyDiffusion employs a BEV-guided diffusion model to generate
content-consistent and realistic aerial images. Additionally, we introduce a
novel dataset, Ground2Aerial-3, designed for diverse ground-to-aerial image
synthesis applications, including disaster scene aerial synthesis, low-altitude
UAV image synthesis, and historical high-resolution satellite image synthesis
tasks. Experimental results demonstrate that SkyDiffusion outperforms
state-of-the-art methods on cross-view datasets across natural (CVUSA),
suburban (CVACT), urban (VIGOR-Chicago), and various application scenarios
(G2A-3), achieving realistic and content-consistent aerial image generation.
The code, datasets and more information of this work can be found at
https://opendatalab.github.io/skydiffusion/ .
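The ground-plane part of a street-to-BEV conversion like the Curved-BEV step can be sketched as an inverse projection: each BEV cell is assumed to lie on the ground, and the panorama pixel looking at that cell is sampled. The curved and multi-to-one occlusion handling of SkyDiffusion is not reproduced here; camera height, grid extent, and the azimuth convention are illustrative assumptions.

import numpy as np

def ground_bev_from_panorama(pano, bev_size=256, meters=50.0, cam_height=1.6):
    """pano: equirectangular street-view image (H, W, 3). Returns a naive BEV
    image by sampling, for every ground cell, the pixel that views that cell."""
    H, W, _ = pano.shape
    half = meters / 2.0
    xs = np.linspace(-half, half, bev_size)          # east (m)
    ys = np.linspace(half, -half, bev_size)          # north (m), top row = far
    X, Y = np.meshgrid(xs, ys)
    r = np.hypot(X, Y) + 1e-6
    azimuth = np.arctan2(X, Y)                       # 0 = straight ahead (assumed)
    elevation = -np.arctan2(cam_height, r)           # ground lies below the camera
    u = ((azimuth / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((0.5 - elevation / np.pi) * (H - 1)).astype(int)
    return pano[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)]  # (bev, bev, 3)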
|
2408.07246 | Junxian Li | Junxian Li, Di Zhang, Xunzhi Wang, Zeying Hao, Jingdi Lei, Qian Tan,
Cai Zhou, Wei Liu, Yaotian Yang, Xinrui Xiong, Weiyun Wang, Zhe Chen, Wenhai
Wang, Wei Li, Shufei Zhang, Mao Su, Wanli Ouyang, Yuqiang Li, Dongzhan Zhou | ChemVLM: Exploring the Power of Multimodal Large Language Models in
Chemistry Area | 11 pages, updated version | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have achieved remarkable success and have been
applied across various scientific fields, including chemistry. However, many
chemical tasks require the processing of visual information, which cannot be
successfully handled by existing chemical LLMs. This brings a growing need for
models capable of integrating multimodal information in the chemical domain. In
this paper, we introduce \textbf{ChemVLM}, an open-source chemical multimodal
large language model specifically designed for chemical applications. ChemVLM
is trained on a carefully curated bilingual multimodal dataset that enhances
its ability to understand both textual and visual chemical information,
including molecular structures, reactions, and chemistry examination questions.
We develop three datasets for comprehensive evaluation, tailored to Chemical
Optical Character Recognition (OCR), Multimodal Chemical Reasoning (MMCR), and
Multimodal Molecule Understanding tasks. We benchmark ChemVLM against a range
of open-source and proprietary multimodal large language models on various
tasks. Experimental results demonstrate that ChemVLM achieves competitive
performance across all evaluated tasks. Our model can be found at
https://huggingface.co/AI4Chem/ChemVLM-26B.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 01:16:40 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Aug 2024 16:46:32 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 08:43:44 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 11:46:58 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Junxian",
""
],
[
"Zhang",
"Di",
""
],
[
"Wang",
"Xunzhi",
""
],
[
"Hao",
"Zeying",
""
],
[
"Lei",
"Jingdi",
""
],
[
"Tan",
"Qian",
""
],
[
"Zhou",
"Cai",
""
],
[
"Liu",
"Wei",
""
],
[
"Yang",
"Yaotian",
""
],
[
"Xiong",
"Xinrui",
""
],
[
"Wang",
"Weiyun",
""
],
[
"Chen",
"Zhe",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Li",
"Wei",
""
],
[
"Zhang",
"Shufei",
""
],
[
"Su",
"Mao",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Li",
"Yuqiang",
""
],
[
"Zhou",
"Dongzhan",
""
]
] | TITLE: ChemVLM: Exploring the Power of Multimodal Large Language Models in
Chemistry Area
ABSTRACT: Large Language Models (LLMs) have achieved remarkable success and have been
applied across various scientific fields, including chemistry. However, many
chemical tasks require the processing of visual information, which cannot be
successfully handled by existing chemical LLMs. This brings a growing need for
models capable of integrating multimodal information in the chemical domain. In
this paper, we introduce \textbf{ChemVLM}, an open-source chemical multimodal
large language model specifically designed for chemical applications. ChemVLM
is trained on a carefully curated bilingual multimodal dataset that enhances
its ability to understand both textual and visual chemical information,
including molecular structures, reactions, and chemistry examination questions.
We develop three datasets for comprehensive evaluation, tailored to Chemical
Optical Character Recognition (OCR), Multimodal Chemical Reasoning (MMCR), and
Multimodal Molecule Understanding tasks. We benchmark ChemVLM against a range
of open-source and proprietary multimodal large language models on various
tasks. Experimental results demonstrate that ChemVLM achieves competitive
performance across all evaluated tasks. Our model can be found at
https://huggingface.co/AI4Chem/ChemVLM-26B.
|
2408.14506 | Haoxuan Wang | Zhenghao Zhao, Haoxuan Wang, Yuzhang Shang, Kai Wang, Yan Yan | Distilling Long-tailed Datasets | CVPR 2025. Code is available at https://github.com/ichbill/LTDD | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation aims to synthesize a small, information-rich dataset
from a large one for efficient model training. However, existing dataset
distillation methods struggle with long-tailed datasets, which are prevalent in
real-world scenarios. By investigating the reasons behind this unexpected
result, we identified two main causes: 1) The distillation process on
imbalanced datasets develops biased gradients, leading to the synthesis of
similarly imbalanced distilled datasets. 2) The experts trained on such
datasets perform suboptimally on tail classes, resulting in misguided
distillation supervision and poor-quality soft-label initialization. To address
these issues, we first propose Distribution-agnostic Matching to avoid directly
matching the biased expert trajectories. It reduces the distance between the
student and the biased expert trajectories and prevents the tail class bias
from being distilled to the synthetic dataset. Moreover, we improve the
distillation guidance with Expert Decoupling, which jointly matches the
decoupled backbone and classifier to improve the tail class performance and
initialize reliable soft labels. This work pioneers the field of long-tailed
dataset distillation, marking the first effective effort to distill long-tailed
datasets.
| [
{
"version": "v1",
"created": "Sat, 24 Aug 2024 15:36:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 01:46:48 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhao",
"Zhenghao",
""
],
[
"Wang",
"Haoxuan",
""
],
[
"Shang",
"Yuzhang",
""
],
[
"Wang",
"Kai",
""
],
[
"Yan",
"Yan",
""
]
] | TITLE: Distilling Long-tailed Datasets
ABSTRACT: Dataset distillation aims to synthesize a small, information-rich dataset
from a large one for efficient model training. However, existing dataset
distillation methods struggle with long-tailed datasets, which are prevalent in
real-world scenarios. By investigating the reasons behind this unexpected
result, we identified two main causes: 1) The distillation process on
imbalanced datasets develops biased gradients, leading to the synthesis of
similarly imbalanced distilled datasets. 2) The experts trained on such
datasets perform suboptimally on tail classes, resulting in misguided
distillation supervision and poor-quality soft-label initialization. To address
these issues, we first propose Distribution-agnostic Matching to avoid directly
matching the biased expert trajectories. It reduces the distance between the
student and the biased expert trajectories and prevents the tail class bias
from being distilled to the synthetic dataset. Moreover, we improve the
distillation guidance with Expert Decoupling, which jointly matches the
decoupled backbone and classifier to improve the tail class performance and
initialize reliable soft labels. This work pioneers the field of long-tailed
dataset distillation, marking the first effective effort to distill long-tailed
datasets.
|
2409.13642 | Md Nakhla Rafi | Md Nakhla Rafi, Dong Jae Kim, Tse-Hsun Chen, Shaowei Wang | A Multi-Agent Approach to Fault Localization via Graph-Based Retrieval
and Reflexion | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Identifying and resolving software faults remains a challenging and
resource-intensive process. Traditional fault localization techniques, such as
Spectrum-Based Fault Localization (SBFL), leverage statistical analysis of test
coverage but often suffer from limited accuracy. While learning-based
approaches improve fault localization, they demand extensive training datasets
and high computational resources. Recent advances in Large Language Models
(LLMs) offer new opportunities by enhancing code understanding and reasoning.
However, existing LLM-based fault localization techniques face significant
challenges, including token limitations, performance degradation with long
inputs, and scalability issues in complex software systems. To overcome these
obstacles, we propose LLM4FL, a multi-agent fault localization framework that
utilizes three specialized LLM agents. First, the Context Extraction Agent
applies an order-sensitive segmentation strategy to partition large coverage
data within the LLM's token limit, analyze failure context, and prioritize
failure-related methods. The Debugger Agent, which employs graph-based
retrieval-augmented code navigation, then processes the extracted data to reason
about failure causes and rank suspicious methods. Finally, the Reviewer Agent
re-evaluates the identified faulty methods using verbal reinforcement learning,
engaging in self-criticism and iterative refinement. Evaluated on the Defects4J
(V2.0.0) benchmark, which includes 675 faults from 14 Java projects, LLM4FL
achieves an 18.55\% improvement in Top-1 accuracy over AutoFL and 4.82\% over
SoapFL. It outperforms supervised techniques such as DeepFL and Grace, all
without requiring task-specific training. Furthermore, its coverage
segmentation and prompt chaining strategies enhance performance, increasing
Top-1 accuracy by up to 22\%.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 16:47:34 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 04:22:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Rafi",
"Md Nakhla",
""
],
[
"Kim",
"Dong Jae",
""
],
[
"Chen",
"Tse-Hsun",
""
],
[
"Wang",
"Shaowei",
""
]
] | TITLE: A Multi-Agent Approach to Fault Localization via Graph-Based Retrieval
and Reflexion
ABSTRACT: Identifying and resolving software faults remains a challenging and
resource-intensive process. Traditional fault localization techniques, such as
Spectrum-Based Fault Localization (SBFL), leverage statistical analysis of test
coverage but often suffer from limited accuracy. While learning-based
approaches improve fault localization, they demand extensive training datasets
and high computational resources. Recent advances in Large Language Models
(LLMs) offer new opportunities by enhancing code understanding and reasoning.
However, existing LLM-based fault localization techniques face significant
challenges, including token limitations, performance degradation with long
inputs, and scalability issues in complex software systems. To overcome these
obstacles, we propose LLM4FL, a multi-agent fault localization framework that
utilizes three specialized LLM agents. First, the Context Extraction Agent
applies an order-sensitive segmentation strategy to partition large coverage
data within the LLM's token limit, analyze failure context, and prioritize
failure-related methods. The Debugger Agent, which employs graph-based
retrieval-augmented code navigation, then processes the extracted data to reason
about failure causes and rank suspicious methods. Finally, the Reviewer Agent
re-evaluates the identified faulty methods using verbal reinforcement learning,
engaging in self-criticism and iterative refinement. Evaluated on the Defects4J
(V2.0.0) benchmark, which includes 675 faults from 14 Java projects, LLM4FL
achieves an 18.55\% improvement in Top-1 accuracy over AutoFL and 4.82\% over
SoapFL. It outperforms supervised techniques such as DeepFL and Grace, all
without requiring task-specific training. Furthermore, its coverage
segmentation and prompt chaining strategies enhance performance, increasing
Top-1 accuracy by up to 22\%.
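The order-sensitive segmentation used by the Context Extraction Agent can be approximated as a greedy packing of covered methods, kept in their original coverage order, into chunks that respect a token budget. The token estimator and the budget value below are placeholders, not the framework's actual settings.

def estimate_tokens(text):
    # crude placeholder: roughly 4 characters per token
    return max(1, len(text) // 4)

def segment_coverage(covered_methods, token_budget=3000):
    """covered_methods: list of method/coverage description strings in their
    original execution order. Returns chunks that each fit one prompt."""
    chunks, current, used = [], [], 0
    for entry in covered_methods:
        cost = estimate_tokens(entry)
        if current and used + cost > token_budget:
            chunks.append(current)
            current, used = [], 0
        current.append(entry)
        used += cost
    if current:
        chunks.append(current)
    return chunks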
|
2409.18584 | Jiaming Zhou | Jiaming Zhou, Shiyao Wang, Shiwan Zhao, Jiabei He, Haoqin Sun, Hui
Wang, Cheng Liu, Aobo Kong, Yujie Guo, Xi Yang, Yequan Wang, Yonghua Lin and
Yong Qin | ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young
Children Aged 3-5 | null | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic speech recognition (ASR) systems have advanced significantly with
models like Whisper, Conformer, and self-supervised frameworks such as Wav2vec
2.0 and HuBERT. However, developing robust ASR models for young children's
speech remains challenging due to differences in pronunciation, tone, and pace
compared to adult speech. In this paper, we introduce a new Mandarin speech
dataset focused on children aged 3 to 5, addressing the scarcity of resources
in this area. The dataset comprises 41.25 hours of speech with carefully
crafted manual transcriptions, collected from 397 speakers across various
provinces in China, with balanced gender representation. We provide a
comprehensive analysis of speaker demographics, speech duration distribution
and geographic coverage. Additionally, we evaluate ASR performance on models
trained from scratch, such as Conformer, as well as fine-tuned pre-trained
models like HuBERT and Whisper, where fine-tuning demonstrates significant
performance improvements. Furthermore, we assess speaker verification (SV) on
our dataset, showing that, despite the challenges posed by the unique vocal
characteristics of young children, the dataset effectively supports both ASR
and SV tasks. This dataset is a valuable contribution to Mandarin child speech
research. The dataset is now open-source and freely available for all academic
purposes on https://github.com/flageval-baai/ChildMandarin.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 09:42:27 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Sep 2024 12:49:04 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 12:06:13 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhou",
"Jiaming",
""
],
[
"Wang",
"Shiyao",
""
],
[
"Zhao",
"Shiwan",
""
],
[
"He",
"Jiabei",
""
],
[
"Sun",
"Haoqin",
""
],
[
"Wang",
"Hui",
""
],
[
"Liu",
"Cheng",
""
],
[
"Kong",
"Aobo",
""
],
[
"Guo",
"Yujie",
""
],
[
"Yang",
"Xi",
""
],
[
"Wang",
"Yequan",
""
],
[
"Lin",
"Yonghua",
""
],
[
"Qin",
"Yong",
""
]
] | TITLE: ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young
Children Aged 3-5
ABSTRACT: Automatic speech recognition (ASR) systems have advanced significantly with
models like Whisper, Conformer, and self-supervised frameworks such as Wav2vec
2.0 and HuBERT. However, developing robust ASR models for young children's
speech remains challenging due to differences in pronunciation, tone, and pace
compared to adult speech. In this paper, we introduce a new Mandarin speech
dataset focused on children aged 3 to 5, addressing the scarcity of resources
in this area. The dataset comprises 41.25 hours of speech with carefully
crafted manual transcriptions, collected from 397 speakers across various
provinces in China, with balanced gender representation. We provide a
comprehensive analysis of speaker demographics, speech duration distribution
and geographic coverage. Additionally, we evaluate ASR performance on models
trained from scratch, such as Conformer, as well as fine-tuned pre-trained
models like HuBERT and Whisper, where fine-tuning demonstrates significant
performance improvements. Furthermore, we assess speaker verification (SV) on
our dataset, showing that, despite the challenges posed by the unique vocal
characteristics of young children, the dataset effectively supports both ASR
and SV tasks. This dataset is a valuable contribution to Mandarin child speech
research. The dataset is now open-source and freely available for all academic
purposes on https://github.com/flageval-baai/ChildMandarin.
|
2410.03781 | Romain Puech | Romain Puech, Jakub Macina, Julia Chatain, Mrinmaya Sachan, Manu Kapur | Towards the Pedagogical Steering of Large Language Models for Tutoring:
A Case Study with Modeling Productive Failure | 19 pages, 10 figures, 6 tables | null | null | null | cs.HC cs.AI cs.CY cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | One-to-one tutoring is one of the most efficient methods of teaching. With
the growing popularity of Large Language Models (LLMs), there have been efforts
to create LLM-based conversational tutors which can expand the benefits of
one-to-one tutoring to everyone. However, current LLMs are trained primarily to
be helpful assistants and lack crucial pedagogical skills. For example, they
often quickly reveal the solution to the student and fail to plan for a richer
multi-turn pedagogical interaction. To use LLMs in pedagogical settings, they need to
be steered to use effective teaching strategies: a problem we introduce as
Pedagogical Steering. We develop StratL, an algorithm to optimize LLM prompts
and steer it to follow a predefined multi-turn tutoring plan represented as a
transition graph. As a case study, we create a prototype tutor for high school
math following Productive Failure (PF), an advanced and effective learning
design. To validate our approach in a real-world setting, we run a field study
with 17 high school students in Singapore and show that StratL succeeds in
steering the LLM to follow the PF tutoring strategy. Finally, we highlight
challenges in Pedagogical Steering of LLMs and offer opportunities for further
improvements by publishing a dataset of PF problems and our code.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 16:15:41 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 19:44:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Puech",
"Romain",
""
],
[
"Macina",
"Jakub",
""
],
[
"Chatain",
"Julia",
""
],
[
"Sachan",
"Mrinmaya",
""
],
[
"Kapur",
"Manu",
""
]
] | TITLE: Towards the Pedagogical Steering of Large Language Models for Tutoring:
A Case Study with Modeling Productive Failure
ABSTRACT: One-to-one tutoring is one of the most efficient methods of teaching. With
the growing popularity of Large Language Models (LLMs), there have been efforts
to create LLM-based conversational tutors which can expand the benefits of
one-to-one tutoring to everyone. However, current LLMs are trained primarily to
be helpful assistants and lack crucial pedagogical skills. For example, they
often quickly reveal the solution to the student and fail to plan for a richer
multi-turn pedagogical interaction. To use LLMs in pedagogical settings, they need to
be steered to use effective teaching strategies: a problem we introduce as
Pedagogical Steering. We develop StratL, an algorithm to optimize LLM prompts
and steer it to follow a predefined multi-turn tutoring plan represented as a
transition graph. As a case study, we create a prototype tutor for high school
math following Productive Failure (PF), an advanced and effective learning
design. To validate our approach in a real-world setting, we run a field study
with 17 high school students in Singapore and show that StratL succeeds in
steering the LLM to follow the PF tutoring strategy. Finally, we highlight
challenges in Pedagogical Steering of LLMs and offer opportunities for further
improvements by publishing a dataset of PF problems and our code.
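A multi-turn tutoring plan represented as a transition graph, as described above, can be sketched as a small state machine that picks the system prompt for the next LLM turn. The states, edges, and prompt texts here are invented for illustration and are not the Productive Failure plan used in the study.

# hypothetical tutoring transition graph: state -> {observed student signal -> next state}
TRANSITIONS = {
    "pose_problem":       {"attempted": "probe_ideas", "stuck": "give_hint"},
    "probe_ideas":        {"partial": "contrast_solutions", "stuck": "give_hint"},
    "give_hint":          {"attempted": "probe_ideas"},
    "contrast_solutions": {"any": "consolidate"},
    "consolidate":        {},
}

PROMPTS = {
    "pose_problem":       "Present the problem. Do not reveal the solution.",
    "probe_ideas":        "Ask the student to explain their current approach.",
    "give_hint":          "Offer one small hint; still do not reveal the solution.",
    "contrast_solutions": "Compare the student's attempts with the canonical method.",
    "consolidate":        "Summarize the key idea the student should take away.",
}

def next_state(state, signal):
    edges = TRANSITIONS.get(state, {})
    return edges.get(signal, edges.get("any", state))  # stay put if no edge matches

def steering_prompt(state):
    return PROMPTS[state]  # prepended to the LLM call for the next tutor turn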
|
2410.06385 | James Pope | James Pope, Md Hassanuzzaman, William Chapman, Huw Day, Mingmar
Sherpa, Omar Emara, Nirmala Adhikari, Ayush Joshi | Skin Cancer Machine Learning Model Tone Bias | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Background: Many open-source skin cancer image datasets are the result of
clinical trials conducted in countries with lighter skin tones. Due to this
tone imbalance, machine learning models derived from these datasets can perform
well at detecting skin cancer for lighter skin tones. Any tone bias in these
models could introduce fairness concerns and reduce public trust in the
artificial intelligence health field.
Methods: We examine a subset of images from the International Skin Imaging
Collaboration (ISIC) archive that provide tone information. The subset has a
significant tone imbalance. These imbalances could explain a model's tone bias.
To address this, we train models using the imbalanced dataset and a balanced
dataset to compare against. The datasets are used to train a deep convolutional
neural network model to classify the images as malignant or benign. We then
evaluate the models' disparate impact, based on selection rate, relative to
dark or light skin tone.
Results: Using the imbalanced dataset, we found that the model is
significantly better at detecting malignant images in lighter tones, resulting in
a disparate impact of 0.577. Using the balanced dataset, we found that the
model is also significantly better at detecting malignant images in lighter
versus darker tones with a disparate impact of 0.684. Using the imbalanced or
balanced dataset to train the model still results in a disparate impact well
below the standard threshold of 0.80 which suggests the model is biased with
respect to skin tone.
Conclusion: The results show that typical skin cancer machine learning models
can be tone biased. These results provide evidence that diagnosis or tone
imbalance is not the cause of the bias. Other techniques will be necessary to
identify and address the bias in these models, an area of future investigation.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 21:33:02 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:12:12 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Pope",
"James",
""
],
[
"Hassanuzzaman",
"Md",
""
],
[
"Chapman",
"William",
""
],
[
"Day",
"Huw",
""
],
[
"Sherpa",
"Mingmar",
""
],
[
"Emara",
"Omar",
""
],
[
"Adhikari",
"Nirmala",
""
],
[
"Joshi",
"Ayush",
""
]
] | TITLE: Skin Cancer Machine Learning Model Tone Bias
ABSTRACT: Background: Many open-source skin cancer image datasets are the result of
clinical trials conducted in countries with lighter skin tones. Due to this
tone imbalance, machine learning models derived from these datasets can perform
well at detecting skin cancer for lighter skin tones. Any tone bias in these
models could introduce fairness concerns and reduce public trust in the
artificial intelligence health field.
Methods: We examine a subset of images from the International Skin Imaging
Collaboration (ISIC) archive that provide tone information. The subset has a
significant tone imbalance. These imbalances could explain a model's tone bias.
To address this, we train models using the imbalanced dataset and a balanced
dataset to compare against. The datasets are used to train a deep convolutional
neural network model to classify the images as malignant or benign. We then
evaluate the models' disparate impact, based on selection rate, relative to
dark or light skin tone.
Results: Using the imbalanced dataset, we found that the model is
significantly better at detecting malignant images in lighter tones, resulting in
a disparate impact of 0.577. Using the balanced dataset, we found that the
model is also significantly better at detecting malignant images in lighter
versus darker tones with a disparate impact of 0.684. Using the imbalanced or
balanced dataset to train the model still results in a disparate impact well
below the standard threshold of 0.80 which suggests the model is biased with
respect to skin tone.
Conclusion: The results show that typical skin cancer machine learning models
can be tone biased. These results provide evidence that diagnosis or tone
imbalance is not the cause of the bias. Other techniques will be necessary to
identify and address the bias in these models, an area of future investigation.
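Disparate impact based on selection rate, the fairness metric used above, is the ratio of the rate at which the model flags images as malignant in the disadvantaged group to the rate in the advantaged group, compared against the 0.80 threshold. The predictions below are made-up toy values to show the computation, not the study's data.

import numpy as np

def disparate_impact(pred_malignant, group):
    """pred_malignant: boolean/0-1 array of model 'malignant' decisions.
    group: array of 'dark' / 'light' labels for the same images."""
    pred_malignant, group = np.asarray(pred_malignant), np.asarray(group)
    rate_dark = pred_malignant[group == "dark"].mean()
    rate_light = pred_malignant[group == "light"].mean()
    return rate_dark / rate_light

# toy example: 4/10 dark-tone vs 7/10 light-tone selections
di = disparate_impact([1]*4 + [0]*6 + [1]*7 + [0]*3, ["dark"]*10 + ["light"]*10)
print(di, "biased" if di < 0.80 else "within threshold")  # ~0.571 -> biased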
|
2410.09907 | Alexandru-Iulius Jerpelea | Ecaterina \c{S}tef\u{a}nescu and Alexandru-Iulius Jerpelea | Reddit is all you need: Authorship profiling for Romanian | 10 pages, 5 tables and 1 figure, published and presented at The 19th
International Conference on Linguistic Resources and Tools for Natural
Language Processing (ConsILR 2024) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Authorship profiling is the process of identifying an author's
characteristics based on their writings. This centuries old problem has become
more intriguing especially with recent developments in Natural Language
Processing (NLP). In this paper, we introduce a corpus of short texts in the
Romanian language, annotated with certain author characteristic keywords; to
our knowledge, the first of its kind. In order to do this, we exploit a social
media platform called Reddit. We leverage its thematic community-based
structure (subreddits structure), which offers information about the author's
background. We infer an user's demographic and some broad personal traits, such
as age category, employment status, interests, and social orientation based on
the subreddit and other cues. We thus obtain a 23k+ samples corpus, extracted
from 100+ Romanian subreddits. We analyse our dataset, and finally, we
fine-tune and evaluate Large Language Models (LLMs) to prove baselines
capabilities for authorship profiling using the corpus, indicating the need for
further research in the field. We publicly release all our resources.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 16:27:31 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 19:48:28 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ştefănescu",
"Ecaterina",
""
],
[
"Jerpelea",
"Alexandru-Iulius",
""
]
] | TITLE: Reddit is all you need: Authorship profiling for Romanian
ABSTRACT: Authorship profiling is the process of identifying an author's
characteristics based on their writings. This centuries-old problem has become
more intriguing, especially with recent developments in Natural Language
Processing (NLP). In this paper, we introduce a corpus of short texts in the
Romanian language, annotated with certain author characteristic keywords; to
our knowledge, the first of its kind. In order to do this, we exploit a social
media platform called Reddit. We leverage its thematic community-based
structure (subreddits structure), which offers information about the author's
background. We infer a user's demographics and some broad personal traits, such
as age category, employment status, interests, and social orientation based on
the subreddit and other cues. We thus obtain a 23k+ samples corpus, extracted
from 100+ Romanian subreddits. We analyse our dataset, and finally, we
fine-tune and evaluate Large Language Models (LLMs) to demonstrate baseline
capabilities for authorship profiling using the corpus, indicating the need for
further research in the field. We publicly release all our resources.
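The subreddit-based annotation described above amounts to mapping the community a post comes from to coarse author-trait keywords. The mapping below is a tiny hypothetical example, not the actual label scheme of the released corpus.

# hypothetical subreddit -> trait-keyword mapping (illustrative only)
SUBREDDIT_TRAITS = {
    "Romania_studenti": {"age": "18-25", "employment": "student"},
    "programare":       {"interests": "programming"},
    "parinti":          {"age": "26-45", "interests": "parenting"},
}

def annotate(post):
    """post: dict with 'subreddit' and 'text'. Returns a trait-labeled sample."""
    traits = SUBREDDIT_TRAITS.get(post["subreddit"], {})
    return {"text": post["text"], **traits}

sample = annotate({"subreddit": "programare", "text": "Ce limbaj recomandati?"})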
|
2410.11666 | Zhiqiang Yan | Zhengxue Wang and Zhiqiang Yan and Jinshan Pan and Guangwei Gao and
Kai Zhang and Jian Yang | DORNet: A Degradation Oriented and Regularized Network for Blind Depth
Super-Resolution | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent RGB-guided depth super-resolution methods have achieved impressive
performance under the assumption of fixed and known degradation (e.g., bicubic
downsampling). However, in real-world scenarios, captured depth data often
suffer from unconventional and unknown degradation due to sensor limitations
and complex imaging environments (e.g., low reflective surfaces, varying
illumination). Consequently, the performance of these methods significantly
declines when real-world degradations deviate from their assumptions. In this
paper, we propose the Degradation Oriented and Regularized Network (DORNet), a
novel framework designed to adaptively address unknown degradation in
real-world scenes through implicit degradation representations. Our approach
begins with the development of a self-supervised degradation learning strategy,
which models the degradation representations of low-resolution depth data using
routing selection-based degradation regularization. To facilitate effective
RGB-D fusion, we further introduce a degradation-oriented feature
transformation module that selectively propagates RGB content into the depth
data based on the learned degradation priors. Extensive experimental results on
both real and synthetic datasets demonstrate the superiority of our DORNet in
handling unknown degradation, outperforming existing methods. The code is
available at https://github.com/yanzq95/DORNet.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 14:53:07 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Oct 2024 08:28:42 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Nov 2024 12:00:44 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 11:57:01 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Zhengxue",
""
],
[
"Yan",
"Zhiqiang",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Gao",
"Guangwei",
""
],
[
"Zhang",
"Kai",
""
],
[
"Yang",
"Jian",
""
]
] | TITLE: DORNet: A Degradation Oriented and Regularized Network for Blind Depth
Super-Resolution
ABSTRACT: Recent RGB-guided depth super-resolution methods have achieved impressive
performance under the assumption of fixed and known degradation (e.g., bicubic
downsampling). However, in real-world scenarios, captured depth data often
suffer from unconventional and unknown degradation due to sensor limitations
and complex imaging environments (e.g., low reflective surfaces, varying
illumination). Consequently, the performance of these methods significantly
declines when real-world degradations deviate from their assumptions. In this
paper, we propose the Degradation Oriented and Regularized Network (DORNet), a
novel framework designed to adaptively address unknown degradation in
real-world scenes through implicit degradation representations. Our approach
begins with the development of a self-supervised degradation learning strategy,
which models the degradation representations of low-resolution depth data using
routing selection-based degradation regularization. To facilitate effective
RGB-D fusion, we further introduce a degradation-oriented feature
transformation module that selectively propagates RGB content into the depth
data based on the learned degradation priors. Extensive experimental results on
both real and synthetic datasets demonstrate the superiority of our DORNet in
handling unknown degradation, outperforming existing methods. The code is
available at https://github.com/yanzq95/DORNet.
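The degradation-oriented feature transformation can be thought of as gating how much RGB content is injected into the depth branch based on a learned degradation code. The module below is a generic conditional-gating sketch in PyTorch, not DORNet's actual layer design; channel counts and the residual injection are assumptions.

import torch
import torch.nn as nn

class DegradationGatedFusion(nn.Module):
    """Fuse RGB features into depth features, scaled by a per-channel gate
    predicted from an implicit degradation representation."""
    def __init__(self, channels, deg_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(deg_dim, channels), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat_depth, feat_rgb, deg_code):
        # deg_code: (B, deg_dim) implicit degradation representation
        g = self.gate(deg_code)[:, :, None, None]            # (B, C, 1, 1)
        fused = self.fuse(torch.cat([feat_depth, g * feat_rgb], dim=1))
        return feat_depth + fused                             # residual injection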
|
2410.11761 | Ying Chen | Ying Chen, Guoan Wang, Yuanfeng Ji, Yanjun Li, Jin Ye, Tianbin Li,
Ming Hu, Rongshan Yu, Yu Qiao, and Junjun He | SlideChat: A Large Vision-Language Assistant for Whole-Slide Pathology
Image Understanding | Accepted by CVPR2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the progress made by multimodal large language models (MLLMs) in
computational pathology, they remain limited by a predominant focus on
patch-level analysis, missing essential contextual information at the
whole-slide level. The lack of large-scale instruction datasets and the
gigapixel scale of whole slide images (WSIs) pose significant developmental
challenges. In this paper, we present SlideChat, the first vision-language
assistant capable of understanding gigapixel whole-slide images, exhibiting
excellent multimodal conversational capability and the ability to respond to
complex instructions across diverse pathology scenarios. To support its development, we created
SlideInstruction, the largest instruction-following dataset for WSIs consisting
of 4.2K WSI captions and 176K VQA pairs with multiple categories. Furthermore,
we propose SlideBench, a multimodal benchmark that incorporates captioning and
VQA tasks to assess SlideChat's capabilities in varied clinical settings such
as microscopy and diagnosis. Compared to both general and specialized MLLMs,
SlideChat exhibits exceptional capabilities achieving state-of-the-art
performance on 18 of 22 tasks. For example, it achieved an overall accuracy of
81.17% on SlideBench-VQA (TCGA), and 54.15% on SlideBench-VQA (BCNB). Our code,
data, and model are publicly accessible at
https://uni-medical.github.io/SlideChat.github.io.
| [
{
"version": "v1",
"created": "Tue, 15 Oct 2024 16:33:33 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 08:35:28 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 17:56:39 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Ying",
""
],
[
"Wang",
"Guoan",
""
],
[
"Ji",
"Yuanfeng",
""
],
[
"Li",
"Yanjun",
""
],
[
"Ye",
"Jin",
""
],
[
"Li",
"Tianbin",
""
],
[
"Hu",
"Ming",
""
],
[
"Yu",
"Rongshan",
""
],
[
"Qiao",
"Yu",
""
],
[
"He",
"Junjun",
""
]
] | TITLE: SlideChat: A Large Vision-Language Assistant for Whole-Slide Pathology
Image Understanding
ABSTRACT: Despite the progress made by multimodal large language models (MLLMs) in
computational pathology, they remain limited by a predominant focus on
patch-level analysis, missing essential contextual information at the
whole-slide level. The lack of large-scale instruction datasets and the
gigapixel scale of whole slide images (WSIs) pose significant developmental
challenges. In this paper, we present SlideChat, the first vision-language
assistant capable of understanding gigapixel whole-slide images, exhibiting
excellent multimodal conversational capability and the ability to respond to
complex instructions across diverse pathology scenarios. To support its development, we created
SlideInstruction, the largest instruction-following dataset for WSIs consisting
of 4.2K WSI captions and 176K VQA pairs with multiple categories. Furthermore,
we propose SlideBench, a multimodal benchmark that incorporates captioning and
VQA tasks to assess SlideChat's capabilities in varied clinical settings such
as microscopy and diagnosis. Compared to both general and specialized MLLMs,
SlideChat exhibits exceptional capabilities achieving state-of-the-art
performance on 18 of 22 tasks. For example, it achieved an overall accuracy of
81.17% on SlideBench-VQA (TCGA), and 54.15% on SlideBench-VQA (BCNB). Our code,
data, and model are publicly accessible at
https://uni-medical.github.io/SlideChat.github.io.
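Handling a gigapixel WSI typically starts by tiling it into patches that a patch-level encoder can digest before slide-level aggregation. The sketch below tiles an in-memory array and skips mostly-background tiles with a simple brightness test; the patch size and threshold are arbitrary, and real pipelines read pyramidal slide formats lazily rather than loading the full image.

import numpy as np

def tile_slide(slide, patch=512, white_thresh=0.85):
    """slide: (H, W, 3) uint8 array. Yields ((row, col), patch_array) for
    tissue-bearing patches; near-white patches are treated as background."""
    H, W, _ = slide.shape
    for r in range(0, H - patch + 1, patch):
        for c in range(0, W - patch + 1, patch):
            p = slide[r:r + patch, c:c + patch]
            if (p.mean() / 255.0) < white_thresh:   # keep patches with tissue
                yield (r // patch, c // patch), p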
|
2410.18113 | Zihan Wu Mr | Zihan Wu, Zhaoke Huang and Hong Yan | Scalable Co-Clustering for Large-Scale Data through Dynamic Partitioning
and Hierarchical Merging | 8 pages, 2 figures | null | 10.1109/SMC54092.2024.10832071 | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Co-clustering simultaneously clusters rows and columns, revealing more
fine-grained groups. However, existing co-clustering methods suffer from poor
scalability and cannot handle large-scale data. This paper presents a novel and
scalable co-clustering method designed to uncover intricate patterns in
high-dimensional, large-scale datasets. Specifically, we first propose a large
matrix partitioning algorithm that partitions a large matrix into smaller
submatrices, enabling parallel co-clustering. This method employs a
probabilistic model to optimize the configuration of submatrices, balancing the
computational efficiency and depth of analysis. Additionally, we propose a
hierarchical co-cluster merging algorithm that efficiently identifies and
merges co-clusters from these submatrices, enhancing the robustness and
reliability of the process. Extensive evaluations validate the effectiveness
and efficiency of our method. Experimental results demonstrate a significant
reduction in computation time, with an approximate 83% decrease for dense
matrices and up to 30% for sparse matrices.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 04:47:22 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 04:30:02 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 14:36:56 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wu",
"Zihan",
""
],
[
"Huang",
"Zhaoke",
""
],
[
"Yan",
"Hong",
""
]
] | TITLE: Scalable Co-Clustering for Large-Scale Data through Dynamic Partitioning
and Hierarchical Merging
ABSTRACT: Co-clustering simultaneously clusters rows and columns, revealing more
fine-grained groups. However, existing co-clustering methods suffer from poor
scalability and cannot handle large-scale data. This paper presents a novel and
scalable co-clustering method designed to uncover intricate patterns in
high-dimensional, large-scale datasets. Specifically, we first propose a large
matrix partitioning algorithm that partitions a large matrix into smaller
submatrices, enabling parallel co-clustering. This method employs a
probabilistic model to optimize the configuration of submatrices, balancing the
computational efficiency and depth of analysis. Additionally, we propose a
hierarchical co-cluster merging algorithm that efficiently identifies and
merges co-clusters from these submatrices, enhancing the robustness and
reliability of the process. Extensive evaluations validate the effectiveness
and efficiency of our method. Experimental results demonstrate a significant
reduction in computation time, with an approximate 83% decrease for dense
matrices and up to 30% for sparse matrices.
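The partition-then-merge idea above can be sketched as: split the matrix into row/column blocks, co-cluster each block independently with any base routine, then merge co-clusters whose row and column sets overlap strongly. The Jaccard criterion and the 0.5 threshold are illustrative choices, not the paper's probabilistic configuration model.

import numpy as np
from itertools import product

def partition(matrix, row_parts=2, col_parts=2):
    """Split a matrix into row_parts x col_parts submatrices with index maps."""
    row_idx = np.array_split(np.arange(matrix.shape[0]), row_parts)
    col_idx = np.array_split(np.arange(matrix.shape[1]), col_parts)
    return [(r, c, matrix[np.ix_(r, c)]) for r, c in product(row_idx, col_idx)]

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / max(1, len(a | b))

def merge_coclusters(coclusters, thresh=0.5):
    """coclusters: list of (row_ids, col_ids) in global indices. Greedily merge
    pairs whose row and column sets both overlap above thresh."""
    merged = [[set(rows), set(cols)] for rows, cols in coclusters]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if jaccard(merged[i][0], merged[j][0]) >= thresh and \
                   jaccard(merged[i][1], merged[j][1]) >= thresh:
                    merged[i] = [merged[i][0] | merged[j][0],
                                 merged[i][1] | merged[j][1]]
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged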
|
2411.05060 | Kellin Pelrine | Camille Thibault, Jacob-Junqi Tian, Gabrielle Peloquin-Skulski, Taylor
Lynn Curtis, James Zhou, Florence Laflamme, Yuxiang Guan, Reihaneh Rabbany,
Jean-Fran\c{c}ois Godbout, Kellin Pelrine | A Guide to Misinformation Detection Data and Evaluation | null | null | null | null | cs.SI cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Misinformation is a complex societal issue, and mitigating solutions are
difficult to create due to data deficiencies. To address this, we have curated
the largest collection of (mis)information datasets in the literature, totaling
75. From these, we evaluated the quality of 36 datasets that consist of
statements or claims, as well as the 9 datasets that consist of data in purely
paragraph form. We assess these datasets to identify those with solid
foundations for empirical work and those with flaws that could result in
misleading and non-generalizable results, such as spurious correlations, or
examples that are ambiguous or otherwise impossible to assess for veracity. We
find the latter issue is particularly severe and affects most datasets in the
literature. We further provide state-of-the-art baselines on all these
datasets, but show that regardless of label quality, categorical labels may no
longer give an accurate evaluation of detection model performance. Finally, we
propose and highlight Evaluation Quality Assessment (EQA) as a tool to guide
the field toward systemic solutions rather than inadvertently propagating
issues in evaluation. Overall, this guide aims to provide a roadmap for higher
quality data and better grounded evaluations, ultimately improving research in
misinformation detection. All datasets and other artifacts are available at
misinfo-datasets.complexdatalab.com.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 18:47:39 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 06:52:27 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Thibault",
"Camille",
""
],
[
"Tian",
"Jacob-Junqi",
""
],
[
"Peloquin-Skulski",
"Gabrielle",
""
],
[
"Curtis",
"Taylor Lynn",
""
],
[
"Zhou",
"James",
""
],
[
"Laflamme",
"Florence",
""
],
[
"Guan",
"Yuxiang",
""
],
[
"Rabbany",
"Reihaneh",
""
],
[
"Godbout",
"Jean-François",
""
],
[
"Pelrine",
"Kellin",
""
]
] | TITLE: A Guide to Misinformation Detection Data and Evaluation
ABSTRACT: Misinformation is a complex societal issue, and mitigating solutions are
difficult to create due to data deficiencies. To address this, we have curated
the largest collection of (mis)information datasets in the literature, totaling
75. From these, we evaluated the quality of 36 datasets that consist of
statements or claims, as well as the 9 datasets that consist of data in purely
paragraph form. We assess these datasets to identify those with solid
foundations for empirical work and those with flaws that could result in
misleading and non-generalizable results, such as spurious correlations, or
examples that are ambiguous or otherwise impossible to assess for veracity. We
find the latter issue is particularly severe and affects most datasets in the
literature. We further provide state-of-the-art baselines on all these
datasets, but show that regardless of label quality, categorical labels may no
longer give an accurate evaluation of detection model performance. Finally, we
propose and highlight Evaluation Quality Assessment (EQA) as a tool to guide
the field toward systemic solutions rather than inadvertently propagating
issues in evaluation. Overall, this guide aims to provide a roadmap for higher
quality data and better grounded evaluations, ultimately improving research in
misinformation detection. All datasets and other artifacts are available at
misinfo-datasets.complexdatalab.com.
|
2411.05740 | Manas Mejari | Manas Mejari, Valentina Breschi, Simone Formentin, Dario Piga | Bias correction and instrumental variables for direct data-driven
model-reference control | 8 pages, 3 figures, preprint submitted to the European Control
Conference, ECC 2025 | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Managing noisy data is a central challenge in direct data-driven control
design. We propose an approach for synthesizing model-reference controllers for
linear time-invariant (LTI) systems using noisy state-input data, employing
novel noise mitigation techniques. Specifically, we demonstrate that using
data-based covariance parameterization of the controller enables
bias-correction and instrumental variable techniques within the data-driven
optimization, thus reducing measurement noise effects as data volume increases.
The number of decision variables remains independent of dataset size, making
this method scalable to large datasets. The approach's effectiveness is
demonstrated with a numerical example.
| [
{
"version": "v1",
"created": "Fri, 8 Nov 2024 17:55:24 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Mejari",
"Manas",
""
],
[
"Breschi",
"Valentina",
""
],
[
"Formentin",
"Simone",
""
],
[
"Piga",
"Dario",
""
]
] | TITLE: Bias correction and instrumental variables for direct data-driven
model-reference control
ABSTRACT: Managing noisy data is a central challenge in direct data-driven control
design. We propose an approach for synthesizing model-reference controllers for
linear time-invariant (LTI) systems using noisy state-input data, employing
novel noise mitigation techniques. Specifically, we demonstrate that using
data-based covariance parameterization of the controller enables
bias-correction and instrumental variable techniques within the data-driven
optimization, thus reducing measurement noise effects as data volume increases.
The number of decision variables remains independent of dataset size, making
this method scalable to large datasets. The approach's effectiveness is
demonstrated with a numerical example.
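The generic instrumental-variable correction underlying techniques like the one above: ordinary least squares is biased when noisy regressors correlate with the residual, whereas projecting through instruments restores consistency as the dataset grows. This is the textbook estimator, not the paper's covariance-parameterized controller synthesis; the delayed-sample instrument in the comment is a common but assumed choice.

import numpy as np

def iv_least_squares(Phi, y, Z):
    """Phi: (N, p) noisy regressors, y: (N,) targets, Z: (N, p) instruments
    correlated with Phi but not with the measurement noise. Returns
    theta = (Z^T Phi)^{-1} Z^T y, which is consistent as N grows."""
    return np.linalg.solve(Z.T @ Phi, Z.T @ y)

# e.g., for noisy state-input data, time-shifted samples are a common instrument choice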
|
2411.09361 | Zepeng Frazier Huo | Zepeng Huo, Jason Alan Fries, Alejandro Lozano, Jeya Maria Jose
Valanarasu, Ethan Steinberg, Louis Blankemeier, Akshay S. Chaudhari, Curtis
Langlotz, and Nigam H. Shah | Time-to-Event Pretraining for 3D Medical Imaging | 34 pages, 19 figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the rise of medical foundation models and the growing availability of
imaging data, scalable pretraining techniques offer a promising way to identify
imaging biomarkers predictive of future disease risk. While current
self-supervised methods for 3D medical imaging models capture local structural
features like organ morphology, they fail to link pixel biomarkers with
long-term health outcomes due to a missing context problem. Current approaches
lack the temporal context necessary to identify biomarkers correlated with
disease progression, as they rely on supervision derived only from images and
concurrent text descriptions. To address this, we introduce time-to-event
pretraining, a pretraining framework for 3D medical imaging models that
leverages large-scale temporal supervision from paired, longitudinal electronic
health records (EHRs). Using a dataset of 18,945 CT scans (4.2 million 2D
images) and time-to-event distributions across thousands of EHR-derived tasks,
our method improves outcome prediction, achieving an average AUROC increase of
23.7% and a 29.4% gain in Harrell's C-index across 8 benchmark tasks.
Importantly, these gains are achieved without sacrificing diagnostic
classification performance. This study lays the foundation for integrating
longitudinal EHR and 3D imaging data to advance clinical risk prediction.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 11:08:54 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 07:33:47 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Huo",
"Zepeng",
""
],
[
"Fries",
"Jason Alan",
""
],
[
"Lozano",
"Alejandro",
""
],
[
"Valanarasu",
"Jeya Maria Jose",
""
],
[
"Steinberg",
"Ethan",
""
],
[
"Blankemeier",
"Louis",
""
],
[
"Chaudhari",
"Akshay S.",
""
],
[
"Langlotz",
"Curtis",
""
],
[
"Shah",
"Nigam H.",
""
]
] | TITLE: Time-to-Event Pretraining for 3D Medical Imaging
ABSTRACT: With the rise of medical foundation models and the growing availability of
imaging data, scalable pretraining techniques offer a promising way to identify
imaging biomarkers predictive of future disease risk. While current
self-supervised methods for 3D medical imaging models capture local structural
features like organ morphology, they fail to link pixel biomarkers with
long-term health outcomes due to a missing context problem. Current approaches
lack the temporal context necessary to identify biomarkers correlated with
disease progression, as they rely on supervision derived only from images and
concurrent text descriptions. To address this, we introduce time-to-event
pretraining, a pretraining framework for 3D medical imaging models that
leverages large-scale temporal supervision from paired, longitudinal electronic
health records (EHRs). Using a dataset of 18,945 CT scans (4.2 million 2D
images) and time-to-event distributions across thousands of EHR-derived tasks,
our method improves outcome prediction, achieving an average AUROC increase of
23.7% and a 29.4% gain in Harrell's C-index across 8 benchmark tasks.
Importantly, these gains are achieved without sacrificing diagnostic
classification performance. This study lays the foundation for integrating
longitudinal EHR and 3D imaging data to advance clinical risk prediction.
|
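For readers unfamiliar with the Harrell's C-index metric reported in the abstract above, the snippet below is a generic, minimal reference implementation of the concordance index for right-censored data; it is not the authors' evaluation code, and the toy arrays are made up.

```python
import numpy as np

def harrell_c_index(risk, time, event):
    """Concordance: among comparable pairs, the fraction where the higher-risk
    subject experiences the event earlier (ties in risk count as 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:              # censored subjects cannot anchor a pair
            continue
        for j in range(n):
            if time[i] < time[j]:      # comparable: i's event precedes j's time
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Toy check with made-up numbers: perfectly ordered risks give C = 1.0.
print(harrell_c_index(np.array([0.9, 0.5, 0.1]),
                      np.array([1.0, 2.0, 3.0]),
                      np.array([1, 1, 0])))
```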
2411.09587 | Akari Haga | Akari Haga, Akiyo Fukatsu, Miyu Oba, Arianna Bisazza, Yohei Oseki | BabyLM Challenge: Exploring the Effect of Variation Sets on Language
Model Training Efficiency | Accepted by BabyLM challenge 2024 at CONLL 2024 (
https://aclanthology.org/2024.conll-babylm.23 ) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | While current large language models have achieved remarkable success, their
data efficiency remains a challenge to overcome. Recently, it has been suggested
that child-directed speech (CDS) can improve training data efficiency of modern
language models based on Transformer neural networks. However, it is not yet
understood which specific properties of CDS are effective for training these
models. In the context of the BabyLM Challenge, we focus on Variation Sets
(VSs), sets of consecutive utterances expressing a similar intent with slightly
different words and structures, which are ubiquitous in CDS. To assess the
impact of VSs on training data efficiency, we augment CDS data with different
proportions of artificial VSs and use these datasets to train an
auto-regressive model, GPT-2. We find that the best proportion of VSs depends
on the evaluation benchmark: BLiMP and GLUE scores benefit from the presence of
VSs, but EWOK scores do not. Additionally, the results vary depending on
multiple factors such as the number of epochs and the order of utterance
presentation. Taken together, these findings suggest that VSs can have a
beneficial influence on language models, while leaving room for further
investigation.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 16:57:46 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:51:54 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Haga",
"Akari",
""
],
[
"Fukatsu",
"Akiyo",
""
],
[
"Oba",
"Miyu",
""
],
[
"Bisazza",
"Arianna",
""
],
[
"Oseki",
"Yohei",
""
]
] | TITLE: BabyLM Challenge: Exploring the Effect of Variation Sets on Language
Model Training Efficiency
ABSTRACT: While current large language models have achieved remarkable success, their
data efficiency remains a challenge to overcome. Recently, it has been suggested
that child-directed speech (CDS) can improve training data efficiency of modern
language models based on Transformer neural networks. However, it is not yet
understood which specific properties of CDS are effective for training these
models. In the context of the BabyLM Challenge, we focus on Variation Sets
(VSs), sets of consecutive utterances expressing a similar intent with slightly
different words and structures, which are ubiquitous in CDS. To assess the
impact of VSs on training data efficiency, we augment CDS data with different
proportions of artificial VSs and use these datasets to train an
auto-regressive model, GPT-2. We find that the best proportion of VSs depends
on the evaluation benchmark: BLiMP and GLUE scores benefit from the presence of
VSs, but EWOK scores do not. Additionally, the results vary depending on
multiple factors such as the number of epochs and the order of utterance
presentation. Taken together, these findings suggest that VSs can have a
beneficial influence on language models, while leaving room for further
investigation.
|
2411.11361 | Jinhong Wang | Jinhong Wang, Jian Liu, Dongqi Tang, Weiqiang Wang, Wentong Li, Danny
Chen, Jintai Chen and Jian Wu | Scalable Autoregressive Monocular Depth Estimation | Accepted by CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper shows that the autoregressive model is an effective and scalable
monocular depth estimator. Our idea is simple: We tackle the monocular depth
estimation (MDE) task with an autoregressive prediction paradigm, based on two
core designs. First, our depth autoregressive model (DAR) treats the depth map
of different resolutions as a set of tokens, and conducts the low-to-high
resolution autoregressive objective with a patch-wise causal mask. Second, our
DAR recursively discretizes the entire depth range into more compact intervals,
and attains the coarse-to-fine granularity autoregressive objective in an
ordinal-regression manner. By coupling these two autoregressive objectives, our
DAR establishes new state-of-the-art (SOTA) on KITTI and NYU Depth v2 by clear
margins. Further, our scalable approach allows us to scale the model up to 2.0B
and achieve the best RMSE of 1.799 on the KITTI dataset (5% improvement)
compared to 1.896 by the current SOTA (Depth Anything). DAR further showcases
zero-shot generalization ability on unseen datasets. These results suggest that
DAR yields superior performance with an autoregressive prediction paradigm,
providing a promising approach to equip modern autoregressive large models
(e.g., GPT-4o) with depth estimation capabilities.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 08:12:54 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 03:25:27 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 07:04:00 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Jinhong",
""
],
[
"Liu",
"Jian",
""
],
[
"Tang",
"Dongqi",
""
],
[
"Wang",
"Weiqiang",
""
],
[
"Li",
"Wentong",
""
],
[
"Chen",
"Danny",
""
],
[
"Chen",
"Jintai",
""
],
[
"Wu",
"Jian",
""
]
] | TITLE: Scalable Autoregressive Monocular Depth Estimation
ABSTRACT: This paper shows that the autoregressive model is an effective and scalable
monocular depth estimator. Our idea is simple: We tackle the monocular depth
estimation (MDE) task with an autoregressive prediction paradigm, based on two
core designs. First, our depth autoregressive model (DAR) treats the depth map
of different resolutions as a set of tokens, and conducts the low-to-high
resolution autoregressive objective with a patch-wise causal mask. Second, our
DAR recursively discretizes the entire depth range into more compact intervals,
and attains the coarse-to-fine granularity autoregressive objective in an
ordinal-regression manner. By coupling these two autoregressive objectives, our
DAR establishes new state-of-the-art (SOTA) on KITTI and NYU Depth v2 by clear
margins. Further, our scalable approach allows us to scale the model up to 2.0B
and achieve the best RMSE of 1.799 on the KITTI dataset (5% improvement)
compared to 1.896 by the current SOTA (Depth Anything). DAR further showcases
zero-shot generalization ability on unseen datasets. These results suggest that
DAR yields superior performance with an autoregressive prediction paradigm,
providing a promising approach to equip modern autoregressive large models
(e.g., GPT-4o) with depth estimation capabilities.
|
2411.14628 | Zimo Wang | Zimo Wang, Cheng Wang, Taiki Yoshino, Sirui Tao, Ziyang Fu, Tzu-Mao Li | HotSpot: Signed Distance Function Optimization with an Asymptotically
Sufficient Condition | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a method, HotSpot, for optimizing neural signed distance
functions. Existing losses, such as the eikonal loss, act as necessary but
insufficient constraints and cannot guarantee that the recovered implicit
function represents a true distance function, even if the output minimizes
these losses almost everywhere. Furthermore, the eikonal loss suffers from
stability issues in optimization. Finally, in conventional methods,
regularization losses that penalize surface area distort the reconstructed
signed distance function. We address these challenges by designing a loss
function using the solution of a screened Poisson equation. Our loss, when
minimized, provides an asymptotically sufficient condition to ensure the output
converges to a true distance function. Our loss also leads to stable
optimization and naturally penalizes large surface areas. We present
theoretical analysis and experiments on both challenging 2D and 3D datasets and
show that our method provides better surface reconstruction and a more accurate
distance approximation.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 23:06:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 03:10:27 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Zimo",
""
],
[
"Wang",
"Cheng",
""
],
[
"Yoshino",
"Taiki",
""
],
[
"Tao",
"Sirui",
""
],
[
"Fu",
"Ziyang",
""
],
[
"Li",
"Tzu-Mao",
""
]
] | TITLE: HotSpot: Signed Distance Function Optimization with an Asymptotically
Sufficient Condition
ABSTRACT: We propose a method, HotSpot, for optimizing neural signed distance
functions. Existing losses, such as the eikonal loss, act as necessary but
insufficient constraints and cannot guarantee that the recovered implicit
function represents a true distance function, even if the output minimizes
these losses almost everywhere. Furthermore, the eikonal loss suffers from
stability issues in optimization. Finally, in conventional methods,
regularization losses that penalize surface area distort the reconstructed
signed distance function. We address these challenges by designing a loss
function using the solution of a screened Poisson equation. Our loss, when
minimized, provides an asymptotically sufficient condition to ensure the output
converges to a true distance function. Our loss also leads to stable
optimization and naturally penalizes large surface areas. We present
theoretical analysis and experiments on both challenging 2D and 3D datasets and
show that our method provides better surface reconstruction and a more accurate
distance approximation.
|
2411.15778 | Yassine Machta | Yassine Machta, Omar Ali, Kevin Hakkakian, Ana Vlasceanu, Amaury
Facque, Nicolas Golse, Irene Vignon-Clementel | Enhancing the automatic segmentation and analysis of 3D liver
vasculature models | Paper presented at MICCAI 2024 Workshop: ADSMI. This work was done in
the context of an internship at Simbiotx, Inria | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Surgical assessment of liver cancer patients requires identification of the
vessel trees from medical images. Specifically, the venous trees - the portal
(perfusing) and the hepatic (draining) trees are important for understanding
the liver anatomy and disease state, and for surgery planning. This
research aims to improve the 3D segmentation, skeletonization, and subsequent
analysis of vessel trees, by creating an automatic pipeline based on deep
learning and image processing techniques.
The first part of this work explores the impact of differentiable
skeletonization methods such as ClDice and morphological skeletonization loss,
on the overall liver vessel segmentation performance. To this aim, it studies
how to improve vessel tree connectivity.
The second part of this study converts a single class vessel segmentation
into multi-class ones, separating the two venous trees. It builds on the
previous two-class vessel segmentation model, whose vessel tree outputs might
be entangled, and on connected components and skeleton analyses of the trees.
After providing sub-labeling of the specific anatomical branches of each
venous tree, these algorithms also enable a morphometric analysis of the vessel
trees by extracting various geometrical markers.
In conclusion, we propose a method that successfully improves current
skeletonization methods, for extensive vascular trees that contain vessels of
different calibers. The separation algorithm creates a clean multi-class
segmentation of the vessels, validated by surgeons to provide low error. A new,
publicly shared high-quality liver vessel dataset of 77 cases is thus created.
Finally, a method to annotate vessel trees according to anatomy is provided,
enabling a unique liver vessel morphometry analysis.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 10:58:48 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 09:06:32 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jan 2025 07:31:00 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 12:48:55 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Machta",
"Yassine",
""
],
[
"Ali",
"Omar",
""
],
[
"Hakkakian",
"Kevin",
""
],
[
"Vlasceanu",
"Ana",
""
],
[
"Facque",
"Amaury",
""
],
[
"Golse",
"Nicolas",
""
],
[
"Vignon-Clementel",
"Irene",
""
]
] | TITLE: Enhancing the automatic segmentation and analysis of 3D liver
vasculature models
ABSTRACT: Surgical assessment of liver cancer patients requires identification of the
vessel trees from medical images. Specifically, the venous trees - the portal
(perfusing) and the hepatic (draining) trees are important for understanding
the liver anatomy and disease state, and for surgery planning. This
research aims to improve the 3D segmentation, skeletonization, and subsequent
analysis of vessel trees, by creating an automatic pipeline based on deep
learning and image processing techniques.
The first part of this work explores the impact of differentiable
skeletonization methods such as ClDice and morphological skeletonization loss,
on the overall liver vessel segmentation performance. To this aim, it studies
how to improve vessel tree connectivity.
The second part of this study converts a single class vessel segmentation
into multi-class ones, separating the two venous trees. It builds on the
previous two-class vessel segmentation model, whose vessel tree outputs might
be entangled, and on connected components and skeleton analyses of the trees.
After providing sub-labeling of the specific anatomical branches of each
venous tree, these algorithms also enable a morphometric analysis of the vessel
trees by extracting various geometrical markers.
In conclusion, we propose a method that successfully improves current
skeletonization methods, for extensive vascular trees that contain vessels of
different calibers. The separation algorithm creates a clean multi-class
segmentation of the vessels, validated by surgeons to provide low error. A new,
publicly shared high-quality liver vessel dataset of 77 cases is thus created.
Finally, a method to annotate vessel trees according to anatomy is provided,
enabling a unique liver vessel morphometry analysis.
|
2411.16898 | Kunyi Li | Kunyi Li, Michael Niemeyer, Zeyu Chen, Nassir Navab, Federico Tombari | MonoGSDF: Exploring Monocular Geometric Cues for Gaussian
Splatting-Guided Implicit Surface Reconstruction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate meshing from monocular images remains a key challenge in 3D vision.
While state-of-the-art 3D Gaussian Splatting (3DGS) methods excel at
synthesizing photorealistic novel views through rasterization-based rendering,
their reliance on sparse, explicit primitives severely limits their ability to
recover watertight and topologically consistent 3D surfaces. We introduce
MonoGSDF, a novel method that couples Gaussian-based primitives with a neural
Signed Distance Field (SDF) for high-quality reconstruction. During training,
the SDF guides Gaussians' spatial distribution, while at inference, Gaussians
serve as priors to reconstruct surfaces, eliminating the need for
memory-intensive Marching Cubes. To handle arbitrary-scale scenes, we propose a
scaling strategy for robust generalization. A multi-resolution training scheme
further refines details, and monocular geometric cues from off-the-shelf
estimators enhance reconstruction quality. Experiments on real-world datasets
show MonoGSDF outperforms prior methods while maintaining efficiency.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 20:07:07 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:40:34 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Kunyi",
""
],
[
"Niemeyer",
"Michael",
""
],
[
"Chen",
"Zeyu",
""
],
[
"Navab",
"Nassir",
""
],
[
"Tombari",
"Federico",
""
]
] | TITLE: MonoGSDF: Exploring Monocular Geometric Cues for Gaussian
Splatting-Guided Implicit Surface Reconstruction
ABSTRACT: Accurate meshing from monocular images remains a key challenge in 3D vision.
While state-of-the-art 3D Gaussian Splatting (3DGS) methods excel at
synthesizing photorealistic novel views through rasterization-based rendering,
their reliance on sparse, explicit primitives severely limits their ability to
recover watertight and topologically consistent 3D surfaces. We introduce
MonoGSDF, a novel method that couples Gaussian-based primitives with a neural
Signed Distance Field (SDF) for high-quality reconstruction. During training,
the SDF guides Gaussians' spatial distribution, while at inference, Gaussians
serve as priors to reconstruct surfaces, eliminating the need for
memory-intensive Marching Cubes. To handle arbitrary-scale scenes, we propose a
scaling strategy for robust generalization. A multi-resolution training scheme
further refines details, and monocular geometric cues from off-the-shelf
estimators enhance reconstruction quality. Experiments on real-world datasets
show MonoGSDF outperforms prior methods while maintaining efficiency.
|
2412.03258 | Mianchu Wang | Mianchu Wang, Yue Jin, Giovanni Montana | Learning on One Mode: Addressing Multi-modality in Offline Reinforcement
Learning | Published as a conference paper at ICLR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline reinforcement learning (RL) seeks to learn optimal policies from
static datasets without interacting with the environment. A common challenge is
handling multi-modal action distributions, where multiple behaviours are
represented in the data. Existing methods often assume unimodal behaviour
policies, leading to suboptimal performance when this assumption is violated.
We propose weighted imitation Learning on One Mode (LOM), a novel approach that
focuses on learning from a single, promising mode of the behaviour policy. By
using a Gaussian mixture model to identify modes and selecting the best mode
based on expected returns, LOM avoids the pitfalls of averaging over
conflicting actions. Theoretically, we show that LOM improves performance while
maintaining simplicity in policy learning. Empirically, LOM outperforms
existing methods on standard D4RL benchmarks and demonstrates its effectiveness
in complex, multi-modal scenarios.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 11:57:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 13:48:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Mianchu",
""
],
[
"Jin",
"Yue",
""
],
[
"Montana",
"Giovanni",
""
]
] | TITLE: Learning on One Mode: Addressing Multi-modality in Offline Reinforcement
Learning
ABSTRACT: Offline reinforcement learning (RL) seeks to learn optimal policies from
static datasets without interacting with the environment. A common challenge is
handling multi-modal action distributions, where multiple behaviours are
represented in the data. Existing methods often assume unimodal behaviour
policies, leading to suboptimal performance when this assumption is violated.
We propose weighted imitation Learning on One Mode (LOM), a novel approach that
focuses on learning from a single, promising mode of the behaviour policy. By
using a Gaussian mixture model to identify modes and selecting the best mode
based on expected returns, LOM avoids the pitfalls of averaging over
conflicting actions. Theoretically, we show that LOM improves performance while
maintaining simplicity in policy learning. Empirically, LOM outperforms
existing methods on standard D4RL benchmarks and demonstrates its effectiveness
in complex, multi-modal scenarios.
|
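The mode-selection idea in the LOM abstract above lends itself to a short sketch: fit a Gaussian mixture over dataset actions, score each mode by the mean return of its members, and keep the highest-scoring mode for imitation. The function, the scikit-learn GMM, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_mode_actions(actions, returns, n_modes=3, seed=0):
    """Cluster behaviour actions into modes and keep only the mode whose
    members have the highest average return (illustrative sketch)."""
    gmm = GaussianMixture(n_components=n_modes, random_state=seed).fit(actions)
    labels = gmm.predict(actions)
    mode_return = np.array([returns[labels == k].mean() if np.any(labels == k)
                            else -np.inf for k in range(n_modes)])
    best = int(np.argmax(mode_return))
    return actions[labels == best], best

# Made-up bimodal behaviour data: the high-return mode should be selected.
rng = np.random.default_rng(0)
acts = np.vstack([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
rets = np.concatenate([np.full(100, 0.1), np.full(100, 1.0)])
kept, mode = best_mode_actions(acts, rets, n_modes=2)
print(mode, kept.shape)    # expect the mode centred at +2, with ~100 actions
```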
2412.08591 | Mingfei Han | Mingfei Han, Liang Ma, Kamila Zhumakhanova, Ekaterina Radionova,
Jingyi Zhang, Xiaojun Chang, Xiaodan Liang, Ivan Laptev | RoomTour3D: Geometry-Aware Video-Instruction Tuning for Embodied
Navigation | CVPR2025 | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-and-Language Navigation (VLN) suffers from the limited diversity and
scale of training data, primarily constrained by the manual curation of
existing simulators. To address this, we introduce RoomTour3D, a
video-instruction dataset derived from web-based room tour videos that capture
real-world indoor spaces and human walking demonstrations. Unlike existing VLN
datasets, RoomTour3D leverages the scale and diversity of online videos to
generate open-ended human walking trajectories and open-world navigable
instructions. To compensate for the lack of navigation data in online videos,
we perform 3D reconstruction and obtain 3D trajectories of walking paths
augmented with additional information on the room types, object locations and
3D shape of surrounding scenes. Our dataset includes $\sim$100K open-ended
description-enriched trajectories with $\sim$200K instructions, and 17K
action-enriched trajectories from 1847 room tour environments. We demonstrate
experimentally that RoomTour3D enables significant improvements across multiple
VLN tasks including CVDN, SOON, R2R, and REVERIE. Moreover, RoomTour3D
facilitates the development of trainable zero-shot VLN agents, showcasing the
potential and challenges of advancing towards open-world navigation.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 18:10:21 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 10:05:05 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Han",
"Mingfei",
""
],
[
"Ma",
"Liang",
""
],
[
"Zhumakhanova",
"Kamila",
""
],
[
"Radionova",
"Ekaterina",
""
],
[
"Zhang",
"Jingyi",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Laptev",
"Ivan",
""
]
] | TITLE: RoomTour3D: Geometry-Aware Video-Instruction Tuning for Embodied
Navigation
ABSTRACT: Vision-and-Language Navigation (VLN) suffers from the limited diversity and
scale of training data, primarily constrained by the manual curation of
existing simulators. To address this, we introduce RoomTour3D, a
video-instruction dataset derived from web-based room tour videos that capture
real-world indoor spaces and human walking demonstrations. Unlike existing VLN
datasets, RoomTour3D leverages the scale and diversity of online videos to
generate open-ended human walking trajectories and open-world navigable
instructions. To compensate for the lack of navigation data in online videos,
we perform 3D reconstruction and obtain 3D trajectories of walking paths
augmented with additional information on the room types, object locations and
3D shape of surrounding scenes. Our dataset includes $\sim$100K open-ended
description-enriched trajectories with $\sim$200K instructions, and 17K
action-enriched trajectories from 1847 room tour environments. We demonstrate
experimentally that RoomTour3D enables significant improvements across multiple
VLN tasks including CVDN, SOON, R2R, and REVERIE. Moreover, RoomTour3D
facilitates the development of trainable zero-shot VLN agents, showcasing the
potential and challenges of advancing towards open-world navigation.
|
2412.09049 | Mengze Hong | Mengze Hong, Di Jiang, Yuanfeng Song, Lu Wang, Wailing Ng, Yanjie Sun,
Chen Jason Zhang, Qing Li | Dial-In LLM: Human-Aligned LLM-in-the-loop Intent Clustering for
Customer Service Dialogues | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering customer intentions in dialogue conversations is crucial for
automated service agents. Yet, existing intent clustering methods often fail to
align with human perceptions due to the heavy reliance on embedding distance
metrics and sentence embeddings. To address these limitations, we propose
integrating the semantic understanding capabilities of LLMs into an
$\textbf{LLM-in-the-loop (LLM-ITL)}$ intent clustering framework. Specifically,
this paper (1) investigates the effectiveness of fine-tuned LLMs in semantic
coherence evaluation and intent cluster naming, achieving over 95% accuracy;
(2) designs an LLM-ITL clustering algorithm that facilitates the iterative
discovery of coherent intent clusters; and (3) proposes task-specific
techniques tailored for customer service dialogue intent clustering. Since
existing English benchmarks offer limited semantic diversity and intent labels,
we introduce a comprehensive Chinese dialogue intent dataset, comprising over
100,000 real customer service calls and 1,507 human-annotated intent clusters.
The proposed approaches significantly outperformed LLM-guided baselines,
achieving notable improvements in clustering quality and a 12% boost in the
downstream intent classification task. Combined with several best practices,
our findings highlight the potential of LLM-in-the-loop techniques for scalable
and human-aligned problem-solving. Sample code and datasets are available at:
https://anonymous.4open.science/r/Dial-in-LLM-0410.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 08:19:01 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 06:14:04 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Hong",
"Mengze",
""
],
[
"Jiang",
"Di",
""
],
[
"Song",
"Yuanfeng",
""
],
[
"Wang",
"Lu",
""
],
[
"Ng",
"Wailing",
""
],
[
"Sun",
"Yanjie",
""
],
[
"Zhang",
"Chen Jason",
""
],
[
"Li",
"Qing",
""
]
] | TITLE: Dial-In LLM: Human-Aligned LLM-in-the-loop Intent Clustering for
Customer Service Dialogues
ABSTRACT: Discovering customer intentions in dialogue conversations is crucial for
automated service agents. Yet, existing intent clustering methods often fail to
align with human perceptions due to the heavy reliance on embedding distance
metrics and sentence embeddings. To address these limitations, we propose
integrating the semantic understanding capabilities of LLMs into an
$\textbf{LLM-in-the-loop (LLM-ITL)}$ intent clustering framework. Specifically,
this paper (1) investigates the effectiveness of fine-tuned LLMs in semantic
coherence evaluation and intent cluster naming, achieving over 95% accuracy;
(2) designs an LLM-ITL clustering algorithm that facilitates the iterative
discovery of coherent intent clusters; and (3) proposes task-specific
techniques tailored for customer service dialogue intent clustering. Since
existing English benchmarks offer limited semantic diversity and intent labels,
we introduce a comprehensive Chinese dialogue intent dataset, comprising over
100,000 real customer service calls and 1,507 human-annotated intent clusters.
The proposed approaches significantly outperformed LLM-guided baselines,
achieving notable improvements in clustering quality and a 12% boost in the
downstream intent classification task. Combined with several best practices,
our findings highlight the potential of LLM-in-the-loop techniques for scalable
and human-aligned problem-solving. Sample code and datasets are available at:
https://anonymous.4open.science/r/Dial-in-LLM-0410.
|
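A loose, runnable sketch of the LLM-in-the-loop clustering loop described above: cluster utterance embeddings, ask a coherence judge about each cluster, and split the incoherent ones. The judge here is a crude numeric stand-in for the fine-tuned LLM, and the use of scikit-learn KMeans and the toy data are assumptions; the paper's actual algorithm and prompts are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def llm_itl_cluster(embeddings, is_coherent, k=4, max_rounds=5, seed=0):
    """Iteratively refine clusters: any cluster the judge deems incoherent
    is split in two, until every cluster passes or rounds run out."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(embeddings)
    for _ in range(max_rounds):
        changed = False
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            if len(idx) > 2 and not is_coherent(embeddings[idx]):
                sub = KMeans(n_clusters=2, n_init=10,
                             random_state=seed).fit_predict(embeddings[idx])
                labels[idx[sub == 1]] = labels.max() + 1   # split off a new cluster
                changed = True
        if not changed:
            break
    return labels

# Toy data: three intent blobs; a spread-based proxy plays the LLM judge.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(m, 0.2, (50, 8)) for m in (-3, 0, 3)])
judge = lambda pts: pts.std(axis=0).mean() < 0.5
print(np.unique(llm_itl_cluster(X, judge, k=2)))   # should end with 3 clusters
```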
2412.09082 | Yang Liu | Xinshuai Song, Weixing Chen, Yang Liu, Weikai Chen, Guanbin Li, Liang
Lin | Towards Long-Horizon Vision-Language Navigation: Platform, Benchmark and
Method | Accepted by CVPR 2025. A novel Long-Horizon Vision-Language
Navigation task, project page: https://hcplab-sysu.github.io/LH-VLN/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing Vision-Language Navigation (VLN) methods primarily focus on
single-stage navigation, limiting their effectiveness in multi-stage and
long-horizon tasks within complex and dynamic environments. To address these
limitations, we propose a novel VLN task, named Long-Horizon Vision-Language
Navigation (LH-VLN), which emphasizes long-term planning and decision
consistency across consecutive subtasks. Furthermore, to support LH-VLN, we
develop an automated data generation platform NavGen, which constructs datasets
with complex task structures and improves data utility through a bidirectional,
multi-granularity generation approach. To accurately evaluate complex tasks, we
construct the Long-Horizon Planning and Reasoning in VLN (LHPR-VLN) benchmark
consisting of 3,260 tasks with an average of 150 task steps, serving as the
first dataset specifically designed for the long-horizon vision-language
navigation task. Furthermore, we propose Independent Success Rate (ISR),
Conditional Success Rate (CSR), and CSR weighted by Ground Truth (CGT) metrics,
to provide fine-grained assessments of task completion. To improve model
adaptability in complex tasks, we propose a novel Multi-Granularity Dynamic
Memory (MGDM) module that integrates short-term memory blurring with long-term
memory retrieval to enable flexible navigation in dynamic environments. Our
platform, benchmark and method supply LH-VLN with a robust data generation
pipeline, comprehensive model evaluation dataset, reasonable metrics, and a
novel VLN model, establishing a foundational framework for advancing LH-VLN.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 09:08:13 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jan 2025 06:43:48 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 13:31:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Song",
"Xinshuai",
""
],
[
"Chen",
"Weixing",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Weikai",
""
],
[
"Li",
"Guanbin",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Towards Long-Horizon Vision-Language Navigation: Platform, Benchmark and
Method
ABSTRACT: Existing Vision-Language Navigation (VLN) methods primarily focus on
single-stage navigation, limiting their effectiveness in multi-stage and
long-horizon tasks within complex and dynamic environments. To address these
limitations, we propose a novel VLN task, named Long-Horizon Vision-Language
Navigation (LH-VLN), which emphasizes long-term planning and decision
consistency across consecutive subtasks. Furthermore, to support LH-VLN, we
develop an automated data generation platform NavGen, which constructs datasets
with complex task structures and improves data utility through a bidirectional,
multi-granularity generation approach. To accurately evaluate complex tasks, we
construct the Long-Horizon Planning and Reasoning in VLN (LHPR-VLN) benchmark
consisting of 3,260 tasks with an average of 150 task steps, serving as the
first dataset specifically designed for the long-horizon vision-language
navigation task. Furthermore, we propose Independent Success Rate (ISR),
Conditional Success Rate (CSR), and CSR weighted by Ground Truth (CGT) metrics,
to provide fine-grained assessments of task completion. To improve model
adaptability in complex tasks, we propose a novel Multi-Granularity Dynamic
Memory (MGDM) module that integrates short-term memory blurring with long-term
memory retrieval to enable flexible navigation in dynamic environments. Our
platform, benchmark and method supply LH-VLN with a robust data generation
pipeline, comprehensive model evaluation dataset, reasonable metrics, and a
novel VLN model, establishing a foundational framework for advancing LH-VLN.
|
2412.10831 | Dengyang Jiang | Dengyang Jiang, Haoyu Wang, Lei Zhang, Wei Wei, Guang Dai, Mengmeng
Wang, Jingdong Wang, Yanning Zhang | Low-Biased General Annotated Dataset Generation | CVPR2025 Accepted Paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-training backbone networks on a general annotated dataset (e.g.,
ImageNet) that comprises numerous manually collected images with category
annotations has proven to be indispensable for enhancing the generalization
capacity of downstream visual tasks. However, those manually collected images
often exhibit bias, which is non-transferable across either categories or
domains, thus degrading the model's generalization capacity. To
mitigate this problem, we present a low-biased general annotated dataset
generation framework (lbGen). Instead of expensive manual collection, we aim at
directly generating low-biased images with category annotations. To achieve
this goal, we propose to leverage the advantage of a multimodal foundation
model (e.g., CLIP), in terms of aligning images in a low-biased semantic space
defined by language. Specifically, we develop a bi-level semantic alignment
loss, which not only forces all generated images to be consistent with the
semantic distribution of all categories belonging to the target dataset in an
adversarial learning manner, but also requires each generated image to match
the semantic description of its category name. In addition, we further cast an
existing image quality scoring model into a quality assurance loss to preserve
the quality of the generated image. By leveraging these two loss functions, we
can obtain a low-biased image generation model by simply fine-tuning a
pre-trained diffusion model using only all category names in the target dataset
as input. Experimental results confirm that, compared with the manually labeled
dataset or other synthetic datasets, the utilization of our generated
low-biased dataset leads to stable generalization capacity enhancement of
different backbone networks across various tasks, especially in tasks where the
manually labeled samples are scarce.
| [
{
"version": "v1",
"created": "Sat, 14 Dec 2024 13:28:40 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:13:35 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 12:36:47 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Jiang",
"Dengyang",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Zhang",
"Lei",
""
],
[
"Wei",
"Wei",
""
],
[
"Dai",
"Guang",
""
],
[
"Wang",
"Mengmeng",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Zhang",
"Yanning",
""
]
] | TITLE: Low-Biased General Annotated Dataset Generation
ABSTRACT: Pre-training backbone networks on a general annotated dataset (e.g.,
ImageNet) that comprises numerous manually collected images with category
annotations has proven to be indispensable for enhancing the generalization
capacity of downstream visual tasks. However, those manually collected images
often exhibit bias, which is non-transferable across either categories or
domains, thus degrading the model's generalization capacity. To
mitigate this problem, we present a low-biased general annotated dataset
generation framework (lbGen). Instead of expensive manual collection, we aim at
directly generating low-biased images with category annotations. To achieve
this goal, we propose to leverage the advantage of a multimodal foundation
model (e.g., CLIP), in terms of aligning images in a low-biased semantic space
defined by language. Specifically, we develop a bi-level semantic alignment
loss, which not only forces all generated images to be consistent with the
semantic distribution of all categories belonging to the target dataset in an
adversarial learning manner, but also requires each generated image to match
the semantic description of its category name. In addition, we further cast an
existing image quality scoring model into a quality assurance loss to preserve
the quality of the generated image. By leveraging these two loss functions, we
can obtain a low-biased image generation model by simply fine-tuning a
pre-trained diffusion model using only all category names in the target dataset
as input. Experimental results confirm that, compared with the manually labeled
dataset or other synthetic datasets, the utilization of our generated
low-biased dataset leads to stable generalization capacity enhancement of
different backbone networks across various tasks, especially in tasks where the
manually labeled samples are scarce.
|
2412.13393 | Muhammad Usama Saleem | Muhammad Usama Saleem, Ekkasit Pinyoanuntapong, Mayur Jagdishbhai
Patel, Hongfei Xue, Ahmed Helmy, Srijan Das, Pu Wang | MaskHand: Generative Masked Modeling for Robust Hand Mesh Reconstruction
in the Wild | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reconstructing a 3D hand mesh from a single RGB image is challenging due to
complex articulations, self-occlusions, and depth ambiguities. Traditional
discriminative methods, which learn a deterministic mapping from a 2D image to
a single 3D mesh, often struggle with the inherent ambiguities in 2D-to-3D
mapping. To address this challenge, we propose MaskHand, a novel generative
masked model for hand mesh recovery that synthesizes plausible 3D hand meshes
by learning and sampling from the probabilistic distribution of the ambiguous
2D-to-3D mapping process. MaskHand consists of two key components: (1) a
VQ-MANO, which encodes 3D hand articulations as discrete pose tokens in a
latent space, and (2) a Context-Guided Masked Transformer that randomly masks
out pose tokens and learns their joint distribution, conditioned on corrupted
token sequence, image context, and 2D pose cues. This learned distribution
facilitates confidence-guided sampling during inference, producing mesh
reconstructions with low uncertainty and high precision. Extensive evaluations
on benchmark and real-world datasets demonstrate that MaskHand achieves
state-of-the-art accuracy, robustness, and realism in 3D hand mesh
reconstruction. Project website:
https://m-usamasaleem.github.io/publication/MaskHand/MaskHand.html.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 00:10:00 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 14:49:31 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Saleem",
"Muhammad Usama",
""
],
[
"Pinyoanuntapong",
"Ekkasit",
""
],
[
"Patel",
"Mayur Jagdishbhai",
""
],
[
"Xue",
"Hongfei",
""
],
[
"Helmy",
"Ahmed",
""
],
[
"Das",
"Srijan",
""
],
[
"Wang",
"Pu",
""
]
] | TITLE: MaskHand: Generative Masked Modeling for Robust Hand Mesh Reconstruction
in the Wild
ABSTRACT: Reconstructing a 3D hand mesh from a single RGB image is challenging due to
complex articulations, self-occlusions, and depth ambiguities. Traditional
discriminative methods, which learn a deterministic mapping from a 2D image to
a single 3D mesh, often struggle with the inherent ambiguities in 2D-to-3D
mapping. To address this challenge, we propose MaskHand, a novel generative
masked model for hand mesh recovery that synthesizes plausible 3D hand meshes
by learning and sampling from the probabilistic distribution of the ambiguous
2D-to-3D mapping process. MaskHand consists of two key components: (1) a
VQ-MANO, which encodes 3D hand articulations as discrete pose tokens in a
latent space, and (2) a Context-Guided Masked Transformer that randomly masks
out pose tokens and learns their joint distribution, conditioned on the corrupted
token sequence, image context, and 2D pose cues. This learned distribution
facilitates confidence-guided sampling during inference, producing mesh
reconstructions with low uncertainty and high precision. Extensive evaluations
on benchmark and real-world datasets demonstrate that MaskHand achieves
state-of-the-art accuracy, robustness, and realism in 3D hand mesh
reconstruction. Project website:
https://m-usamasaleem.github.io/publication/MaskHand/MaskHand.html.
|
2412.14672 | Estelle Aflalo Guez | Estelle Aflalo, Gabriela Ben Melech Stan, Tiep Le, Man Luo, Shachar
Rosenman, Sayak Paul, Shao-Yen Tseng, Vasudev Lal | FiVL: A Framework for Improved Vision-Language Alignment through the
Lens of Training, Evaluation and Explainability | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Vision Language Models (LVLMs) have achieved significant progress in
integrating visual and textual inputs for multimodal reasoning. However, a
recurring challenge is ensuring these models utilize visual information as
effectively as linguistic content when both modalities are necessary to
formulate an accurate answer. We hypothesize that hallucinations arise due to
the lack of effective visual grounding in current LVLMs. Furthermore, current
vision-language benchmarks are not specifically measuring the degree to which
the answer requires the visual input. This limitation makes it challenging to
confirm that the image is truly necessary, particularly in tasks like visual
question answering. In this work, we introduce FiVL, a novel method for
constructing datasets designed to train LVLMs for enhanced visual grounding and
also evaluate their effectiveness in achieving it. We demonstrate the value of
our datasets through three approaches. First, we introduce a novel training
task based on our augmented training dataset, resulting in better performance
than the baseline. Second, we present benchmarks to assess the model's ability
to use the image as substantive evidence, rather than relying solely on linguistic
priors. Finally, we identify attention heads with the strongest vision-language
alignment, enabling explainability on visual-driven hallucinations. The code is
available at https://github.com/IntelLabs/fivl.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 09:24:10 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 12:04:30 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Aflalo",
"Estelle",
""
],
[
"Stan",
"Gabriela Ben Melech",
""
],
[
"Le",
"Tiep",
""
],
[
"Luo",
"Man",
""
],
[
"Rosenman",
"Shachar",
""
],
[
"Paul",
"Sayak",
""
],
[
"Tseng",
"Shao-Yen",
""
],
[
"Lal",
"Vasudev",
""
]
] | TITLE: FiVL: A Framework for Improved Vision-Language Alignment through the
Lens of Training, Evaluation and Explainability
ABSTRACT: Large Vision Language Models (LVLMs) have achieved significant progress in
integrating visual and textual inputs for multimodal reasoning. However, a
recurring challenge is ensuring these models utilize visual information as
effectively as linguistic content when both modalities are necessary to
formulate an accurate answer. We hypothesize that hallucinations arise due to
the lack of effective visual grounding in current LVLMs. Furthermore, current
vision-language benchmarks are not specifically measuring the degree to which
the answer requires the visual input. This limitation makes it challenging to
confirm that the image is truly necessary, particularly in tasks like visual
question answering. In this work, we introduce FiVL, a novel method for
constructing datasets designed to train LVLMs for enhanced visual grounding and
also evaluate their effectiveness in achieving it. We demonstrate the value of
our datasets through three approaches. First, we introduce a novel training
task based on our augmented training dataset, resulting in better performance
than the baseline. Second, we present benchmarks to assess the model's ability
to use the image as substantive evidence, rather than relying solely on linguistic
priors. Finally, we identify attention heads with the strongest vision-language
alignment, enabling explainability on visual-driven hallucinations. The code is
available at https://github.com/IntelLabs/fivl.
|
2412.15021 | Gabriel B\'ena | Gabriel B\'ena, Timo Wunderlich, Mahmoud Akl, Bernhard Vogginger,
Christian Mayr, Hector Andres Gonzalez | Event-based backpropagation on the neuromorphic platform SpiNNaker2 | 38th Second Workshop on Machine Learning with New Compute Paradigms
at NeurIPS 2024(MLNCP 2024) : Poster Presentation. NICE 2025 Neuromorphic
Conference: Flash Talk Presentation | null | null | null | cs.NE cs.AR cs.ET | http://creativecommons.org/licenses/by/4.0/ | Neuromorphic computing aims to replicate the brain's capabilities for energy
efficient and parallel information processing, promising a solution to the
increasing demand for faster and more efficient computational systems.
Efficient training of neural networks on neuromorphic hardware requires the
development of training algorithms that retain the sparsity of spike-based
communication during training. Here, we report on the first implementation of
event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform.
We use EventProp, an algorithm for event-based backpropagation in spiking
neural networks (SNNs), to compute exact gradients using sparse communication
of error signals between neurons. Our implementation computes multi-layer
networks of leaky integrate-and-fire neurons using discretized versions of the
differential equations and their adjoints, and uses event packets to transmit
spikes and error signals between network layers. We demonstrate a
proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin
Yang dataset, and provide an off-chip implementation for efficient prototyping,
hyper-parameter search, and hybrid training methods.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 16:31:42 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 10:07:41 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2025 19:40:21 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Mar 2025 11:22:45 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Béna",
"Gabriel",
""
],
[
"Wunderlich",
"Timo",
""
],
[
"Akl",
"Mahmoud",
""
],
[
"Vogginger",
"Bernhard",
""
],
[
"Mayr",
"Christian",
""
],
[
"Gonzalez",
"Hector Andres",
""
]
] | TITLE: Event-based backpropagation on the neuromorphic platform SpiNNaker2
ABSTRACT: Neuromorphic computing aims to replicate the brain's capabilities for energy
efficient and parallel information processing, promising a solution to the
increasing demand for faster and more efficient computational systems.
Efficient training of neural networks on neuromorphic hardware requires the
development of training algorithms that retain the sparsity of spike-based
communication during training. Here, we report on the first implementation of
event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform.
We use EventProp, an algorithm for event-based backpropagation in spiking
neural networks (SNNs), to compute exact gradients using sparse communication
of error signals between neurons. Our implementation computes multi-layer
networks of leaky integrate-and-fire neurons using discretized versions of the
differential equations and their adjoints, and uses event packets to transmit
spikes and error signals between network layers. We demonstrate a
proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin
Yang dataset, and provide an off-chip implementation for efficient prototyping,
hyper-parameter search, and hybrid training methods.
|
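For context on the "discretized versions of the differential equations" mentioned above, here is a minimal NumPy sketch of a discretized leaky integrate-and-fire layer (forward pass only); the adjoint/backward sweep and the event-packet communication that EventProp and SpiNNaker2 provide are omitted, and all constants and shapes are made up.

```python
import numpy as np

def lif_forward(input_spikes, w, tau_mem=10.0, tau_syn=5.0, v_th=1.0, dt=1.0):
    """Forward dynamics of a leaky integrate-and-fire layer with exponential
    synapses, discretized with step dt. input_spikes: [T, n_in], w: [n_in, n_out]."""
    T, _ = input_spikes.shape
    n_out = w.shape[1]
    alpha, beta = np.exp(-dt / tau_mem), np.exp(-dt / tau_syn)
    v, i_syn = np.zeros(n_out), np.zeros(n_out)
    out_spikes = np.zeros((T, n_out))
    for t in range(T):
        i_syn = beta * i_syn + input_spikes[t] @ w   # synaptic current decay + input
        v = alpha * v + (1.0 - alpha) * i_syn        # leaky membrane integration
        fired = v >= v_th
        out_spikes[t] = fired
        v = np.where(fired, 0.0, v)                  # reset membrane after a spike
    return out_spikes

# Made-up Poisson-like input and random weights, just to exercise the layer.
rng = np.random.default_rng(0)
spk = (rng.random((100, 8)) < 0.1).astype(float)
print(lif_forward(spk, rng.normal(0.5, 0.2, (8, 4))).sum(axis=0))  # spike counts
```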
2412.20227 | Shuguang Chen | Shuguang Chen and Guang Lin | LLM Reasoning Engine: Specialized Training for Enhanced Mathematical
Reasoning | Accepted to NAACL 2025 KnowledgeNLP | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown remarkable performance in various
natural language processing tasks but face challenges in mathematical
reasoning, where complex problem-solving requires both linguistic understanding
and mathematical reasoning skills. Existing approaches to address this
challenge often rely on ensemble methods and suffer from the problem of data
scarcity in target domains. In this work, we present a novel method to enhance
LLMs' capabilities in mathematical reasoning tasks. Motivated by the need to
bridge this gap, our approach incorporates a question paraphrase strategy,
which aims at diversifying the linguistic forms of mathematical questions to
improve generalization. Additionally, specialized training objectives are
employed to guide the model's learning process, focusing on enhancing its
understanding of mathematical concepts and reasoning processes. We conduct
experiments on four datasets using different LLMs, and demonstrate the
effectiveness of our approach in improving LLMs' performance on mathematical
reasoning tasks. Our findings underscore the significance of our methodology in
the advancement of large language models and its potential implications for
real-world applications that require mathematical reasoning abilities.
| [
{
"version": "v1",
"created": "Sat, 28 Dec 2024 17:48:33 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 15:56:49 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Shuguang",
""
],
[
"Lin",
"Guang",
""
]
] | TITLE: LLM Reasoning Engine: Specialized Training for Enhanced Mathematical
Reasoning
ABSTRACT: Large Language Models (LLMs) have shown remarkable performance in various
natural language processing tasks but face challenges in mathematical
reasoning, where complex problem-solving requires both linguistic understanding
and mathematical reasoning skills. Existing approaches to address this
challenge often rely on ensemble methods and suffer from the problem of data
scarcity in target domains. In this work, we present a novel method to enhance
LLMs' capabilities in mathematical reasoning tasks. Motivated by the need to
bridge this gap, our approach incorporates a question paraphrase strategy,
which aims at diversifying the linguistic forms of mathematical questions to
improve generalization. Additionally, specialized training objectives are
employed to guide the model's learning process, focusing on enhancing its
understanding of mathematical concepts and reasoning processes. We conduct
experiments on four datasets using different LLMs, and demonstrate the
effectiveness of our approach in improving LLMs' performance on mathematical
reasoning tasks. Our findings underscore the significance of our methodology in
the advancement of large language models and its potential implications for
real-world applications that require mathematical reasoning abilities.
|
2501.14342 | Liang Wang | Liang Wang, Haonan Chen, Nan Yang, Xiaolong Huang, Zhicheng Dou, Furu
Wei | Chain-of-Retrieval Augmented Generation | 18 pages | null | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper introduces an approach for training o1-like RAG models that
retrieve and reason over relevant information step by step before generating
the final answer. Conventional RAG methods usually perform a single retrieval
step before the generation process, which limits their effectiveness in
addressing complex queries due to imperfect retrieval results. In contrast, our
proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the
model to dynamically reformulate the query based on the evolving state. To
train CoRAG effectively, we utilize rejection sampling to automatically
generate intermediate retrieval chains, thereby augmenting existing RAG
datasets that only provide the correct final answer. At test time, we propose
various decoding strategies to scale the model's test-time compute by
controlling the length and number of sampled retrieval chains. Experimental
results across multiple benchmarks validate the efficacy of CoRAG, particularly
in multi-hop question answering tasks, where we observe more than 10 points
improvement in EM score compared to strong baselines. On the KILT benchmark,
CoRAG establishes a new state-of-the-art performance across a diverse range of
knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to
understand the scaling behavior of CoRAG, laying the groundwork for future
research aimed at developing factual and grounded foundation models.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 09:12:52 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 02:48:55 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Liang",
""
],
[
"Chen",
"Haonan",
""
],
[
"Yang",
"Nan",
""
],
[
"Huang",
"Xiaolong",
""
],
[
"Dou",
"Zhicheng",
""
],
[
"Wei",
"Furu",
""
]
] | TITLE: Chain-of-Retrieval Augmented Generation
ABSTRACT: This paper introduces an approach for training o1-like RAG models that
retrieve and reason over relevant information step by step before generating
the final answer. Conventional RAG methods usually perform a single retrieval
step before the generation process, which limits their effectiveness in
addressing complex queries due to imperfect retrieval results. In contrast, our
proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the
model to dynamically reformulate the query based on the evolving state. To
train CoRAG effectively, we utilize rejection sampling to automatically
generate intermediate retrieval chains, thereby augmenting existing RAG
datasets that only provide the correct final answer. At test time, we propose
various decoding strategies to scale the model's test-time compute by
controlling the length and number of sampled retrieval chains. Experimental
results across multiple benchmarks validate the efficacy of CoRAG, particularly
in multi-hop question answering tasks, where we observe more than 10 points
improvement in EM score compared to strong baselines. On the KILT benchmark,
CoRAG establishes a new state-of-the-art performance across a diverse range of
knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to
understand the scaling behavior of CoRAG, laying the groundwork for future
research aimed at developing factual and grounded foundation models.
|
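The chain-of-retrieval control flow described above can be sketched in a few lines. The retrieve/reformulate/answer callables below are placeholders standing in for a retriever index and an LLM, the stopping rule is a simplification of the decoding strategies the abstract mentions, and nothing here reproduces CoRAG's actual components.

```python
def corag_answer(question, retrieve, reformulate, answer, max_steps=4):
    """Retrieve, let the model reformulate a sub-query from the evolving
    state, repeat, then generate the final answer (illustrative only)."""
    state = {"question": question, "query": question, "evidence": []}
    for _ in range(max_steps):
        state["evidence"].extend(retrieve(state["query"]))
        state["query"] = reformulate(state)          # next-hop sub-query
        if state["query"] is None:                   # model decides it has enough
            break
    return answer(state)

# Toy wiring with trivial stand-ins, just to show the interface.
print(corag_answer(
    "Who advised the author of X?",
    retrieve=lambda q: [f"doc about: {q}"],
    reformulate=lambda s: None if len(s["evidence"]) >= 2 else "author of X",
    answer=lambda s: f"answer grounded in {len(s['evidence'])} documents"))
```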
2501.18137 | Shaan Pakala | Shaan Pakala, Dawon Ahn, Evangelos Papalexakis | Tensor Completion for Surrogate Modeling of Material Property Prediction | 2 page paper presented at the AAAI 2025 Bridge on Knowledge-Guided
Machine Learning | null | null | null | cs.LG cond-mat.mtrl-sci cs.AI | http://creativecommons.org/licenses/by/4.0/ | When designing materials to optimize certain properties, there are often many
possible configurations of designs that need to be explored. For example, the
materials' composition of elements will affect properties such as strength or
conductivity, which are necessary to know when developing new materials.
Exploring all combinations of elements to find optimal materials becomes very
time consuming, especially when there are more design variables. For this
reason, there is growing interest in using machine learning (ML) to predict a
material's properties. In this work, we model the optimization of certain
material properties as a tensor completion problem, to leverage the structure
of our datasets and navigate the vast number of combinations of material
configurations. Across a variety of material property prediction tasks, our
experiments show tensor completion methods achieving 10-20% decreased error
compared with baseline ML models such as GradientBoosting and Multilayer
Perceptron (MLP), while maintaining similar training speed.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2025 04:59:21 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 20:34:39 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Mar 2025 03:35:59 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Pakala",
"Shaan",
""
],
[
"Ahn",
"Dawon",
""
],
[
"Papalexakis",
"Evangelos",
""
]
] | TITLE: Tensor Completion for Surrogate Modeling of Material Property Prediction
ABSTRACT: When designing materials to optimize certain properties, there are often many
possible configurations of designs that need to be explored. For example, the
materials' composition of elements will affect properties such as strength or
conductivity, which are necessary to know when developing new materials.
Exploring all combinations of elements to find optimal materials becomes very
time consuming, especially when there are more design variables. For this
reason, there is growing interest in using machine learning (ML) to predict a
material's properties. In this work, we model the optimization of certain
material properties as a tensor completion problem, to leverage the structure
of our datasets and navigate the vast number of combinations of material
configurations. Across a variety of material property prediction tasks, our
experiments show tensor completion methods achieving 10-20% decreased error
compared with baseline ML models such as GradientBoosting and Multilayer
Perceptron (MLP), while maintaining similar training speed.
|
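To make the "material property prediction as tensor completion" framing concrete, below is a small masked CP-factorization sketch; the plain gradient-descent solver, the rank, and the synthetic tensor are assumptions for illustration and may differ from the completion methods the paper actually benchmarks.

```python
import numpy as np

def cp_complete(T, mask, rank=3, iters=4000, lr=0.02, seed=0):
    """Fit a rank-R CP model to the observed entries (mask == 1) of a 3-way
    tensor by gradient descent, then return the dense reconstruction."""
    rng = np.random.default_rng(seed)
    A, B, C = (0.1 * rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        resid = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
        gA = np.einsum('ijk,jr,kr->ir', resid, B, C)
        gB = np.einsum('ijk,ir,kr->jr', resid, A, C)
        gC = np.einsum('ijk,ir,jr->kr', resid, A, B)
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Synthetic low-rank "property" tensor with 60% of the cells observed.
rng = np.random.default_rng(1)
U, V, W = rng.random((5, 3)), rng.random((6, 3)), rng.random((4, 3))
full = np.einsum('ir,jr,kr->ijk', U, V, W)
mask = (rng.random(full.shape) < 0.6).astype(float)
rec = cp_complete(full * mask, mask, rank=3)
print(np.abs(rec - full)[mask == 0].mean())   # mean error on held-out cells
```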