id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2504.01197 | Eleni Adamidi | Eleni Adamidi, Panayiotis Deligiannis, Nikos Foutris, Thanasis
Vergoulis | A Virtual Laboratory for Managing Computational Experiments | 6 pages, 5 figures | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Computational experiments have become essential for scientific discovery,
allowing researchers to test hypotheses, analyze complex datasets, and validate
findings. However, as computational experiments grow in scale and complexity,
ensuring reproducibility and managing detailed metadata becomes increasingly
challenging, especially when orchestrating complex sequences of computational
tasks. To address these challenges, we have developed a virtual laboratory
called SCHEMA lab, focusing on capturing rich metadata such as experiment
configurations and performance metrics, to support computational
reproducibility. SCHEMA lab enables researchers to create experiments by
grouping together multiple executions and manage them throughout their life
cycle. In this demonstration paper, we present the SCHEMA lab architecture,
core functionalities, and implementation, emphasizing its potential to
significantly enhance reproducibility and efficiency in computational research.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 21:25:23 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Adamidi",
"Eleni",
""
],
[
"Deligiannis",
"Panayiotis",
""
],
[
"Foutris",
"Nikos",
""
],
[
"Vergoulis",
"Thanasis",
""
]
] | TITLE: A Virtual Laboratory for Managing Computational Experiments
ABSTRACT: Computational experiments have become essential for scientific discovery,
allowing researchers to test hypotheses, analyze complex datasets, and validate
findings. However, as computational experiments grow in scale and complexity,
ensuring reproducibility and managing detailed metadata becomes increasingly
challenging, especially when orchestrating complex sequences of computational
tasks. To address these challenges, we have developed a virtual laboratory
called SCHEMA lab, focusing on capturing rich metadata such as experiment
configurations and performance metrics, to support computational
reproducibility. SCHEMA lab enables researchers to create experiments by
grouping together multiple executions and manage them throughout their life
cycle. In this demonstration paper, we present the SCHEMA lab architecture,
core functionalities, and implementation, emphasizing its potential to
significantly enhance reproducibility and efficiency in computational research.
|
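The execution-grouping model described in the SCHEMA lab abstract can be pictured with a few lines of Python. This is an illustrative sketch only; the class names, fields, and life-cycle states are assumptions, not SCHEMA lab's actual API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Execution:
    command: str
    config: Dict[str, str]                    # experiment configuration, e.g. hyperparameters
    metrics: Dict[str, float] = field(default_factory=dict)  # captured performance metrics

@dataclass
class Experiment:
    name: str
    executions: List[Execution] = field(default_factory=list)
    status: str = "draft"                     # simplified life-cycle state (assumed)

    def add(self, run: Execution) -> None:
        self.executions.append(run)           # group multiple executions under one experiment

exp = Experiment("benchmark-v1")
exp.add(Execution("python train.py", {"lr": "0.01"}, {"runtime_s": 124.3}))
```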
2504.01206 | Pavel Vesel\'y | Aleksander {\L}ukasiewicz and Jakub T\v{e}tek and Pavel Vesel\'y | SplineSketch: Even More Accurate Quantiles with Error Guarantees | null | null | null | null | cs.DS cs.DB stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Space-efficient estimation of quantiles in massive datasets is a fundamental
problem with numerous applications in data monitoring and analysis. While
theoretical research led to optimal algorithms, such as the Greenwald-Khanna
algorithm or the KLL sketch, practitioners often use other sketches that
perform significantly better in practice but lack theoretical guarantees. Most
notably, the widely used t-digest has unbounded worst-case error.
In this paper, we seek to get the best of both worlds. We present a new
quantile summary, SplineSketch, for numeric data, offering near-optimal
theoretical guarantees and outperforming t-digest by a factor of 2-20 on a
range of synthetic and real-world datasets with non-skewed frequency
distributions. To achieve such performance, we develop a novel approach that
maintains a dynamic subdivision of the input range into buckets while fitting
the input distribution using monotone cubic spline interpolation. The core
challenge is implementing this method in a space-efficient manner while
ensuring strong worst-case guarantees.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 21:39:50 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Łukasiewicz",
"Aleksander",
""
],
[
"Tětek",
"Jakub",
""
],
[
"Veselý",
"Pavel",
""
]
] | TITLE: SplineSketch: Even More Accurate Quantiles with Error Guarantees
ABSTRACT: Space-efficient estimation of quantiles in massive datasets is a fundamental
problem with numerous applications in data monitoring and analysis. While
theoretical research led to optimal algorithms, such as the Greenwald-Khanna
algorithm or the KLL sketch, practitioners often use other sketches that
perform significantly better in practice but lack theoretical guarantees. Most
notably, the widely used t-digest has unbounded worst-case error.
In this paper, we seek to get the best of both worlds. We present a new
quantile summary, SplineSketch, for numeric data, offering near-optimal
theoretical guarantees and outperforming t-digest by a factor of 2-20 on a
range of synthetic and real-world datasets with non-skewed frequency
distributions. To achieve such performance, we develop a novel approach that
maintains a dynamic subdivision of the input range into buckets while fitting
the input distribution using monotone cubic spline interpolation. The core
challenge is implementing this method in a space-efficient manner while
ensuring strong worst-case guarantees.
|
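The core idea named in the SplineSketch abstract, a bucketed subdivision of the input range with a monotone cubic spline fitted to the distribution, can be sketched with SciPy's monotone PCHIP interpolator. This is a toy illustration under stated assumptions: the real sketch maintains its bucket state in small space with worst-case guarantees, whereas here it is faked from raw data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Fake the summary's state from raw data for illustration only.
data = np.random.gamma(2.0, 3.0, 100_000)
edges = np.quantile(data, np.linspace(0.0, 1.0, 33))          # 32 buckets
cdf_at_edges = np.searchsorted(np.sort(data), edges) / data.size

# Monotone cubic spline through (edge, CDF) pairs; monotonicity is preserved.
cdf = PchipInterpolator(edges, cdf_at_edges)

def query_quantile(q: float, grid: int = 4096) -> float:
    """Answer a quantile query by numerically inverting the fitted CDF."""
    xs = np.linspace(edges[0], edges[-1], grid)
    return float(xs[np.searchsorted(cdf(xs), q)])

print(query_quantile(0.5), query_quantile(0.99))
```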
2504.01208 | Jes\'us Garc\'ia-Ram\'irez | Ian Mateos Gonzalez, Estefani Jaramilla Nava, Abraham S\'anchez
Morales, Jes\'us Garc\'ia-Ram\'irez and Ricardo Ramos-Aguilar | Lightweight Deep Models for Dermatological Disease Detection: A Study on
Instance Selection and Channel Optimization | Submitted to Mexican Conference on Pattern Recognition 2025 | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of dermatological disease is an important problem in
Mexico, according to several studies. Many works in the literature use
datasets from different repositories without first studying the data's
behavior, especially in the medical imaging domain. In this work, we propose a
methodology to preprocess the DermaMNIST dataset in order to improve its
quality for the classification stage, where we use lightweight convolutional
neural networks. Our results show that we can reduce the number of training
instances while obtaining performance similar to models such as ResNet.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 21:47:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Gonzalez",
"Ian Mateos",
""
],
[
"Nava",
"Estefani Jaramilla",
""
],
[
"Morales",
"Abraham Sánchez",
""
],
[
"García-Ramírez",
"Jesús",
""
],
[
"Ramos-Aguilar",
"Ricardo",
""
]
] | TITLE: Lightweight Deep Models for Dermatological Disease Detection: A Study on
Instance Selection and Channel Optimization
ABSTRACT: The identification of dermatological disease is an important problem in
Mexico, according to several studies. Many works in the literature use
datasets from different repositories without first studying the data's
behavior, especially in the medical imaging domain. In this work, we propose a
methodology to preprocess the DermaMNIST dataset in order to improve its
quality for the classification stage, where we use lightweight convolutional
neural networks. Our results show that we can reduce the number of training
instances while obtaining performance similar to models such as ResNet.
|
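The abstract does not spell out its instance-selection criterion, so the following is only one plausible reading: keep, per class, the samples nearest k-means centroids. Everything here (function name, clustering choice, per-class budget) is a hypothetical illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_instances(X: np.ndarray, y: np.ndarray, per_class: int = 200) -> np.ndarray:
    """Keep, for each class, the samples nearest k-means centroids.

    A hypothetical instance-selection criterion; the paper's methodology is
    not specified at this level in the abstract."""
    keep = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        k = min(per_class, idx.size)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx])
        for center in km.cluster_centers_:
            # retain the real sample closest to each centroid
            keep.append(idx[np.argmin(np.linalg.norm(X[idx] - center, axis=1))])
    return np.unique(keep)

X = np.random.rand(500, 28 * 28)          # flattened images, e.g. DermaMNIST-like
y = np.random.randint(0, 7, 500)
print(select_instances(X, y, per_class=20).size)
```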
2504.01213 | Banafsheh Adami | Banafsheh Adami, Nima Karimian | GRU-AUNet: A Domain Adaptation Framework for Contactless Fingerprint
Presentation Attack Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Although contactless fingerprints offer user comfort, they are more
vulnerable to spoofing. Current anti-spoofing solutions for contactless
fingerprints rely on domain adaptation learning, limiting their generalization
and scalability. To address these limitations, we introduce GRU-AUNet, a
domain adaptation approach that integrates a Swin Transformer-based UNet
architecture with GRU-enhanced attention mechanisms, a Dynamic Filter Network
in the bottleneck, and a combined Focal and Contrastive Loss function. Trained
on both genuine and spoofed fingerprint images, GRU-AUNet demonstrates robust
resilience against presentation attacks, achieving an average BPCER of 0.09%
and APCER of 1.2% on the CLARKSON, COLFISPOOF, and IIITD datasets,
outperforming state-of-the-art domain adaptation methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:02:41 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Adami",
"Banafsheh",
""
],
[
"Karimian",
"Nima",
""
]
] | TITLE: GRU-AUNet: A Domain Adaptation Framework for Contactless Fingerprint
Presentation Attack Detection
ABSTRACT: Although contactless fingerprints offer user comfort, they are more
vulnerable to spoofing. Current anti-spoofing solutions for contactless
fingerprints rely on domain adaptation learning, limiting their generalization
and scalability. To address these limitations, we introduce GRU-AUNet, a
domain adaptation approach that integrates a Swin Transformer-based UNet
architecture with GRU-enhanced attention mechanisms, a Dynamic Filter Network
in the bottleneck, and a combined Focal and Contrastive Loss function. Trained
on both genuine and spoofed fingerprint images, GRU-AUNet demonstrates robust
resilience against presentation attacks, achieving an average BPCER of 0.09%
and APCER of 1.2% on the CLARKSON, COLFISPOOF, and IIITD datasets,
outperforming state-of-the-art domain adaptation methods.
|
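A combined Focal and Contrastive loss of the kind named in the abstract can be sketched in PyTorch as below. The exact formulations and weighting used by GRU-AUNet are not given in the abstract, so the gamma, margin, and alpha values are assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Binary focal loss: down-weights well-classified (easy) examples."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability assigned to the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

def contrastive_loss(embeddings, labels, margin=1.0):
    """Pairwise contrastive loss: pull same-class embeddings together,
    push different-class pairs at least `margin` apart."""
    d = torch.cdist(embeddings, embeddings)
    same = (labels[:, None] == labels[None, :]).float()
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def combined_loss(logits, embeddings, targets, alpha=0.5):
    # alpha is an assumed mixing weight, not the paper's value
    return alpha * focal_loss(logits, targets) + \
           (1 - alpha) * contrastive_loss(embeddings, targets)

loss = combined_loss(torch.randn(8), torch.randn(8, 32), torch.randint(0, 2, (8,)).float())
```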
2504.01214 | Salim Khazem | Salim Khazem, Jeremy Fix, C\'edric Pradalier | PolygoNet: Leveraging Simplified Polygonal Representation for Effective
Image Classification | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning models have achieved significant success in various
image-related tasks. However, they often encounter challenges related to
computational complexity and overfitting. In this paper, we propose an
efficient approach that leverages polygonal representations of images using
dominant points or contour coordinates. By transforming input images into these
compact forms, our method significantly reduces computational requirements,
accelerates training, and conserves resources, making it suitable for real-time
and resource-constrained applications. These representations inherently capture
essential image features while filtering noise, providing a natural
regularization effect that mitigates overfitting. The resulting lightweight
models achieve performance comparable to state-of-the-art methods using
full-resolution images while enabling deployment on edge devices. Extensive
experiments on benchmark datasets validate the effectiveness of our approach in
reducing complexity, improving generalization, and facilitating edge computing
applications. This work demonstrates the potential of polygonal representations
in advancing efficient and scalable deep learning solutions for real-world
scenarios. The code for the experiments of the paper is provided at
https://github.com/salimkhazem/PolygoNet.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:05:00 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Khazem",
"Salim",
""
],
[
"Fix",
"Jeremy",
""
],
[
"Pradalier",
"Cédric",
""
]
] | TITLE: PolygoNet: Leveraging Simplified Polygonal Representation for Effective
Image Classification
ABSTRACT: Deep learning models have achieved significant success in various
image-related tasks. However, they often encounter challenges related to
computational complexity and overfitting. In this paper, we propose an
efficient approach that leverages polygonal representations of images using
dominant points or contour coordinates. By transforming input images into these
compact forms, our method significantly reduces computational requirements,
accelerates training, and conserves resources, making it suitable for real-time
and resource-constrained applications. These representations inherently capture
essential image features while filtering noise, providing a natural
regularization effect that mitigates overfitting. The resulting lightweight
models achieve performance comparable to state-of-the-art methods using
full-resolution images while enabling deployment on edge devices. Extensive
experiments on benchmark datasets validate the effectiveness of our approach in
reducing complexity, improving generalization, and facilitating edge computing
applications. This work demonstrates the potential of polygonal representations
in advancing efficient and scalable deep learning solutions for real-world
scenarios. The code for the experiments of the paper is provided at
https://github.com/salimkhazem/PolygoNet.
|
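Extracting a simplified polygonal representation from an image, as the abstract describes with dominant points or contour coordinates, can be approximated with OpenCV's contour simplification. A minimal sketch; PolygoNet's actual dominant-point selection may differ.

```python
import cv2
import numpy as np

def polygonal_representation(image: np.ndarray, eps_frac: float = 0.01) -> np.ndarray:
    """Reduce an image to the polygon vertices of its dominant contour.

    eps_frac controls simplification strength (fraction of contour length);
    an illustrative pipeline, not PolygoNet's exact dominant-point scheme."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)          # dominant object
    eps = eps_frac * cv2.arcLength(contour, True)
    return cv2.approxPolyDP(contour, eps, True).squeeze(1)   # (N, 2) vertices

img = cv2.circle(np.zeros((64, 64, 3), np.uint8), (32, 32), 20, (255, 255, 255), -1)
print(polygonal_representation(img).shape)
```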
2504.01216 | Feng Chen | Feng Chen, Dror Ben-Zeev, Gillian Sparks, Arya Kadakia, Trevor Cohen | Detecting PTSD in Clinical Interviews: A Comparative Analysis of NLP
Methods and Large Language Models | 10 pages, 4 tables, 1 figure | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Post-Traumatic Stress Disorder (PTSD) remains underdiagnosed in clinical
settings, presenting opportunities for automated detection to identify
patients. This study evaluates natural language processing approaches for
detecting PTSD from clinical interview transcripts. We compared general and
mental health-specific transformer models (BERT/RoBERTa), embedding-based
methods (SentenceBERT/LLaMA), and large language model prompting strategies
(zero-shot/few-shot/chain-of-thought) using the DAIC-WOZ dataset.
Domain-specific models significantly outperformed general models
(Mental-RoBERTa F1=0.643 vs. RoBERTa-base 0.485). LLaMA embeddings with neural
networks achieved the highest performance (F1=0.700). Zero-shot prompting using
DSM-5 criteria yielded competitive results without training data (F1=0.657).
Performance varied significantly across symptom severity and comorbidity
status, with higher accuracy for severe PTSD cases and patients with comorbid
depression. Our findings highlight the potential of domain-adapted embeddings
and LLMs for scalable screening while underscoring the need for improved
detection of nuanced presentations and offering insights for developing
clinically viable AI tools for PTSD assessment.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:06:28 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chen",
"Feng",
""
],
[
"Ben-Zeev",
"Dror",
""
],
[
"Sparks",
"Gillian",
""
],
[
"Kadakia",
"Arya",
""
],
[
"Cohen",
"Trevor",
""
]
] | TITLE: Detecting PTSD in Clinical Interviews: A Comparative Analysis of NLP
Methods and Large Language Models
ABSTRACT: Post-Traumatic Stress Disorder (PTSD) remains underdiagnosed in clinical
settings, presenting opportunities for automated detection to identify
patients. This study evaluates natural language processing approaches for
detecting PTSD from clinical interview transcripts. We compared general and
mental health-specific transformer models (BERT/RoBERTa), embedding-based
methods (SentenceBERT/LLaMA), and large language model prompting strategies
(zero-shot/few-shot/chain-of-thought) using the DAIC-WOZ dataset.
Domain-specific models significantly outperformed general models
(Mental-RoBERTa F1=0.643 vs. RoBERTa-base 0.485). LLaMA embeddings with neural
networks achieved the highest performance (F1=0.700). Zero-shot prompting using
DSM-5 criteria yielded competitive results without training data (F1=0.657).
Performance varied significantly across symptom severity and comorbidity
status, with higher accuracy for severe PTSD cases and patients with comorbid
depression. Our findings highlight the potential of domain-adapted embeddings
and LLMs for scalable screening while underscoring the need for improved
detection of nuanced presentations and offering insights for developing
clinically viable AI tools for PTSD assessment.
|
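The embedding-plus-classifier pipeline the abstract compares can be sketched in a few lines. A toy illustration: the encoder checkpoint, corpus, and classifier choice here are stand-ins, not the paper's exact configuration (which pairs LLaMA embeddings with a neural network on DAIC-WOZ transcripts).

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus; real transcripts come from DAIC-WOZ.
train_texts = ["I keep reliving the incident in nightmares.",
               "Work has been busy but I sleep well."]
train_labels = [1, 0]                                  # 1 = PTSD-positive interview

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # stand-in for SentenceBERT/LLaMA
clf = LogisticRegression().fit(encoder.encode(train_texts), train_labels)

prob = clf.predict_proba(encoder.encode(["I avoid crowded places since then."]))[0, 1]
print(f"PTSD probability: {prob:.2f}")
```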
2504.01218 | Piyush Nagasubramaniam | Piyush Nagasubramaniam (1), Neeraj Karamchandani (1), Chen Wu (2),
Sencun Zhu (1) ((1) The Pennsylvania State University, (2) Meta) | Prompting Forgetting: Unlearning in GANs via Textual Guidance | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | State-of-the-art generative models exhibit powerful image-generation
capabilities, introducing various ethical and legal challenges to service
providers hosting these models. Consequently, Content Removal Techniques (CRTs)
have emerged as a growing area of research to control outputs without
full-scale retraining. Recent work has explored the use of Machine Unlearning
in generative models to address content removal. However, the focus of such
research has been on diffusion models, and unlearning in Generative Adversarial
Networks (GANs) has remained largely unexplored. We address this gap by
proposing Text-to-Unlearn, a novel framework that selectively unlearns concepts
from pre-trained GANs using only text prompts, enabling feature unlearning,
identity unlearning, and fine-grained tasks like expression and multi-attribute
removal in models trained on human faces. Leveraging natural language
descriptions, our approach guides the unlearning process without requiring
additional datasets or supervised fine-tuning, offering a scalable and
efficient solution. To evaluate its effectiveness, we introduce an automatic
unlearning assessment method adapted from state-of-the-art image-text alignment
metrics, providing a comprehensive analysis of the unlearning methodology. To
our knowledge, Text-to-Unlearn is the first cross-modal unlearning framework
for GANs, representing a flexible and efficient advancement in managing
generative model behavior.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:18:40 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Nagasubramaniam",
"Piyush",
"",
"The Pennsylvania State University"
],
[
"Karamchandani",
"Neeraj",
"",
"The Pennsylvania State University"
],
[
"Wu",
"Chen",
"",
"Meta"
],
[
"Zhu",
"Sencun",
"",
"The Pennsylvania State University"
]
] | TITLE: Prompting Forgetting: Unlearning in GANs via Textual Guidance
ABSTRACT: State-of-the-art generative models exhibit powerful image-generation
capabilities, introducing various ethical and legal challenges to service
providers hosting these models. Consequently, Content Removal Techniques (CRTs)
have emerged as a growing area of research to control outputs without
full-scale retraining. Recent work has explored the use of Machine Unlearning
in generative models to address content removal. However, the focus of such
research has been on diffusion models, and unlearning in Generative Adversarial
Networks (GANs) has remained largely unexplored. We address this gap by
proposing Text-to-Unlearn, a novel framework that selectively unlearns concepts
from pre-trained GANs using only text prompts, enabling feature unlearning,
identity unlearning, and fine-grained tasks like expression and multi-attribute
removal in models trained on human faces. Leveraging natural language
descriptions, our approach guides the unlearning process without requiring
additional datasets or supervised fine-tuning, offering a scalable and
efficient solution. To evaluate its effectiveness, we introduce an automatic
unlearning assessment method adapted from state-of-the-art image-text alignment
metrics, providing a comprehensive analysis of the unlearning methodology. To
our knowledge, Text-to-Unlearn is the first cross-modal unlearning framework
for GANs, representing a flexible and efficient advancement in managing
generative model behavior.
|
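One loose reading of text-guided unlearning is to fine-tune the generator so that CLIP-style embeddings of its outputs stop aligning with the text embedding of the concept to forget. The loss below is an assumption for illustration, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(image_embs: torch.Tensor, concept_emb: torch.Tensor) -> torch.Tensor:
    """Penalize alignment between generated-image embeddings (B, D) and the
    text embedding (D,) of the concept to forget, both CLIP-style.
    A loose sketch of the cross-modal idea, not the paper's exact loss."""
    sims = F.cosine_similarity(image_embs, concept_emb.unsqueeze(0), dim=-1)
    return sims.clamp(min=0).mean()            # drive similarity toward zero

# Fine-tuning sketch (names hypothetical):
#   total = fidelity_loss(G) + lam * unlearning_loss(clip_img(G(z)), clip_txt("smiling"))
demo = unlearning_loss(torch.randn(4, 512), torch.randn(512))
```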
2504.01223 | Alexey Miroshnikov | Ryan Franks, Alexey Miroshnikov | Explainable post-training bias mitigation with distribution-based
fairness metrics | 37 pages, 6 figures | null | null | null | cs.LG math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a novel optimization framework with distribution-based fairness
constraints for efficiently producing demographically blind, explainable models
across a wide range of fairness levels. This is accomplished through
post-processing, avoiding the need for retraining. Our framework, which is
based on stochastic gradient descent, can be applied to a wide range of model
types, with a particular emphasis on the post-processing of gradient-boosted
decision trees. Additionally, we design a broad class of interpretable global
bias metrics compatible with our method by building on previous work. We
empirically test our methodology on a variety of datasets and compare it to
other methods.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:22:25 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Franks",
"Ryan",
""
],
[
"Miroshnikov",
"Alexey",
""
]
] | TITLE: Explainable post-training bias mitigation with distribution-based
fairness metrics
ABSTRACT: We develop a novel optimization framework with distribution-based fairness
constraints for efficiently producing demographically blind, explainable models
across a wide range of fairness levels. This is accomplished through
post-processing, avoiding the need for retraining. Our framework, which is
based on stochastic gradient descent, can be applied to a wide range of model
types, with a particular emphasis on the post-processing of gradient-boosted
decision trees. Additionally, we design a broad class of interpretable global
bias metrics compatible with our method by building on previous work. We
empirically test our methodology on a variety of datasets and compare it to
other methods.
|
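A minimal sketch of post-processing with a distribution-based fairness penalty via stochastic gradient descent, in the spirit of the abstract: learn an affine score adjustment that trades prediction loss against a group-wise score gap. The penalty used here (mean gap) is a crude stand-in for the paper's bias metrics.

```python
import torch

def postprocess(scores, labels, groups, lam=1.0, steps=500, lr=0.05):
    """Fit an affine adjustment of model scores by SGD, penalizing the gap
    between group-wise mean adjusted scores (an assumed, simple bias metric)."""
    a = torch.ones(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([a, b], lr=lr)
    for _ in range(steps):
        adj = torch.sigmoid(a * scores + b)               # post-processed scores
        task = torch.nn.functional.binary_cross_entropy(adj, labels)
        bias = (adj[groups == 0].mean() - adj[groups == 1].mean()).abs()
        loss = task + lam * bias                          # fairness-accuracy tradeoff
        opt.zero_grad()
        loss.backward()
        opt.step()
    return a.detach(), b.detach()

s = torch.randn(200)
y = (s + 0.3 * torch.randn(200) > 0).float()
g = (torch.rand(200) > 0.5).long()
print(postprocess(s, y, g))
```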
2504.01228 | Mansoor Rezghi | Kimia haghjooei, Mansoor Rezghi | TenAd: A Tensor-based Low-rank Black Box Adversarial Attack for Video
Classification | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning models have achieved remarkable success in computer vision but
remain vulnerable to adversarial attacks, particularly in black-box settings
where model details are unknown. Existing adversarial attack methods (even
those that work with key frames) often treat video data as simple vectors,
ignoring its inherent multi-dimensional structure, and require a large number
of queries, making them inefficient and detectable. In this paper, we propose
TenAd, a novel tensor-based low-rank adversarial attack that leverages
the multi-dimensional properties of video data by representing videos as
fourth-order tensors. By exploiting a low-rank attack structure, our method
significantly reduces the search space and the number of queries needed to
generate adversarial examples in black-box settings. Experimental results on
standard video classification datasets demonstrate that TenAd effectively
generates imperceptible adversarial perturbations while achieving higher attack
success rates and query efficiency compared to state-of-the-art methods. Our
approach outperforms existing black-box adversarial attacks in terms of success
rate, query efficiency, and perturbation imperceptibility, highlighting the
potential of tensor-based methods for adversarial attacks on video models.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:35:28 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"haghjooei",
"Kimia",
""
],
[
"Rezghi",
"Mansoor",
""
]
] | TITLE: TenAd: A Tensor-based Low-rank Black Box Adversarial Attack for Video
Classification
ABSTRACT: Deep learning models have achieved remarkable success in computer vision but
remain vulnerable to adversarial attacks, particularly in black-box settings
where model details are unknown. Existing adversarial attack methods (even
those that work with key frames) often treat video data as simple vectors,
ignoring its inherent multi-dimensional structure, and require a large number
of queries, making them inefficient and detectable. In this paper, we propose
TenAd, a novel tensor-based low-rank adversarial attack that leverages
the multi-dimensional properties of video data by representing videos as
fourth-order tensors. By exploiting a low-rank attack structure, our method
significantly reduces the search space and the number of queries needed to
generate adversarial examples in black-box settings. Experimental results on
standard video classification datasets demonstrate that TenAd effectively
generates imperceptible adversarial perturbations while achieving higher attack
success rates and query efficiency compared to state-of-the-art methods. Our
approach outperforms existing black-box adversarial attacks in terms of success
rate, query efficiency, and perturbation imperceptibility, highlighting the
potential of tensor-based methods for adversarial attacks on video models.
|
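Why a low-rank fourth-order parameterization shrinks the black-box search space: a rank-r perturbation of a (frames, height, width, channels) clip needs only r vectors per mode instead of one value per pixel. The sketch below samples such a perturbation; TenAd's actual search procedure is not reproduced.

```python
import numpy as np

def low_rank_perturbation(shape, rank=2, scale=0.03, rng=None):
    """Sample a rank-`rank` fourth-order perturbation: each rank-1 term is an
    outer product of one factor vector per tensor mode, so the search space is
    rank * sum(shape) numbers instead of prod(shape)."""
    rng = rng or np.random.default_rng()
    delta = np.zeros(shape)
    for _ in range(rank):
        t, h, w, c = (rng.standard_normal(d) for d in shape)
        delta += np.einsum("t,h,w,c->thwc", t, h, w, c)   # rank-1 outer product
    return scale * delta / (np.abs(delta).max() + 1e-12)

video = np.random.rand(16, 64, 64, 3)                     # placeholder clip in [0, 1]
adv = np.clip(video + low_rank_perturbation(video.shape), 0.0, 1.0)
```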
2504.01243 | Jaskaran Singh Walia | Jaskaran Singh Walia, Shravan Venkatraman, Pavithra LK | FUSION: Frequency-guided Underwater Spatial Image recOnstructioN | null | null | null | null | cs.CV cs.AI cs.LG cs.RO eess.IV | http://creativecommons.org/licenses/by/4.0/ | Underwater images suffer from severe degradations, including color
distortions, reduced visibility, and loss of structural details due to
wavelength-dependent attenuation and scattering. Existing enhancement methods
primarily focus on spatial-domain processing, neglecting the frequency domain's
potential to capture global color distributions and long-range dependencies. To
address these limitations, we propose FUSION, a dual-domain deep learning
framework that jointly leverages spatial and frequency domain information.
FUSION independently processes each RGB channel through multi-scale
convolutional kernels and adaptive attention mechanisms in the spatial domain,
while simultaneously extracting global structural information via FFT-based
frequency attention. A Frequency Guided Fusion module integrates complementary
features from both domains, followed by inter-channel fusion and adaptive
channel recalibration to ensure balanced color distributions. Extensive
experiments on benchmark datasets (UIEB, EUVP, SUIM-E) demonstrate that FUSION
achieves state-of-the-art performance, consistently outperforming existing
methods in reconstruction fidelity (highest PSNR of 23.717 dB and SSIM of 0.883
on UIEB), perceptual quality (lowest LPIPS of 0.112 on UIEB), and visual
enhancement metrics (best UIQM of 3.414 on UIEB), while requiring significantly
fewer parameters (0.28M) and lower computational complexity, demonstrating its
suitability for real-time underwater imaging applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 23:16:19 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Walia",
"Jaskaran Singh",
""
],
[
"Venkatraman",
"Shravan",
""
],
[
"LK",
"Pavithra",
""
]
] | TITLE: FUSION: Frequency-guided Underwater Spatial Image recOnstructioN
ABSTRACT: Underwater images suffer from severe degradations, including color
distortions, reduced visibility, and loss of structural details due to
wavelength-dependent attenuation and scattering. Existing enhancement methods
primarily focus on spatial-domain processing, neglecting the frequency domain's
potential to capture global color distributions and long-range dependencies. To
address these limitations, we propose FUSION, a dual-domain deep learning
framework that jointly leverages spatial and frequency domain information.
FUSION independently processes each RGB channel through multi-scale
convolutional kernels and adaptive attention mechanisms in the spatial domain,
while simultaneously extracting global structural information via FFT-based
frequency attention. A Frequency Guided Fusion module integrates complementary
features from both domains, followed by inter-channel fusion and adaptive
channel recalibration to ensure balanced color distributions. Extensive
experiments on benchmark datasets (UIEB, EUVP, SUIM-E) demonstrate that FUSION
achieves state-of-the-art performance, consistently outperforming existing
methods in reconstruction fidelity (highest PSNR of 23.717 dB and SSIM of 0.883
on UIEB), perceptual quality (lowest LPIPS of 0.112 on UIEB), and visual
enhancement metrics (best UIQM of 3.414 on UIEB), while requiring significantly
fewer parameters (0.28M) and lower computational complexity, demonstrating its
suitability for real-time underwater imaging applications.
|
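A toy reading of "FFT-based frequency attention": transform features to the frequency domain, gate the spectrum with a learned attention map over magnitudes, and invert. The module below is an assumption for illustration, far simpler than FUSION's.

```python
import torch
import torch.nn as nn

class FrequencyAttention(nn.Module):
    """Minimal FFT-based attention: reweight frequency magnitudes, then invert.
    An assumed simplification of FUSION's frequency branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")           # complex spectrum
        attn = self.gate(freq.abs())                      # attention over magnitudes
        return torch.fft.irfft2(freq * attn, s=x.shape[-2:], norm="ortho")

y = FrequencyAttention(3)(torch.rand(1, 3, 64, 64))
print(y.shape)
```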
2504.01246 | Biswadeep Chakraborty | Biswadeep Chakraborty, Hemant Kumawat, Beomseok Kang, Saibal
Mukhopadhyay | Dynamic Graph Structure Estimation for Learning Multivariate Point
Process using Spiking Neural Networks | 18 pages, 3 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modeling and predicting temporal point processes (TPPs) is critical in
domains such as neuroscience, epidemiology, finance, and social sciences. We
introduce the Spiking Dynamic Graph Network (SDGN), a novel framework that
leverages the temporal processing capabilities of spiking neural networks
(SNNs) and spike-timing-dependent plasticity (STDP) to dynamically estimate
underlying spatio-temporal functional graphs. Unlike existing methods that rely
on predefined or static graph structures, SDGN adapts to any dataset by
learning dynamic spatio-temporal dependencies directly from the event data,
enhancing generalizability and robustness. While SDGN offers significant
improvements over prior methods, we acknowledge its limitations in handling
dense graphs and certain non-Gaussian dependencies, providing opportunities for
future refinement. Our evaluations, conducted on both synthetic and real-world
datasets including NYC Taxi, 911, Reddit, and Stack Overflow, demonstrate that
SDGN achieves superior predictive accuracy while maintaining computational
efficiency. Furthermore, we include ablation studies to highlight the
contributions of its core components.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 23:23:10 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chakraborty",
"Biswadeep",
""
],
[
"Kumawat",
"Hemant",
""
],
[
"Kang",
"Beomseok",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] | TITLE: Dynamic Graph Structure Estimation for Learning Multivariate Point
Process using Spiking Neural Networks
ABSTRACT: Modeling and predicting temporal point processes (TPPs) is critical in
domains such as neuroscience, epidemiology, finance, and social sciences. We
introduce the Spiking Dynamic Graph Network (SDGN), a novel framework that
leverages the temporal processing capabilities of spiking neural networks
(SNNs) and spike-timing-dependent plasticity (STDP) to dynamically estimate
underlying spatio-temporal functional graphs. Unlike existing methods that rely
on predefined or static graph structures, SDGN adapts to any dataset by
learning dynamic spatio-temporal dependencies directly from the event data,
enhancing generalizability and robustness. While SDGN offers significant
improvements over prior methods, we acknowledge its limitations in handling
dense graphs and certain non-Gaussian dependencies, providing opportunities for
future refinement. Our evaluations, conducted on both synthetic and real-world
datasets including NYC Taxi, 911, Reddit, and Stack Overflow, demonstrate that
SDGN achieves superior predictive accuracy while maintaining computational
efficiency. Furthermore, we include ablation studies to highlight the
contributions of its core components.
|
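The spike-timing dependence that SDGN leverages can be illustrated with the classic pairwise STDP rule, where the sign and size of a weight update depend on the inter-spike interval. Parameter values below are conventional textbook choices, not SDGN's.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise. The inter-spike interval gates the
    update strength, echoing SDGN's timing-driven graph adaptation."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau)       # causal pair -> strengthen edge
    else:
        w -= a_minus * np.exp(dt / tau)       # anti-causal pair -> weaken edge
    return float(np.clip(w, 0.0, 1.0))

print(stdp_update(0.5, t_pre=10.0, t_post=14.0))
```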
2504.01248 | Lev Sorokin | Rafael Giebisch and Ken E. Friedl and Lev Sorokin and Andrea Stocco | Automated Factual Benchmarking for In-Car Conversational Systems using
Large Language Models | Accepted in IEEE Intelligent Vehicles Symposium Conference (IV 2025) | null | null | null | cs.CL cs.AI cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | In-car conversational systems promise to improve the in-vehicle user
experience. Modern conversational systems are based on Large Language Models
(LLMs), which makes them prone to errors such as hallucinations, i.e.,
inaccurate, fictitious, and therefore factually incorrect information. In this
paper, we present an LLM-based methodology for the automatic factual
benchmarking of in-car conversational systems. We instantiate our methodology
with five LLM-based methods, leveraging ensembling techniques and diverse
personae to enhance agreement and minimize hallucinations. We use our
methodology to evaluate CarExpert, an in-car retrieval-augmented conversational
question answering system, with respect to its factual correctness against a
vehicle's manual. We produced a novel dataset specifically created for the
in-car domain, and tested our methodology against an expert evaluation. Our
results show that the combination of GPT-4 with Input-Output Prompting achieves
an over 90 per cent factual correctness agreement rate with expert evaluations,
while also being the most efficient approach, yielding an average response time
of 4.5 s. Our findings suggest that LLM-based testing constitutes a viable
approach for validating the factual correctness of conversational systems.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 23:25:30 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Giebisch",
"Rafael",
""
],
[
"Friedl",
"Ken E.",
""
],
[
"Sorokin",
"Lev",
""
],
[
"Stocco",
"Andrea",
""
]
] | TITLE: Automated Factual Benchmarking for In-Car Conversational Systems using
Large Language Models
ABSTRACT: In-car conversational systems promise to improve the in-vehicle user
experience. Modern conversational systems are based on Large Language Models
(LLMs), which makes them prone to errors such as hallucinations, i.e.,
inaccurate, fictitious, and therefore factually incorrect information. In this
paper, we present an LLM-based methodology for the automatic factual
benchmarking of in-car conversational systems. We instantiate our methodology
with five LLM-based methods, leveraging ensembling techniques and diverse
personae to enhance agreement and minimize hallucinations. We use our
methodology to evaluate CarExpert, an in-car retrieval-augmented conversational
question answering system, with respect to its factual correctness against a
vehicle's manual. We produced a novel dataset specifically created for the
in-car domain, and tested our methodology against an expert evaluation. Our
results show that the combination of GPT-4 with Input-Output Prompting achieves
an over 90 per cent factual correctness agreement rate with expert evaluations,
while also being the most efficient approach, yielding an average response time
of 4.5 s. Our findings suggest that LLM-based testing constitutes a viable
approach for validating the factual correctness of conversational systems.
|
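The ensembling-with-personae idea can be sketched as a majority vote over persona-conditioned LLM judges. The personae and prompt template below are invented for illustration; the paper's five methods are instantiated differently.

```python
from collections import Counter

PERSONAE = ["meticulous fact-checker", "vehicle engineer", "skeptical auditor"]

def judge(llm, question: str, answer: str, manual_excerpt: str) -> bool:
    """Ensemble factual check: each persona votes, the majority wins.
    `llm` is any callable prompt -> text; all names here are illustrative."""
    votes = []
    for persona in PERSONAE:
        prompt = (f"You are a {persona}. Given this manual excerpt:\n"
                  f"{manual_excerpt}\n\nQuestion: {question}\nAnswer: {answer}\n"
                  "Is the answer factually correct? Reply YES or NO.")
        votes.append(llm(prompt).strip().upper().startswith("YES"))
    return Counter(votes).most_common(1)[0][0]
```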
2504.01253 | Niharika Dadu | Niharika Dadu, Harsh Vardhan Singh, Romi Banerjee (Indian Institute of
Technology Jodhpur) | Grade Guard: A Smart System for Short Answer Automated Grading | 11 pages, 18 figures | null | 10.36227/techrxiv.174114489.93670234/v1 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The advent of large language models (LLMs) in the education sector has
provided impetus to automate the grading of short-answer questions. LLMs make
evaluating short answers very efficient, thus addressing issues like staff
shortages. However, in the task of Automated Short Answer Grading (ASAG), LLM
responses are influenced by diverse perspectives in their training dataset,
leading to inaccuracies in evaluating nuanced or partially correct answers. To
address this challenge, we propose a novel framework, Grade Guard.
1. To enhance the task-based specialization of the LLMs, the temperature
parameter has been fine-tuned using Root Mean Square Error (RMSE).
2. Unlike traditional approaches, LLMs in Grade Guard compute an
Indecisiveness Score (IS) along with the grade to reflect uncertainty in
predicted grades.
3. We introduce a Confidence-Aware Loss (CAL) to generate an optimized
Indecisiveness Score (IS).
4. To improve reliability, self-reflection based on the optimized IS has been
introduced into the framework, enabling human re-evaluation to minimize
incorrect grade assignments.
Our experimentation shows that the best setting of Grade Guard outperforms
traditional methods by 19.16% RMSE in Upstage Solar Pro, 23.64% RMSE in Upstage
Solar Mini, 4.00% RMSE in Gemini 1.5 Flash, and 10.20% RMSE in GPT-4o Mini.
Future work includes improving interpretability by generating rationales for
grades to enhance accuracy. Expanding benchmark datasets and annotating them
with domain-specific nuances will enhance grading accuracy. Finally, analyzing
feedback to enhance confidence in predicted grades, reduce biases, optimize
grading criteria, and personalize learning while supporting multilingual
grading systems will make the solution more accurate, adaptable, fair, and
inclusive.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 23:45:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Dadu",
"Niharika",
"",
"Indian Institute of\n Technology Jodhpur"
],
[
"Singh",
"Harsh Vardhan",
"",
"Indian Institute of\n Technology Jodhpur"
],
[
"Banerjee",
"Romi",
"",
"Indian Institute of\n Technology Jodhpur"
]
] | TITLE: Grade Guard: A Smart System for Short Answer Automated Grading
ABSTRACT: The advent of large language models (LLMs) in the education sector has
provided impetus to automate the grading of short-answer questions. LLMs make
evaluating short answers very efficient, thus addressing issues like staff
shortages. However, in the task of Automated Short Answer Grading (ASAG), LLM
responses are influenced by diverse perspectives in their training dataset,
leading to inaccuracies in evaluating nuanced or partially correct answers. To
address this challenge, we propose a novel framework, Grade Guard.
1. To enhance the task-based specialization of the LLMs, the temperature
parameter has been fine-tuned using Root Mean Square Error (RMSE).
2. Unlike traditional approaches, LLMs in Grade Guard compute an
Indecisiveness Score (IS) along with the grade to reflect uncertainty in
predicted grades.
3. We introduce a Confidence-Aware Loss (CAL) to generate an optimized
Indecisiveness Score (IS).
4. To improve reliability, self-reflection based on the optimized IS has been
introduced into the framework, enabling human re-evaluation to minimize
incorrect grade assignments.
Our experimentation shows that the best setting of Grade Guard outperforms
traditional methods by 19.16% RMSE in Upstage Solar Pro, 23.64% RMSE in Upstage
Solar Mini, 4.00% RMSE in Gemini 1.5 Flash, and 10.20% RMSE in GPT-4o Mini.
Future work includes improving interpretability by generating rationales for
grades to enhance accuracy. Expanding benchmark datasets and annotating them
with domain-specific nuances will enhance grading accuracy. Finally, analyzing
feedback to enhance confidence in predicted grades, reduce biases, optimize
grading criteria, and personalize learning while supporting multilingual
grading systems will make the solution more accurate, adaptable, fair, and
inclusive.
|
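Point 1 of the abstract, fine-tuning the temperature parameter against RMSE, admits a simple grid-search sketch. Grid search is an assumption; the paper may tune differently.

```python
import numpy as np

def tune_temperature(grade_fn, validation, temps=np.arange(0.0, 1.01, 0.1)):
    """Grid-search the sampling temperature that minimizes grading RMSE.

    grade_fn(answer, temperature) -> predicted grade;
    validation: list of (answer, human_grade) pairs. Names are illustrative."""
    def rmse(t):
        preds = np.array([grade_fn(ans, t) for ans, _ in validation])
        gold = np.array([g for _, g in validation])
        return np.sqrt(np.mean((preds - gold) ** 2))
    return min(temps, key=rmse)

# Synthetic demo: a grader whose noise grows with temperature.
rng = np.random.default_rng(0)
val = [(f"answer-{i}", g) for i, g in enumerate(rng.uniform(0, 5, 50))]
noisy = lambda ans, t: dict(val)[ans] + rng.normal(0.0, t)
print(tune_temperature(noisy, val))
```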
2504.01257 | Biswadeep Chakraborty | Biswadeep Chakraborty, Saibal Mukhopadhyay | FLAMES: A Hybrid Spiking-State Space Model for Adaptive Memory Retention
in Event-Based Learning | 9 pages, 6 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose FLAMES (Fast Long-range Adaptive Memory for Event-based
Systems), a novel hybrid framework integrating structured state-space dynamics
with event-driven computation. At its core, the Spike-Aware HiPPO
(SA-HiPPO) mechanism dynamically adjusts memory retention based on inter-spike
intervals, preserving both short- and long-range dependencies. To maintain
computational efficiency, we introduce a normal-plus-low-rank (NPLR)
decomposition, reducing complexity from $\mathcal{O}(N^2)$ to
$\mathcal{O}(Nr)$. FLAMES achieves state-of-the-art results on the Long Range
Arena benchmark and event datasets like HAR-DVS and Celex-HAR. By bridging
neuromorphic computing and structured sequence modeling, FLAMES enables
scalable long-range reasoning in event-driven systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 00:08:19 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chakraborty",
"Biswadeep",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] | TITLE: FLAMES: A Hybrid Spiking-State Space Model for Adaptive Memory Retention
in Event-Based Learning
ABSTRACT: We propose FLAMES (Fast Long-range Adaptive Memory for Event-based
Systems), a novel hybrid framework integrating structured state-space dynamics
with event-driven computation. At its core, the Spike-Aware HiPPO
(SA-HiPPO) mechanism dynamically adjusts memory retention based on inter-spike
intervals, preserving both short- and long-range dependencies. To maintain
computational efficiency, we introduce a normal-plus-low-rank (NPLR)
decomposition, reducing complexity from $\mathcal{O}(N^2)$ to
$\mathcal{O}(Nr)$. FLAMES achieves state-of-the-art results on the Long Range
Arena benchmark and event datasets like HAR-DVS and Celex-HAR. By bridging
neuromorphic computing and structured sequence modeling, FLAMES enables
scalable long-range reasoning in event-driven systems.
|
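The complexity claim is easy to verify: with a normal-plus-low-rank matrix, a matrix-vector product never needs the dense N x N operator. The sketch below simplifies the normal part to a diagonal for clarity.

```python
import numpy as np

def nplr_matvec(lam, P, Q, x):
    """Multiply (diag(lam) + P @ Q.T) by x in O(N*r) rather than O(N^2).

    lam: (N,) spectrum of the normal part (simplified here to a diagonal);
    P, Q: (N, r) low-rank factors."""
    return lam * x + P @ (Q.T @ x)            # never materializes the N x N matrix

N, r = 1024, 4
rng = np.random.default_rng(0)
lam = rng.normal(size=N)
P, Q = rng.normal(size=(N, r)), rng.normal(size=(N, r))
x = rng.normal(size=N)
dense = np.diag(lam) + P @ Q.T
assert np.allclose(nplr_matvec(lam, P, Q, x), dense @ x)
```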
2504.01261 | Bahadir Kocer | Thomas Pritchard and Saifullah Ijaz and Ronald Clark and Basaran
Bahadir Kocer | ForestVO: Enhancing Visual Odometry in Forest Environments through
ForestGlue | Accepted to the IEEE Robotics and Automation Letters | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in visual odometry systems have improved autonomous
navigation; however, challenges persist in complex environments like forests,
where dense foliage, variable lighting, and repetitive textures compromise
feature correspondence accuracy. To address these challenges, we introduce
ForestGlue, enhancing the SuperPoint feature detector through four
configurations - grayscale, RGB, RGB-D, and stereo-vision - optimised for
various sensing modalities. For feature matching, we employ LightGlue or
SuperGlue, retrained with synthetic forest data. ForestGlue achieves comparable
pose estimation accuracy to baseline models but requires only 512 keypoints -
just 25% of the baseline's 2048 - to reach an LO-RANSAC AUC score of 0.745 at a
10° threshold. With only a quarter of keypoints needed, ForestGlue
significantly reduces computational overhead, demonstrating effectiveness in
dynamic forest environments, and making it suitable for real-time deployment on
resource-constrained platforms. By combining ForestGlue with a
transformer-based pose estimation model, we propose ForestVO, which estimates
relative camera poses using matched 2D pixel coordinates between frames. On
challenging TartanAir forest sequences, ForestVO achieves an average relative
pose error (RPE) of 1.09 m and a kitti_score of 2.33%, outperforming
direct-based methods like DSO by 40% in dynamic scenes. Despite using only 10%
of the dataset for training, ForestVO maintains competitive performance with
TartanVO while being a significantly lighter model. This work establishes an
end-to-end deep learning pipeline specifically tailored for visual odometry in
forested environments, leveraging forest-specific training data to optimise
feature correspondence and pose estimation, thereby enhancing the accuracy and
robustness of autonomous navigation systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 00:20:05 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Pritchard",
"Thomas",
""
],
[
"Ijaz",
"Saifullah",
""
],
[
"Clark",
"Ronald",
""
],
[
"Kocer",
"Basaran Bahadir",
""
]
] | TITLE: ForestVO: Enhancing Visual Odometry in Forest Environments through
ForestGlue
ABSTRACT: Recent advancements in visual odometry systems have improved autonomous
navigation; however, challenges persist in complex environments like forests,
where dense foliage, variable lighting, and repetitive textures compromise
feature correspondence accuracy. To address these challenges, we introduce
ForestGlue, enhancing the SuperPoint feature detector through four
configurations - grayscale, RGB, RGB-D, and stereo-vision - optimised for
various sensing modalities. For feature matching, we employ LightGlue or
SuperGlue, retrained with synthetic forest data. ForestGlue achieves comparable
pose estimation accuracy to baseline models but requires only 512 keypoints -
just 25% of the baseline's 2048 - to reach an LO-RANSAC AUC score of 0.745 at a
10° threshold. With only a quarter of keypoints needed, ForestGlue
significantly reduces computational overhead, demonstrating effectiveness in
dynamic forest environments, and making it suitable for real-time deployment on
resource-constrained platforms. By combining ForestGlue with a
transformer-based pose estimation model, we propose ForestVO, which estimates
relative camera poses using matched 2D pixel coordinates between frames. On
challenging TartanAir forest sequences, ForestVO achieves an average relative
pose error (RPE) of 1.09 m and a kitti_score of 2.33%, outperforming
direct-based methods like DSO by 40% in dynamic scenes. Despite using only 10%
of the dataset for training, ForestVO maintains competitive performance with
TartanVO while being a significantly lighter model. This work establishes an
end-to-end deep learning pipeline specifically tailored for visual odometry in
forested environments, leveraging forest-specific training data to optimise
feature correspondence and pose estimation, thereby enhancing the accuracy and
robustness of autonomous navigation systems.
|
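For orientation, the classical counterpart of the pose step: recover relative pose from matched 2D keypoints via the essential matrix. ForestVO instead feeds the matched coordinates to a transformer-based pose model; this OpenCV baseline is only a reference point.

```python
import cv2
import numpy as np

def relative_pose(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    """Recover relative camera pose from matched 2D keypoints (N, 2) and the
    camera intrinsics K, via the essential matrix with RANSAC."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t        # rotation matrix and unit translation direction
```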
2504.01292 | Yongyi Liu | Yongyi Liu, Ahmed Mahmood, Amr Magdy, Minyao Zhu | SOLAR: Scalable Distributed Spatial Joins through Learning-based
Optimization | 13 pages, current in submission to VLDB | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | The proliferation of location-based services has led to massive spatial data
generation. Spatial join is a crucial database operation that identifies pairs
of objects from two spatial datasets based on spatial relationships. Due to the
intensive computational demands, spatial joins are often executed in a
distributed manner across clusters. However, current systems fail to recognize
similarities in the partitioning of spatial data, leading to redundant
computations and increased overhead. Recently, incorporating machine learning
optimizations into database operations has enhanced efficiency in traditional
joins by predicting optimal strategies. However, applying these optimizations
to spatial joins poses challenges due to the complex nature of spatial
relationships and the variability of spatial data. This paper introduces SOLAR,
scalable distributed spatial joins through learning-based optimization. SOLAR
operates through offline and online phases. In the offline phase, it learns
balanced spatial partitioning based on the similarities between datasets in
query workloads seen so far. In the online phase, when a new join query is
received, SOLAR evaluates the similarity between the datasets in the new query
and the already-seen workloads using the trained learning model. Then, it
decides to either reuse an existing partitioner, avoiding unnecessary
computational overhead, or partition from scratch. Our extensive experimental
evaluation on real-world datasets demonstrates that SOLAR achieves up to 3.6X
speedup in overall join runtime and 2.71X speedup in partitioning time compared
to state-of-the-art systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 01:52:52 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Liu",
"Yongyi",
""
],
[
"Mahmood",
"Ahmed",
""
],
[
"Magdy",
"Amr",
""
],
[
"Zhu",
"Minyao",
""
]
] | TITLE: SOLAR: Scalable Distributed Spatial Joins through Learning-based
Optimization
ABSTRACT: The proliferation of location-based services has led to massive spatial data
generation. Spatial join is a crucial database operation that identifies pairs
of objects from two spatial datasets based on spatial relationships. Due to the
intensive computational demands, spatial joins are often executed in a
distributed manner across clusters. However, current systems fail to recognize
similarities in the partitioning of spatial data, leading to redundant
computations and increased overhead. Recently, incorporating machine learning
optimizations into database operations has enhanced efficiency in traditional
joins by predicting optimal strategies. However, applying these optimizations
to spatial joins poses challenges due to the complex nature of spatial
relationships and the variability of spatial data. This paper introduces SOLAR,
scalable distributed spatial joins through learning-based optimization. SOLAR
operates through offline and online phases. In the offline phase, it learns
balanced spatial partitioning based on the similarities between datasets in
query workloads seen so far. In the online phase, when a new join query is
received, SOLAR evaluates the similarity between the datasets in the new query
and the already-seen workloads using the trained learning model. Then, it
decides to either reuse an existing partitioner, avoiding unnecessary
computational overhead, or partition from scratch. Our extensive experimental
evaluation on real-world datasets demonstrates that SOLAR achieves up to 3.6X
speedup in overall join runtime and 2.71X speedup in partitioning time compared
to state-of-the-art systems.
|
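SOLAR's online decision, reuse a cached partitioner when the learned similarity is high, otherwise partition from scratch, can be sketched as below. All interfaces are assumed; the abstract does not publish SOLAR's APIs.

```python
def choose_partitioner(new_sketch, cached, similarity, threshold=0.8):
    """Reuse a cached spatial partitioner when learned similarity is high.

    new_sketch: features summarizing the incoming query's datasets;
    cached: list of (sketch, partitioner) pairs from past workloads;
    similarity: trained model as a callable returning a score in [0, 1].
    Names and threshold are illustrative assumptions."""
    if not cached:
        return None                                   # partition from scratch
    sketch, partitioner = max(cached, key=lambda c: similarity(new_sketch, c[0]))
    return partitioner if similarity(new_sketch, sketch) >= threshold else None
```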
2504.01294 | Hui Li | Hui Li, Zhen Dong, Siao Wang, Hui Zhang, Liwei Shen, Xin Peng,
Dongdong She | Extracting Formal Specifications from Documents Using LLMs for Automated
Testing | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated testing plays a crucial role in ensuring software security. It
heavily relies on formal specifications to validate the correctness of the
system behavior. However, the main approach to defining these formal
specifications is through manual analysis of software documents, which requires
a significant amount of engineering effort from experienced researchers and
engineers. Meanwhile, system updates further increase the human labor cost to
maintain a corresponding formal specification, making the manual analysis
approach a time-consuming and error-prone task. Recent advances in Large
Language Models (LLMs) have demonstrated promising capabilities in natural
language understanding. Yet, the feasibility of using LLMs to automate the
extraction of formal specifications from software documents remains unexplored.
We conduct an empirical study by constructing a comprehensive dataset
comprising 603 specifications from 37 documents across three representative
open-source software. We then evaluate the most recent LLMs' capabilities in
extracting formal specifications from documents in an end-to-end fashion,
including GPT-4o, Claude, and Llama. Our study demonstrates the application of
LLMs in formal specification extraction tasks while identifying two major
limitations: specification oversimplification and specification fabrication. We
attribute these deficiencies to the LLMs' inherent limitations in processing
and expressive capabilities, as well as their tendency to fabricate fictional
information. Inspired by human cognitive processes, we propose a two-stage
method, annotation-then-conversion, to address these challenges. Our method
demonstrates significant improvements over the end-to-end method, with a 29.2%
increase in the number of correctly extracted specifications and a 14.0%
improvement in average accuracy. In particular, our best-performing LLM
achieves an accuracy of 71.6%.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 01:58:11 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Hui",
""
],
[
"Dong",
"Zhen",
""
],
[
"Wang",
"Siao",
""
],
[
"Zhang",
"Hui",
""
],
[
"Shen",
"Liwei",
""
],
[
"Peng",
"Xin",
""
],
[
"She",
"Dongdong",
""
]
] | TITLE: Extracting Formal Specifications from Documents Using LLMs for Automated
Testing
ABSTRACT: Automated testing plays a crucial role in ensuring software security. It
heavily relies on formal specifications to validate the correctness of the
system behavior. However, the main approach to defining these formal
specifications is through manual analysis of software documents, which requires
a significant amount of engineering effort from experienced researchers and
engineers. Meanwhile, system updates further increase the human labor cost to
maintain a corresponding formal specification, making the manual analysis
approach a time-consuming and error-prone task. Recent advances in Large
Language Models (LLMs) have demonstrated promising capabilities in natural
language understanding. Yet, the feasibility of using LLMs to automate the
extraction of formal specifications from software documents remains unexplored.
We conduct an empirical study by constructing a comprehensive dataset
comprising 603 specifications from 37 documents across three representative
open-source software. We then evaluate the most recent LLMs' capabilities in
extracting formal specifications from documents in an end-to-end fashion,
including GPT-4o, Claude, and Llama. Our study demonstrates the application of
LLMs in formal specification extraction tasks while identifying two major
limitations: specification oversimplification and specification fabrication. We
attribute these deficiencies to the LLMs' inherent limitations in processing
and expressive capabilities, as well as their tendency to fabricate fictional
information. Inspired by human cognitive processes, we propose a two-stage
method, annotation-then-conversion, to address these challenges. Our method
demonstrates significant improvements over the end-to-end method, with a 29.2%
increase in the number of correctly extracted specifications and a 14.0%
improvement in average accuracy. In particular, our best-performing LLM
achieves an accuracy of 71.6%.
|
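The proposed annotation-then-conversion method can be sketched as a two-stage pipeline over any prompt-to-text callable. The prompts are illustrative placeholders, not the paper's.

```python
def extract_specifications(llm, document: str) -> list[str]:
    """Two-stage annotation-then-conversion extraction, per the paper's idea:
    stage 1 marks candidate requirement sentences, stage 2 converts each into
    a formal specification. Prompt wording is an assumed placeholder."""
    annotated = llm("Mark every sentence in this document that states a "
                    "testable requirement, one per line:\n" + document)
    specs = []
    for sentence in filter(None, map(str.strip, annotated.splitlines())):
        specs.append(llm("Convert this requirement into a formal "
                         "specification (temporal-logic style):\n" + sentence))
    return specs
```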
2504.01296 | Bairu Hou | Bairu Hou, Yang Zhang, Jiabao Ji, Yujian Liu, Kaizhi Qian, Jacob
Andreas, Shiyu Chang | ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement
Learning | 15 pages, 7 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present ThinkPrune, a simple yet effective method for pruning the thinking
length of long-thinking LLMs, which have been found to often produce
inefficient and redundant thinking processes. Existing preliminary explorations
of reducing thinking length primarily focus on forcing the thinking process to
early exit, rather than adapting the LLM to optimize and consolidate the
thinking process, and therefore the length-performance tradeoff observed so far
is sub-optimal. To fill this gap, ThinkPrune offers a simple solution that
continuously trains the long-thinking LLMs via reinforcement learning (RL) with
an added token limit, beyond which any unfinished thoughts and answers will be
discarded, resulting in a zero reward. To further preserve model performance,
we introduce an iterative length pruning approach, where multiple rounds of RL
are conducted, each with an increasingly more stringent token limit. We
observed that ThinkPrune results in a remarkable performance-length tradeoff --
on the AIME24 dataset, the reasoning length of DeepSeek-R1-Distill-Qwen-1.5B
can be reduced by half with only 2% drop in performance. We also observed that
after pruning, the LLMs can bypass unnecessary steps while keeping the core
reasoning process complete. Code is available at
https://github.com/UCSB-NLP-Chang/ThinkPrune.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 01:59:26 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Hou",
"Bairu",
""
],
[
"Zhang",
"Yang",
""
],
[
"Ji",
"Jiabao",
""
],
[
"Liu",
"Yujian",
""
],
[
"Qian",
"Kaizhi",
""
],
[
"Andreas",
"Jacob",
""
],
[
"Chang",
"Shiyu",
""
]
] | TITLE: ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement
Learning
ABSTRACT: We present ThinkPrune, a simple yet effective method for pruning the thinking
length of long-thinking LLMs, which have been found to often produce
inefficient and redundant thinking processes. Existing preliminary explorations
of reducing thinking length primarily focus on forcing the thinking process to
early exit, rather than adapting the LLM to optimize and consolidate the
thinking process, and therefore the length-performance tradeoff observed so far
is sub-optimal. To fill this gap, ThinkPrune offers a simple solution that
continuously trains the long-thinking LLMs via reinforcement learning (RL) with
an added token limit, beyond which any unfinished thoughts and answers will be
discarded, resulting in a zero reward. To further preserve model performance,
we introduce an iterative length pruning approach, where multiple rounds of RL
are conducted, each with an increasingly more stringent token limit. We
observed that ThinkPrune results in a remarkable performance-length tradeoff --
on the AIME24 dataset, the reasoning length of DeepSeek-R1-Distill-Qwen-1.5B
can be reduced by half with only 2% drop in performance. We also observed that
after pruning, the LLMs can bypass unnecessary steps while keeping the core
reasoning process complete. Code is available at
https://github.com/UCSB-NLP-Chang/ThinkPrune.
|
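A minimal sketch of the length-clipped RL reward described in the ThinkPrune abstract above, in Python; the verifier is_correct and the limit schedule are hypothetical stand-ins, not the authors' code:

    def clipped_reward(response_tokens, final_answer, token_limit, is_correct):
        """Any rollout exceeding the token limit is discarded: unfinished
        thoughts/answers past the cap earn a reward of exactly zero."""
        if len(response_tokens) > token_limit:
            return 0.0
        return 1.0 if is_correct(final_answer) else 0.0

    # Iterative pruning: several RL rounds, each with a tighter cap
    # (illustrative schedule, not the paper's).
    for token_limit in (4096, 3072, 2048):
        pass  # run one RL round using clipped_reward(..., token_limit)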
2504.01321 | Chunhui Zhang | Chunhui Zhang, Li Liu, Jialin Gao, Xin Sun, Hao Wen, Xi Zhou, Shiming
Ge, Yanfeng Wang | COST: Contrastive One-Stage Transformer for Vision-Language Small Object
Tracking | Preprint submitted to Elsevier.
https://github.com/983632847/Awesome-Multimodal-Object-Tracking | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformers have recently demonstrated great potential in improving
vision-language (VL) tracking algorithms. However, most of the existing VL
trackers rely on carefully designed mechanisms to perform multi-stage
multi-modal fusion. Additionally, direct multi-modal fusion without alignment
ignores the distribution discrepancy between modalities in feature space,
potentially leading to suboptimal representations. In this work, we propose
COST, a contrastive one-stage transformer fusion framework for VL tracking,
aiming to learn semantically consistent and unified VL representations.
Specifically, we introduce a contrastive alignment strategy that maximizes
mutual information (MI) between a video and its corresponding language
description. This enables effective cross-modal alignment, yielding
semantically consistent features in the representation space. By leveraging a
visual-linguistic transformer, we establish an efficient multi-modal fusion and
reasoning mechanism, empirically demonstrating that a simple stack of
transformer encoders effectively enables unified VL representations. Moreover,
we contribute a newly collected VL tracking benchmark dataset for small object
tracking, named VL-SOT500, with bounding boxes and language descriptions. Our
dataset comprises two challenging subsets, VL-SOT230 and VL-SOT270, dedicated
to evaluating generic and high-speed small object tracking, respectively. Small
object tracking is notoriously challenging due to weak appearance and limited
features, and this dataset is, to the best of our knowledge, the first to
explore the usage of language cues to enhance visual representation for small
object tracking. Extensive experiments demonstrate that COST achieves
state-of-the-art performance on five existing VL tracking datasets, as well as
on our proposed VL-SOT500 dataset. Source codes and dataset will be made
publicly available.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 03:12:38 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhang",
"Chunhui",
""
],
[
"Liu",
"Li",
""
],
[
"Gao",
"Jialin",
""
],
[
"Sun",
"Xin",
""
],
[
"Wen",
"Hao",
""
],
[
"Zhou",
"Xi",
""
],
[
"Ge",
"Shiming",
""
],
[
"Wang",
"Yanfeng",
""
]
] | TITLE: COST: Contrastive One-Stage Transformer for Vision-Language Small Object
Tracking
ABSTRACT: Transformers have recently demonstrated great potential in improving
vision-language (VL) tracking algorithms. However, most of the existing VL
trackers rely on carefully designed mechanisms to perform multi-stage
multi-modal fusion. Additionally, direct multi-modal fusion without alignment
ignores the distribution discrepancy between modalities in feature space,
potentially leading to suboptimal representations. In this work, we propose
COST, a contrastive one-stage transformer fusion framework for VL tracking,
aiming to learn semantically consistent and unified VL representations.
Specifically, we introduce a contrastive alignment strategy that maximizes
mutual information (MI) between a video and its corresponding language
description. This enables effective cross-modal alignment, yielding
semantically consistent features in the representation space. By leveraging a
visual-linguistic transformer, we establish an efficient multi-modal fusion and
reasoning mechanism, empirically demonstrating that a simple stack of
transformer encoders effectively enables unified VL representations. Moreover,
we contribute a newly collected VL tracking benchmark dataset for small object
tracking, named VL-SOT500, with bounding boxes and language descriptions. Our
dataset comprises two challenging subsets, VL-SOT230 and VL-SOT270, dedicated
to evaluating generic and high-speed small object tracking, respectively. Small
object tracking is notoriously challenging due to weak appearance and limited
features, and this dataset is, to the best of our knowledge, the first to
explore the usage of language cues to enhance visual representation for small
object tracking. Extensive experiments demonstrate that COST achieves
state-of-the-art performance on five existing VL tracking datasets, as well as
on our proposed VL-SOT500 dataset. Source codes and dataset will be made
publicly available.
|
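The contrastive alignment in COST maximizes mutual information between paired video and language embeddings; below is a standard symmetric InfoNCE bound sketched in PyTorch, a generic stand-in rather than the paper's exact estimator:

    import torch
    import torch.nn.functional as F

    def contrastive_alignment_loss(video_emb, text_emb, temperature=0.07):
        """Symmetric InfoNCE over a batch of paired video/language embeddings;
        maximizing this MI lower bound pulls matched pairs together."""
        v = F.normalize(video_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        logits = v @ t.T / temperature            # (B, B) similarity matrix
        targets = torch.arange(v.size(0))         # diagonal = positives
        return 0.5 * (F.cross_entropy(logits, targets)
                      + F.cross_entropy(logits.T, targets))

    # Toy usage with random features standing in for tracker embeddings.
    loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))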
2504.01348 | Yuji Nozawa | Yuji Nozawa, Yu-Chieh Lin, Kazumoto Nakamura, Youyang Ng | Prompt-Guided Attention Head Selection for Focus-Oriented Image
Retrieval | Accepted to CVPR 2025 PixFoundation Workshop | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to enhance pretrained Vision Transformer (ViT)
models for focus-oriented image retrieval with visual prompting. In real-world
image retrieval scenarios, both query and database images often exhibit
complexity, with multiple objects and intricate backgrounds. Users often want
to retrieve images with specific object, which we define as the Focus-Oriented
Image Retrieval (FOIR) task. While a standard image encoder can be employed to
extract image features for similarity matching, it may not perform optimally in
the multi-object-based FOIR task. This is because each image is represented by
a single global feature vector. To overcome this, a prompt-based image
retrieval solution is required. We propose an approach called Prompt-guided
attention Head Selection (PHS) to leverage the head-wise potential of the
multi-head attention mechanism in ViT in a promptable manner. PHS selects
specific attention heads by matching their attention maps with the user's visual
prompts, such as a point, box, or segmentation. This empowers the model to
focus on the specific object of interest while preserving the surrounding visual
context. Notably, PHS does not necessitate model re-training and avoids any
image alteration. Experimental results show that PHS substantially improves
performance on multiple datasets, offering a practical and training-free
solution to enhance model performance in the FOIR task.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 04:33:27 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Nozawa",
"Yuji",
""
],
[
"Lin",
"Yu-Chieh",
""
],
[
"Nakamura",
"Kazumoto",
""
],
[
"Ng",
"Youyang",
""
]
] | TITLE: Prompt-Guided Attention Head Selection for Focus-Oriented Image
Retrieval
ABSTRACT: The goal of this paper is to enhance pretrained Vision Transformer (ViT)
models for focus-oriented image retrieval with visual prompting. In real-world
image retrieval scenarios, both query and database images often exhibit
complexity, with multiple objects and intricate backgrounds. Users often want
to retrieve images containing a specific object, which we define as the Focus-Oriented
Image Retrieval (FOIR) task. While a standard image encoder can be employed to
extract image features for similarity matching, it may not perform optimally in
the multi-object-based FOIR task. This is because each image is represented by
a single global feature vector. To overcome this, a prompt-based image
retrieval solution is required. We propose an approach called Prompt-guided
attention Head Selection (PHS) to leverage the head-wise potential of the
multi-head attention mechanism in ViT in a promptable manner. PHS selects
specific attention heads by matching their attention maps with the user's visual
prompts, such as a point, box, or segmentation. This empowers the model to
focus on the specific object of interest while preserving the surrounding visual
context. Notably, PHS does not necessitate model re-training and avoids any
image alteration. Experimental results show that PHS substantially improves
performance on multiple datasets, offering a practical and training-free
solution to enhance model performance in the FOIR task.
|
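PHS's head selection can be pictured as scoring each attention head by how much of its attention mass falls inside the user's prompt region; a NumPy sketch under that assumption (the shapes and the top-k rule are illustrative, not the paper's exact matching criterion):

    import numpy as np

    def select_heads(attn_maps, prompt_mask, top_k=3):
        """attn_maps: (H, h, w) per-head attention over image patches;
        prompt_mask: (h, w) binary mask from a point/box/segmentation prompt.
        Returns indices of the heads whose attention best matches the prompt."""
        weights = prompt_mask / (prompt_mask.sum() + 1e-8)
        scores = (attn_maps * weights[None]).sum(axis=(1, 2))
        return np.argsort(scores)[::-1][:top_k]

    mask = np.zeros((14, 14))
    mask[4:9, 4:9] = 1.0                              # toy box prompt
    heads = select_heads(np.random.rand(12, 14, 14), mask)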
2504.01357 | Ruihao Du | Ruihao Du, Zeshen Li, Howard H. Yang | Age-Aware Partial Gradient Update Strategy for Federated Learning Over
the Air | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an age-aware strategy to update gradients in an over-the-air
federated learning system. The system comprises an edge server and multiple
clients, collaborating to minimize a global loss function. In each
communication round, clients perform local training, modulate their gradient
updates onto a set of shared orthogonal waveforms, and simultaneously transmit
the analog signals to the edge server. The edge server then extracts a noisy
aggregated gradient from the received radio signal, updates the global model,
and broadcasts it to the clients for the next round of local computing. Despite
enabling all clients to upload information in every communication round, the
system is constrained by the limited number of available waveform carriers,
allowing only a subset of gradient parameters to be transmitted. To address
this issue, our method maintains an age vector on the edge server, tracking the
time elapsed since each coordinate of the global model was last updated. The
server leverages this information to prioritize gradient entries for
transmission, ensuring that outdated yet significant parameters are updated
more frequently. We derive the convergence rate of the proposed algorithm to
quantify its effectiveness. Furthermore, experimental evaluations on the MNIST
and CIFAR-10 datasets demonstrate that our approach achieves higher accuracy
and more stable convergence performance compared to baseline methods,
highlighting its potential for improving communication efficiency in
over-the-air federated learning systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:01:53 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Du",
"Ruihao",
""
],
[
"Li",
"Zeshen",
""
],
[
"Yang",
"Howard H.",
""
]
] | TITLE: Age-Aware Partial Gradient Update Strategy for Federated Learning Over
the Air
ABSTRACT: We propose an age-aware strategy to update gradients in an over-the-air
federated learning system. The system comprises an edge server and multiple
clients, collaborating to minimize a global loss function. In each
communication round, clients perform local training, modulate their gradient
updates onto a set of shared orthogonal waveforms, and simultaneously transmit
the analog signals to the edge server. The edge server then extracts a noisy
aggregated gradient from the received radio signal, updates the global model,
and broadcasts it to the clients for the next round of local computing. Despite
enabling all clients to upload information in every communication round, the
system is constrained by the limited number of available waveform carriers,
allowing only a subset of gradient parameters to be transmitted. To address
this issue, our method maintains an age vector on the edge server, tracking the
time elapsed since each coordinate of the global model was last updated. The
server leverages this information to prioritize gradient entries for
transmission, ensuring that outdated yet significant parameters are updated
more frequently. We derive the convergence rate of the proposed algorithm to
quantify its effectiveness. Furthermore, experimental evaluations on the MNIST
and CIFAR-10 datasets demonstrate that our approach achieves higher accuracy
and more stable convergence performance compared to baseline methods,
highlighting its potential for improving communication efficiency in
over-the-air federated learning systems.
|
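The age-aware selection rule above can be sketched as follows; combining age with gradient magnitude is our assumption for prioritizing "outdated yet significant" parameters, and k models the limited number of waveform carriers:

    import numpy as np

    def age_aware_select(age, grad, k):
        """Prioritize coordinates that are both stale (large age) and
        significant (large gradient magnitude); only k carriers are available."""
        return np.argsort(age * np.abs(grad))[::-1][:k]

    d, k = 10, 3
    age = np.ones(d)
    for _ in range(5):                        # a few communication rounds
        grad = np.random.randn(d)             # stand-in for aggregated gradient
        sent = age_aware_select(age, grad, k)
        age += 1                              # every coordinate ages one round
        age[sent] = 0                         # transmitted entries are refreshed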
2504.01373 | Emadeldeen Eldele | Emadeldeen Eldele, Mohamed Ragab, Xu Qing, Edward, Zhenghua Chen, Min
Wu, Xiaoli Li, Jay Lee | UniFault: A Fault Diagnosis Foundation Model from Bearing Data | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine fault diagnosis (FD) is a critical task for predictive maintenance,
enabling early fault detection and preventing unexpected failures. Despite its
importance, existing FD models are operation-specific with limited
generalization across diverse datasets. Foundation models (FMs) have
demonstrated remarkable potential in both visual and language domains,
achieving impressive generalization capabilities even with minimal data through
few-shot or zero-shot learning. However, translating these advances to FD
presents unique hurdles. Unlike the large-scale, cohesive datasets available
for images and text, FD datasets are typically smaller and more heterogeneous,
with significant variations in sampling frequencies and the number of channels
across different systems and applications. This heterogeneity complicates the
design of a universal architecture capable of effectively processing such
diverse data while maintaining robust feature extraction and learning
capabilities. In this paper, we introduce UniFault, a foundation model for
fault diagnosis that systematically addresses these issues. Specifically, the
model incorporates a comprehensive data harmonization pipeline featuring two
key innovations. First, a unification scheme transforms multivariate inputs
into standardized univariate sequences while retaining local inter-channel
relationships. Second, a novel cross-domain temporal fusion strategy mitigates
distribution shifts and enriches sample diversity and count, improving the
model generalization across varying conditions. UniFault is pretrained on over
9 billion data points spanning diverse FD datasets, enabling superior few-shot
performance. Extensive experiments on real-world FD datasets demonstrate that
UniFault achieves SoTA performance, setting a new benchmark for fault diagnosis
models and paving the way for more scalable and robust predictive maintenance
solutions.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:34:27 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Eldele",
"Emadeldeen",
""
],
[
"Ragab",
"Mohamed",
""
],
[
"Qing",
"Xu",
""
],
[
"Edward",
"",
""
],
[
"Chen",
"Zhenghua",
""
],
[
"Wu",
"Min",
""
],
[
"Li",
"Xiaoli",
""
],
[
"Lee",
"Jay",
""
]
] | TITLE: UniFault: A Fault Diagnosis Foundation Model from Bearing Data
ABSTRACT: Machine fault diagnosis (FD) is a critical task for predictive maintenance,
enabling early fault detection and preventing unexpected failures. Despite its
importance, existing FD models are operation-specific with limited
generalization across diverse datasets. Foundation models (FMs) have
demonstrated remarkable potential in both visual and language domains,
achieving impressive generalization capabilities even with minimal data through
few-shot or zero-shot learning. However, translating these advances to FD
presents unique hurdles. Unlike the large-scale, cohesive datasets available
for images and text, FD datasets are typically smaller and more heterogeneous,
with significant variations in sampling frequencies and the number of channels
across different systems and applications. This heterogeneity complicates the
design of a universal architecture capable of effectively processing such
diverse data while maintaining robust feature extraction and learning
capabilities. In this paper, we introduce UniFault, a foundation model for
fault diagnosis that systematically addresses these issues. Specifically, the
model incorporates a comprehensive data harmonization pipeline featuring two
key innovations. First, a unification scheme transforms multivariate inputs
into standardized univariate sequences while retaining local inter-channel
relationships. Second, a novel cross-domain temporal fusion strategy mitigates
distribution shifts and enriches sample diversity and count, improving the
model generalization across varying conditions. UniFault is pretrained on over
9 billion data points spanning diverse FD datasets, enabling superior few-shot
performance. Extensive experiments on real-world FD datasets demonstrate that
UniFault achieves SoTA performance, setting a new benchmark for fault diagnosis
models and paving the way for more scalable and robust predictive maintenance
solutions.
|
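One plausible reading of UniFault's unification scheme is flattening a multivariate signal into one univariate sequence while keeping nearby samples of different channels adjacent; a NumPy sketch under that assumption (the window size and interleaving rule are guesses, not the paper's scheme):

    import numpy as np

    def unify(x, win=16):
        """x: (C, T) multivariate signal. Interleave per-channel windows so
        local inter-channel relationships survive the flattening."""
        C, T = x.shape
        T = T - T % win                               # drop the ragged tail
        blocks = x[:, :T].reshape(C, -1, win)         # (C, n_windows, win)
        return blocks.transpose(1, 0, 2).reshape(-1)  # ch1-win, ch2-win, ...

    seq = unify(np.random.randn(3, 64))               # -> shape (192,)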
2504.01383 | Chang-Bin Zhang | Chang-Bin Zhang, Jinhong Ni, Yujie Zhong, Kai Han | v-CLR: View-Consistent Learning for Open-World Instance Segmentation | Accepted by CVPR 2025, Project page:
https://visual-ai.github.io/vclr, Code: https://github.com/Visual-AI/vCLR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the challenging problem of open-world instance
segmentation. Existing works have shown that vanilla visual networks are biased
toward learning appearance information, e.g., texture, to recognize objects. This
implicit bias causes the model to fail in detecting novel objects with unseen
textures in the open-world setting. To address this challenge, we propose a
learning framework, called view-Consistent LeaRning (v-CLR), which aims to
enforce the model to learn appearance-invariant representations for robust
instance segmentation. In v-CLR, we first introduce additional views for each
image, where the texture undergoes significant alterations while preserving the
image's underlying structure. We then encourage the model to learn the
appearance-invariant representation by enforcing the consistency between object
features across different views, for which we obtain class-agnostic object
proposals using off-the-shelf unsupervised models that possess strong
object-awareness. These proposals enable cross-view object feature matching,
greatly reducing the appearance dependency while enhancing the
object-awareness. We thoroughly evaluate our method on public benchmarks under
both cross-class and cross-dataset settings, achieving state-of-the-art
performance. Project page: https://visual-ai.github.io/vclr
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:52:30 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhang",
"Chang-Bin",
""
],
[
"Ni",
"Jinhong",
""
],
[
"Zhong",
"Yujie",
""
],
[
"Han",
"Kai",
""
]
] | TITLE: v-CLR: View-Consistent Learning for Open-World Instance Segmentation
ABSTRACT: In this paper, we address the challenging problem of open-world instance
segmentation. Existing works have shown that vanilla visual networks are biased
toward learning appearance information, e.g., texture, to recognize objects. This
implicit bias causes the model to fail in detecting novel objects with unseen
textures in the open-world setting. To address this challenge, we propose a
learning framework, called view-Consistent LeaRning (v-CLR), which aims to
enforce the model to learn appearance-invariant representations for robust
instance segmentation. In v-CLR, we first introduce additional views for each
image, where the texture undergoes significant alterations while preserving the
image's underlying structure. We then encourage the model to learn the
appearance-invariant representation by enforcing the consistency between object
features across different views, for which we obtain class-agnostic object
proposals using off-the-shelf unsupervised models that possess strong
object-awareness. These proposals enable cross-view object feature matching,
greatly reducing the appearance dependency while enhancing the
object-awareness. We thoroughly evaluate our method on public benchmarks under
both cross-class and cross-dataset settings, achieving state-of-the-art
performance. Project page: https://visual-ai.github.io/vclr
|
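The cross-view consistency at the heart of v-CLR can be sketched as pulling together matched proposal features from an image and its texture-altered view; the cosine objective below is a stand-in for the paper's actual loss:

    import torch
    import torch.nn.functional as F

    def view_consistency_loss(feats_view_a, feats_view_b):
        """feats_view_a/b: (N, D) features of N matched object proposals from
        the original and the texture-altered view. Minimizing the cosine gap
        encourages appearance-invariant representations."""
        a = F.normalize(feats_view_a, dim=-1)
        b = F.normalize(feats_view_b, dim=-1)
        return (1 - (a * b).sum(dim=-1)).mean()

    loss = view_consistency_loss(torch.randn(5, 128), torch.randn(5, 128))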
2504.01386 | Junjie Wu | Junjie Wu, Jiangtao Xie, Zhaolin Zhang, Qilong Wang, Qinghua Hu,
Peihua Li, Sen Xu | DALIP: Distribution Alignment-based Language-Image Pre-Training for
Domain-Specific Data | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Contrastive Language-Image Pre-training (CLIP) has shown promising
performance in domain-specific data (e.g., biology), and has attracted
increasing research attention. Existing works generally focus on collecting
extensive domain-specific data and directly tuning the original CLIP models.
Intuitively, such a paradigm takes no full consideration of the characteristics
lying in domain-specific data (e.g., fine-grained nature of biological data)
and so limits model capability, while mostly losing the original ability of
CLIP in the general domain. In this paper, we propose a Distribution
Alignment-based Language-Image Pre-Training (DALIP) method for biological data.
Specifically, DALIP optimizes CLIP models by matching the similarity between
feature distributions of image-text pairs instead of the original [cls] token,
which can capture rich yet effective information inherent in image-text pairs
as powerful representations, and so better cope with the fine-grained nature of
biological data. Particularly, our DALIP efficiently approximates feature
distribution via its first- and second-order statistics, while presenting a
Multi-head Brownian Distance Covariance (MBDC) module to acquire second-order
statistics of token features efficiently. Furthermore, we collect a new dataset
for the plant domain (a specific domain within biology) comprising 10M
plant data with 3M general-domain data (namely PlantMix-13M) according to data
mixing laws. Extensive experiments show that DALIP clearly outperforms existing
CLIP counterparts in the biological domain, while generalizing well to remote
sensing and medical imaging domains. Besides, our PlantMix-13M dataset further
boosts the performance of DALIP in the plant domain, while preserving model
ability in the general domain.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:56:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wu",
"Junjie",
""
],
[
"Xie",
"Jiangtao",
""
],
[
"Zhang",
"Zhaolin",
""
],
[
"Wang",
"Qilong",
""
],
[
"Hu",
"Qinghua",
""
],
[
"Li",
"Peihua",
""
],
[
"Xu",
"Sen",
""
]
] | TITLE: DALIP: Distribution Alignment-based Language-Image Pre-Training for
Domain-Specific Data
ABSTRACT: Recently, Contrastive Language-Image Pre-training (CLIP) has shown promising
performance in domain-specific data (e.g., biology), and has attracted
increasing research attention. Existing works generally focus on collecting
extensive domain-specific data and directly tuning the original CLIP models.
Intuitively, such a paradigm takes no full consideration of the characteristics
lying in domain-specific data (e.g., fine-grained nature of biological data)
and so limits model capability, while mostly losing the original ability of
CLIP in the general domain. In this paper, we propose a Distribution
Alignment-based Language-Image Pre-Training (DALIP) method for biological data.
Specifically, DALIP optimizes CLIP models by matching the similarity between
feature distributions of image-text pairs instead of the original [cls] token,
which can capture rich yet effective information inherent in image-text pairs
as powerful representations, and so better cope with the fine-grained nature of
biological data. Particularly, our DALIP efficiently approximates feature
distribution via its first- and second-order statistics, while presenting a
Multi-head Brownian Distance Covariance (MBDC) module to acquire second-order
statistics of token features efficiently. Furthermore, we collect a new dataset
for the plant domain (a specific domain within biology) comprising 10M
plant data with 3M general-domain data (namely PlantMix-13M) according to data
mixing laws. Extensive experiments show that DALIP clearly outperforms existing
CLIP counterparts in the biological domain, while generalizing well to remote
sensing and medical imaging domains. Besides, our PlantMix-13M dataset further
boosts the performance of DALIP in the plant domain, while preserving model
ability in the general domain.
|
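A rough illustration of matching feature distributions via first- and second-order statistics, as DALIP does; the cosine-plus-Frobenius combination below is an illustrative stand-in for the paper's MBDC-based similarity:

    import torch

    def dist_stats(tokens):
        """tokens: (N, D) token features; return mean and covariance."""
        mu = tokens.mean(dim=0)
        xc = tokens - mu
        cov = xc.T @ xc / (tokens.shape[0] - 1)
        return mu, cov

    def distribution_similarity(img_tokens, txt_tokens, lam=0.01):
        """Compare image/text feature *distributions* rather than single [cls]
        vectors: cosine of means minus a Frobenius penalty on covariance gaps.
        (An assumed combination, not the paper's exact formula.)"""
        mu_i, cov_i = dist_stats(img_tokens)
        mu_t, cov_t = dist_stats(txt_tokens)
        mean_sim = torch.nn.functional.cosine_similarity(mu_i, mu_t, dim=0)
        return mean_sim - lam * torch.linalg.norm(cov_i - cov_t)

    s = distribution_similarity(torch.randn(50, 64), torch.randn(20, 64))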
2504.01395 | Kecen Li | Kecen Li, Chen Gong, Xiaochen Li, Yuzhong Zhao, Xinwen Hou, Tianhao
Wang | From Easy to Hard: Building a Shortcut for Differentially Private Image
Synthesis | Accepted at IEEE S&P (Oakland) 2025; code available at
https://github.com/SunnierLee/DP-FETA | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentially private (DP) image synthesis aims to generate synthetic images
from a sensitive dataset, alleviating the privacy leakage concerns of
organizations sharing and utilizing synthetic images. Although previous methods
have significantly progressed, especially in training diffusion models on
sensitive images with DP Stochastic Gradient Descent (DP-SGD), they still
suffer from unsatisfactory performance. In this work, inspired by curriculum
learning, we propose a two-stage DP image synthesis framework, where diffusion
models learn to generate DP synthetic images from easy to hard. Unlike existing
methods that directly use DP-SGD to train diffusion models, we propose an easy
stage in the beginning, where diffusion models learn simple features of the
sensitive images. To facilitate this easy stage, we propose to use `central
images', simply aggregations of random samples of the sensitive dataset.
Intuitively, although those central images do not show details, they
demonstrate useful characteristics of all images and only incur minimal privacy
costs, thus helping early-phase model training. We conduct experiments to
present that on the average of four investigated image datasets, the fidelity
and utility metrics of our synthetic images are 33.1% and 2.1% better than the
state-of-the-art method.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:30:55 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Kecen",
""
],
[
"Gong",
"Chen",
""
],
[
"Li",
"Xiaochen",
""
],
[
"Zhao",
"Yuzhong",
""
],
[
"Hou",
"Xinwen",
""
],
[
"Wang",
"Tianhao",
""
]
] | TITLE: From Easy to Hard: Building a Shortcut for Differentially Private Image
Synthesis
ABSTRACT: Differentially private (DP) image synthesis aims to generate synthetic images
from a sensitive dataset, alleviating the privacy leakage concerns of
organizations sharing and utilizing synthetic images. Although previous methods
have significantly progressed, especially in training diffusion models on
sensitive images with DP Stochastic Gradient Descent (DP-SGD), they still
suffer from unsatisfactory performance. In this work, inspired by curriculum
learning, we propose a two-stage DP image synthesis framework, where diffusion
models learn to generate DP synthetic images from easy to hard. Unlike existing
methods that directly use DP-SGD to train diffusion models, we propose an easy
stage in the beginning, where diffusion models learn simple features of the
sensitive images. To facilitate this easy stage, we propose to use `central
images', simply aggregations of random samples of the sensitive dataset.
Intuitively, although those central images do not show details, they
demonstrate useful characteristics of all images and only incur minimal privacy
costs, thus helping early-phase model training. Our experiments show that,
averaged over the four investigated image datasets, the fidelity
and utility metrics of our synthetic images are 33.1% and 2.1% better than the
state-of-the-art method.
|
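A toy version of the "central images" idea: aggregate random subsets of the sensitive set and add Gaussian noise. Averaging over many samples keeps per-image sensitivity low; the clipping, subset sizes, and privacy accounting of the actual method are not reproduced here:

    import numpy as np

    def central_images(dataset, n_central=10, subset_size=64, sigma=0.1,
                       seed=0):
        """dataset: (N, H, W, C) images in [0, 1]. Each central image is the
        noisy mean of a random subset (Gaussian mechanism, parameters are
        illustrative only)."""
        rng = np.random.default_rng(seed)
        out = []
        for _ in range(n_central):
            idx = rng.choice(len(dataset), size=subset_size, replace=False)
            avg = dataset[idx].mean(axis=0)
            out.append(np.clip(avg + rng.normal(0, sigma, avg.shape), 0, 1))
        return np.stack(out)

    centers = central_images(np.random.rand(500, 32, 32, 3))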
2504.01400 | Xingshan Zeng | Xingshan Zeng, Weiwen Liu, Xu Huang, Zezhong Wang, Lingzhi Wang,
Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Ruiming Tang, Qun Liu | ToolACE-R: Tool Learning with Adaptive Self-Refinement | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Tool learning, which allows Large Language Models (LLMs) to leverage external
tools for solving complex user tasks, has emerged as a promising avenue for
extending model capabilities. However, current approaches primarily focus on
data synthesis for fine-tuning LLMs to invoke tools effectively, largely
ignoring how to fully unlock the potential of the model. In this paper, we
propose ToolACE-R, a novel method that introduces adaptive self-refinement for
tool invocations. Our approach features a model-aware iterative training
procedure that progressively incorporates more training samples based on the
model's evolving capabilities. Additionally, it allows LLMs to iteratively
refine their tool calls, optimizing performance without requiring external
feedback. To further enhance computational efficiency, we integrate an adaptive
mechanism when scaling the inference time, enabling the model to autonomously
determine when to stop the refinement process. We conduct extensive experiments
across several benchmark datasets, showing that ToolACE-R achieves competitive
performance compared to advanced API-based models, even without any refinement.
Furthermore, its performance can be further improved efficiently through
adaptive self-refinement. Our results demonstrate the effectiveness of the
proposed method, which is compatible with base models of various sizes,
offering a promising direction for more efficient tool learning.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:38:56 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zeng",
"Xingshan",
""
],
[
"Liu",
"Weiwen",
""
],
[
"Huang",
"Xu",
""
],
[
"Wang",
"Zezhong",
""
],
[
"Wang",
"Lingzhi",
""
],
[
"Li",
"Liangyou",
""
],
[
"Wang",
"Yasheng",
""
],
[
"Shang",
"Lifeng",
""
],
[
"Jiang",
"Xin",
""
],
[
"Tang",
"Ruiming",
""
],
[
"Liu",
"Qun",
""
]
] | TITLE: ToolACE-R: Tool Learning with Adaptive Self-Refinement
ABSTRACT: Tool learning, which allows Large Language Models (LLMs) to leverage external
tools for solving complex user tasks, has emerged as a promising avenue for
extending model capabilities. However, current approaches primarily focus on
data synthesis for fine-tuning LLMs to invoke tools effectively, largely
ignoring how to fully unlock the potential of the model. In this paper, we
propose ToolACE-R, a novel method that introduces adaptive self-refinement for
tool invocations. Our approach features a model-aware iterative training
procedure that progressively incorporates more training samples based on the
model's evolving capabilities. Additionally, it allows LLMs to iteratively
refine their tool calls, optimizing performance without requiring external
feedback. To further enhance computational efficiency, we integrate an adaptive
mechanism when scaling the inference time, enabling the model to autonomously
determine when to stop the refinement process. We conduct extensive experiments
across several benchmark datasets, showing that ToolACE-R achieves competitive
performance compared to advanced API-based models, even without any refinement.
Furthermore, its performance can be further improved efficiently through
adaptive self-refinement. Our results demonstrate the effectiveness of the
proposed method, which is compatible with base models of various sizes,
offering a promising direction for more efficient tool learning.
|
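The adaptive self-refinement loop can be pictured as below; llm and the "OK" stopping convention are hypothetical stand-ins, not ToolACE-R's actual interface:

    def refine_tool_call(llm, task, max_rounds=3):
        """Iterative self-refinement without external feedback: the model
        re-reads its own previous tool call and either improves it or signals
        convergence (the adaptive stop)."""
        call = llm(f"Task: {task}\nPropose a tool call.")
        for _ in range(max_rounds):
            verdict = llm(f"Task: {task}\nCall: {call}\n"
                          "Reply OK if correct, otherwise output a refined call.")
            if verdict.strip() == "OK":   # model decides when to stop refining
                break
            call = verdict
        return call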
2504.01404 | Lingxiao Tang | Lingxiao Tang, Jiakun Liu, Zhongxin Liu, Xiaohu Yang, Lingfeng Bao | LLM4SZZ: Enhancing SZZ Algorithm with Context-Enhanced Assessment on
Large Language Models | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SZZ algorithm is the dominant technique for identifying bug-inducing
commits and serves as a foundation for many software engineering studies, such
as bug prediction and static code analysis. Researchers have proposed many
variants to enhance the SZZ algorithm's performance since its introduction. The
majority of them rely on static techniques or heuristic assumptions, making
them easy to implement, but their performance improvements are often limited.
Recently, a deep learning-based SZZ algorithm has been introduced to enhance
the original SZZ algorithm. However, it requires complex preprocessing and is
restricted to a single programming language. Additionally, while it enhances
precision, it sacrifices recall. Furthermore, most variants overlook crucial
information, such as commit messages and patch context, and are limited to
bug-fixing commits involving deleted lines. The emergence of large language
models (LLMs) offers an opportunity to address these drawbacks. In this study,
we investigate the strengths and limitations of LLMs and propose LLM4SZZ, which
employs two approaches (i.e., rank-based identification and context-enhanced
identification) to handle different types of bug-fixing commits. We determine
which approach to adopt based on the LLM's ability to comprehend the bug and
identify whether the bug is present in a commit. The context-enhanced
identification provides the LLM with more context and requires it to find the
bug-inducing commit among a set of candidate commits. In rank-based
identification, we ask the LLM to select buggy statements from the bug-fixing
commit and rank them based on their relevance to the root cause. Experimental
results show that LLM4SZZ outperforms all baselines across three datasets,
improving F1-score by 6.9% to 16.0% without significantly sacrificing recall.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:40:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Tang",
"Lingxiao",
""
],
[
"Liu",
"Jiakun",
""
],
[
"Liu",
"Zhongxin",
""
],
[
"Yang",
"Xiaohu",
""
],
[
"Bao",
"Lingfeng",
""
]
] | TITLE: LLM4SZZ: Enhancing SZZ Algorithm with Context-Enhanced Assessment on
Large Language Models
ABSTRACT: The SZZ algorithm is the dominant technique for identifying bug-inducing
commits and serves as a foundation for many software engineering studies, such
as bug prediction and static code analysis. Researchers have proposed many
variants to enhance the SZZ algorithm's performance since its introduction. The
majority of them rely on static techniques or heuristic assumptions, making
them easy to implement, but their performance improvements are often limited.
Recently, a deep learning-based SZZ algorithm has been introduced to enhance
the original SZZ algorithm. However, it requires complex preprocessing and is
restricted to a single programming language. Additionally, while it enhances
precision, it sacrifices recall. Furthermore, most variants overlook crucial
information, such as commit messages and patch context, and are limited to
bug-fixing commits involving deleted lines. The emergence of large language
models (LLMs) offers an opportunity to address these drawbacks. In this study,
we investigate the strengths and limitations of LLMs and propose LLM4SZZ, which
employs two approaches (i.e., rank-based identification and context-enhanced
identification) to handle different types of bug-fixing commits. We determine
which approach to adopt based on the LLM's ability to comprehend the bug and
identify whether the bug is present in a commit. The context-enhanced
identification provides the LLM with more context and requires it to find the
bug-inducing commit among a set of candidate commits. In rank-based
identification, we ask the LLM to select buggy statements from the bug-fixing
commit and rank them based on their relevance to the root cause. Experimental
results show that LLM4SZZ outperforms all baselines across three datasets,
improving F1-score by 6.9% to 16.0% without significantly sacrificing recall.
|
2504.01416 | Huai Yu | Shu Han, Xubo Zhu, Ji Wu, Ximeng Cai, Wen Yang, Huai Yu, Gui-Song Xia | DF-Calib: Targetless LiDAR-Camera Calibration via Depth Flow | 7 pages, 3 figures | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Precise LiDAR-camera calibration is crucial for integrating these two sensors
into robotic systems to achieve robust perception. In applications like
autonomous driving, online targetless calibration enables prompt correction of
sensor misalignment caused by mechanical vibrations, without extra targets.
However, existing methods exhibit limitations in effectively extracting
consistent features from LiDAR and camera data and fail to prioritize salient
regions, compromising cross-modal alignment robustness. To address these
issues, we propose DF-Calib, a LiDAR-camera calibration method that
reformulates calibration as an intra-modality depth flow estimation problem.
DF-Calib estimates a dense depth map from the camera image and completes the
sparse LiDAR projected depth map, using a shared feature encoder to extract
consistent depth-to-depth features, effectively bridging the 2D-3D cross-modal
gap. Additionally, we introduce a reliability map to prioritize valid pixels
and propose a perceptually weighted sparse flow loss to enhance depth flow
estimation. Experimental results across multiple datasets validate its accuracy
and generalization, with DF-Calib achieving a mean translation error of 0.635 cm
and rotation error of 0.045 degrees on the KITTI dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:09:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Han",
"Shu",
""
],
[
"Zhu",
"Xubo",
""
],
[
"Wu",
"Ji",
""
],
[
"Cai",
"Ximeng",
""
],
[
"Yang",
"Wen",
""
],
[
"Yu",
"Huai",
""
],
[
"Xia",
"Gui-Song",
""
]
] | TITLE: DF-Calib: Targetless LiDAR-Camera Calibration via Depth Flow
ABSTRACT: Precise LiDAR-camera calibration is crucial for integrating these two sensors
into robotic systems to achieve robust perception. In applications like
autonomous driving, online targetless calibration enables prompt correction of
sensor misalignment caused by mechanical vibrations, without extra targets.
However, existing methods exhibit limitations in effectively extracting
consistent features from LiDAR and camera data and fail to prioritize salient
regions, compromising cross-modal alignment robustness. To address these
issues, we propose DF-Calib, a LiDAR-camera calibration method that
reformulates calibration as an intra-modality depth flow estimation problem.
DF-Calib estimates a dense depth map from the camera image and completes the
sparse LiDAR projected depth map, using a shared feature encoder to extract
consistent depth-to-depth features, effectively bridging the 2D-3D cross-modal
gap. Additionally, we introduce a reliability map to prioritize valid pixels
and propose a perceptually weighted sparse flow loss to enhance depth flow
estimation. Experimental results across multiple datasets validate its accuracy
and generalization, with DF-Calib achieving a mean translation error of 0.635 cm
and rotation error of 0.045 degrees on the KITTI dataset.
|
2504.01420 | Yicheng Fu | Athena Wen, Tanush Patil, Ansh Saxena, Yicheng Fu, Sean O'Brien, Kevin
Zhu | FAIRE: Assessing Racial and Gender Bias in AI-Driven Resume Evaluations | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In an era where AI-driven hiring is transforming recruitment practices,
concerns about fairness and bias have become increasingly important. To explore
these issues, we introduce a benchmark, FAIRE (Fairness Assessment In Resume
Evaluation), to test for racial and gender bias in large language models (LLMs)
used to evaluate resumes across different industries. We use two methods, direct
scoring and ranking, to measure how model performance changes when resumes are
slightly altered to reflect different racial or gender identities. Our findings
reveal that while every model exhibits some degree of bias, the magnitude and
direction vary considerably. This benchmark provides a clear way to examine
these differences and offers valuable insights into the fairness of AI-based
hiring tools. It highlights the urgent need for strategies to reduce bias in
AI-driven recruitment. Our benchmark code and dataset are open-sourced at our
repository:
https://github.com/athenawen/FAIRE-Fairness-Assessment-In-Resume-Evaluation.git.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:11:30 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wen",
"Athena",
""
],
[
"Patil",
"Tanush",
""
],
[
"Saxena",
"Ansh",
""
],
[
"Fu",
"Yicheng",
""
],
[
"O'Brien",
"Sean",
""
],
[
"Zhu",
"Kevin",
""
]
] | TITLE: FAIRE: Assessing Racial and Gender Bias in AI-Driven Resume Evaluations
ABSTRACT: In an era where AI-driven hiring is transforming recruitment practices,
concerns about fairness and bias have become increasingly important. To explore
these issues, we introduce a benchmark, FAIRE (Fairness Assessment In Resume
Evaluation), to test for racial and gender bias in large language models (LLMs)
used to evaluate resumes across different industries. We use two methods, direct
scoring and ranking, to measure how model performance changes when resumes are
slightly altered to reflect different racial or gender identities. Our findings
reveal that while every model exhibits some degree of bias, the magnitude and
direction vary considerably. This benchmark provides a clear way to examine
these differences and offers valuable insights into the fairness of AI-based
hiring tools. It highlights the urgent need for strategies to reduce bias in
AI-driven recruitment. Our benchmark code and dataset are open-sourced at our
repository:
https://github.com/athenawen/FAIRE-Fairness-Assessment-In-Resume-Evaluation.git.
|
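The direct-scoring probe amounts to measuring score shifts under identity-only perturbations; a schematic sketch where score_fn and swap_identity are hypothetical stand-ins for the LLM scorer and the resume perturber:

    import statistics

    def score_gap(score_fn, resumes, swap_identity):
        """Score each resume, swap only the identity cues (e.g., the name),
        rescore, and report the mean gap; nonzero gaps indicate bias."""
        gaps = [score_fn(swap_identity(r)) - score_fn(r) for r in resumes]
        return statistics.mean(gaps)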
2504.01428 | Zhuangzhuang Chen | Zhuangzhuang Chen, Hualiang Wang, Chubin Ou, Xiaomeng Li | MuTri: Multi-view Tri-alignment for OCT to OCTA 3D Image Translation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optical coherence tomography angiography (OCTA) is of great importance in
imaging microvascular networks by providing accurate 3D imaging of blood
vessels, but it relies upon specialized sensors and expensive devices. For this
reason, previous works show the potential to translate the readily available 3D
Optical Coherence Tomography (OCT) images into 3D OCTA images. However,
existing OCTA translation methods directly learn the mapping from the OCT
domain to the OCTA domain in continuous and infinite space with guidance from
only a single view, i.e., the OCTA project map, resulting in suboptimal
results. To this end, we propose the multi-view Tri-alignment framework for OCT
to OCTA 3D image translation in discrete and finite space, named MuTri. In the
first stage, we pre-train two vector-quantized variational auto-encoders
(VQ-VAEs) by reconstructing 3D OCT and 3D OCTA data, providing semantic priors
for subsequent multi-view guidance. In the second stage, our multi-view
tri-alignment facilitates another VQVAE model to learn the mapping from the OCT
domain to the OCTA domain in discrete and finite space. Specifically, a
contrastive-inspired semantic alignment is proposed to maximize the mutual
information with the pre-trained models from OCT and OCTA views, to facilitate
codebook learning. Meanwhile, a vessel structure alignment is proposed to
minimize the structure discrepancy with the pre-trained models from the OCTA
project map view, benefiting from learning the detailed vessel structure
information. We also collect the first large-scale dataset, namely, OCTA2024,
which contains a pair of OCT and OCTA volumes from 846 subjects.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:28:09 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Chen",
"Zhuangzhuang",
""
],
[
"Wang",
"Hualiang",
""
],
[
"Ou",
"Chubin",
""
],
[
"Li",
"Xiaomeng",
""
]
] | TITLE: MuTri: Multi-view Tri-alignment for OCT to OCTA 3D Image Translation
ABSTRACT: Optical coherence tomography angiography (OCTA) is of great importance in
imaging microvascular networks by providing accurate 3D imaging of blood
vessels, but it relies upon specialized sensors and expensive devices. For this
reason, previous works show the potential to translate the readily available 3D
Optical Coherence Tomography (OCT) images into 3D OCTA images. However,
existing OCTA translation methods directly learn the mapping from the OCT
domain to the OCTA domain in continuous and infinite space with guidance from
only a single view, i.e., the OCTA project map, resulting in suboptimal
results. To this end, we propose the multi-view Tri-alignment framework for OCT
to OCTA 3D image translation in discrete and finite space, named MuTri. In the
first stage, we pre-train two vector-quantized variational auto-encoders
(VQ-VAEs) by reconstructing 3D OCT and 3D OCTA data, providing semantic priors
for subsequent multi-view guidance. In the second stage, our multi-view
tri-alignment facilitates another VQVAE model to learn the mapping from the OCT
domain to the OCTA domain in discrete and finite space. Specifically, a
contrastive-inspired semantic alignment is proposed to maximize the mutual
information with the pre-trained models from OCT and OCTA views, to facilitate
codebook learning. Meanwhile, a vessel structure alignment is proposed to
minimize the structure discrepancy with the pre-trained models from the OCTA
project map view, benefiting from learning the detailed vessel structure
information. We also collect the first large-scale dataset, namely, OCTA2024,
which contains a pair of OCT and OCTA volumes from 846 subjects.
|
2504.01431 | Hao Zhu | Hao Zhu, Shengchao Yan, Jasper Hoffmann, Joschka Boedecker | Multi-convex Programming for Discrete Latent Factor Models Prototyping | null | null | null | null | math.OC cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Discrete latent factor models (DLFMs) are widely used in various domains such
as machine learning, economics, neuroscience, psychology, etc. Currently,
fitting a DLFM to some dataset relies on a customized solver for individual
models, which requires lots of effort to implement and is limited to the
targeted specific instance of DLFMs. In this paper, we propose a generic
framework based on CVXPY, which allows users to specify and solve the fitting
problem of a wide range of DLFMs, including both regression and classification
models, within a very short script. Our framework is flexible and inherently
supports the integration of regularization terms and constraints on the DLFM
parameters and latent factors, such that the users can easily prototype the
DLFM structure according to their dataset and application scenario. We
introduce our open-source Python implementation and illustrate the framework in
several examples.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:33:54 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhu",
"Hao",
""
],
[
"Yan",
"Shengchao",
""
],
[
"Hoffmann",
"Jasper",
""
],
[
"Boedecker",
"Joschka",
""
]
] | TITLE: Multi-convex Programming for Discrete Latent Factor Models Prototyping
ABSTRACT: Discrete latent factor models (DLFMs) are widely used in various domains such
as machine learning, economics, neuroscience, psychology, etc. Currently,
fitting a DLFM to a dataset relies on a customized solver for each individual
model, which requires substantial implementation effort and is limited to the
specific DLFM instance targeted. In this paper, we propose a generic
framework based on CVXPY, which allows users to specify and solve the fitting
problem of a wide range of DLFMs, including both regression and classification
models, within a very short script. Our framework is flexible and inherently
supports the integration of regularization terms and constraints on the DLFM
parameters and latent factors, such that the users can easily prototype the
DLFM structure according to their dataset and application scenario. We
introduce our open-source Python implementation and illustrate the framework in
several examples.
|
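Since such fitting problems are multi-convex, a standard recipe is alternating convex minimization, each step a plain CVXPY solve. A generic matrix-factorization sketch in that spirit (this is not the framework's actual API; the regularizer and sizes are illustrative):

    import cvxpy as cp
    import numpy as np

    # Fit X ~ L @ R: jointly non-convex, but convex in L for fixed R and
    # vice versa, so we alternate two convex subproblems.
    rng = np.random.default_rng(0)
    X = rng.random((20, 15))
    k = 4
    R = rng.random((k, 15))

    for _ in range(10):
        Lv = cp.Variable((20, k), nonneg=True)        # solve for L, R fixed
        cp.Problem(cp.Minimize(cp.sum_squares(X - Lv @ R)
                               + 0.1 * cp.norm1(Lv))).solve()
        L = Lv.value
        Rv = cp.Variable((k, 15), nonneg=True)        # solve for R, L fixed
        cp.Problem(cp.Minimize(cp.sum_squares(X - L @ Rv))).solve()
        R = Rv.value

Constraints or extra penalties on the factors slot in by adding terms to either subproblem, which is the flexibility the abstract emphasizes.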
2504.01445 | Philipp Mondorf | Philipp Mondorf, Shijia Zhou, Monica Riedler, Barbara Plank | Enabling Systematic Generalization in Abstract Spatial Reasoning through
Meta-Learning for Compositionality | 30 pages, 14 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Systematic generalization refers to the capacity to understand and generate
novel combinations from known components. Despite recent progress by large
language models (LLMs) across various domains, these models often fail to
extend their knowledge to novel compositional scenarios, revealing notable
limitations in systematic generalization. There has been an ongoing debate
about whether neural networks possess the capacity for systematic
generalization, with recent studies suggesting that meta-learning approaches
designed for compositionality can significantly enhance this ability. However,
these insights have largely been confined to linguistic problems, leaving their
applicability to other tasks an open question. In this study, we extend the
approach of meta-learning for compositionality to the domain of abstract
spatial reasoning. To this end, we introduce $\textit{SYGAR}$, a dataset
designed to evaluate the capacity of models to systematically generalize from
known geometric transformations (e.g., translation, rotation) of
two-dimensional objects to novel combinations of these transformations (e.g.,
translation+rotation). Our results show that a transformer-based
encoder-decoder model, trained via meta-learning for compositionality, can
systematically generalize to previously unseen transformation compositions,
significantly outperforming state-of-the-art LLMs, including o3-mini, GPT-4o,
and Gemini 2.0 Flash, which fail to exhibit similar systematic behavior. Our
findings highlight the effectiveness of meta-learning in promoting
systematicity beyond linguistic tasks, suggesting a promising direction toward
more robust and generalizable models.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:56:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Mondorf",
"Philipp",
""
],
[
"Zhou",
"Shijia",
""
],
[
"Riedler",
"Monica",
""
],
[
"Plank",
"Barbara",
""
]
] | TITLE: Enabling Systematic Generalization in Abstract Spatial Reasoning through
Meta-Learning for Compositionality
ABSTRACT: Systematic generalization refers to the capacity to understand and generate
novel combinations from known components. Despite recent progress by large
language models (LLMs) across various domains, these models often fail to
extend their knowledge to novel compositional scenarios, revealing notable
limitations in systematic generalization. There has been an ongoing debate
about whether neural networks possess the capacity for systematic
generalization, with recent studies suggesting that meta-learning approaches
designed for compositionality can significantly enhance this ability. However,
these insights have largely been confined to linguistic problems, leaving their
applicability to other tasks an open question. In this study, we extend the
approach of meta-learning for compositionality to the domain of abstract
spatial reasoning. To this end, we introduce $\textit{SYGAR}$, a dataset
designed to evaluate the capacity of models to systematically generalize from
known geometric transformations (e.g., translation, rotation) of
two-dimensional objects to novel combinations of these transformations (e.g.,
translation+rotation). Our results show that a transformer-based
encoder-decoder model, trained via meta-learning for compositionality, can
systematically generalize to previously unseen transformation compositions,
significantly outperforming state-of-the-art LLMs, including o3-mini, GPT-4o,
and Gemini 2.0 Flash, which fail to exhibit similar systematic behavior. Our
findings highlight the effectiveness of meta-learning in promoting
systematicity beyond linguistic tasks, suggesting a promising direction toward
more robust and generalizable models.
|
2504.01448 | Hang Li | Hang Li, Shengyao Zhuang, Bevan Koopman, Guido Zuccon | LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector Pseudo Relevance Feedback (VPRF) has shown promising results in
improving BERT-based dense retrieval systems through iterative refinement of
query representations. This paper investigates the generalizability of VPRF to
Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and
evaluate its effectiveness across multiple benchmark datasets, analyzing how
different LLMs impact the feedback mechanism. Our results demonstrate that
VPRF's benefits successfully extend to LLM architectures, establishing it as a
robust technique for enhancing dense retrieval performance regardless of the
underlying models. This work bridges the gap between VPRF with traditional
BERT-based dense retrievers and modern LLMs, while providing insights into
their future directions.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:02:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Hang",
""
],
[
"Zhuang",
"Shengyao",
""
],
[
"Koopman",
"Bevan",
""
],
[
"Zuccon",
"Guido",
""
]
] | TITLE: LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback
ABSTRACT: Vector Pseudo Relevance Feedback (VPRF) has shown promising results in
improving BERT-based dense retrieval systems through iterative refinement of
query representations. This paper investigates the generalizability of VPRF to
Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and
evaluate its effectiveness across multiple benchmark datasets, analyzing how
different LLMs impact the feedback mechanism. Our results demonstrate that
VPRF's benefits successfully extend to LLM architectures, establishing it as a
robust technique for enhancing dense retrieval performance regardless of the
underlying models. This work bridges the gap between VPRF as used with
traditional BERT-based dense retrievers and its application to modern LLMs,
while providing insights into future directions.
|
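Vector PRF itself is a small embedding-space update: retrieve, then mix the query vector with the centroid of the top-k feedback documents. A Rocchio-style sketch (the weights are illustrative; the exact VPRF formulation may differ):

    import numpy as np

    def vprf(query_vec, doc_vecs, k=3, alpha=1.0, beta=0.5):
        """One round of vector pseudo relevance feedback: assume the top-k
        retrieved documents are relevant and pull the query toward them."""
        sims = doc_vecs @ query_vec
        top = np.argsort(sims)[::-1][:k]
        refined = alpha * query_vec + beta * doc_vecs[top].mean(axis=0)
        return refined / np.linalg.norm(refined)

    q = vprf(np.random.randn(64), np.random.randn(100, 64))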
2504.01450 | Runlong Zhou | Runlong Zhou, Yi Zhang | CASCADE Your Datasets for Cross-Mode Knowledge Retrieval of Language
Models | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models often struggle with cross-mode knowledge retrieval -- the
ability to access knowledge learned in one format (mode) when queried in
another. We demonstrate that models trained on multiple data sources (e.g.,
Wikipedia and TinyStories) exhibit significantly reduced accuracy when
retrieving knowledge in a format different from its original training mode.
This paper quantitatively investigates this phenomenon through a controlled
study of random token sequence memorization across different modes. We first
explore dataset rewriting as a solution, revealing that effective cross-mode
retrieval requires prohibitively extensive rewriting efforts that follow a
sigmoid-like relationship. As an alternative, we propose CASCADE, a novel
pretraining algorithm that uses cascading datasets with varying sequence
lengths to capture knowledge at different scales. Our experiments demonstrate
that CASCADE outperforms dataset rewriting approaches, even when compressed
into a single model with a unified loss function. This work provides both
qualitative evidence of cross-mode retrieval limitations and a practical
solution to enhance language models' ability to access knowledge independently
of its presentational format.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:02:07 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhou",
"Runlong",
""
],
[
"Zhang",
"Yi",
""
]
] | TITLE: CASCADE Your Datasets for Cross-Mode Knowledge Retrieval of Language
Models
ABSTRACT: Language models often struggle with cross-mode knowledge retrieval -- the
ability to access knowledge learned in one format (mode) when queried in
another. We demonstrate that models trained on multiple data sources (e.g.,
Wikipedia and TinyStories) exhibit significantly reduced accuracy when
retrieving knowledge in a format different from its original training mode.
This paper quantitatively investigates this phenomenon through a controlled
study of random token sequence memorization across different modes. We first
explore dataset rewriting as a solution, revealing that effective cross-mode
retrieval requires prohibitively extensive rewriting efforts that follow a
sigmoid-like relationship. As an alternative, we propose CASCADE, a novel
pretraining algorithm that uses cascading datasets with varying sequence
lengths to capture knowledge at different scales. Our experiments demonstrate
that CASCADE outperforms dataset rewriting approaches, even when compressed
into a single model with a unified loss function. This work provides both
qualitative evidence of cross-mode retrieval limitations and a practical
solution to enhance language models' ability to access knowledge independently
of its presentational format.
|
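The cascading-datasets idea can be illustrated as chunking one token stream at several sequence lengths so the same knowledge is seen at multiple scales; the length schedule below is a hypothetical example, not CASCADE's actual configuration:

    def cascade_views(tokens, lengths=(32, 128, 512)):
        """Build cascading views of one token stream: the same data chunked
        at several sequence lengths, to be mixed under a unified loss."""
        return {L: [tokens[i:i + L] for i in range(0, len(tokens), L)]
                for L in lengths}

    views = cascade_views(list(range(1000)))   # dict: length -> list of chunks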
2504.01451 | Yongxin Ma | Jie Xu, Yongxin Ma, Yixuan Li, Xuanxuan Zhang, Jun Zhou, Shenghai
Yuan, and Lihua Xie | Dynamic Initialization for LiDAR-inertial SLAM | Accepted by IEEE/ASME Transactions on Mechatronics | null | 10.1109/TMECH.2025.3554878 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accuracy of the initial state, including initial velocity, gravity
direction, and IMU biases, is critical for the initialization of LiDAR-inertial
SLAM systems. Inaccurate initial values can reduce initialization speed or lead
to failure. When the system faces urgent tasks, robust and fast initialization
is required while the robot is moving, such as during the swift assessment of
rescue environments after natural disasters, bomb disposal, and restarting
LiDAR-inertial SLAM in rescue missions. However, existing initialization
methods usually require the platform to remain stationary, which is ineffective
when the robot is in motion. To address this issue, this paper introduces a
robust and fast dynamic initialization method for LiDAR-inertial systems
(D-LI-Init). This method iteratively aligns LiDAR-based odometry with IMU
measurements to achieve system initialization. To enhance the reliability of
the LiDAR odometry module, the LiDAR and gyroscope are tightly integrated
within the ESIKF framework. The gyroscope compensates for rotational distortion
in the point cloud. Translational distortion compensation occurs during the
iterative update phase, resulting in the output of LiDAR-gyroscope odometry.
The proposed method can initialize the system whether the robot is moving or
stationary. Experiments on public datasets and real-world environments
demonstrate that the D-LI-Init algorithm can effectively serve various
platforms, including vehicles, handheld devices, and UAVs. D-LI-Init completes
dynamic initialization regardless of specific motion patterns. To benefit the
research community, we have open-sourced our code and test datasets on GitHub.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:02:25 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Xu",
"Jie",
""
],
[
"Ma",
"Yongxin",
""
],
[
"Li",
"Yixuan",
""
],
[
"Zhang",
"Xuanxuan",
""
],
[
"Zhou",
"Jun",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Xie",
"Lihua",
""
]
] | TITLE: Dynamic Initialization for LiDAR-inertial SLAM
ABSTRACT: The accuracy of the initial state, including initial velocity, gravity
direction, and IMU biases, is critical for the initialization of LiDAR-inertial
SLAM systems. Inaccurate initial values can reduce initialization speed or lead
to failure. When the system faces urgent tasks, robust and fast initialization
is required while the robot is moving, such as during the swift assessment of
rescue environments after natural disasters, bomb disposal, and restarting
LiDAR-inertial SLAM in rescue missions. However, existing initialization
methods usually require the platform to remain stationary, which is ineffective
when the robot is in motion. To address this issue, this paper introduces a
robust and fast dynamic initialization method for LiDAR-inertial systems
(D-LI-Init). This method iteratively aligns LiDAR-based odometry with IMU
measurements to achieve system initialization. To enhance the reliability of
the LiDAR odometry module, the LiDAR and gyroscope are tightly integrated
within the ESIKF framework. The gyroscope compensates for rotational distortion
in the point cloud. Translational distortion compensation occurs during the
iterative update phase, resulting in the output of LiDAR-gyroscope odometry.
The proposed method can initialize the system whether the robot is moving or
stationary. Experiments on public datasets and real-world environments
demonstrate that the D-LI-Init algorithm can effectively serve various
platforms, including vehicles, handheld devices, and UAVs. D-LI-Init completes
dynamic initialization regardless of specific motion patterns. To benefit the
research community, we have open-sourced our code and test datasets on GitHub.
|
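The gyroscope compensation for rotational point-cloud distortion described above can be illustrated with a minimal sketch, assuming a constant angular rate over one sweep; the actual D-LI-Init pipeline performs this inside an ESIKF with iterative translational compensation, which is not reproduced. The `rodrigues` and `deskew` helpers are hypothetical names, and sign conventions depend on frame definitions.

```python
import numpy as np

def rodrigues(axis_angle: np.ndarray) -> np.ndarray:
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def deskew(points: np.ndarray, stamps: np.ndarray, omega: np.ndarray,
           t_end: float) -> np.ndarray:
    """Express every point of a sweep in the scan-end frame.

    points: (N, 3); stamps: (N,) capture times; omega: (3,) gyro rate (rad/s),
    assumed constant over the sweep.
    """
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, stamps)):
        R = rodrigues(omega * (t_end - t))  # sensor rotation from capture to scan end
        out[i] = R.T @ p                    # undo the rotation accrued since capture
    return out

if __name__ == "__main__":
    pts = np.random.rand(5, 3)
    ts = np.linspace(-0.1, 0.0, 5)  # seconds relative to scan end
    print(deskew(pts, ts, omega=np.array([0.0, 0.0, 1.0]), t_end=0.0).shape)
```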
2504.01452 | Encheng Su | Encheng Su and Hu Cao and Alois Knoll | BiSeg-SAM: Weakly-Supervised Post-Processing Framework for Boosting
Binary Segmentation in Segment Anything Models | 2024 IEEE International Conference on Bioinformatics and Biomedicine
(BIBM) | null | 10.1109/BIBM62325.2024.10822087 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate segmentation of polyps and skin lesions is essential for diagnosing
colorectal and skin cancers. While various segmentation methods for polyps and
skin lesions using fully supervised deep learning techniques have been
developed, the pixel-level annotation of medical images by doctors is both
time-consuming and costly. Foundational vision models like the Segment Anything
Model (SAM) have demonstrated superior performance; however, directly applying
SAM to medical segmentation may not yield satisfactory results due to the lack
of domain-specific medical knowledge. In this paper, we propose BiSeg-SAM, a
SAM-guided weakly supervised prompting and boundary refinement network for the
segmentation of polyps and skin lesions. Specifically, we fine-tune SAM
combined with a CNN module to learn local features. We introduce a WeakBox with
two functions: automatically generating box prompts for the SAM model and using
our proposed Multi-choice Mask-to-Box (MM2B) transformation for rough
mask-to-box conversion, addressing the mismatch between coarse labels and
precise predictions. Additionally, we apply scale consistency (SC) loss for
prediction scale alignment. Our DetailRefine module enhances boundary precision
and segmentation accuracy by refining coarse predictions using a limited amount
of ground truth labels. This comprehensive approach enables BiSeg-SAM to
achieve excellent multi-task segmentation performance. Our method demonstrates
significant superiority over state-of-the-art (SOTA) methods when tested on
five polyp datasets and one skin cancer dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:04:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Su",
"Encheng",
""
],
[
"Cao",
"Hu",
""
],
[
"Knoll",
"Alois",
""
]
] | TITLE: BiSeg-SAM: Weakly-Supervised Post-Processing Framework for Boosting
Binary Segmentation in Segment Anything Models
ABSTRACT: Accurate segmentation of polyps and skin lesions is essential for diagnosing
colorectal and skin cancers. While various segmentation methods for polyps and
skin lesions using fully supervised deep learning techniques have been
developed, the pixel-level annotation of medical images by doctors is both
time-consuming and costly. Foundational vision models like the Segment Anything
Model (SAM) have demonstrated superior performance; however, directly applying
SAM to medical segmentation may not yield satisfactory results due to the lack
of domain-specific medical knowledge. In this paper, we propose BiSeg-SAM, a
SAM-guided weakly supervised prompting and boundary refinement network for the
segmentation of polyps and skin lesions. Specifically, we fine-tune SAM
combined with a CNN module to learn local features. We introduce a WeakBox with
two functions: automatically generating box prompts for the SAM model and using
our proposed Multi-choice Mask-to-Box (MM2B) transformation for rough
mask-to-box conversion, addressing the mismatch between coarse labels and
precise predictions. Additionally, we apply scale consistency (SC) loss for
prediction scale alignment. Our DetailRefine module enhances boundary precision
and segmentation accuracy by refining coarse predictions using a limited amount
of ground truth labels. This comprehensive approach enables BiSeg-SAM to
achieve excellent multi-task segmentation performance. Our method demonstrates
significant superiority over state-of-the-art (SOTA) methods when tested on
five polyp datasets and one skin cancer dataset.
|
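The WeakBox idea above, turning a rough mask into a box prompt for SAM, can be sketched as a plain bounding-box extraction. The multi-choice MM2B transformation itself is not reproduced; `mask_to_box` and its padding parameter are illustrative assumptions.

```python
import numpy as np

def mask_to_box(mask: np.ndarray, pad: int = 2):
    """Return (x0, y0, x1, y1) around the mask's foreground, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    h, w = mask.shape
    return (max(int(xs.min()) - pad, 0), max(int(ys.min()) - pad, 0),
            min(int(xs.max()) + pad, w - 1), min(int(ys.max()) + pad, h - 1))

m = np.zeros((8, 8), dtype=bool)
m[2:5, 3:6] = True        # coarse foreground blob
print(mask_to_box(m))     # (1, 0, 7, 6)
```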
2504.01457 | Ting Meng | Ting Meng, Chunyun Fu, Xiangyan Yan, Zheng Liang, Pan Ji, Jianwen
Wang, Tao Huang | Deep LG-Track: An Enhanced Localization-Confidence-Guided Multi-Object
Tracker | 11 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-object tracking plays a crucial role in various applications, such as
autonomous driving and security surveillance. This study introduces Deep
LG-Track, a novel multi-object tracker that incorporates three key enhancements
to improve the tracking accuracy and robustness. First, an adaptive Kalman
filter is developed to dynamically update the covariance of measurement noise
based on detection confidence and trajectory disappearance. Second, a novel
cost matrix is formulated to adaptively fuse motion and appearance information,
leveraging localization confidence and detection confidence as weighting
factors. Third, a dynamic appearance feature updating strategy is introduced,
adjusting the relative weighting of historical and current appearance features
based on appearance clarity and localization accuracy. Comprehensive
evaluations on the MOT17 and MOT20 datasets demonstrate that the proposed Deep
LG-Track consistently outperforms state-of-the-art trackers across multiple
performance metrics, highlighting its effectiveness in multi-object tracking
tasks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:10:18 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Meng",
"Ting",
""
],
[
"Fu",
"Chunyun",
""
],
[
"Yan",
"Xiangyan",
""
],
[
"Liang",
"Zheng",
""
],
[
"Ji",
"Pan",
""
],
[
"Wang",
"Jianwen",
""
],
[
"Huang",
"Tao",
""
]
] | TITLE: Deep LG-Track: An Enhanced Localization-Confidence-Guided Multi-Object
Tracker
ABSTRACT: Multi-object tracking plays a crucial role in various applications, such as
autonomous driving and security surveillance. This study introduces Deep
LG-Track, a novel multi-object tracker that incorporates three key enhancements
to improve the tracking accuracy and robustness. First, an adaptive Kalman
filter is developed to dynamically update the covariance of measurement noise
based on detection confidence and trajectory disappearance. Second, a novel
cost matrix is formulated to adaptively fuse motion and appearance information,
leveraging localization confidence and detection confidence as weighting
factors. Third, a dynamic appearance feature updating strategy is introduced,
adjusting the relative weighting of historical and current appearance features
based on appearance clarity and localization accuracy. Comprehensive
evaluations on the MOT17 and MOT20 datasets demonstrate that the proposed Deep
LG-Track consistently outperforms state-of-the-art trackers across multiple
performance metrics, highlighting its effectiveness in multi-object tracking
tasks.
|
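A minimal sketch of the confidence-weighted cost fusion described in the record above, assuming a simple convex blend of motion and appearance costs with detection and localization confidence as weights; the exact weighting used by Deep LG-Track is not reproduced.

```python
import numpy as np

def fused_cost(motion_cost: np.ndarray, appearance_cost: np.ndarray,
               det_conf: np.ndarray, loc_conf: np.ndarray) -> np.ndarray:
    """Blend (T, D) motion and appearance cost matrices per detection.

    High-confidence detections lean on appearance; low-confidence ones on motion.
    """
    w = (det_conf * loc_conf)[None, :]  # (1, D) weight per detection
    return (1.0 - w) * motion_cost + w * appearance_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = fused_cost(rng.random((3, 4)), rng.random((3, 4)),
                   det_conf=np.array([0.9, 0.5, 0.8, 0.3]),
                   loc_conf=np.array([0.95, 0.6, 0.7, 0.4]))
    print(C.shape)  # (3, 4) cost matrix for track-detection assignment
```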
2504.01464 | Akira Hatakeyama | Akira Hatakeyama, Shota Ito, Toshihiko Yanase, Naoya Ozaki | A Prefixed Patch Time Series Transformer for Two-Point Boundary Value
Problems in Three-Body Problems | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Two-point boundary value problems for cislunar trajectories present
significant challenges in the circular restricted three-body problem, making
traditional analytical methods like Lambert's problem inapplicable. This study
proposes a novel approach using a prefixed patch time series Transformer model
that automates the solution of two-point boundary value problems from lunar
flyby to arbitrary terminal conditions. Using prefix tokens of terminal
conditions in our deep generative model enables solving boundary value problems
in three-body dynamics. The training dataset consists of trajectories obtained
through forward propagation rather than solving boundary value problems
directly. The model demonstrates potential practical utility for preliminary
trajectory design in cislunar mission scenarios.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:22:03 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Hatakeyama",
"Akira",
""
],
[
"Ito",
"Shota",
""
],
[
"Yanase",
"Toshihiko",
""
],
[
"Ozaki",
"Naoya",
""
]
] | TITLE: A Prefixed Patch Time Series Transformer for Two-Point Boundary Value
Problems in Three-Body Problems
ABSTRACT: Two-point boundary value problems for cislunar trajectories present
significant challenges in the circular restricted three-body problem, making
traditional analytical methods like Lambert's problem inapplicable. This study
proposes a novel approach using a prefixed patch time series Transformer model
that automates the solution of two-point boundary value problems from lunar
flyby to arbitrary terminal conditions. Using prefix tokens of terminal
conditions in our deep generative model enables solving boundary value problems
in three-body dynamics. The training dataset consists of trajectories obtained
through forward propagation rather than solving boundary value problems
directly. The model demonstrates potential practical utility for preliminary
trajectory design in cislunar mission scenarios.
|
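A minimal sketch of the prefixing idea above: a terminal-condition token is prepended to a patched time series before a Transformer encoder. Module names, dimensions, and the single linear embedding of the terminal state are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PrefixedPatchEncoder(nn.Module):
    def __init__(self, patch_len: int = 16, d_model: int = 64, state_dim: int = 6):
        super().__init__()
        self.patch_embed = nn.Linear(patch_len, d_model)   # one token per patch
        self.prefix_embed = nn.Linear(state_dim, d_model)  # terminal-condition token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.patch_len = patch_len

    def forward(self, series: torch.Tensor, terminal: torch.Tensor) -> torch.Tensor:
        # series: (B, T) with T divisible by patch_len; terminal: (B, state_dim)
        B, T = series.shape
        patches = series.view(B, T // self.patch_len, self.patch_len)
        tokens = self.patch_embed(patches)                 # (B, N, d)
        prefix = self.prefix_embed(terminal).unsqueeze(1)  # (B, 1, d)
        return self.encoder(torch.cat([prefix, tokens], dim=1))

x = torch.randn(2, 128)     # toy trajectory channel
cond = torch.randn(2, 6)    # toy terminal state (position + velocity)
print(PrefixedPatchEncoder()(x, cond).shape)  # torch.Size([2, 9, 64])
```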
2504.01470 | Soumyya Kanti Datta | Soumyya Kanti Datta, Shan Jia, Siwei Lyu | Detecting Lip-Syncing Deepfakes: Vision Temporal Transformer for
Analyzing Mouth Inconsistencies | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deepfakes are AI-generated media in which the original content is digitally
altered to create convincing but manipulated images, videos, or audio. Among
the various types of deepfakes, lip-syncing deepfakes are one of the most
challenging deepfakes to detect. In these videos, a person's lip movements are
synthesized to match altered or entirely new audio using AI models. Therefore,
unlike other types of deepfakes, the artifacts in lip-syncing deepfakes are
confined to the mouth region, making them more subtle and thus harder to
discern. In this paper, we propose LIPINC-V2, a novel detection framework that
leverages a combination of vision temporal transformer with multihead
cross-attention to detect lip-syncing deepfakes by identifying spatiotemporal
inconsistencies in the mouth region. These inconsistencies appear across
adjacent frames and persist throughout the video. Our model can successfully
capture both short-term and long-term variations in mouth movement, enhancing
its ability to detect these inconsistencies. Additionally, we created a new
lip-syncing deepfake dataset, LipSyncTIMIT, which was generated using five
state-of-the-art lip-syncing models to simulate real-world scenarios. Extensive
experiments on our proposed LipSyncTIMIT dataset and two other benchmark
deepfake datasets demonstrate that our model achieves state-of-the-art
performance. The code and the dataset are available at
https://github.com/skrantidatta/LIPINC-V2 .
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:24:06 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Datta",
"Soumyya Kanti",
""
],
[
"Jia",
"Shan",
""
],
[
"Lyu",
"Siwei",
""
]
] | TITLE: Detecting Lip-Syncing Deepfakes: Vision Temporal Transformer for
Analyzing Mouth Inconsistencies
ABSTRACT: Deepfakes are AI-generated media in which the original content is digitally
altered to create convincing but manipulated images, videos, or audio. Among
the various types of deepfakes, lip-syncing deepfakes are one of the most
challenging deepfakes to detect. In these videos, a person's lip movements are
synthesized to match altered or entirely new audio using AI models. Therefore,
unlike other types of deepfakes, the artifacts in lip-syncing deepfakes are
confined to the mouth region, making them more subtle and thus harder to
discern. In this paper, we propose LIPINC-V2, a novel detection framework that
leverages a combination of vision temporal transformer with multihead
cross-attention to detect lip-syncing deepfakes by identifying spatiotemporal
inconsistencies in the mouth region. These inconsistencies appear across
adjacent frames and persist throughout the video. Our model can successfully
capture both short-term and long-term variations in mouth movement, enhancing
its ability to detect these inconsistencies. Additionally, we created a new
lip-syncing deepfake dataset, LipSyncTIMIT, which was generated using five
state-of-the-art lip-syncing models to simulate real-world scenarios. Extensive
experiments on our proposed LipSyncTIMIT dataset and two other benchmark
deepfake datasets demonstrate that our model achieves state-of-the-art
performance. The code and the dataset are available at
https://github.com/skrantidatta/LIPINC-V2 .
|
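A minimal sketch of the kind of frame-to-frame mouth-region inconsistency cue the abstract above describes; the fixed crop coordinates and the simple absolute-difference statistic are toy assumptions, not the model's learned spatiotemporal features.

```python
import numpy as np

def mouth_inconsistency(frames: np.ndarray, box=(40, 70, 24, 56)) -> np.ndarray:
    """frames: (T, H, W) grayscale video; box = (y0, y1, x0, x1) mouth crop.

    Returns a (T-1,) score of adjacent-frame change inside the mouth region.
    """
    y0, y1, x0, x1 = box
    crops = frames[:, y0:y1, x0:x1].astype(float)
    return np.abs(np.diff(crops, axis=0)).mean(axis=(1, 2))

video = np.random.rand(5, 112, 112)  # stand-in for aligned face frames
print(mouth_inconsistency(video).shape)  # (4,)
```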
2504.01472 | Yuejiao Su | Yuejiao Su, Yi Wang, Qiongyang Hu, Chuang Yang, and Lap-Pui Chau | ANNEXE: Unified Analyzing, Answering, and Pixel Grounding for Egocentric
Interaction | Computer Vision and Pattern Recognition | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric interaction perception is one of the essential branches in
investigating human-environment interaction, which lays the basis for
developing next-generation intelligent systems. However, existing egocentric
interaction understanding methods cannot yield coherent textual and pixel-level
responses simultaneously according to user queries, which limits flexibility for
varying downstream application requirements. To comprehend egocentric
interactions exhaustively, this paper presents a novel task named Egocentric
Interaction Reasoning and pixel Grounding (Ego-IRG). Taking an egocentric image
with the query as input, Ego-IRG is the first task that aims to resolve the
interactions through three crucial steps: analyzing, answering, and pixel
grounding, which results in fluent textual and fine-grained pixel-level
responses. Another challenge is that existing datasets cannot meet the
conditions for the Ego-IRG task. To address this limitation, this paper creates
the Ego-IRGBench dataset based on extensive manual efforts, which includes over
20k egocentric images with 1.6 million queries and corresponding multimodal
responses about interactions. Moreover, we design a unified ANNEXE model to
generate text- and pixel-level outputs utilizing multimodal large language
models, which enables a comprehensive interpretation of egocentric
interactions. The experiments on the Ego-IRGBench exhibit the effectiveness of
our ANNEXE model compared with other works.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:24:35 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Su",
"Yuejiao",
""
],
[
"Wang",
"Yi",
""
],
[
"Hu",
"Qiongyang",
""
],
[
"Yang",
"Chuang",
""
],
[
"Chau",
"Lap-Pui",
""
]
] | TITLE: ANNEXE: Unified Analyzing, Answering, and Pixel Grounding for Egocentric
Interaction
ABSTRACT: Egocentric interaction perception is one of the essential branches in
investigating human-environment interaction, which lays the basis for
developing next-generation intelligent systems. However, existing egocentric
interaction understanding methods cannot yield coherent textual and pixel-level
responses simultaneously according to user queries, which limits flexibility for
varying downstream application requirements. To comprehend egocentric
interactions exhaustively, this paper presents a novel task named Egocentric
Interaction Reasoning and pixel Grounding (Ego-IRG). Taking an egocentric image
with the query as input, Ego-IRG is the first task that aims to resolve the
interactions through three crucial steps: analyzing, answering, and pixel
grounding, which results in fluent textual and fine-grained pixel-level
responses. Another challenge is that existing datasets cannot meet the
conditions for the Ego-IRG task. To address this limitation, this paper creates
the Ego-IRGBench dataset based on extensive manual efforts, which includes over
20k egocentric images with 1.6 million queries and corresponding multimodal
responses about interactions. Moreover, we design a unified ANNEXE model to
generate text- and pixel-level outputs utilizing multimodal large language
models, which enables a comprehensive interpretation of egocentric
interactions. The experiments on the Ego-IRGBench exhibit the effectiveness of
our ANNEXE model compared with other works.
|
2504.01476 | Junlong Ren | Junlong Ren, Hao Wang | Enhanced Cross-modal 3D Retrieval via Tri-modal Reconstruction | ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-modal 3D retrieval is a critical yet challenging task, aiming to
achieve bi-directional retrieval between 3D and text modalities. Current
methods predominantly rely on a single 3D representation (e.g., point cloud),
with few exploiting the 2D-3D consistency and complementary relationships,
which constrains their performance. To bridge this gap, we propose to adopt
multi-view images and point clouds to jointly represent 3D shapes, facilitating
tri-modal alignment (i.e., image, point, text) for enhanced cross-modal 3D
retrieval. Notably, we introduce tri-modal reconstruction to improve the
generalization ability of encoders. Given point features, we reconstruct image
features under the guidance of text features, and vice versa. With well-aligned
point cloud and multi-view image features, we aggregate them as multimodal
embeddings through fine-grained 2D-3D fusion to enhance geometric and semantic
understanding. Recognizing the significant noise in current datasets where many
3D shapes and texts share similar semantics, we employ hard negative
contrastive training to emphasize harder negatives with greater significance,
leading to robust discriminative embeddings. Extensive experiments on the
Text2Shape dataset demonstrate that our method significantly outperforms
previous state-of-the-art methods in both shape-to-text and text-to-shape
retrieval tasks by a substantial margin.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:29:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ren",
"Junlong",
""
],
[
"Wang",
"Hao",
""
]
] | TITLE: Enhanced Cross-modal 3D Retrieval via Tri-modal Reconstruction
ABSTRACT: Cross-modal 3D retrieval is a critical yet challenging task, aiming to
achieve bi-directional retrieval between 3D and text modalities. Current
methods predominantly rely on a single 3D representation (e.g., point cloud),
with few exploiting the 2D-3D consistency and complementary relationships,
which constrains their performance. To bridge this gap, we propose to adopt
multi-view images and point clouds to jointly represent 3D shapes, facilitating
tri-modal alignment (i.e., image, point, text) for enhanced cross-modal 3D
retrieval. Notably, we introduce tri-modal reconstruction to improve the
generalization ability of encoders. Given point features, we reconstruct image
features under the guidance of text features, and vice versa. With well-aligned
point cloud and multi-view image features, we aggregate them as multimodal
embeddings through fine-grained 2D-3D fusion to enhance geometric and semantic
understanding. Recognizing the significant noise in current datasets where many
3D shapes and texts share similar semantics, we employ hard negative
contrastive training to emphasize harder negatives with greater significance,
leading to robust discriminative embeddings. Extensive experiments on the
Text2Shape dataset demonstrate that our method significantly outperforms
previous state-of-the-art methods in both shape-to-text and text-to-shape
retrieval tasks by a substantial margin.
|
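A minimal sketch of hard-negative-emphasized contrastive training in the spirit of the record above: in-batch negatives are reweighted by a softmax over their similarity, so harder negatives contribute more. The reweighting scheme is an assumption, not the paper's exact objective.

```python
import torch

def hard_negative_info_nce(q: torch.Tensor, k: torch.Tensor,
                           tau: float = 0.07, beta: float = 1.0) -> torch.Tensor:
    """q, k: (B, D) L2-normalized embeddings of matched cross-modal pairs."""
    sim = (q @ k.t()) / tau                          # (B, B) similarity logits
    B = sim.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=sim.device)
    pos = sim.diagonal().exp()                       # matched pairs
    neg = sim.masked_fill(eye, float("-inf"))        # off-diagonal negatives
    w = torch.softmax(beta * neg, dim=1) * (B - 1)   # up-weight hard negatives
    neg_sum = (w * neg.exp()).sum(dim=1)             # weighted negative mass
    return -(pos / (pos + neg_sum)).log().mean()

q = torch.nn.functional.normalize(torch.randn(8, 32), dim=1)
k = torch.nn.functional.normalize(torch.randn(8, 32), dim=1)
print(hard_negative_info_nce(q, k))
```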
2504.01481 | Fabrice Rossi | Roxane Cohen (LAMSADE), Robin David, Florian Yger (LITIS), Fabrice
Rossi (CEREMADE) | Identifying Obfuscated Code through Graph-Based Semantic Analysis of
Binary Code | The 13th International Conference on Complex Networks and their
Applications, Dec 2024, Istanbul, Turkey | null | null | null | cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protecting sensitive program content is a critical issue in various
situations, ranging from legitimate use cases to unethical contexts.
Obfuscation is one of the most widely used techniques to ensure such protection.
Consequently, attackers must first detect and characterize obfuscation before
launching any attack against it. This paper investigates the problem of
function-level obfuscation detection using graph-based approaches, comparing
algorithms, from elementary baselines to promising techniques like GNN (Graph
Neural Networks), on different feature choices. We consider various obfuscation
types and obfuscators, resulting in two complex datasets. Our findings
demonstrate that GNNs need meaningful features that capture aspects of function
semantics to outperform baselines. Our approach shows satisfactory results,
especially in a challenging 11-class classification task and in a practical
malware analysis example.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:36:27 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cohen",
"Roxane",
"",
"LAMSADE"
],
[
"David",
"Robin",
"",
"LITIS"
],
[
"Yger",
"Florian",
"",
"LITIS"
],
[
"Rossi",
"Fabrice",
"",
"CEREMADE"
]
] | TITLE: Identifying Obfuscated Code through Graph-Based Semantic Analysis of
Binary Code
ABSTRACT: Protecting sensitive program content is a critical issue in various
situations, ranging from legitimate use cases to unethical contexts.
Obfuscation is one of the most widely used techniques to ensure such protection.
Consequently, attackers must first detect and characterize obfuscation before
launching any attack against it. This paper investigates the problem of
function-level obfuscation detection using graph-based approaches, comparing
algorithms, from elementary baselines to promising techniques like GNN (Graph
Neural Networks), on different feature choices. We consider various obfuscation
types and obfuscators, resulting in two complex datasets. Our findings
demonstrate that GNNs need meaningful features that capture aspects of function
semantics to outperform baselines. Our approach shows satisfactory results,
especially in a challenging 11-class classification task and in a practical
malware analysis example.
|
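A minimal sketch of one round of mean-aggregation message passing over a tiny control-flow graph, illustrating the abstract's point that GNNs need semantics-bearing node features to beat baselines; the graph, features, and weights here are toy assumptions.

```python
import numpy as np

def gnn_layer(adj: np.ndarray, feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One round of mean-neighbor aggregation followed by a ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = (adj @ feats) / deg          # average the neighbors' features
    return np.maximum(h @ W, 0.0)

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)           # tiny control-flow graph
feats = np.array([[2, 0], [0, 1], [1, 1]], float)  # e.g. instruction/call counts
rng = np.random.default_rng(0)
print(gnn_layer(adj, feats, rng.normal(size=(2, 2))))
```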
2504.01482 | Qihao Ye | Qihao Ye, Xiaochuan Tian, Yuhua Zhu | A Robust Model-Based Approach for Continuous-Time Policy Evaluation with
Unknown L\'evy Process Dynamics | 27 pages, 9 figures | null | null | null | cs.LG cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops a model-based framework for continuous-time policy
evaluation (CTPE) in reinforcement learning, incorporating both Brownian and
L\'evy noise to model stochastic dynamics influenced by rare and extreme
events. Our approach formulates the policy evaluation problem as solving a
partial integro-differential equation (PIDE) for the value function with
unknown coefficients. A key challenge in this setting is accurately recovering
the unknown coefficients in the stochastic dynamics, particularly when driven
by L\'evy processes with heavy tail effects. To address this, we propose a
robust numerical approach that effectively handles both unbiased and censored
trajectory datasets. This method combines maximum likelihood estimation with an
iterative tail correction mechanism, improving the stability and accuracy of
coefficient recovery. Additionally, we establish a theoretical bound for the
policy evaluation error based on coefficient recovery error. Through numerical
experiments, we demonstrate the effectiveness and robustness of our method in
recovering heavy-tailed L\'evy dynamics and verify the theoretical error
analysis in policy evaluation.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:37:14 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ye",
"Qihao",
""
],
[
"Tian",
"Xiaochuan",
""
],
[
"Zhu",
"Yuhua",
""
]
] | TITLE: A Robust Model-Based Approach for Continuous-Time Policy Evaluation with
Unknown L\'evy Process Dynamics
ABSTRACT: This paper develops a model-based framework for continuous-time policy
evaluation (CTPE) in reinforcement learning, incorporating both Brownian and
L\'evy noise to model stochastic dynamics influenced by rare and extreme
events. Our approach formulates the policy evaluation problem as solving a
partial integro-differential equation (PIDE) for the value function with
unknown coefficients. A key challenge in this setting is accurately recovering
the unknown coefficients in the stochastic dynamics, particularly when driven
by L\'evy processes with heavy tail effects. To address this, we propose a
robust numerical approach that effectively handles both unbiased and censored
trajectory datasets. This method combines maximum likelihood estimation with an
iterative tail correction mechanism, improving the stability and accuracy of
coefficient recovery. Additionally, we establish a theoretical bound for the
policy evaluation error based on coefficient recovery error. Through numerical
experiments, we demonstrate the effectiveness and robustness of our method in
recovering heavy-tailed L\'evy dynamics and verify the theoretical error
analysis in policy evaluation.
|
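For reference, a standard form of the policy-evaluation PIDE under jump-diffusion (Lévy) dynamics, written here as an assumption since the paper's exact reward and discounting conventions are not quoted: with discount rate $\beta$, reward $r$, drift $b$, diffusion $\sigma$, and Lévy measure $\nu$,

```latex
% A standard policy-evaluation PIDE under jump-diffusion dynamics; the exact
% convention used by the paper (reward sign, discounting) is an assumption.
\begin{equation}
\beta V(x) = r(x) + b(x)\cdot\nabla V(x)
  + \tfrac{1}{2}\,\mathrm{Tr}\!\left(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\right)
  + \int_{\mathbb{R}^{d}\setminus\{0\}}
    \left[V(x+z) - V(x) - z\cdot\nabla V(x)\,\mathbf{1}_{\{|z|\le 1\}}\right]\nu(\mathrm{d}z).
\end{equation}
```

The heavy-tail difficulty the abstract highlights enters through $\nu$, whose tail mass must be estimated from trajectories before the PIDE can be solved.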
2504.01483 | Ruiyang Liu | Siran Li, Ruiyang Liu, Chen Liu, Zhendong Wang, Gaofeng He, Yong-Lu
Li, Xiaogang Jin, Huamin Wang | GarmageNet: A Dataset and Scalable Representation for Generic Garment
Modeling | null | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-fidelity garment modeling remains challenging due to the lack of
large-scale, high-quality datasets and efficient representations capable of
handling non-watertight, multi-layer geometries. In this work, we introduce
Garmage, a neural-network-and-CG-friendly garment representation that
seamlessly encodes the accurate geometry and sewing pattern of complex
multi-layered garments as a structured set of per-panel geometry images. As a
dual-2D-3D representation, Garmage achieves an unprecedented integration of 2D
image-based algorithms with 3D modeling workflows, enabling high fidelity,
non-watertight, multi-layered garment geometries with direct compatibility for
industrial-grade simulations. Built upon this representation, we present
GarmageNet, a novel generation framework capable of producing detailed
multi-layered garments with body-conforming initial geometries and intricate
sewing patterns, based on user prompts or existing in-the-wild sewing patterns.
Furthermore, we introduce a robust stitching algorithm that recovers per-vertex
stitches, ensuring seamless integration into flexible simulation pipelines for
downstream editing of sewing patterns, material properties, and dynamic
simulations. Finally, we release an industry-standard, large-scale,
high-fidelity garment dataset featuring detailed annotations, vertex-wise
correspondences, and a robust pipeline for converting unstructured production
sewing patterns into GarmageNet standard structural assets, paving the way for
large-scale, industrial-grade garment generation systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:37:32 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Siran",
""
],
[
"Liu",
"Ruiyang",
""
],
[
"Liu",
"Chen",
""
],
[
"Wang",
"Zhendong",
""
],
[
"He",
"Gaofeng",
""
],
[
"Li",
"Yong-Lu",
""
],
[
"Jin",
"Xiaogang",
""
],
[
"Wang",
"Huamin",
""
]
] | TITLE: GarmageNet: A Dataset and Scalable Representation for Generic Garment
Modeling
ABSTRACT: High-fidelity garment modeling remains challenging due to the lack of
large-scale, high-quality datasets and efficient representations capable of
handling non-watertight, multi-layer geometries. In this work, we introduce
Garmage, a neural-network-and-CG-friendly garment representation that
seamlessly encodes the accurate geometry and sewing pattern of complex
multi-layered garments as a structured set of per-panel geometry images. As a
dual-2D-3D representation, Garmage achieves an unprecedented integration of 2D
image-based algorithms with 3D modeling workflows, enabling high fidelity,
non-watertight, multi-layered garment geometries with direct compatibility for
industrial-grade simulations. Built upon this representation, we present
GarmageNet, a novel generation framework capable of producing detailed
multi-layered garments with body-conforming initial geometries and intricate
sewing patterns, based on user prompts or existing in-the-wild sewing patterns.
Furthermore, we introduce a robust stitching algorithm that recovers per-vertex
stitches, ensuring seamless integration into flexible simulation pipelines for
downstream editing of sewing patterns, material properties, and dynamic
simulations. Finally, we release an industry-standard, large-scale,
high-fidelity garment dataset featuring detailed annotations, vertex-wise
correspondences, and a robust pipeline for converting unstructured production
sewing patterns into GarmageNet standard structural assets, paving the way for
large-scale, industrial-grade garment generation systems.
|
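A minimal sketch of a geometry image, the representation the record above builds on: a regular 2D grid whose pixels store 3D surface positions, here sampled from a toy curved panel. Garmage's per-panel channels and sewing-pattern encoding are not reproduced.

```python
import numpy as np

def panel_geometry_image(res: int = 32) -> np.ndarray:
    """Sample a toy parametric cloth panel into a (res, res, 3) position map."""
    u, v = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res))
    x, y = u - 0.5, v - 0.5
    z = 0.1 * np.sin(np.pi * u)          # gentle curvature across the panel
    return np.stack([x, y, z], axis=-1)  # pixel (i, j) stores a 3D point

print(panel_geometry_image().shape)  # (32, 32, 3)
```

Because the panel is stored as an image, standard 2D networks can consume it directly, which is the integration of 2D algorithms and 3D workflows the abstract emphasizes.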
2504.01489 | Changshuo Zhang | Changshuo Zhang, Xiao Zhang, Teng Shi, Jun Xu, Ji-Rong Wen | Test-Time Alignment for Tracking User Interest Shifts in Sequential
Recommendation | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation is essential in modern recommender systems, aiming
to predict the next item a user may interact with based on their historical
behaviors. However, real-world scenarios are often dynamic and subject to
shifts in user interests. Conventional sequential recommendation models are
typically trained on static historical data, limiting their ability to adapt to
such shifts and resulting in significant performance degradation during
testing. Recently, Test-Time Training (TTT) has emerged as a promising
paradigm, enabling pre-trained models to dynamically adapt to test data by
leveraging unlabeled examples during testing. However, applying TTT to
effectively track and address user interest shifts in recommender systems
remains an open and challenging problem. Key challenges include how to capture
temporal information effectively and how to explicitly identify shifts in user
interests during the testing phase. To address these issues, we propose
T$^2$ARec, a novel model leveraging a state space model for TTT by introducing
two Test-Time Alignment modules tailored for sequential recommendation,
effectively capturing the distribution shifts in user interest patterns over
time. Specifically, T$^2$ARec aligns absolute time intervals with
model-adaptive learning intervals to capture temporal dynamics and introduces
an interest state alignment mechanism to effectively and explicitly identify the
user interest shifts with theoretical guarantees. These two alignment modules
enable efficient and incremental updates to model parameters in a
self-supervised manner during testing, enhancing predictions for online
recommendation. Extensive evaluations on three benchmark datasets demonstrate
that T$^2$ARec achieves state-of-the-art performance and robustly mitigates the
challenges posed by user interest shifts.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:42:30 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhang",
"Changshuo",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Shi",
"Teng",
""
],
[
"Xu",
"Jun",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: Test-Time Alignment for Tracking User Interest Shifts in Sequential
Recommendation
ABSTRACT: Sequential recommendation is essential in modern recommender systems, aiming
to predict the next item a user may interact with based on their historical
behaviors. However, real-world scenarios are often dynamic and subject to
shifts in user interests. Conventional sequential recommendation models are
typically trained on static historical data, limiting their ability to adapt to
such shifts and resulting in significant performance degradation during
testing. Recently, Test-Time Training (TTT) has emerged as a promising
paradigm, enabling pre-trained models to dynamically adapt to test data by
leveraging unlabeled examples during testing. However, applying TTT to
effectively track and address user interest shifts in recommender systems
remains an open and challenging problem. Key challenges include how to capture
temporal information effectively and how to explicitly identify shifts in user
interests during the testing phase. To address these issues, we propose
T$^2$ARec, a novel model leveraging a state space model for TTT by introducing
two Test-Time Alignment modules tailored for sequential recommendation,
effectively capturing the distribution shifts in user interest patterns over
time. Specifically, T$^2$ARec aligns absolute time intervals with
model-adaptive learning intervals to capture temporal dynamics and introduces
an interest state alignment mechanism to effectively and explicitly identify the
user interest shifts with theoretical guarantees. These two alignment modules
enable efficient and incremental updates to model parameters in a
self-supervised manner during testing, enhancing predictions for online
recommendation. Extensive evaluations on three benchmark datasets demonstrate
that T$^2$ARec achieves state-of-the-art performance and robustly mitigates the
challenges posed by user interest shifts.
|
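A minimal sketch of a test-time-training update loop as used in the record above: parameters are adapted on unlabeled test batches with a self-supervised loss before prediction. The reconstruction objective in the demo is a stand-in; T$^2$ARec's alignment losses and state-space backbone are not reproduced.

```python
import torch

def test_time_adapt(model, ssl_loss, batch, steps: int = 1, lr: float = 1e-4):
    """Incrementally adapt model parameters on unlabeled test data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = ssl_loss(model, batch)  # self-supervised: no labels needed
        loss.backward()
        opt.step()
    model.eval()
    return model

if __name__ == "__main__":
    net = torch.nn.Linear(8, 8)
    dummy = torch.randn(4, 8)
    # Toy SSL objective (input reconstruction) standing in for interest alignment.
    loss_fn = lambda m, x: torch.nn.functional.mse_loss(m(x), x)
    test_time_adapt(net, loss_fn, dummy, steps=3)
```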
2504.01519 | Zhiyuan Tang | Zhiyuan Tang, Dong Wang, Zhikai Zhou, Yong Liu, Shen Huang, Shidong
Shang | Chain of Correction for Full-text Speech Recognition with Large Language
Models | null | null | null | null | cs.CL eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Full-text error correction with Large Language Models (LLMs) for Automatic
Speech Recognition (ASR) has gained increased attention due to its potential to
correct errors across long contexts and address a broader spectrum of error
types, including punctuation restoration and inverse text normalization.
Nevertheless, many challenges persist, including issues related to stability,
controllability, completeness, and fluency. To mitigate these challenges, this
paper proposes the Chain of Correction (CoC) for full-text error correction
with LLMs, which corrects errors segment by segment using pre-recognized text
as guidance within a regular multi-turn chat format. The CoC also uses
pre-recognized full text for context, allowing the model to better grasp global
semantics and maintain a comprehensive overview of the entire content.
Utilizing the open-sourced full-text error correction dataset ChFT, we
fine-tune a pre-trained LLM to evaluate the performance of the CoC framework.
Experimental results demonstrate that the CoC effectively corrects errors in
full-text ASR outputs, significantly outperforming baseline and benchmark
systems. We further analyze how to set the correction threshold to balance
under-correction and over-rephrasing, extrapolate the CoC model to extremely
long ASR outputs, and investigate whether other types of information can be
employed to guide the error correction process.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:06:23 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Tang",
"Zhiyuan",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhou",
"Zhikai",
""
],
[
"Liu",
"Yong",
""
],
[
"Huang",
"Shen",
""
],
[
"Shang",
"Shidong",
""
]
] | TITLE: Chain of Correction for Full-text Speech Recognition with Large Language
Models
ABSTRACT: Full-text error correction with Large Language Models (LLMs) for Automatic
Speech Recognition (ASR) has gained increased attention due to its potential to
correct errors across long contexts and address a broader spectrum of error
types, including punctuation restoration and inverse text normalization.
Nevertheless, many challenges persist, including issues related to stability,
controllability, completeness, and fluency. To mitigate these challenges, this
paper proposes the Chain of Correction (CoC) for full-text error correction
with LLMs, which corrects errors segment by segment using pre-recognized text
as guidance within a regular multi-turn chat format. The CoC also uses
pre-recognized full text for context, allowing the model to better grasp global
semantics and maintain a comprehensive overview of the entire content.
Utilizing the open-sourced full-text error correction dataset ChFT, we
fine-tune a pre-trained LLM to evaluate the performance of the CoC framework.
Experimental results demonstrate that the CoC effectively corrects errors in
full-text ASR outputs, significantly outperforming baseline and benchmark
systems. We further analyze how to set the correction threshold to balance
under-correction and over-rephrasing, extrapolate the CoC model to extremely
long ASR outputs, and investigate whether other types of information can be
employed to guide the error correction process.
|
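A minimal sketch of assembling the segment-by-segment correction chat the record above describes: the full pre-recognized transcript supplies global context, and each turn corrects one segment. The OpenAI-style message keys and the prompt wording are assumptions, not the paper's templates.

```python
from typing import Dict, List

def build_coc_chat(full_asr: str, segments: List[str],
                   corrected_so_far: List[str]) -> List[Dict[str, str]]:
    """Build a multi-turn chat that corrects the next uncorrected segment."""
    msgs = [{"role": "system",
             "content": "Correct ASR errors segment by segment. "
                        f"Full pre-recognized transcript:\n{full_asr}"}]
    for seg, fix in zip(segments, corrected_so_far):
        msgs.append({"role": "user", "content": f"Correct: {seg}"})
        msgs.append({"role": "assistant", "content": fix})
    nxt = segments[len(corrected_so_far)]
    msgs.append({"role": "user", "content": f"Correct: {nxt}"})
    return msgs

chat = build_coc_chat("helo world this is a test",
                      ["helo world", "this is a test"], ["hello world"])
for m in chat:
    print(m["role"], "->", m["content"][:60])
```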
2504.01523 | Xuemeng Cai | Xuemeng Cai, Lingxiao Jiang | Adapting Knowledge Prompt Tuning for Enhanced Automated Program Repair | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated Program Repair (APR) aims to enhance software reliability by
automatically generating bug-fixing patches. Recent work has improved the
state-of-the-art of APR by fine-tuning pre-trained large language models
(LLMs), such as CodeT5, for APR. However, the effectiveness of fine-tuning
becomes weakened in data scarcity scenarios, and data scarcity can be a common
issue in practice, limiting fine-tuning performance. To alleviate this
limitation, this paper adapts prompt tuning for enhanced APR and conducts a
comprehensive study to evaluate its effectiveness in data scarcity scenarios,
using three LLMs of different sizes and six diverse datasets across four
programming languages. Prompt tuning rewrites the input to a model by adding
extra prompt tokens and tunes both the model and the prompts on a small
dataset. These tokens provide task-specific knowledge that can improve the
model for APR, which is especially critical in data scarcity scenarios.
Moreover, domain knowledge has proven crucial in many code intelligence tasks,
but existing studies fail to leverage domain knowledge during the prompt tuning
for APR. To close this gap, we introduce knowledge prompt tuning, an approach
that adapts prompt tuning with six distinct types of code- or bug-related
domain knowledge for APR. Our work, to the best of our knowledge, is the first
to adapt and evaluate prompt tuning and the effectiveness of code- or
bug-related domain knowledge for APR, particularly under data scarcity
settings. Our evaluation results demonstrate that prompt tuning with knowledge
generally outperforms fine-tuning under various experimental settings,
achieving an average improvement of 87.33% over fine-tuning in data scarcity
scenarios.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:10:02 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cai",
"Xuemeng",
""
],
[
"Jiang",
"Lingxiao",
""
]
] | TITLE: Adapting Knowledge Prompt Tuning for Enhanced Automated Program Repair
ABSTRACT: Automated Program Repair (APR) aims to enhance software reliability by
automatically generating bug-fixing patches. Recent work has improved the
state-of-the-art of APR by fine-tuning pre-trained large language models
(LLMs), such as CodeT5, for APR. However, the effectiveness of fine-tuning
becomes weakened in data scarcity scenarios, and data scarcity can be a common
issue in practice, limiting fine-tuning performance. To alleviate this
limitation, this paper adapts prompt tuning for enhanced APR and conducts a
comprehensive study to evaluate its effectiveness in data scarcity scenarios,
using three LLMs of different sizes and six diverse datasets across four
programming languages. Prompt tuning rewrites the input to a model by adding
extra prompt tokens and tunes both the model and the prompts on a small
dataset. These tokens provide task-specific knowledge that can improve the
model for APR, which is especially critical in data scarcity scenarios.
Moreover, domain knowledge has proven crucial in many code intelligence tasks,
but existing studies fail to leverage domain knowledge during the prompt tuning
for APR. To close this gap, we introduce knowledge prompt tuning, an approach
that adapts prompt tuning with six distinct types of code- or bug-related
domain knowledge for APR. Our work, to the best of our knowledge, is the first
to adapt and evaluate prompt tuning and the effectiveness of code- or
bug-related domain knowledge for APR, particularly under data scarcity
settings. Our evaluation results demonstrate that prompt tuning with knowledge
generally outperforms fine-tuning under various experimental settings,
achieving an average improvement of 87.33% over fine-tuning in data scarcity
scenarios.
|
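A minimal sketch of prompt tuning as the record above defines it: trainable prompt embeddings are prepended to the input embeddings and tuned together with the model. How the six knowledge types would initialize or structure these prompts is left out here.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int = 20, d_model: int = 512):
        super().__init__()
        # Trainable prompt token embeddings (the "extra prompt tokens").
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (B, T, d) -> (B, n_tokens + T, d)
        B = input_embeds.size(0)
        return torch.cat([self.prompt.unsqueeze(0).expand(B, -1, -1),
                          input_embeds], dim=1)

x = torch.randn(2, 16, 512)           # stand-in for token embeddings
print(SoftPrompt()(x).shape)          # torch.Size([2, 36, 512])
```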
2504.01527 | Olivier Rukundo | Olivier Rukundo | Beyond Nearest Neighbor Interpolation in Data Augmentation | 6 pages, 9 figures, 1 table | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Avoiding the risk of undefined categorical labels using nearest neighbor
interpolation overlooks the risk of exacerbating pixel-level annotation errors
in data augmentation. To avoid both risks simultaneously, the author modified
the data transformation functions of convolutional neural networks,
incorporating a modified geometric transformation function that improves the
quality of augmented data by removing the reliance on nearest neighbor
interpolation and integrating a mean-based class filtering mechanism to handle
undefined categorical labels
with alternative interpolation algorithms. Experiments on semantic segmentation
tasks using three medical image datasets demonstrated both qualitative and
quantitative improvements with alternative interpolation algorithms.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:13:18 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Rukundo",
"Olivier",
""
]
] | TITLE: Beyond Nearest Neighbor Interpolation in Data Augmentation
ABSTRACT: Avoiding the risk of undefined categorical labels using nearest neighbor
interpolation overlooks the risk of exacerbating pixel-level annotation errors
in data augmentation. To avoid both risks simultaneously, the author modified
the data transformation functions of convolutional neural networks,
incorporating a modified geometric transformation function that improves the
quality of augmented data by removing the reliance on nearest neighbor
interpolation and integrating a mean-based class filtering mechanism to handle
undefined categorical labels
with alternative interpolation algorithms. Experiments on semantic segmentation
tasks using three medical image datasets demonstrated both qualitative and
quantitative improvements with alternative interpolation algorithms.
|
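One way to make the record above concrete: interpolate per-class indicator maps with a smooth (non-nearest-neighbor) interpolator and re-discretize, so no undefined categorical labels arise. This argmax re-discretization is a common stand-in, not the paper's mean-based class filtering.

```python
import numpy as np
from scipy.ndimage import zoom

def resize_label_map(labels: np.ndarray, factor: float, n_classes: int) -> np.ndarray:
    """labels: (H, W) integer class map; returns a rescaled integer class map."""
    stacked = np.stack([(labels == c).astype(float) for c in range(n_classes)])
    # order=1 is bilinear; the interpolated scores are then re-discretized.
    scores = np.stack([zoom(m, factor, order=1) for m in stacked])
    return scores.argmax(axis=0)

lab = np.array([[0, 0, 1],
                [0, 2, 1],
                [2, 2, 1]])
print(resize_label_map(lab, 2.0, n_classes=3))
```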
2504.01534 | Adrien Schurger-Foy | Adrien Schurger-Foy, Rafal Dariusz Kocielnik, Caglar Gulcehre, R.
Michael Alvarez | Context-Aware Toxicity Detection in Multiplayer Games: Integrating
Domain-Adaptive Pretraining and Match Metadata | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The detrimental effects of toxicity in competitive online video games are
widely acknowledged, prompting publishers to monitor player chat conversations.
This is challenging due to the context-dependent nature of toxicity, often
spread across multiple messages or informed by non-textual interactions.
Traditional toxicity detectors focus on isolated messages, missing the broader
context needed for accurate moderation. This is especially problematic in video
games, where interactions involve specialized slang, abbreviations, and typos,
making it difficult for standard models to detect toxicity, especially given
its rarity. We adapted the RoBERTa LLM to support moderation tailored to video
games, integrating both textual and non-textual context. By enhancing
pretrained embeddings with metadata and addressing the unique slang and
language quirks through domain-adaptive pretraining, our method better captures
the nuances of player interactions. Using two gaming datasets - from Defense of
the Ancients 2 (DOTA 2) and Call of Duty$^\circledR$: Modern
Warfare$^\circledR$ III (MWIII) - we demonstrate which sources of context
(metadata, prior interactions...) are most useful, how to best leverage them to
boost performance, and the conditions conducive to doing so. This work
underscores the importance of context-aware and domain-specific approaches for
proactive moderation.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:21:41 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Schurger-Foy",
"Adrien",
""
],
[
"Kocielnik",
"Rafal Dariusz",
""
],
[
"Gulcehre",
"Caglar",
""
],
[
"Alvarez",
"R. Michael",
""
]
] | TITLE: Context-Aware Toxicity Detection in Multiplayer Games: Integrating
Domain-Adaptive Pretraining and Match Metadata
ABSTRACT: The detrimental effects of toxicity in competitive online video games are
widely acknowledged, prompting publishers to monitor player chat conversations.
This is challenging due to the context-dependent nature of toxicity, often
spread across multiple messages or informed by non-textual interactions.
Traditional toxicity detectors focus on isolated messages, missing the broader
context needed for accurate moderation. This is especially problematic in video
games, where interactions involve specialized slang, abbreviations, and typos,
making it difficult for standard models to detect toxicity, especially given
its rarity. We adapted the RoBERTa LLM to support moderation tailored to video
games, integrating both textual and non-textual context. By enhancing
pretrained embeddings with metadata and addressing the unique slang and
language quirks through domain-adaptive pretraining, our method better captures
the nuances of player interactions. Using two gaming datasets - from Defense of
the Ancients 2 (DOTA 2) and Call of Duty$^\circledR$: Modern
Warfare$^\circledR$ III (MWIII) - we demonstrate which sources of context
(metadata, prior interactions...) are most useful, how to best leverage them to
boost performance, and the conditions conducive to doing so. This work
underscores the importance of context-aware and domain-specific approaches for
proactive moderation.
|
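A minimal sketch of fusing match metadata with a pooled text embedding for toxicity classification, in the spirit of the record above; the feature set, dimensions, and fusion head are illustrative assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class ContextAwareClassifier(nn.Module):
    def __init__(self, text_dim: int = 768, meta_dim: int = 8, hidden: int = 128):
        super().__init__()
        self.meta_proj = nn.Linear(meta_dim, hidden)   # embed match metadata
        self.head = nn.Sequential(
            nn.Linear(text_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))                      # toxic / non-toxic

    def forward(self, text_emb: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # text_emb: (B, 768) pooled encoder output; meta: (B, 8) match features
        fused = torch.cat([text_emb, torch.relu(self.meta_proj(meta))], dim=-1)
        return self.head(fused)

logits = ContextAwareClassifier()(torch.randn(4, 768), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])
```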
2504.01537 | Elena Corbetta | Elena Corbetta and Thomas Bocklitz | Multi-Marker Similarity enables reduced-reference and interpretable
image quality assessment in optical microscopy | 24 pages, 11 figures, 1 table | null | null | null | q-bio.QM physics.optics stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | Optical microscopy contributes to the ever-increasing progress in biological
and biomedical studies, as it allows the implementation of minimally invasive
experimental pipelines to translate the data of measured samples into valuable
knowledge. Within these pipelines, reliable quality assessment must be ensured
to validate the generated results. Image quality assessment is often applied
with full-reference methods to estimate the similarity between the ground truth
and the output images. However, current methods often show poor agreement with
visual perception and lead to the generation of various full-reference metrics
tailored to specific applications. Additionally, they rely on pixel-wise
comparisons, emphasizing local intensity similarity while often overlooking
comprehensive and interpretable image quality assessment. To address these
issues, we have developed a multi-marker similarity method that compares
standard quality markers, such as resolution, signal-to-noise ratio, contrast,
and high-frequency components. The method computes a similarity score between
the image and the ground truth for each marker, then combines these scores into
an overall similarity estimate. This provides a full-reference estimate of
image quality while extracting global quality features and detecting
experimental artifacts. Multi-marker similarity provides a reliable and
interpretable method for image quality assessment and the generation of quality
rankings. By focusing on the comparison of quality markers rather than direct
image distances, the method enables reduced reference implementations, where a
single field of view is used as a benchmark for multiple measurements. This
opens the way for reliable automatic evaluation of big datasets, typical of
large biomedical studies, when manual assessment of single images and defining
the ground truth for each field of view is not feasible.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:23:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Corbetta",
"Elena",
""
],
[
"Bocklitz",
"Thomas",
""
]
] | TITLE: Multi-Marker Similarity enables reduced-reference and interpretable
image quality assessment in optical microscopy
ABSTRACT: Optical microscopy contributes to the ever-increasing progress in biological
and biomedical studies, as it allows the implementation of minimally invasive
experimental pipelines to translate the data of measured samples into valuable
knowledge. Within these pipelines, reliable quality assessment must be ensured
to validate the generated results. Image quality assessment is often applied
with full-reference methods to estimate the similarity between the ground truth
and the output images. However, current methods often show poor agreement with
visual perception and lead to the generation of various full-reference metrics
tailored to specific applications. Additionally, they rely on pixel-wise
comparisons, emphasizing local intensity similarity while often overlooking
comprehensive and interpretable image quality assessment. To address these
issues, we have developed a multi-marker similarity method that compares
standard quality markers, such as resolution, signal-to-noise ratio, contrast,
and high-frequency components. The method computes a similarity score between
the image and the ground truth for each marker, then combines these scores into
an overall similarity estimate. This provides a full-reference estimate of
image quality while extracting global quality features and detecting
experimental artifacts. Multi-marker similarity provides a reliable and
interpretable method for image quality assessment and the generation of quality
rankings. By focusing on the comparison of quality markers rather than direct
image distances, the method enables reduced reference implementations, where a
single field of view is used as a benchmark for multiple measurements. This
opens the way for reliable automatic evaluation of big datasets, typical of
large biomedical studies, when manual assessment of single images and defining
the ground truth for each field of view is not feasible.
|
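A minimal sketch of the multi-marker similarity idea from the record above: compute a per-marker similarity on scalar quality markers, then aggregate into one score. The symmetric-ratio similarity, the plain mean, and the marker names are assumptions; the paper's marker set and aggregation may differ.

```python
import numpy as np

def marker_similarity(a: float, b: float) -> float:
    """Symmetric ratio similarity in [0, 1] for positive scalar markers."""
    return min(a, b) / max(a, b) if max(a, b) > 0 else 1.0

def multi_marker_score(img_markers: dict, ref_markers: dict) -> float:
    """Aggregate per-marker similarities against a (reduced) reference."""
    sims = [marker_similarity(img_markers[k], ref_markers[k]) for k in ref_markers]
    return float(np.mean(sims))

ref = {"snr": 24.0, "contrast": 0.62, "resolution_lp_mm": 180.0}
img = {"snr": 19.5, "contrast": 0.57, "resolution_lp_mm": 165.0}
print(round(multi_marker_score(img, ref), 3))  # 0.883
```

Because only marker values are compared, a single benchmark field of view can serve as the reference for many measurements, which is the reduced-reference use the abstract describes.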
2504.01540 | Rob van der Goot | Mikkel Wildner Kildeberg, Emil Allerslev Schledermann, Nicolaj Larsen,
Rob van der Goot | From Sm{\o}r-re-br{\o}d to Subwords: Training LLMs on Danish, One
Morpheme at a Time | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The best-performing transformer-based language models use subword
tokenization techniques, such as Byte-Pair-Encoding (BPE). However, these
approaches often overlook linguistic principles, such as morphological
segmentation, which we believe is fundamental for understanding
language-specific word structure. In this study, we leverage an annotated
Danish morphological dataset to train a semisupervised model for morphological
segmentation, enabling the development of tokenizers optimized for Danish
morphology. We evaluate four distinct tokenizers, including two custom
morphological tokenizers, by analyzing their performance in morphologically
segmenting Danish words. Additionally, we train two generative transformer
models, \textit{CerebrasGPT-111M} and \textit{LLaMA-3.2 1B}, using these
tokenizers and evaluate their downstream performance. Our findings reveal that
our custom-developed tokenizers substantially enhance morphological
segmentation, achieving an F1 score of 58.84, compared to 39.28 achieved by a
Danish BPE tokenizer. In downstream tasks, models trained with our
morphological tokenizers outperform those using BPE tokenizers across different
evaluation metrics. These results highlight that incorporating Danish
morphological segmentation strategies into tokenizers leads to improved
performance in generative transformer models on the Danish language.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:26:02 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Kildeberg",
"Mikkel Wildner",
""
],
[
"Schledermann",
"Emil Allerslev",
""
],
[
"Larsen",
"Nicolaj",
""
],
[
"van der Goot",
"Rob",
""
]
] | TITLE: From Sm{\o}r-re-br{\o}d to Subwords: Training LLMs on Danish, One
Morpheme at a Time
ABSTRACT: The best-performing transformer-based language models use subword
tokenization techniques, such as Byte-Pair-Encoding (BPE). However, these
approaches often overlook linguistic principles, such as morphological
segmentation, which we believe is fundamental for understanding
language-specific word structure. In this study, we leverage an annotated
Danish morphological dataset to train a semisupervised model for morphological
segmentation, enabling the development of tokenizers optimized for Danish
morphology. We evaluate four distinct tokenizers, including two custom
morphological tokenizers, by analyzing their performance in morphologically
segmenting Danish words. Additionally, we train two generative transformer
models, \textit{CerebrasGPT-111M} and \textit{LLaMA-3.2 1B}, using these
tokenizers and evaluate their downstream performance. Our findings reveal that
our custom-developed tokenizers substantially enhance morphological
segmentation, achieving an F1 score of 58.84, compared to 39.28 achieved by a
Danish BPE tokenizer. In downstream tasks, models trained with our
morphological tokenizers outperform those using BPE tokenizers across different
evaluation metrics. These results highlight that incorporating Danish
morphological segmentation strategies into tokenizers leads to improved
performance in generative transformer models on the Danish language.
|
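A minimal sketch of morphology-aware tokenization as in the record above: words are split by a morphological segmenter before any subword lookup. The toy dictionary stands in for the paper's semisupervised Danish segmenter, and the segmentation of the title's running example follows its hyphenation.

```python
from typing import Dict, List

TOY_SEGMENTS: Dict[str, List[str]] = {
    "smørrebrød": ["smør", "re", "brød"],  # the title's running example
    "husene": ["hus", "ene"],
}

def morph_tokenize(text: str) -> List[str]:
    """Segment each word morphologically; unknown words pass through whole."""
    tokens: List[str] = []
    for word in text.lower().split():
        tokens.extend(TOY_SEGMENTS.get(word, [word]))
    return tokens

print(morph_tokenize("Smørrebrød husene"))
# ['smør', 're', 'brød', 'hus', 'ene']
```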
2504.01542 | Amanda Myntti | Amanda Myntti, Erik Henriksson, Veronika Laippala, Sampo Pyysalo | Register Always Matters: Analysis of LLM Pretraining Data Through the
Lens of Language Variation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Pretraining data curation is a cornerstone in Large Language Model (LLM)
development, leading to growing research on quality filtering of large web
corpora. From statistical quality flags to LLM-based labeling systems, datasets
are divided into categories, frequently reducing to a binary: those passing the
filters deemed as valuable examples, others discarded as useless or
detrimental. However, a more detailed understanding of the contribution of
different kinds of texts to model performance is still largely lacking. In this
article, we present the first study utilizing registers (also known as genres)
- a widely used standard in corpus linguistics to model linguistic variation -
to curate pretraining datasets and investigate the effect of register on the
performance of LLMs. We perform comparative studies by training models with
register-classified data and evaluating them using standard benchmarks, and
show that the register of pretraining data substantially affects model
performance. We uncover surprising relationships between the pretraining
material and the resulting models: using the News register results in subpar
performance, and on the contrary, including the Opinion class, covering texts
such as reviews and opinion blogs, is highly beneficial. While a model trained
on the entire unfiltered dataset outperforms those trained on datasets limited
to a single register, combining well-performing registers like
How-to-Instructions, Informational Description, and Opinion leads to major
improvements. Furthermore, analysis of individual benchmark results reveals key
differences in the strengths and drawbacks of specific register classes as
pretraining data. These findings show that register is an important explainer
of model variation and can facilitate more deliberate future data selection
practices.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:30:24 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Myntti",
"Amanda",
""
],
[
"Henriksson",
"Erik",
""
],
[
"Laippala",
"Veronika",
""
],
[
"Pyysalo",
"Sampo",
""
]
] | TITLE: Register Always Matters: Analysis of LLM Pretraining Data Through the
Lens of Language Variation
ABSTRACT: Pretraining data curation is a cornerstone in Large Language Model (LLM)
development, leading to growing research on quality filtering of large web
corpora. From statistical quality flags to LLM-based labeling systems, datasets
are divided into categories, frequently reducing to a binary: those passing the
filters deemed as valuable examples, others discarded as useless or
detrimental. However, a more detailed understanding of the contribution of
different kinds of texts to model performance is still largely lacking. In this
article, we present the first study utilizing registers (also known as genres)
- a widely used standard in corpus linguistics to model linguistic variation -
to curate pretraining datasets and investigate the effect of register on the
performance of LLMs. We perform comparative studies by training models with
register-classified data and evaluating them using standard benchmarks, and
show that the register of pretraining data substantially affects model
performance. We uncover surprising relationships between the pretraining
material and the resulting models: using the News register results in subpar
performance, and on the contrary, including the Opinion class, covering texts
such as reviews and opinion blogs, is highly beneficial. While a model trained
on the entire unfiltered dataset outperforms those trained on datasets limited
to a single register, combining well-performing registers like
How-to-Instructions, Informational Description, and Opinion leads to major
improvements. Furthermore, analysis of individual benchmark results reveals key
differences in the strengths and drawbacks of specific register classes as
pretraining data. These findings show that register is an important explainer
of model variation and can facilitate more deliberate future data selection
practices.
|
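The curation step the study describes, keeping only selected registers in the pretraining mix, reduces to a simple filter once each document carries a register label. A minimal sketch follows; the toy corpus and labels are hypothetical, and the paper's actual pipeline relies on a trained register classifier.

# Minimal sketch of register-based pretraining data curation
# (illustration only; documents and labels are hypothetical).
def curate_by_register(documents, keep_registers):
    """documents: iterable of (text, register_label) pairs."""
    return [text for text, register in documents if register in keep_registers]

corpus = [
    ("Step 1: preheat the oven ...", "How-to-Instructions"),
    ("The council met on Tuesday ...", "News"),
    ("I found this blender underwhelming ...", "Opinion"),
]
# Combine well-performing registers, as the study suggests.
train_texts = curate_by_register(
    corpus, {"How-to-Instructions", "Informational Description", "Opinion"})
print(len(train_texts))  # -> 2 (the News document is dropped)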
2504.01547 | Luca Ciampi | Luca Ciampi, Gabriele Lagani, Giuseppe Amato, Fabrizio Falchi | Semi-Supervised Biomedical Image Segmentation via Diffusion Models and
Teacher-Student Co-Training | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Supervised deep learning for semantic segmentation has achieved excellent
results in accurately identifying anatomical and pathological structures in
medical images. However, it often requires large annotated training datasets,
which limits its scalability in clinical settings. To address this challenge,
semi-supervised learning is a well-established approach that leverages both
labeled and unlabeled data. In this paper, we introduce a novel semi-supervised
teacher-student framework for biomedical image segmentation, inspired by the
recent success of generative models. Our approach leverages denoising diffusion
probabilistic models (DDPMs) to generate segmentation masks by progressively
refining noisy inputs conditioned on the corresponding images. The teacher
model is first trained in an unsupervised manner using a cycle-consistency
constraint based on noise-corrupted image reconstruction, enabling it to
generate informative semantic masks. Subsequently, the teacher is integrated
into a co-training process with a twin-student network. The student learns from
ground-truth labels when available and from teacher-generated pseudo-labels
otherwise, while the teacher continuously improves its pseudo-labeling
capabilities. Finally, to further enhance performance, we introduce a
multi-round pseudo-label generation strategy that iteratively improves the
pseudo-labeling process. We evaluate our approach on multiple biomedical
imaging benchmarks, spanning multiple imaging modalities and segmentation
tasks. Experimental results show that our method consistently outperforms
state-of-the-art semi-supervised techniques, highlighting its effectiveness in
scenarios with limited annotated data. The code to replicate our experiments
can be found at
https://github.com/ciampluca/diffusion_semi_supervised_biomedical_image_segmentation
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:41:43 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ciampi",
"Luca",
""
],
[
"Lagani",
"Gabriele",
""
],
[
"Amato",
"Giuseppe",
""
],
[
"Falchi",
"Fabrizio",
""
]
] | TITLE: Semi-Supervised Biomedical Image Segmentation via Diffusion Models and
Teacher-Student Co-Training
ABSTRACT: Supervised deep learning for semantic segmentation has achieved excellent
results in accurately identifying anatomical and pathological structures in
medical images. However, it often requires large annotated training datasets,
which limits its scalability in clinical settings. To address this challenge,
semi-supervised learning is a well-established approach that leverages both
labeled and unlabeled data. In this paper, we introduce a novel semi-supervised
teacher-student framework for biomedical image segmentation, inspired by the
recent success of generative models. Our approach leverages denoising diffusion
probabilistic models (DDPMs) to generate segmentation masks by progressively
refining noisy inputs conditioned on the corresponding images. The teacher
model is first trained in an unsupervised manner using a cycle-consistency
constraint based on noise-corrupted image reconstruction, enabling it to
generate informative semantic masks. Subsequently, the teacher is integrated
into a co-training process with a twin-student network. The student learns from
ground-truth labels when available and from teacher-generated pseudo-labels
otherwise, while the teacher continuously improves its pseudo-labeling
capabilities. Finally, to further enhance performance, we introduce a
multi-round pseudo-label generation strategy that iteratively improves the
pseudo-labeling process. We evaluate our approach on multiple biomedical
imaging benchmarks, spanning multiple imaging modalities and segmentation
tasks. Experimental results show that our method consistently outperforms
state-of-the-art semi-supervised techniques, highlighting its effectiveness in
scenarios with limited annotated data. The code to replicate our experiments
can be found at
https://github.com/ciampluca/diffusion_semi_supervised_biomedical_image_segmentation
|
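The core student-update rule described above (ground-truth labels when available, teacher pseudo-labels otherwise) can be sketched as a single PyTorch training step. This is a simplified illustration: the paper's teacher is DDPM-based, whereas `teacher` and `student` here are arbitrary placeholder segmentation networks.

# Minimal sketch of one teacher-student co-training step (illustration only).
import torch
import torch.nn.functional as F

def co_training_step(student, teacher, images, masks, labeled, optimizer):
    """labeled: bool tensor of shape (B,) marking samples with ground truth."""
    with torch.no_grad():
        # Teacher produces binary pseudo-masks for all samples.
        pseudo = (torch.sigmoid(teacher(images)) > 0.5).float()
    # Use ground-truth masks where available, pseudo-labels otherwise.
    targets = torch.where(labeled.view(-1, 1, 1, 1), masks, pseudo)
    loss = F.binary_cross_entropy_with_logits(student(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()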
2504.01557 | Shujing Wang | Shujing Wang (1), Selasi Kwashie (2), Michael Bewong (3), Junwei Hu
(1), Vincent M. Nofong (4), Shiqi Miao (1), Zaiwen Feng (1) ((1) Huazhong
Agricultural University, Wuhan, China (2) AI & Cyber Futures Institute,
Charles Sturt University, Australia (3) School of Computing, Mathematics and
Engineering, Charles Sturt University, Australia (4) Department of Computer
Science and Engineering, University of Mines and Technology, Ghana) | FastER: Fast On-Demand Entity Resolution in Property Graphs | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Entity resolution (ER) is the problem of identifying and linking database
records that refer to the same real-world entity. Traditional ER methods use
batch processing, which becomes impractical with growing data volumes due to
high computational costs and lack of real-time capabilities. In many
applications, users need to resolve entities for only a small portion of their
data, making full data processing unnecessary -- a scenario known as
"ER-on-demand". This paper proposes FastER, an efficient ER-on-demand framework
for property graphs. Our approach uses graph differential dependencies (GDDs)
as a knowledge encoding language to design effective filtering mechanisms that
leverage both structural and attribute semantics of graphs. We construct a
blocking graph from filtered subgraphs to reduce the number of candidate entity
pairs requiring comparison. Additionally, FastER incorporates Progressive
Profile Scheduling (PPS), allowing the system to incrementally produce results
throughout the resolution process. Extensive evaluations on multiple benchmark
datasets demonstrate that FastER significantly outperforms state-of-the-art ER
methods in computational efficiency and real-time processing for on-demand
tasks while ensuring reliability. We make FastER publicly available at:
https://anonymous.4open.science/r/On_Demand_Entity_Resolution-9DFB
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:58:38 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Shujing",
""
],
[
"Kwashie",
"Selasi",
""
],
[
"Bewong",
"Michael",
""
],
[
"Hu",
"Junwei",
""
],
[
"Nofong",
"Vincent M.",
""
],
[
"Miao",
"Shiqi",
""
],
[
"Feng",
"Zaiwen",
""
]
] | TITLE: FastER: Fast On-Demand Entity Resolution in Property Graphs
ABSTRACT: Entity resolution (ER) is the problem of identifying and linking database
records that refer to the same real-world entity. Traditional ER methods use
batch processing, which becomes impractical with growing data volumes due to
high computational costs and lack of real-time capabilities. In many
applications, users need to resolve entities for only a small portion of their
data, making full data processing unnecessary -- a scenario known as
"ER-on-demand". This paper proposes FastER, an efficient ER-on-demand framework
for property graphs. Our approach uses graph differential dependencies (GDDs)
as a knowledge encoding language to design effective filtering mechanisms that
leverage both structural and attribute semantics of graphs. We construct a
blocking graph from filtered subgraphs to reduce the number of candidate entity
pairs requiring comparison. Additionally, FastER incorporates Progressive
Profile Scheduling (PPS), allowing the system to incrementally produce results
throughout the resolution process. Extensive evaluations on multiple benchmark
datasets demonstrate that FastER significantly outperforms state-of-the-art ER
methods in computational efficiency and real-time processing for on-demand
tasks while ensuring reliability. We make FastER publicly available at:
https://anonymous.4open.science/r/On_Demand_Entity_Resolution-9DFB
|
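Blocking, the step FastER uses to cut down candidate entity pairs, is easy to illustrate in isolation. The sketch below groups records by a toy blocking key and compares only within-block pairs; FastER itself derives its blocking graph from GDD-filtered subgraphs, which this version does not attempt.

# Minimal sketch of blocking for entity resolution (illustration only).
from collections import defaultdict
from itertools import combinations

def candidate_pairs(records, blocking_key):
    blocks = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec)].append(rec["id"])
    pairs = set()
    for ids in blocks.values():
        # Only records sharing a block are ever compared.
        pairs.update(combinations(sorted(ids), 2))
    return pairs

people = [
    {"id": 1, "name": "Ada Lovelace"},
    {"id": 2, "name": "Ada Lovelace."},
    {"id": 3, "name": "Alan Turing"},
]
# Block on the first three letters of the name: only (1, 2) is compared.
print(candidate_pairs(people, lambda r: r["name"][:3].lower()))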
2504.01559 | Yahui Li | Yahui Li, Zhi Zeng, Liming Pang, Guixuan Zhang, Shuwu Zhang | RealityAvatar: Towards Realistic Loose Clothing Modeling in Animatable
3D Gaussian Avatars | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling animatable human avatars from monocular or multi-view videos has
been widely studied, with recent approaches leveraging neural radiance fields
(NeRFs) or 3D Gaussian Splatting (3DGS) achieving impressive results in
novel-view and novel-pose synthesis. However, existing methods often struggle
to accurately capture the dynamics of loose clothing, as they primarily rely on
global pose conditioning or static per-frame representations, leading to
oversmoothing and temporal inconsistencies in non-rigid regions. To address
this, We propose RealityAvatar, an efficient framework for high-fidelity
digital human modeling, specifically targeting loosely dressed avatars. Our
method leverages 3D Gaussian Splatting to capture complex clothing deformations
and motion dynamics while ensuring geometric consistency. By incorporating a
motion trend module and a latent bone encoder, we explicitly model
pose-dependent deformations and temporal variations in clothing behavior.
Extensive experiments on benchmark datasets demonstrate the effectiveness of
our approach in capturing fine-grained clothing deformations and motion-driven
shape variations. Our method significantly enhances structural fidelity and
perceptual quality in dynamic human reconstruction, particularly in non-rigid
regions, while achieving better consistency across temporal frames.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:59:12 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Yahui",
""
],
[
"Zeng",
"Zhi",
""
],
[
"Pang",
"Liming",
""
],
[
"Zhang",
"Guixuan",
""
],
[
"Zhang",
"Shuwu",
""
]
] | TITLE: RealityAvatar: Towards Realistic Loose Clothing Modeling in Animatable
3D Gaussian Avatars
ABSTRACT: Modeling animatable human avatars from monocular or multi-view videos has
been widely studied, with recent approaches leveraging neural radiance fields
(NeRFs) or 3D Gaussian Splatting (3DGS) achieving impressive results in
novel-view and novel-pose synthesis. However, existing methods often struggle
to accurately capture the dynamics of loose clothing, as they primarily rely on
global pose conditioning or static per-frame representations, leading to
oversmoothing and temporal inconsistencies in non-rigid regions. To address
this, we propose RealityAvatar, an efficient framework for high-fidelity
digital human modeling, specifically targeting loosely dressed avatars. Our
method leverages 3D Gaussian Splatting to capture complex clothing deformations
and motion dynamics while ensuring geometric consistency. By incorporating a
motion trend module and a latent bone encoder, we explicitly model
pose-dependent deformations and temporal variations in clothing behavior.
Extensive experiments on benchmark datasets demonstrate the effectiveness of
our approach in capturing fine-grained clothing deformations and motion-driven
shape variations. Our method significantly enhances structural fidelity and
perceptual quality in dynamic human reconstruction, particularly in non-rigid
regions, while achieving better consistency across temporal frames.
|
2504.01561 | Dandan Shan | Dandan Shan and Zihan Li and Yunxiang Li and Qingde Li and Jie Tian
and Qingqi Hong | STPNet: Scale-aware Text Prompt Network for Medical Image Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate segmentation of lesions plays a critical role in medical image
analysis and diagnosis. Traditional segmentation approaches that rely solely on
visual features often struggle with the inherent uncertainty in lesion
distribution and size. To address these issues, we propose STPNet, a
Scale-aware Text Prompt Network that leverages vision-language modeling to
enhance medical image segmentation. Our approach utilizes multi-scale textual
descriptions to guide lesion localization and employs retrieval-segmentation
joint learning to bridge the semantic gap between visual and linguistic
modalities. Crucially, STPNet retrieves relevant textual information from a
specialized medical text repository during training, eliminating the need for
text input during inference while retaining the benefits of cross-modal
learning. We evaluate STPNet on three datasets: COVID-Xray, COVID-CT, and
Kvasir-SEG. Experimental results show that our vision-language approach
outperforms state-of-the-art segmentation methods, demonstrating the
effectiveness of incorporating textual semantic knowledge into medical image
analysis. The code has been made publicly available at
https://github.com/HUANGLIZI/STPNet.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:01:42 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shan",
"Dandan",
""
],
[
"Li",
"Zihan",
""
],
[
"Li",
"Yunxiang",
""
],
[
"Li",
"Qingde",
""
],
[
"Tian",
"Jie",
""
],
[
"Hong",
"Qingqi",
""
]
] | TITLE: STPNet: Scale-aware Text Prompt Network for Medical Image Segmentation
ABSTRACT: Accurate segmentation of lesions plays a critical role in medical image
analysis and diagnosis. Traditional segmentation approaches that rely solely on
visual features often struggle with the inherent uncertainty in lesion
distribution and size. To address these issues, we propose STPNet, a
Scale-aware Text Prompt Network that leverages vision-language modeling to
enhance medical image segmentation. Our approach utilizes multi-scale textual
descriptions to guide lesion localization and employs retrieval-segmentation
joint learning to bridge the semantic gap between visual and linguistic
modalities. Crucially, STPNet retrieves relevant textual information from a
specialized medical text repository during training, eliminating the need for
text input during inference while retaining the benefits of cross-modal
learning. We evaluate STPNet on three datasets: COVID-Xray, COVID-CT, and
Kvasir-SEG. Experimental results show that our vision-language approach
outperforms state-of-the-art segmentation methods, demonstrating the
effectiveness of incorporating textual semantic knowledge into medical image
analysis. The code has been made publicly available at
https://github.com/HUANGLIZI/STPNet.
|
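The retrieval step described above, fetching relevant text from a repository by similarity during training, can be sketched with plain cosine similarity. The vectors below are toy placeholders for the learned multi-scale embeddings STPNet actually uses.

# Minimal sketch of retrieving text prompts by cosine similarity
# (illustration only; toy vectors stand in for learned embeddings).
import numpy as np

def retrieve(query_vec, repo_vecs, repo_texts, k=1):
    q = query_vec / np.linalg.norm(query_vec)
    r = repo_vecs / np.linalg.norm(repo_vecs, axis=1, keepdims=True)
    scores = r @ q  # cosine similarity against every repository entry
    top = np.argsort(scores)[::-1][:k]
    return [(repo_texts[i], float(scores[i])) for i in top]

repo_texts = ["small peripheral lesion", "large diffuse opacity"]
repo_vecs = np.array([[0.9, 0.1], [0.2, 0.8]])
print(retrieve(np.array([0.8, 0.3]), repo_vecs, repo_texts))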
2504.01577 | Lirui Qi | Lirui Qi, Hongliang He, Tong Wang, Siwei Feng, Guohong Fu | Instance Migration Diffusion for Nuclear Instance Segmentation in
Pathology | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nuclear instance segmentation plays a vital role in disease diagnosis within
digital pathology. However, limited labeled data in pathological images
restricts the overall performance of nuclear instance segmentation. To tackle
this challenge, we propose a novel data augmentation framework, the Instance
Migration Diffusion Model (IM-Diffusion), designed to generate
more varied pathological images by constructing diverse nuclear layouts and
internuclear spatial relationships. In detail, we introduce a Nuclear Migration
Module (NMM) which constructs diverse nuclear layouts by simulating the process
of nuclear migration. Building on this, we further present an
Internuclear-regions Inpainting Module (IIM) to generate diverse internuclear
spatial relationships by structure-aware inpainting. On the basis of the above,
IM-Diffusion generates more diverse pathological images with different layouts
and internuclear spatial relationships, thereby facilitating downstream tasks.
Evaluations on the CoNSeP and GLySAC datasets demonstrate that the images
generated by IM-Diffusion effectively enhance overall instance segmentation
performance. Code will be made public later.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:29:31 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Qi",
"Lirui",
""
],
[
"He",
"Hongliang",
""
],
[
"Wang",
"Tong",
""
],
[
"Feng",
"Siwei",
""
],
[
"Fu",
"Guohong",
""
]
] | TITLE: Instance Migration Diffusion for Nuclear Instance Segmentation in
Pathology
ABSTRACT: Nuclear instance segmentation plays a vital role in disease diagnosis within
digital pathology. However, limited labeled data in pathological images
restricts the overall performance of nuclear instance segmentation. To tackle
this challenge, we propose a novel data augmentation framework, the Instance
Migration Diffusion Model (IM-Diffusion), designed to generate
more varied pathological images by constructing diverse nuclear layouts and
internuclear spatial relationships. In detail, we introduce a Nuclear Migration
Module (NMM) which constructs diverse nuclear layouts by simulating the process
of nuclear migration. Building on this, we further present an
Internuclear-regions Inpainting Module (IIM) to generate diverse internuclear
spatial relationships by structure-aware inpainting. On the basis of the above,
IM-Diffusion generates more diverse pathological images with different layouts
and internuclear spatial relationships, thereby facilitating downstream tasks.
Evaluations on the CoNSeP and GLySAC datasets demonstrate that the images
generated by IM-Diffusion effectively enhance overall instance segmentation
performance. Code will be made public later.
|
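The "migration" idea, relocating a nuclear instance to create a new layout, can be illustrated directly on a labelled mask. The sketch below only translates one instance; IM-Diffusion additionally inpaints the internuclear regions with a diffusion model, which is omitted here.

# Minimal sketch of migrating a nuclear instance inside a labelled mask
# (illustration only).
import numpy as np

def migrate_instance(mask, instance_id, shift):
    """Shift one labelled instance by (dy, dx), clearing its old location."""
    ys, xs = np.nonzero(mask == instance_id)
    out = mask.copy()
    out[ys, xs] = 0
    ny = np.clip(ys + shift[0], 0, mask.shape[0] - 1)
    nx = np.clip(xs + shift[1], 0, mask.shape[1] - 1)
    out[ny, nx] = instance_id
    return out

mask = np.zeros((8, 8), dtype=int)
mask[2:4, 2:4] = 1  # one toy nucleus
print(migrate_instance(mask, 1, (3, 3)))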
2504.01588 | Giulia Belgiovine | Luca Garello, Giulia Belgiovine, Gabriele Russo, Francesco Rea,
Alessandra Sciutti | Building Knowledge from Interactions: An LLM-Based Architecture for
Adaptive Tutoring and Social Reasoning | Submitted to IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS) 2025 | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Integrating robotics into everyday scenarios like tutoring or physical
training requires robots capable of adaptive, socially engaging, and
goal-oriented interactions. While Large Language Models show promise in
human-like communication, their standalone use is hindered by memory
constraints and contextual incoherence. This work presents a multimodal,
cognitively inspired framework that enhances LLM-based autonomous
decision-making in social and task-oriented Human-Robot Interaction.
Specifically, we develop an LLM-based agent for a robot trainer, balancing
social conversation with task guidance and goal-driven motivation. To further
enhance autonomy and personalization, we introduce a memory system for
selecting, storing and retrieving experiences, facilitating generalized
reasoning based on knowledge built across different interactions. A preliminary
HRI user study and offline experiments with a synthetic dataset validate our
approach, demonstrating the system's ability to manage complex interactions,
autonomously drive training tasks, and build and retrieve contextual memories,
advancing socially intelligent robotics.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:45:41 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Garello",
"Luca",
""
],
[
"Belgiovine",
"Giulia",
""
],
[
"Russo",
"Gabriele",
""
],
[
"Rea",
"Francesco",
""
],
[
"Sciutti",
"Alessandra",
""
]
] | TITLE: Building Knowledge from Interactions: An LLM-Based Architecture for
Adaptive Tutoring and Social Reasoning
ABSTRACT: Integrating robotics into everyday scenarios like tutoring or physical
training requires robots capable of adaptive, socially engaging, and
goal-oriented interactions. While Large Language Models show promise in
human-like communication, their standalone use is hindered by memory
constraints and contextual incoherence. This work presents a multimodal,
cognitively inspired framework that enhances LLM-based autonomous
decision-making in social and task-oriented Human-Robot Interaction.
Specifically, we develop an LLM-based agent for a robot trainer, balancing
social conversation with task guidance and goal-driven motivation. To further
enhance autonomy and personalization, we introduce a memory system for
selecting, storing and retrieving experiences, facilitating generalized
reasoning based on knowledge built across different interactions. A preliminary
HRI user study and offline experiments with a synthetic dataset validate our
approach, demonstrating the system's ability to manage complex interactions,
autonomously drive training tasks, and build and retrieve contextual memories,
advancing socially intelligent robotics.
|
2504.01593 | Martin Weigt | Francesco Calvanese, Giovanni Peinetti, Polina Pavlinova, Philippe
Nghe and Martin Weigt | Integrating experimental feedback improves generative models for
biological sequences | single document containing supplemental information | null | null | null | q-bio.BM physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generative probabilistic models have shown promise in designing artificial
RNA and protein sequences but often suffer from high rates of false positives,
where sequences predicted as functional fail experimental validation. To
address this critical limitation, we explore the impact of reintegrating
experimental feedback into the model design process. We propose a
likelihood-based reintegration scheme, which we test through extensive
computational experiments on both RNA and protein datasets, as well as through
wet-lab experiments on the self-splicing ribozyme from the group I intron RNA
family where our approach demonstrates particular efficacy. We show that
integrating recent experimental data enhances the model's capacity of
generating functional sequences (e.g. from 6.7\% to 63.7\% of active designs at
45 mutations). This feedback-driven approach thus provides a significant
improvement in the design of biomolecular sequences by directly tackling the
false-positive challenge.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:57:53 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Calvanese",
"Francesco",
""
],
[
"Peinetti",
"Giovanni",
""
],
[
"Pavlinova",
"Polina",
""
],
[
"Nghe",
"Philippe",
""
],
[
"Weigt",
"Martin",
""
]
] | TITLE: Integrating experimental feedback improves generative models for
biological sequences
ABSTRACT: Generative probabilistic models have shown promise in designing artificial
RNA and protein sequences but often suffer from high rates of false positives,
where sequences predicted as functional fail experimental validation. To
address this critical limitation, we explore the impact of reintegrating
experimental feedback into the model design process. We propose a
likelihood-based reintegration scheme, which we test through extensive
computational experiments on both RNA and protein datasets, as well as through
wet-lab experiments on the self-splicing ribozyme from the group I intron RNA
family, where our approach demonstrates particular efficacy. We show that
integrating recent experimental data enhances the model's capacity to
generate functional sequences (e.g. from 6.7\% to 63.7\% of active designs at
45 mutations). This feedback-driven approach thus provides a significant
improvement in the design of biomolecular sequences by directly tackling the
false-positive challenge.
|
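For intuition, reintegrating experimental feedback can be illustrated in its crudest form: post-hoc reweighting of model scores by measured activity. The paper's likelihood-based scheme retrains the generative model itself, so the sketch below, with its hypothetical feedback weight lambda_fb, is only a conceptual stand-in.

# Minimal sketch of reweighting generative-model scores with experimental
# feedback (illustration only; lambda_fb is a hypothetical weight).
import numpy as np

def reweighted_log_scores(model_logp, measured_active, lambda_fb=2.0):
    """measured_active: +1 active, -1 inactive, 0 untested."""
    return model_logp + lambda_fb * measured_active

logp = np.array([-10.0, -12.0, -11.0])
feedback = np.array([-1, +1, 0])  # design 0 failed, design 1 worked
print(reweighted_log_scores(logp, feedback))  # favours tested-active designs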
2504.01596 | Jijun Xiang | Jijun Xiang, Xuan Zhu, Xianqi Wang, Yu Wang, Hong Zhang, Fei Guo, Xin
Yang | DEPTHOR: Depth Enhancement from a Practical Light-Weight dToF Sensor and
RGB Image | 10 pages, 8 figures, 7 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depth enhancement, which uses RGB images as guidance to convert raw signals
from dToF into high-precision, dense depth maps, is a critical task in computer
vision. Although existing super-resolution-based methods show promising results
on public datasets, they often rely on idealized assumptions like accurate
region correspondences and reliable dToF inputs, overlooking calibration errors
that cause misalignment and anomaly signals inherent to dToF imaging, limiting
real-world applicability. To address these challenges, we propose a novel
completion-based method, named DEPTHOR, featuring advances in both the training
strategy and model architecture. First, we propose a method to simulate
real-world dToF data from the accurate ground truth in synthetic datasets to
enable noise-robust training. Second, we design a novel network that
incorporates monocular depth estimation (MDE), leveraging global depth
relationships and contextual information to improve prediction in challenging
regions. On the ZJU-L5 dataset, our training strategy significantly enhances
depth completion models, achieving results comparable to depth super-resolution
methods, while our model achieves state-of-the-art results, improving Rel and
RMSE by 27% and 18%, respectively. On a more challenging set of dToF samples we
collected, our method outperforms SOTA methods on preliminary stereo-based GT,
improving Rel and RMSE by 23% and 22%, respectively. Our code is available at
https://github.com/ShadowBbBb/Depthor
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:02:21 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Xiang",
"Jijun",
""
],
[
"Zhu",
"Xuan",
""
],
[
"Wang",
"Xianqi",
""
],
[
"Wang",
"Yu",
""
],
[
"Zhang",
"Hong",
""
],
[
"Guo",
"Fei",
""
],
[
"Yang",
"Xin",
""
]
] | TITLE: DEPTHOR: Depth Enhancement from a Practical Light-Weight dToF Sensor and
RGB Image
ABSTRACT: Depth enhancement, which uses RGB images as guidance to convert raw signals
from dToF into high-precision, dense depth maps, is a critical task in computer
vision. Although existing super-resolution-based methods show promising results
on public datasets, they often rely on idealized assumptions like accurate
region correspondences and reliable dToF inputs, overlooking calibration errors
that cause misalignment and anomaly signals inherent to dToF imaging, limiting
real-world applicability. To address these challenges, we propose a novel
completion-based method, named DEPTHOR, featuring advances in both the training
strategy and model architecture. First, we propose a method to simulate
real-world dToF data from the accurate ground truth in synthetic datasets to
enable noise-robust training. Second, we design a novel network that
incorporates monocular depth estimation (MDE), leveraging global depth
relationships and contextual information to improve prediction in challenging
regions. On the ZJU-L5 dataset, our training strategy significantly enhances
depth completion models, achieving results comparable to depth super-resolution
methods, while our model achieves state-of-the-art results, improving Rel and
RMSE by 27% and 18%, respectively. On a more challenging set of dToF samples we
collected, our method outperforms SOTA methods on preliminary stereo-based GT,
improving Rel and RMSE by 23% and 22%, respectively. Our code is available at
https://github.com/ShadowBbBb/Depthor
|
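The first contribution above, simulating real-world dToF signals from dense ground truth, can be sketched as zone-wise averaging plus noise and dropout. The zone size and noise parameters below are hypothetical; the paper's simulation also models calibration misalignment and anomaly signals.

# Minimal sketch of simulating a light-weight dToF signal from dense
# ground-truth depth (illustration only; hypothetical parameters).
import numpy as np

def simulate_dtof(depth, zone=8, noise_std=0.02, dropout=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = depth.shape[0] // zone, depth.shape[1] // zone
    zones = depth[: h * zone, : w * zone].reshape(h, zone, w, zone)
    coarse = zones.mean(axis=(1, 3))                    # one value per zone
    coarse += rng.normal(0, noise_std, coarse.shape)    # sensor noise
    coarse[rng.random(coarse.shape) < dropout] = 0.0    # missing returns
    return coarse

gt = np.linspace(1.0, 3.0, 64 * 64).reshape(64, 64)
print(simulate_dtof(gt).shape)  # (8, 8) coarse zone map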
2504.01597 | Dandan Shan | Yuehui Qiu and Dandan Shan and Yining Wang and Pei Dong and Dijia Wu
and Xinnian Yang and Qingqi Hong and Dinggang Shen | A topology-preserving three-stage framework for fully-connected coronary
artery extraction | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronary artery extraction is a crucial prerequisite for computer-aided
diagnosis of coronary artery disease. Accurately extracting the complete
coronary tree remains challenging due to several factors, including the presence of
thin distal vessels, tortuous topological structures, and insufficient
contrast. These issues often result in over-segmentation and under-segmentation
in current segmentation methods. To address these challenges, we propose a
topology-preserving three-stage framework for fully-connected coronary artery
extraction. This framework includes vessel segmentation, centerline
reconnection, and missing vessel reconstruction. First, we introduce a new
centerline enhanced loss in the segmentation process. Second, for the broken
vessel segments, we further propose a regularized walk algorithm to integrate
distance, probabilities predicted by a centerline classifier, and directional
cosine similarity, for reconnecting the centerlines. Third, we apply implicit
neural representation and implicit modeling to reconstruct the geometric model
of the missing vessels. Experimental results show that our proposed framework
outperforms existing methods, achieving Dice scores of 88.53\% and 85.07\%,
with Hausdorff Distances (HD) of 1.07mm and 1.63mm on ASOCA and PDSCA datasets,
respectively. Code will be available at https://github.com/YH-Qiu/CorSegRec.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:04:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Qiu",
"Yuehui",
""
],
[
"Shan",
"Dandan",
""
],
[
"Wang",
"Yining",
""
],
[
"Dong",
"Pei",
""
],
[
"Wu",
"Dijia",
""
],
[
"Yang",
"Xinnian",
""
],
[
"Hong",
"Qingqi",
""
],
[
"Shen",
"Dinggang",
""
]
] | TITLE: A topology-preserving three-stage framework for fully-connected coronary
artery extraction
ABSTRACT: Coronary artery extraction is a crucial prerequisite for computer-aided
diagnosis of coronary artery disease. Accurately extracting the complete
coronary tree remains challenging due to several factors, including the presence of
thin distal vessels, tortuous topological structures, and insufficient
contrast. These issues often result in over-segmentation and under-segmentation
in current segmentation methods. To address these challenges, we propose a
topology-preserving three-stage framework for fully-connected coronary artery
extraction. This framework includes vessel segmentation, centerline
reconnection, and missing vessel reconstruction. First, we introduce a new
centerline enhanced loss in the segmentation process. Second, for the broken
vessel segments, we further propose a regularized walk algorithm to integrate
distance, probabilities predicted by a centerline classifier, and directional
cosine similarity, for reconnecting the centerlines. Third, we apply implicit
neural representation and implicit modeling to reconstruct the geometric model
of the missing vessels. Experimental results show that our proposed framework
outperforms existing methods, achieving Dice scores of 88.53\% and 85.07\%,
with Hausdorff Distances (HD) of 1.07mm and 1.63mm on ASOCA and PDSCA datasets,
respectively. Code will be available at https://github.com/YH-Qiu/CorSegRec.
|
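The regularized walk's scoring idea, trading off distance, centerline probability, and directional cosine similarity, can be sketched as a single candidate-scoring function. The weights and the probability input below are hypothetical stand-ins for the paper's centerline classifier.

# Minimal sketch of candidate scoring for centerline reconnection
# (illustration only; hypothetical weights and probability source).
import numpy as np

def step_score(cur, prev, cand, prob, w=(1.0, 1.0, 1.0)):
    """Score a candidate next voxel: near, likely centerline, and straight."""
    dist = np.linalg.norm(cand - cur)
    d_old = (cur - prev) / (np.linalg.norm(cur - prev) + 1e-8)
    d_new = (cand - cur) / (dist + 1e-8)
    cos_sim = float(d_old @ d_new)  # rewards continuing in the same direction
    return -w[0] * dist + w[1] * prob + w[2] * cos_sim

cur, prev = np.array([5.0, 5, 5]), np.array([4.0, 5, 5])
print(step_score(cur, prev, np.array([6.0, 5, 5]), prob=0.9))  # straight ahead
print(step_score(cur, prev, np.array([5.0, 6, 5]), prob=0.9))  # 90-degree turn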
2504.01602 | Zihan Lin | Changshuo Zhang, Zihan Lin, Shukai Liu, Yongqi Liu, Han Li | Comment Staytime Prediction with LLM-enhanced Comment Understanding | Accepted by WWW 2025 Industry Track | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In modern online streaming platforms, the comments section plays a critical
role in enhancing the overall user experience. Understanding user behavior
within the comments section is essential for comprehensive user interest
modeling. A key factor of user engagement is staytime, which refers to the
amount of time that users browse and post comments. Existing watchtime
prediction methods struggle to adapt to staytime prediction, overlooking
interactions with individual comments and their interrelation. In this paper,
we present a micro-video recommendation dataset with video comments (named
KuaiComt), which is collected from the Kuaishou platform. Correspondingly, we
propose a practical framework for comment staytime prediction with LLM-enhanced
Comment Understanding (LCU). Our framework leverages the strong text
comprehension capabilities of large language models (LLMs) to understand
textual information of comments, while also incorporating fine-grained comment
ranking signals as auxiliary tasks. The framework is two-staged: first, the LLM
is fine-tuned using domain-specific tasks to bridge the video and the comments;
second, we incorporate the LLM outputs into the prediction model and design two
comment ranking auxiliary tasks to better understand user preference. Extensive
offline experiments demonstrate the effectiveness of our framework, showing
significant improvements on the task of comment staytime prediction.
Additionally, online A/B testing further validates the practical benefits in
industrial scenarios. Our dataset KuaiComt
(https://github.com/lyingCS/KuaiComt.github.io) and code for LCU
(https://github.com/lyingCS/LCU) are fully released.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:09:18 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Zhang",
"Changshuo",
""
],
[
"Lin",
"Zihan",
""
],
[
"Liu",
"Shukai",
""
],
[
"Liu",
"Yongqi",
""
],
[
"Li",
"Han",
""
]
] | TITLE: Comment Staytime Prediction with LLM-enhanced Comment Understanding
ABSTRACT: In modern online streaming platforms, the comments section plays a critical
role in enhancing the overall user experience. Understanding user behavior
within the comments section is essential for comprehensive user interest
modeling. A key factor of user engagement is staytime, which refers to the
amount of time that users browse and post comments. Existing watchtime
prediction methods struggle to adapt to staytime prediction, overlooking
interactions with individual comments and their interrelation. In this paper,
we present a micro-video recommendation dataset with video comments (named
KuaiComt), which is collected from the Kuaishou platform. Correspondingly, we
propose a practical framework for comment staytime prediction with LLM-enhanced
Comment Understanding (LCU). Our framework leverages the strong text
comprehension capabilities of large language models (LLMs) to understand
textual information of comments, while also incorporating fine-grained comment
ranking signals as auxiliary tasks. The framework is two-staged: first, the LLM
is fine-tuned using domain-specific tasks to bridge the video and the comments;
second, we incorporate the LLM outputs into the prediction model and design two
comment ranking auxiliary tasks to better understand user preference. Extensive
offline experiments demonstrate the effectiveness of our framework, showing
significant improvements on the task of comment staytime prediction.
Additionally, online A/B testing further validates the practical benefits in
industrial scenarios. Our dataset KuaiComt
(https://github.com/lyingCS/KuaiComt.github.io) and code for LCU
(https://github.com/lyingCS/LCU) are fully released.
|
2504.01605 | Renda Han | Renda Han, Guangzhen Yao, Wenxin Zhang, Yu Li, Wen Xin, Huajie Lei,
Mengfei Li, Zeyu Zhang, Chengze Du, and Yahe Tian | Multi-Relation Graph-Kernel Strengthen Network for Graph-Level
Clustering | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph-level clustering is a fundamental task of data mining, aiming at
dividing unlabeled graphs into distinct groups. However, existing deep methods
that are limited by pooling have difficulty extracting diverse and complex
graph structure features, while traditional graph kernel methods rely on
exhaustive substructure search, and are unable to adaptively handle multi-relational
data. This limitation hampers producing robust and representative graph-level
embeddings. To address this issue, we propose a novel Multi-Relation
Graph-Kernel Strengthen Network for Graph-Level Clustering (MGSN), which
integrates multi-relation modeling with graph kernel techniques to fully
leverage their respective advantages. Specifically, MGSN constructs
multi-relation graphs to capture diverse semantic relationships between nodes
and graphs, and employs graph kernel methods to extract graph similarity
features, enriching the representation space. Moreover, a relation-aware
representation refinement strategy is designed, which adaptively aligns
multi-relation information across views while enhancing graph-level features
through a progressive fusion process. Extensive experiments on multiple
benchmark datasets demonstrate the superiority of MGSN over state-of-the-art
methods. The results highlight its ability to leverage multi-relation
structures and graph kernel features, establishing a new paradigm for robust
graph-level clustering.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:17:15 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Han",
"Renda",
""
],
[
"Yao",
"Guangzhen",
""
],
[
"Zhang",
"Wenxin",
""
],
[
"Li",
"Yu",
""
],
[
"Xin",
"Wen",
""
],
[
"Lei",
"Huajie",
""
],
[
"Li",
"Mengfei",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Du",
"Chengze",
""
],
[
"Tian",
"Yahe",
""
]
] | TITLE: Multi-Relation Graph-Kernel Strengthen Network for Graph-Level
Clustering
ABSTRACT: Graph-level clustering is a fundamental task of data mining, aiming at
dividing unlabeled graphs into distinct groups. However, existing deep methods
that are limited by pooling have difficulty extracting diverse and complex
graph structure features, while traditional graph kernel methods rely on
exhaustive substructure search, and are unable to adaptively handle multi-relational
data. This limitation hampers producing robust and representative graph-level
embeddings. To address this issue, we propose a novel Multi-Relation
Graph-Kernel Strengthen Network for Graph-Level Clustering (MGSN), which
integrates multi-relation modeling with graph kernel techniques to fully
leverage their respective advantages. Specifically, MGSN constructs
multi-relation graphs to capture diverse semantic relationships between nodes
and graphs, and employs graph kernel methods to extract graph similarity
features, enriching the representation space. Moreover, a relation-aware
representation refinement strategy is designed, which adaptively aligns
multi-relation information across views while enhancing graph-level features
through a progressive fusion process. Extensive experiments on multiple
benchmark datasets demonstrate the superiority of MGSN over state-of-the-art
methods. The results highlight its ability to leverage multi-relation
structures and graph kernel features, establishing a new paradigm for robust
graph-level clustering.
|
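For readers unfamiliar with graph kernels: a minimal Weisfeiler-Lehman subtree kernel between two labelled graphs fits in a few lines. This toy version illustrates only the kernel-similarity ingredient; MGSN fuses such features with learned multi-relation representations.

# Minimal sketch of a Weisfeiler-Lehman subtree kernel (illustration only).
from collections import Counter

def wl_kernel(adj1, labels1, adj2, labels2, iters=2):
    def refine(adj, labels):
        hists = Counter(labels)
        cur = list(labels)
        for _ in range(iters):
            # Relabel each node by its own label plus its sorted neighbourhood.
            cur = [hash((cur[v], tuple(sorted(cur[u] for u in adj[v]))))
                   for v in range(len(adj))]
            hists.update(cur)
        return hists
    h1, h2 = refine(adj1, labels1), refine(adj2, labels2)
    return sum(h1[k] * h2[k] for k in h1)  # dot product of label histograms

tri = [[1, 2], [0, 2], [0, 1]]   # triangle, adjacency lists
path = [[1], [0, 2], [1]]        # 3-node path
print(wl_kernel(tri, "AAA", tri, "AAA"),   # identical graphs score highest
      wl_kernel(tri, "AAA", path, "AAA"))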
2504.01619 | Hao Wu | Hao Wu, Hao Wang, Ruochong Li, Xuran Ma, Hui Xiong | 3DBonsai: Structure-Aware Bonsai Modeling Using Conditioned 3D Gaussian
Splatting | Accepted by ICME 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in text-to-3D generation have shown remarkable results by
leveraging 3D priors in combination with 2D diffusion. However, previous
methods utilize 3D priors that lack detailed and complex structural
information, limiting them to generating simple objects and presenting
challenges for creating intricate structures such as bonsai. In this paper, we
propose 3DBonsai, a novel text-to-3D framework for generating 3D bonsai with
complex structures. Technically, we first design a trainable 3D space
colonization algorithm to produce bonsai structures, which are then enhanced
through random sampling and point cloud augmentation to serve as the 3D
Gaussian priors. We introduce two bonsai generation pipelines with distinct
structural levels: fine structure conditioned generation, which initializes 3D
Gaussians using a 3D structure prior in order to produce detailed and complex bonsai,
and coarse structure conditioned generation, which employs a multi-view
structure consistency module to align 2D and 3D structures. Moreover, we have
compiled a unified 2D and 3D Chinese-style bonsai dataset. Our experimental
results demonstrate that 3DBonsai significantly outperforms existing methods,
providing a new benchmark for structure-aware 3D bonsai generation.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:27:02 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wu",
"Hao",
""
],
[
"Wang",
"Hao",
""
],
[
"Li",
"Ruochong",
""
],
[
"Ma",
"Xuran",
""
],
[
"Xiong",
"Hui",
""
]
] | TITLE: 3DBonsai: Structure-Aware Bonsai Modeling Using Conditioned 3D Gaussian
Splatting
ABSTRACT: Recent advancements in text-to-3D generation have shown remarkable results by
leveraging 3D priors in combination with 2D diffusion. However, previous
methods utilize 3D priors that lack detailed and complex structural
information, limiting them to generating simple objects and presenting
challenges for creating intricate structures such as bonsai. In this paper, we
propose 3DBonsai, a novel text-to-3D framework for generating 3D bonsai with
complex structures. Technically, we first design a trainable 3D space
colonization algorithm to produce bonsai structures, which are then enhanced
through random sampling and point cloud augmentation to serve as the 3D
Gaussian priors. We introduce two bonsai generation pipelines with distinct
structural levels: fine structure conditioned generation, which initializes 3D
Gaussians using a 3D structure prior in order to produce detailed and complex bonsai,
and coarse structure conditioned generation, which employs a multi-view
structure consistency module to align 2D and 3D structures. Moreover, we have
compiled a unified 2D and 3D Chinese-style bonsai dataset. Our experimental
results demonstrate that 3DBonsai significantly outperforms existing methods,
providing a new benchmark for structure-aware 3D bonsai generation.
|
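The space colonization algorithm that 3DBonsai makes trainable can be illustrated in its classical, fixed-parameter form: branch nodes grow toward nearby attraction points, which are consumed once reached. The radii and step size below are hypothetical.

# Minimal sketch of one space colonization growth step (illustration only).
import numpy as np

def grow_step(nodes, attractors, influence=2.0, kill=0.5, step=0.3):
    new_nodes = []
    keep = np.ones(len(attractors), dtype=bool)
    for node in nodes:
        d = np.linalg.norm(attractors - node, axis=1)
        near = d < influence
        if near.any():
            # Average unit direction toward all attractors in range.
            pull = attractors[near] - node
            pull = (pull / np.linalg.norm(pull, axis=1, keepdims=True)).sum(axis=0)
            norm = np.linalg.norm(pull)
            if norm > 1e-8:
                new_nodes.append(node + step * pull / norm)
        keep &= d > kill  # drop attractors that any branch node has reached
    return nodes + new_nodes, attractors[keep]

rng = np.random.default_rng(0)
nodes, attractors = [np.zeros(3)], rng.uniform(-1.0, 1.0, (50, 3))
for _ in range(5):
    nodes, attractors = grow_step(nodes, attractors)
print(len(nodes), len(attractors))  # branch nodes grow, attractors are consumed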
2504.01627 | Lena Schmidt | Lena Schmidt, Oshin Sharma, Chris Marshall, Sonia Garcia Gonzalez
Moral | Horizon Scans can be accelerated using novel information retrieval and
artificial intelligence tools | null | null | null | null | cs.IR cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Introduction: Horizon scanning in healthcare assesses early signals of
innovation, crucial for timely adoption. Current horizon scanning faces
challenges in efficient information retrieval and analysis, especially from
unstructured sources like news, presenting a need for innovative tools.
Methodology: The study introduces SCANAR and AIDOC, open-source Python-based
tools designed to improve horizon scanning. SCANAR automates the retrieval and
processing of news articles, offering functionalities such as de-duplication
and unsupervised relevancy ranking. AIDOC aids filtration by leveraging AI to
reorder textual data based on relevancy, employing neural networks for semantic
similarity, and subsequently prioritizing likely relevant entries for human
review. Results: Twelve internal datasets from horizon scans and four external
benchmarking datasets were used. SCANAR improved retrieval efficiency by
automating processes previously dependent on manual labour. AIDOC displayed
work-saving potential, achieving around 62% reduction in manual review efforts
at 95% recall. Comparative analysis with benchmarking data showed AIDOC's
performance was similar to existing systematic review automation tools, though
performance varied depending on dataset characteristics. A smaller case-study
on our news datasets shows the potential of ensembling large language models
within the active-learning process for faster detection of relevant articles
across news datasets. Conclusion: The validation indicates that SCANAR and
AIDOC show potential to enhance horizon scanning efficiency by streamlining
data retrieval and prioritisation. These tools may alleviate methodological
limitations and allow broader, swifter horizon scans. Further studies are
suggested to optimize these models and to design new workflows and validation
processes that integrate large language models.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:33:08 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Schmidt",
"Lena",
""
],
[
"Sharma",
"Oshin",
""
],
[
"Marshall",
"Chris",
""
],
[
"Moral",
"Sonia Garcia Gonzalez",
""
]
] | TITLE: Horizon Scans can be accelerated using novel information retrieval and
artificial intelligence tools
ABSTRACT: Introduction: Horizon scanning in healthcare assesses early signals of
innovation, crucial for timely adoption. Current horizon scanning faces
challenges in efficient information retrieval and analysis, especially from
unstructured sources like news, presenting a need for innovative tools.
Methodology: The study introduces SCANAR and AIDOC, open-source Python-based
tools designed to improve horizon scanning. SCANAR automates the retrieval and
processing of news articles, offering functionalities such as de-duplication
and unsupervised relevancy ranking. AIDOC aids filtration by leveraging AI to
reorder textual data based on relevancy, employing neural networks for semantic
similarity, and subsequently prioritizing likely relevant entries for human
review. Results: Twelve internal datasets from horizon scans and four external
benchmarking datasets were used. SCANAR improved retrieval efficiency by
automating processes previously dependent on manual labour. AIDOC displayed
work-saving potential, achieving around 62% reduction in manual review efforts
at 95% recall. Comparative analysis with benchmarking data showed AIDOC's
performance was similar to existing systematic review automation tools, though
performance varied depending on dataset characteristics. A smaller case-study
on our news datasets shows the potential of ensembling large language models
within the active-learning process for faster detection of relevant articles
across news datasets. Conclusion: The validation indicates that SCANAR and
AIDOC show potential to enhance horizon scanning efficiency by streamlining
data retrieval and prioritisation. These tools may alleviate methodological
limitations and allow broader, swifter horizon scans. Further studies are
suggested to optimize these models and to design new workflows and validation
processes that integrate large language models.
|
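The prioritisation idea behind AIDOC, reordering unread items by similarity to known-relevant ones so reviewers see likely hits first, can be sketched with TF-IDF as a crude stand-in for the neural semantic similarity the tool actually uses.

# Minimal sketch of relevancy-based prioritisation (illustration only;
# toy texts, TF-IDF instead of neural embeddings).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed_relevant = ["wearable sensor approved for continuous glucose monitoring"]
unread = [
    "early trial of an implantable glucose sensor",
    "city council debates parking charges",
]
vec = TfidfVectorizer().fit(seed_relevant + unread)
scores = cosine_similarity(vec.transform(unread),
                           vec.transform(seed_relevant)).max(axis=1)
ranked = sorted(zip(scores, unread), reverse=True)  # likely-relevant first
print(ranked)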
2504.01647 | Tobias Fischer | Tobias Fischer and Samuel Rota Bul\`o and Yung-Hsu Yang and Nikhil
Varma Keetha and Lorenzo Porzi and Norman M\"uller and Katja Schwarz and
Jonathon Luiten and Marc Pollefeys and Peter Kontschieder | FlowR: Flowing from Sparse to Dense 3D Reconstructions | Project page is available at https://tobiasfshr.github.io/pub/flowr | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | 3D Gaussian splatting enables high-quality novel view synthesis (NVS) at
real-time frame rates. However, its quality drops sharply as we depart from the
training views. Thus, dense captures are needed to match the high-quality
expectations of some applications, e.g. Virtual Reality (VR). However, such
dense captures are very laborious and expensive to obtain. Existing works have
explored using 2D generative models to alleviate this requirement by
distillation or generating additional training views. These methods are often
conditioned only on a handful of reference input views and thus do not fully
exploit the available 3D information, leading to inconsistent generation
results and reconstruction artifacts. To tackle this problem, we propose a
multi-view, flow matching model that learns a flow to connect novel view
renderings from possibly sparse reconstructions to renderings that we expect
from dense reconstructions. This enables augmenting scene captures with novel,
generated views to improve reconstruction quality. Our model is trained on a
novel dataset of 3.6M image pairs and can process up to 45 views at 540x960
resolution (91K tokens) on one H100 GPU in a single forward pass. Our pipeline
consistently improves NVS in sparse- and dense-view scenarios, leading to
higher-quality reconstructions than prior works across multiple, widely-used
NVS benchmarks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:57:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Fischer",
"Tobias",
""
],
[
"Bulò",
"Samuel Rota",
""
],
[
"Yang",
"Yung-Hsu",
""
],
[
"Keetha",
"Nikhil Varma",
""
],
[
"Porzi",
"Lorenzo",
""
],
[
"Müller",
"Norman",
""
],
[
"Schwarz",
"Katja",
""
],
[
"Luiten",
"Jonathon",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Kontschieder",
"Peter",
""
]
] | TITLE: FlowR: Flowing from Sparse to Dense 3D Reconstructions
ABSTRACT: 3D Gaussian splatting enables high-quality novel view synthesis (NVS) at
real-time frame rates. However, its quality drops sharply as we depart from the
training views. Thus, dense captures are needed to match the high-quality
expectations of some applications, e.g. Virtual Reality (VR). However, such
dense captures are very laborious and expensive to obtain. Existing works have
explored using 2D generative models to alleviate this requirement by
distillation or generating additional training views. These methods are often
conditioned only on a handful of reference input views and thus do not fully
exploit the available 3D information, leading to inconsistent generation
results and reconstruction artifacts. To tackle this problem, we propose a
multi-view, flow matching model that learns a flow to connect novel view
renderings from possibly sparse reconstructions to renderings that we expect
from dense reconstructions. This enables augmenting scene captures with novel,
generated views to improve reconstruction quality. Our model is trained on a
novel dataset of 3.6M image pairs and can process up to 45 views at 540x960
resolution (91K tokens) on one H100 GPU in a single forward pass. Our pipeline
consistently improves NVS in sparse- and dense-view scenarios, leading to
higher-quality reconstructions than prior works across multiple, widely-used
NVS benchmarks.
|
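The flow matching objective underlying the model can be sketched as regressing the constant velocity of a straight interpolation path. The placeholder `velocity_net` below stands in for the paper's large multi-view network.

# Minimal sketch of a flow matching training loss (illustration only).
import torch

def flow_matching_loss(velocity_net, x0, x1):
    """Regress the velocity of the straight path from x0 to x1."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1 - t) * x0 + t * x1   # point on the interpolating path
    target_velocity = x1 - x0    # constant along a straight path
    return ((velocity_net(xt, t) - target_velocity) ** 2).mean()

net = lambda x, t: torch.zeros_like(x)  # placeholder network
print(flow_matching_loss(net, torch.randn(4, 2), torch.randn(4, 2)))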
2504.01648 | Haosheng Li | Haosheng Li, Yuecong Xu, Junjie Chen, Kemi Ding | ProtoGuard-guided PROPEL: Class-Aware Prototype Enhancement and
Progressive Labeling for Incremental 3D Point Cloud Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D point cloud semantic segmentation technology has been widely used.
However, in real-world scenarios, the environment is evolving. Thus,
offline-trained segmentation models may lead to catastrophic forgetting of
previously seen classes. Class-incremental learning (CIL) is designed to
address the problem of catastrophic forgetting. While point clouds are common,
we observe high similarity and unclear boundaries between different classes.
Meanwhile, they are known to be imbalanced in class distribution. These lead to
issues including misclassification between similar classes and the long-tail
problem, which have not been adequately addressed in previous CIL methods. We
thus propose ProtoGuard and PROPEL (Progressive Refinement Of PsEudo-Labels).
In the base-class training phase, ProtoGuard maintains geometric and semantic
prototypes for each class, which are combined into prototype features using an
attention mechanism. In the novel-class training phase, PROPEL inherits the
base feature extractor and classifier, guiding pseudo-label propagation and
updates based on density distribution and semantic similarity. Extensive
experiments show that our approach achieves remarkable results on both the
S3DIS and ScanNet datasets, improving the mIoU of 3D point cloud segmentation
by a maximum of 20.39% under the 5-step CIL scenario on S3DIS.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:58:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Li",
"Haosheng",
""
],
[
"Xu",
"Yuecong",
""
],
[
"Chen",
"Junjie",
""
],
[
"Ding",
"Kemi",
""
]
] | TITLE: ProtoGuard-guided PROPEL: Class-Aware Prototype Enhancement and
Progressive Labeling for Incremental 3D Point Cloud Segmentation
ABSTRACT: 3D point cloud semantic segmentation technology has been widely used.
However, in real-world scenarios, the environment is evolving. Thus,
offline-trained segmentation models may lead to catastrophic forgetting of
previously seen classes. Class-incremental learning (CIL) is designed to
address the problem of catastrophic forgetting. While point clouds are common,
we observe high similarity and unclear boundaries between different classes.
Meanwhile, they are known to be imbalanced in class distribution. These lead to
issues including misclassification between similar classes and the long-tail
problem, which have not been adequately addressed in previous CIL methods. We
thus propose ProtoGuard and PROPEL (Progressive Refinement Of PsEudo-Labels).
In the base-class training phase, ProtoGuard maintains geometric and semantic
prototypes for each class, which are combined into prototype features using an
attention mechanism. In the novel-class training phase, PROPEL inherits the
base feature extractor and classifier, guiding pseudo-label propagation and
updates based on density distribution and semantic similarity. Extensive
experiments show that our approach achieves remarkable results on both the
S3DIS and ScanNet datasets, improving the mIoU of 3D point cloud segmentation
by a maximum of 20.39% under the 5-step CIL scenario on S3DIS.
|
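The prototype ingredient is easy to illustrate: per-class mean features serve as prototypes, and pseudo-labels follow the nearest prototype. ProtoGuard additionally fuses geometric and semantic prototypes with attention, and PROPEL refines pseudo-labels by density, both omitted in this sketch.

# Minimal sketch of class prototypes and nearest-prototype pseudo-labelling
# (illustration only).
import numpy as np

def class_prototypes(features, labels):
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_prototype(feature, prototypes):
    return min(prototypes, key=lambda c: np.linalg.norm(feature - prototypes[c]))

feats = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels)
print(nearest_prototype(np.array([0.8, 0.9]), protos))  # -> 1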
2504.01660 | James Trayford Dr | James W. Trayford, Samantha Youles, Chris Harrison, Rose Shepherd,
Nicolas Bonne | STRAUSS: Sonification Tools & Resources for Analysis Using Sound
Synthesis | 4 pages, linking to documentation on ReadTheDocs
(https://strauss.readthedocs.io/en/latest/) | null | null | null | astro-ph.IM physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Sonification, or conveying data using non-verbal audio, is a relatively niche
but growing approach for presenting data across multiple specialist domains
including astronomy, climate science, and beyond. The STRAUSS Python package
aims to provide such a tool, which builds upon previous approaches to provide a
powerful means to explore different ways of expressing data, with fine control
over the output audio and its format. STRAUSS is a free, open source (FOSS)
package, designed to allow flexible and effective sonification to be integrated
into data workflows, in analogy to widely used visualisation packages. The
remit of STRAUSS is broad; it is intended to bridge the gap between ad-hoc
solutions for sonifying very particular datasets, and highly technical
compositional and sound-design tools that are not optimised for sonification,
or may have a steep learning curve. The code offers a range of approaches to
sonification for a variety of contexts (e.g. science education, science
communication, technical data analysis, etc). To this end, STRAUSS is packaged
with a number of examples of different sonification approaches, and preset
configurations to support a "low-barrier, high-ceiling" approach. STRAUSS has
been used to produce both educational resources and analysis tools.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:12:30 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Trayford",
"James W.",
""
],
[
"Youles",
"Samantha",
""
],
[
"Harrison",
"Chris",
""
],
[
"Shepherd",
"Rose",
""
],
[
"Bonne",
"Nicolas",
""
]
] | TITLE: STRAUSS: Sonification Tools & Resources for Analysis Using Sound
Synthesis
ABSTRACT: Sonification, or conveying data using non-verbal audio, is a relatively niche
but growing approach for presenting data across multiple specialist domains
including astronomy, climate science, and beyond. The STRAUSS Python package
aims to provide such a tool, which builds upon previous approaches to provide a
powerful means to explore different ways of expressing data, with fine control
over the output audio and its format. STRAUSS is a free, open source (FOSS)
package, designed to allow flexible and effective sonification to be integrated
into data workflows, in analogy to widely used visualisation packages. The
remit of STRAUSS is broad; it is intended to bridge the gap between ad-hoc
solutions for sonifying very particular datasets, and highly technical
compositional and sound-design tools that are not optimised for sonification,
or may have a steep learning curve. The code offers a range of approaches to
sonification for a variety of contexts (e.g. science education, science
communication, technical data analysis, etc). To this end, STRAUSS is packaged
with a number of examples of different sonification approaches, and preset
configurations to support a "low-barrier, high-ceiling" approach. STRAUSS has
been used to produce both educational resources and analysis tools.
|
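Editor's note: as a generic illustration of the parameter-mapping sonification that packages like STRAUSS automate, the sketch below maps a data series onto pitch using only the Python standard library. It is not the STRAUSS API; see the ReadTheDocs link in the record above for the real interface.

    # Sketch: data values -> tone frequencies -> a mono WAV file (stdlib only).
    import math, struct, wave

    def sonify(values, out_path="sonified.wav", rate=44100, note_s=0.25,
               f_lo=220.0, f_hi=880.0):
        lo, hi = min(values), max(values)
        frames = bytearray()
        for v in values:
            # Map each data value linearly onto the chosen frequency range.
            frac = 0.0 if hi == lo else (v - lo) / (hi - lo)
            freq = f_lo + frac * (f_hi - f_lo)
            for n in range(int(rate * note_s)):
                sample = 0.4 * math.sin(2 * math.pi * freq * n / rate)
                frames += struct.pack("<h", int(sample * 32767))
        with wave.open(out_path, "wb") as w:
            w.setnchannels(1); w.setsampwidth(2); w.setframerate(rate)
            w.writeframes(bytes(frames))

    sonify([1.0, 3.0, 2.0, 5.0, 4.0])  # rising-falling data becomes rising-falling pitch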
2504.01666 | Sarah Alyami | Sarah Alyami and Hamzah Luqman | CLIP-SLA: Parameter-Efficient CLIP Adaptation for Continuous Sign
Language Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Continuous sign language recognition (CSLR) focuses on interpreting and
transcribing sequences of sign language gestures in videos. In this work, we
propose CLIP sign language adaptation (CLIP-SLA), a novel CSLR framework that
adapts the powerful pre-trained visual encoder of the CLIP model to sign
language tasks through parameter-efficient fine-tuning (PEFT). We introduce two
variants, SLA-Adapter and SLA-LoRA, which integrate PEFT modules into the CLIP
visual encoder, enabling fine-tuning with minimal trainable parameters. The
effectiveness of the proposed frameworks is validated on four datasets:
Phoenix2014, Phoenix2014-T, CSL-Daily, and Isharah-500, where both CLIP-SLA
variants outperformed several SOTA models with fewer trainable parameters.
Extensive ablation studies emphasize the effectiveness and flexibility of the
proposed methods with different vision-language models for CSLR. These findings
showcase the potential of adapting large-scale pre-trained models for scalable
and efficient CSLR, paving the way for future advancements in sign language
understanding.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:15:33 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Alyami",
"Sarah",
""
],
[
"Luqman",
"Hamzah",
""
]
] | TITLE: CLIP-SLA: Parameter-Efficient CLIP Adaptation for Continuous Sign
Language Recognition
ABSTRACT: Continuous sign language recognition (CSLR) focuses on interpreting and
transcribing sequences of sign language gestures in videos. In this work, we
propose CLIP sign language adaptation (CLIP-SLA), a novel CSLR framework that
adapts the powerful pre-trained visual encoder of the CLIP model to sign
language tasks through parameter-efficient fine-tuning (PEFT). We introduce two
variants, SLA-Adapter and SLA-LoRA, which integrate PEFT modules into the CLIP
visual encoder, enabling fine-tuning with minimal trainable parameters. The
effectiveness of the proposed frameworks is validated on four datasets:
Phoenix2014, Phoenix2014-T, CSL-Daily, and Isharah-500, where both CLIP-SLA
variants outperformed several SOTA models with fewer trainable parameters.
Extensive ablation studies emphasize the effectiveness and flexibility of the
proposed methods with different vision-language models for CSLR. These findings
showcase the potential of adapting large-scale pre-trained models for scalable
and efficient CSLR, paving the way for future advancements in sign language
understanding.
|
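Editor's note: the SLA-LoRA variant above integrates LoRA modules into the CLIP visual encoder. The following is a generic PyTorch sketch of the underlying LoRA mechanism (a frozen linear layer plus a trainable low-rank update), not the paper's implementation; placement and hyperparameters in CLIP-SLA may differ.

    # Sketch: LoRA-augmented linear layer, W x + (alpha/r) * B A x with W frozen.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False                    # freeze pre-trained weights
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
            self.scaling = alpha / r

        def forward(self, x):
            return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T

    layer = LoRALinear(nn.Linear(768, 768))
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # only A and B train

Only the low-rank matrices receive gradients, which is what makes such fine-tuning parameter-efficient.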
2504.01676 | Jingyang Zhu | Yuanming Shi, Jingyang Zhu, Chunxiao Jiang, Linling Kuang, and Khaled
B. Letaief | Satellite Edge Artificial Intelligence with Large Models: Architectures
and Technologies | 15 pages, 5 figures; submitted to SCIENCE CHINA Information Sciences
for possible publication | null | null | null | cs.LG cs.DC cs.NI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the growing demand for intelligent remote sensing applications,
large artificial intelligence (AI) models pre-trained on large-scale unlabeled
datasets and fine-tuned for downstream tasks have significantly improved
learning performance across a variety of tasks due to their generalization
capabilities. However, many specific downstream tasks, such as extreme weather
nowcasting (e.g., downburst and tornado), disaster monitoring, and battlefield
surveillance, require real-time data processing. Traditional methods that
transfer raw data to ground stations for processing often cause significant
latency and trustworthiness issues. To address these challenges,
satellite edge AI provides a paradigm shift from ground-based to on-board data
processing by leveraging the integrated communication-and-computation
capabilities in space computing power networks (Space-CPN), thereby enhancing
the timeliness, effectiveness, and trustworthiness for remote sensing
downstream tasks. Moreover, satellite edge large AI model (LAM) involves both
the training (i.e., fine-tuning) and inference phases, where a key challenge
lies in developing computation task decomposition principles to support
scalable LAM deployment in resource-constrained space networks with
time-varying topologies. In this article, we first propose a satellite
federated fine-tuning architecture to split and deploy the modules of LAM over
space and ground networks for efficient LAM fine-tuning. We then introduce a
microservice-empowered satellite edge LAM inference architecture that
virtualizes LAM components into lightweight microservices tailored for
multi-task multimodal inference. Finally, we discuss the future directions for
enhancing the efficiency and scalability of satellite edge LAM, including
task-oriented communication, brain-inspired computing, and satellite edge AI
network optimization.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:25:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shi",
"Yuanming",
""
],
[
"Zhu",
"Jingyang",
""
],
[
"Jiang",
"Chunxiao",
""
],
[
"Kuang",
"Linling",
""
],
[
"Letaief",
"Khaled B.",
""
]
] | TITLE: Satellite Edge Artificial Intelligence with Large Models: Architectures
and Technologies
ABSTRACT: Driven by the growing demand for intelligent remote sensing applications,
large artificial intelligence (AI) models pre-trained on large-scale unlabeled
datasets and fine-tuned for downstream tasks have significantly improved
learning performance across a variety of tasks due to their generalization
capabilities. However, many specific downstream tasks, such as extreme weather
nowcasting (e.g., downburst and tornado), disaster monitoring, and battlefield
surveillance, require real-time data processing. Traditional methods that
transfer raw data to ground stations for processing often cause significant
latency and trustworthiness issues. To address these challenges,
satellite edge AI provides a paradigm shift from ground-based to on-board data
processing by leveraging the integrated communication-and-computation
capabilities in space computing power networks (Space-CPN), thereby enhancing
the timeliness, effectiveness, and trustworthiness for remote sensing
downstream tasks. Moreover, satellite edge large AI model (LAM) involves both
the training (i.e., fine-tuning) and inference phases, where a key challenge
lies in developing computation task decomposition principles to support
scalable LAM deployment in resource-constrained space networks with
time-varying topologies. In this article, we first propose a satellite
federated fine-tuning architecture to split and deploy the modules of LAM over
space and ground networks for efficient LAM fine-tuning. We then introduce a
microservice-empowered satellite edge LAM inference architecture that
virtualizes LAM components into lightweight microservices tailored for
multi-task multimodal inference. Finally, we discuss the future directions for
enhancing the efficiency and scalability of satellite edge LAM, including
task-oriented communication, brain-inspired computing, and satellite edge AI
network optimization.
|
2504.01689 | Noam Elata Mr | Noam Elata, Hyungjin Chung, Jong Chul Ye, Tomer Michaeli, Michael Elad | InvFussion: Bridging Supervised and Zero-shot Diffusion for Inverse
Problems | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion Models have demonstrated remarkable capabilities in handling
inverse problems, offering high-quality posterior-sampling-based solutions.
Despite significant advances, a fundamental trade-off persists, regarding the
way the conditioned synthesis is employed: Training-based methods achieve high
quality results, while zero-shot approaches trade this for flexibility. This
work introduces a framework that combines the best of both worlds -- the strong
performance of supervised approaches and the flexibility of zero-shot methods.
This is achieved through a novel architectural design that seamlessly
integrates the degradation operator directly into the denoiser. In each block,
our proposed architecture applies the degradation operator on the network
activations and conditions the output using the attention mechanism, enabling
adaptation to diverse degradation scenarios while maintaining high performance.
Our work demonstrates the versatility of the proposed architecture, operating
as a general MMSE estimator, a posterior sampler, or a Neural Posterior
Principal Component estimator. This flexibility enables a wide range of
downstream tasks, highlighting the broad applicability of our framework. The
proposed modification of the denoiser network offers a versatile, accurate, and
computationally efficient solution, demonstrating the advantages of dedicated
network architectures for complex inverse problems. Experimental results on the
FFHQ and ImageNet datasets demonstrate state-of-the-art posterior-sampling
performance, surpassing both training-based and zero-shot alternatives.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:40:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Elata",
"Noam",
""
],
[
"Chung",
"Hyungjin",
""
],
[
"Ye",
"Jong Chul",
""
],
[
"Michaeli",
"Tomer",
""
],
[
"Elad",
"Michael",
""
]
] | TITLE: InvFussion: Bridging Supervised and Zero-shot Diffusion for Inverse
Problems
ABSTRACT: Diffusion Models have demonstrated remarkable capabilities in handling
inverse problems, offering high-quality posterior-sampling-based solutions.
Despite significant advances, a fundamental trade-off persists, regarding the
way the conditioned synthesis is employed: Training-based methods achieve high
quality results, while zero-shot approaches trade this for flexibility. This
work introduces a framework that combines the best of both worlds -- the strong
performance of supervised approaches and the flexibility of zero-shot methods.
This is achieved through a novel architectural design that seamlessly
integrates the degradation operator directly into the denoiser. In each block,
our proposed architecture applies the degradation operator on the network
activations and conditions the output using the attention mechanism, enabling
adaptation to diverse degradation scenarios while maintaining high performance.
Our work demonstrates the versatility of the proposed architecture, operating
as a general MMSE estimator, a posterior sampler, or a Neural Posterior
Principal Component estimator. This flexibility enables a wide range of
downstream tasks, highlighting the broad applicability of our framework. The
proposed modification of the denoiser network offers a versatile, accurate, and
computationally efficient solution, demonstrating the advantages of dedicated
network architectures for complex inverse problems. Experimental results on the
FFHQ and ImageNet datasets demonstrate state-of-the-art posterior-sampling
performance, surpassing both training-based and zero-shot alternatives.
|
2504.01692 | Isabella Cama | Isabella Cama, Alejandro Guzm\'an, Cristina Campi, Michele Piana,
Karim Lekadir, Sara Garbarino, Oliver D\'iaz | Segmentation variability and radiomics stability for predicting
Triple-Negative Breast Cancer subtype using Magnetic Resonance Imaging | 22 pages, 7 figures | null | null | null | stat.AP cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most papers caution against using predictive models for disease
stratification based on unselected radiomic features, as these features are
affected by contouring variability. Instead, they advocate for the use of the
Intraclass Correlation Coefficient (ICC) as a measure of stability for feature
selection. However, the direct effect of segmentation variability on the
predictive models is rarely studied. This study investigates the impact of
segmentation variability on feature stability and predictive performance in
radiomics-based prediction of Triple-Negative Breast Cancer (TNBC) subtype
using Magnetic Resonance Imaging. A total of 244 images from the Duke dataset
were used, with segmentation variability introduced through modifications of
manual segmentations. For each mask, explainable radiomic features were
selected using the Shapley Additive exPlanations method and used to train
logistic regression models. Feature stability across segmentations was assessed
via ICC, Pearson's correlation, and reliability scores quantifying the
relationship between feature stability and segmentation variability. Results
indicate that segmentation accuracy does not significantly impact predictive
performance. While incorporating peritumoral information may reduce feature
reproducibility, it does not diminish feature predictive capability. Moreover,
feature selection in predictive models is not inherently tied to feature
stability with respect to segmentation, suggesting that an overreliance on ICC
or reliability scores for feature selection might exclude valuable predictive
features.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:48:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cama",
"Isabella",
""
],
[
"Guzmán",
"Alejandro",
""
],
[
"Campi",
"Cristina",
""
],
[
"Piana",
"Michele",
""
],
[
"Lekadir",
"Karim",
""
],
[
"Garbarino",
"Sara",
""
],
[
"Díaz",
"Oliver",
""
]
] | TITLE: Segmentation variability and radiomics stability for predicting
Triple-Negative Breast Cancer subtype using Magnetic Resonance Imaging
ABSTRACT: Most papers caution against using predictive models for disease
stratification based on unselected radiomic features, as these features are
affected by contouring variability. Instead, they advocate for the use of the
Intraclass Correlation Coefficient (ICC) as a measure of stability for feature
selection. However, the direct effect of segmentation variability on the
predictive models is rarely studied. This study investigates the impact of
segmentation variability on feature stability and predictive performance in
radiomics-based prediction of Triple-Negative Breast Cancer (TNBC) subtype
using Magnetic Resonance Imaging. A total of 244 images from the Duke dataset
were used, with segmentation variability introduced through modifications of
manual segmentations. For each mask, explainable radiomic features were
selected using the Shapley Additive exPlanations method and used to train
logistic regression models. Feature stability across segmentations was assessed
via ICC, Pearson's correlation, and reliability scores quantifying the
relationship between feature stability and segmentation variability. Results
indicate that segmentation accuracy does not significantly impact predictive
performance. While incorporating peritumoral information may reduce feature
reproducibility, it does not diminish feature predictive capability. Moreover,
feature selection in predictive models is not inherently tied to feature
stability with respect to segmentation, suggesting that an overreliance on ICC
or reliability scores for feature selection might exclude valuable predictive
features.
|
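Editor's note: the ICC used above to quantify feature stability is typically ICC(2,1) from the Shrout & Fleiss two-way ANOVA decomposition. Below is a pure-NumPy sketch (assuming a lesions-by-segmentations value matrix; no external ICC package assumed).

    # Sketch: ICC(2,1) for one radiomic feature across segmentation variants.
    import numpy as np

    def icc_2_1(Y: np.ndarray) -> float:
        """Y[i, j] = feature value for lesion i under segmentation (rater) j."""
        n, k = Y.shape
        grand = Y.mean()
        msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows (targets)
        msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # columns (raters)
        sse = ((Y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(0)
    true_vals = rng.normal(size=(50, 1))
    Y = true_vals + 0.1 * rng.normal(size=(50, 3))   # 3 segmentation variants per lesion
    print(round(icc_2_1(Y), 3))                      # close to 1 -> highly stable feature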
2504.01708 | Petr Vanc | Petr Vanc, Karla Stepanova | TransforMerger: Transformer-based Voice-Gesture Fusion for Robust
Human-Robot Communication | 8 pages, 7 figures | null | null | null | cs.RO cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As human-robot collaboration advances, natural and flexible communication
methods are essential for effective robot control. Traditional methods relying
on a single modality or rigid rules struggle with noisy or misaligned data as
well as with object descriptions that do not perfectly fit the predefined
object names (e.g. 'Pick that red object'). We introduce TransforMerger, a
transformer-based reasoning model that infers a structured action command for
robotic manipulation based on fused voice and gesture inputs. Our approach
merges multimodal data into a single unified sentence, which is then processed
by the language model. We employ probabilistic embeddings to handle uncertainty
and we integrate contextual scene understanding to resolve ambiguous references
(e.g., gestures pointing to multiple objects or vague verbal cues like "this").
We evaluate TransforMerger in simulated and real-world experiments,
demonstrating its robustness to noise, misalignment, and missing information.
Our results show that TransforMerger outperforms deterministic baselines,
especially in scenarios requiring more contextual knowledge, enabling more
robust and flexible human-robot communication. Code and datasets are available
at: http://imitrob.ciirc.cvut.cz/publications/transformerger.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:15:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Vanc",
"Petr",
""
],
[
"Stepanova",
"Karla",
""
]
] | TITLE: TransforMerger: Transformer-based Voice-Gesture Fusion for Robust
Human-Robot Communication
ABSTRACT: As human-robot collaboration advances, natural and flexible communication
methods are essential for effective robot control. Traditional methods relying
on a single modality or rigid rules struggle with noisy or misaligned data as
well as with object descriptions that do not perfectly fit the predefined
object names (e.g. 'Pick that red object'). We introduce TransforMerger, a
transformer-based reasoning model that infers a structured action command for
robotic manipulation based on fused voice and gesture inputs. Our approach
merges multimodal data into a single unified sentence, which is then processed
by the language model. We employ probabilistic embeddings to handle uncertainty
and we integrate contextual scene understanding to resolve ambiguous references
(e.g., gestures pointing to multiple objects or vague verbal cues like "this").
We evaluate TransforMerger in simulated and real-world experiments,
demonstrating its robustness to noise, misalignment, and missing information.
Our results show that TransforMerger outperforms deterministic baselines,
especially in scenarios requiring more contextual knowledge, enabling more
robust and flexible human-robot communication. Code and datasets are available
at: http://imitrob.ciirc.cvut.cz/publications/transformerger.
|
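Editor's note: the record above describes merging multimodal voice and gesture hypotheses into a single unified sentence for a language model. The sketch below illustrates one way such a merge could look; the prompt format and all names are hypothetical, not the TransforMerger code.

    # Sketch: fold per-modality hypotheses (with probabilities, so ambiguity
    # survives until the reasoning step) into one sentence for an LLM.
    def merge_modalities(voice_hyps, gesture_hyps):
        """Each input is a list of (text, probability) pairs from one modality."""
        fmt = lambda hyps: " | ".join(f"{t} (p={p:.2f})" for t, p in hyps)
        return (f"VOICE: {fmt(voice_hyps)}. "
                f"GESTURE POINTS AT: {fmt(gesture_hyps)}. "
                "Infer the intended action as <action, object>.")

    sentence = merge_modalities(
        voice_hyps=[("pick that red object", 0.8), ("pick the lead object", 0.2)],
        gesture_hyps=[("red cube", 0.6), ("red mug", 0.4)],
    )
    print(sentence)  # single unified sentence handed to the language model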
2504.01738 | Philip Lippmann | Philip Lippmann and Jie Yang | Style over Substance: Distilled Language Models Reason Via Stylistic
Replication | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Specialized reasoning language models (RLMs) have demonstrated that scaling
test-time computation through detailed reasoning traces significantly enhances
performance. Although these traces effectively facilitate knowledge
distillation into smaller, instruction-tuned models, the precise nature of
transferred reasoning remains unclear. In this study, we investigate to what
extent distilled models internalize replicated stylistic patterns during
reasoning. To this end, we systematically analyze reasoning traces, identifying
structural and lexical patterns that characterize successful reasoning. We then
introduce two new datasets -- a dataset of emergent reasoning traces and a
synthetic dataset explicitly constructed to replicate these stylistic patterns
-- to precisely examine their influence on distilled models' reasoning
capabilities. We find that models trained on the synthetic traces achieve
comparable performance, indicating that distilled reasoning abilities rely
significantly on surface-level patterns. Surprisingly, we observe an increase
in performance even when the synthetic traces are altered to lead to the wrong
answer. Our findings highlight how stylistic patterns can be leveraged to
efficiently enhance LM reasoning across diverse model families.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:50:20 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Lippmann",
"Philip",
""
],
[
"Yang",
"Jie",
""
]
] | TITLE: Style over Substance: Distilled Language Models Reason Via Stylistic
Replication
ABSTRACT: Specialized reasoning language models (RLMs) have demonstrated that scaling
test-time computation through detailed reasoning traces significantly enhances
performance. Although these traces effectively facilitate knowledge
distillation into smaller, instruction-tuned models, the precise nature of
transferred reasoning remains unclear. In this study, we investigate to what
extent distilled models internalize replicated stylistic patterns during
reasoning. To this end, we systematically analyze reasoning traces, identifying
structural and lexical patterns that characterize successful reasoning. We then
introduce two new datasets -- a dataset of emergent reasoning traces and a
synthetic dataset explicitly constructed to replicate these stylistic patterns
-- to precisely examine their influence on distilled models' reasoning
capabilities. We find that models trained on the synthetic traces achieve
comparable performance, indicating that distilled reasoning abilities rely
significantly on surface-level patterns. Surprisingly, we observe an increase
in performance even when the synthetic traces are altered to lead to the wrong
answer. Our findings highlight how stylistic patterns can be leveraged to
efficiently enhance LM reasoning across diverse model families.
|
2504.01740 | Neville Kenneth Kitson | Neville K. Kitson, Anthony C. Constantinou | Stable Structure Learning with HC-Stable and Tabu-Stable Algorithms | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many Bayesian Network structure learning algorithms are unstable, with the
learned graph sensitive to arbitrary dataset artifacts, such as the ordering of
columns (i.e., variable order). PC-Stable attempts to address this issue for
the widely-used PC algorithm, prompting researchers to use the "stable" version
instead. However, this problem seems to have been overlooked for score-based
algorithms. In this study, we show that some widely-used score-based
algorithms, as well as hybrid and constraint-based algorithms, including
PC-Stable, suffer from the same issue. We propose a novel solution for
score-based greedy hill-climbing that eliminates instability by determining a
stable node order, leading to consistent results regardless of variable
ordering. Two implementations, HC-Stable and Tabu-Stable, are introduced.
Tabu-Stable achieves the highest BIC scores across all networks, and the
highest accuracy for categorical networks. These results highlight the
importance of addressing instability in structure learning and provide a robust
and practical approach for future applications. This extends the scope and
impact of our previous work presented at Probabilistic Graphical Models 2024 by
incorporating continuous variables. The implementation, along with usage
instructions, is freely available on GitHub at
https://github.com/causal-iq/discovery.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:51:44 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Kitson",
"Neville K.",
""
],
[
"Constantinou",
"Anthony C.",
""
]
] | TITLE: Stable Structure Learning with HC-Stable and Tabu-Stable Algorithms
ABSTRACT: Many Bayesian Network structure learning algorithms are unstable, with the
learned graph sensitive to arbitrary dataset artifacts, such as the ordering of
columns (i.e., variable order). PC-Stable attempts to address this issue for
the widely-used PC algorithm, prompting researchers to use the "stable" version
instead. However, this problem seems to have been overlooked for score-based
algorithms. In this study, we show that some widely-used score-based
algorithms, as well as hybrid and constraint-based algorithms, including
PC-Stable, suffer from the same issue. We propose a novel solution for
score-based greedy hill-climbing that eliminates instability by determining a
stable node order, leading to consistent results regardless of variable
ordering. Two implementations, HC-Stable and Tabu-Stable, are introduced.
Tabu-Stable achieves the highest BIC scores across all networks, and the
highest accuracy for categorical networks. These results highlight the
importance of addressing instability in structure learning and provide a robust
and practical approach for future applications. This extends the scope and
impact of our previous work presented at Probabilistic Graphical Models 2024 by
incorporating continuous variables. The implementation, along with usage
instructions, is freely available on GitHub at
https://github.com/causal-iq/discovery.
|
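Editor's note: the key idea above is a node order that does not depend on the dataset's column order. The sketch below shows one simple way to build such an order, scoring variables by total pairwise mutual information with name-based tie-breaking; the actual criterion used by HC-Stable and Tabu-Stable is specified in the paper.

    # Sketch: a deterministic variable order that is invariant to column order.
    import numpy as np
    from itertools import combinations

    def mutual_information(x, y):
        """Plug-in MI between two discrete columns, in nats."""
        joint = {}
        for a, b in zip(x, y):
            joint[(a, b)] = joint.get((a, b), 0) + 1
        n = len(x)
        px = {a: (x == a).mean() for a in set(x)}
        py = {b: (y == b).mean() for b in set(y)}
        return sum((c / n) * np.log((c / n) / (px[a] * py[b]))
                   for (a, b), c in joint.items())

    def stable_order(data: dict):
        """data maps variable name -> np.array of discrete values."""
        score = {v: 0.0 for v in data}
        for u, v in combinations(sorted(data), 2):
            mi = mutual_information(data[u], data[v])
            score[u] += mi; score[v] += mi
        # Sort by score, then name: identical regardless of input column order.
        return sorted(data, key=lambda v: (-score[v], v))

    rng = np.random.default_rng(1)
    a = rng.integers(0, 2, 500); b = (a + rng.integers(0, 2, 500)) % 2
    print(stable_order({"B": b, "A": a, "C": rng.integers(0, 2, 500)}))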
2504.01757 | Eduardo Fernandes Montesuma | Eduardo Fernandes Montesuma | KD$^{2}$M: A unifying framework for feature knowledge distillation | 8 pages, 2 figures, 1 table, under review | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Distillation (KD) seeks to transfer the knowledge of a teacher
towards a student neural net. This process is often done by matching the
networks' predictions (i.e., their output), but recently several works have
proposed to match the distributions of neural nets' activations (i.e., their
features), a process known as \emph{distribution matching}. In this paper, we
propose a unifying framework, Knowledge Distillation through Distribution
Matching (KD$^{2}$M), which formalizes this strategy. Our contributions are
threefold. We i) provide an overview of distribution metrics used in
distribution matching, ii) benchmark on computer vision datasets, and iii)
derive new theoretical results for KD.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:14:46 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Montesuma",
"Eduardo Fernandes",
""
]
] | TITLE: KD$^{2}$M: A unifying framework for feature knowledge distillation
ABSTRACT: Knowledge Distillation (KD) seeks to transfer the knowledge of a teacher
towards a student neural net. This process is often done by matching the
networks' predictions (i.e., their output), but recently several works have
proposed to match the distributions of neural nets' activations (i.e., their
features), a process known as \emph{distribution matching}. In this paper, we
propose a unifying framework, Knowledge Distillation through Distribution
Matching (KD$^{2}$M), which formalizes this strategy. Our contributions are
threefold. We i) provide an overview of distribution metrics used in
distribution matching, ii) benchmark on computer vision datasets, and iii)
derive new theoretical results for KD.
|
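Editor's note: as one concrete instance of the feature distribution matching that KD$^{2}$M unifies, the sketch below computes a linear-kernel MMD loss between teacher and student activation batches in PyTorch; the framework itself covers many other distribution metrics, and this choice is illustrative only.

    # Sketch: squared MMD with a linear kernel, ||mean(f_s) - mean(f_t)||^2,
    # as a distribution-matching distillation loss on features.
    import torch

    def mmd_linear(f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        delta = f_student.mean(dim=0) - f_teacher.mean(dim=0)
        return delta.dot(delta)

    f_t = torch.randn(128, 256)                      # teacher activations for a batch
    f_s = torch.randn(128, 256, requires_grad=True)  # student activations
    loss = mmd_linear(f_s, f_t)
    loss.backward()                                  # gradients flow into the student only
    print(float(loss))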
2504.01764 | Mingrui Ye | Mingrui Ye, Lianping Yang, Hegui Zhu, Zenghao Zheng, Xin Wang, Yantao
Lo | Dual-stream Transformer-GCN Model with Contextualized Representations
Learning for Monocular 3D Human Pose Estimation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a novel approach to monocular 3D human pose estimation
using contextualized representation learning with the Transformer-GCN
dual-stream model. Monocular 3D human pose estimation is challenged by depth
ambiguity, limited 3D-labeled training data, imbalanced modeling, and
restricted model generalization. To address these limitations, our work
introduces a groundbreaking motion pre-training method based on contextualized
representation learning. Specifically, our method involves masking 2D pose
features and utilizing a Transformer-GCN dual-stream model to learn
high-dimensional representations through a self-distillation setup. By focusing
on contextualized representation learning and spatial-temporal modeling, our
approach enhances the model's ability to understand spatial-temporal
relationships between postures, resulting in superior generalization.
Furthermore, leveraging the Transformer-GCN dual-stream model, our approach
effectively balances global and local interactions in video pose estimation.
The model adaptively integrates information from both the Transformer and GCN
streams, where the GCN stream effectively learns local relationships between
adjacent key points and frames, while the Transformer stream captures
comprehensive global spatial and temporal features. Our model achieves
state-of-the-art performance on two benchmark datasets, with an MPJPE of 38.0mm
and P-MPJPE of 31.9mm on Human3.6M, and an MPJPE of 15.9mm on MPI-INF-3DHP.
Furthermore, visual experiments on public datasets and in-the-wild videos
demonstrate the robustness and generalization capabilities of our approach.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:17:57 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ye",
"Mingrui",
""
],
[
"Yang",
"Lianping",
""
],
[
"Zhu",
"Hegui",
""
],
[
"Zheng",
"Zenghao",
""
],
[
"Wang",
"Xin",
""
],
[
"Lo",
"Yantao",
""
]
] | TITLE: Dual-stream Transformer-GCN Model with Contextualized Representations
Learning for Monocular 3D Human Pose Estimation
ABSTRACT: This paper introduces a novel approach to monocular 3D human pose estimation
using contextualized representation learning with the Transformer-GCN
dual-stream model. Monocular 3D human pose estimation is challenged by depth
ambiguity, limited 3D-labeled training data, imbalanced modeling, and
restricted model generalization. To address these limitations, our work
introduces a groundbreaking motion pre-training method based on contextualized
representation learning. Specifically, our method involves masking 2D pose
features and utilizing a Transformer-GCN dual-stream model to learn
high-dimensional representations through a self-distillation setup. By focusing
on contextualized representation learning and spatial-temporal modeling, our
approach enhances the model's ability to understand spatial-temporal
relationships between postures, resulting in superior generalization.
Furthermore, leveraging the Transformer-GCN dual-stream model, our approach
effectively balances global and local interactions in video pose estimation.
The model adaptively integrates information from both the Transformer and GCN
streams, where the GCN stream effectively learns local relationships between
adjacent key points and frames, while the Transformer stream captures
comprehensive global spatial and temporal features. Our model achieves
state-of-the-art performance on two benchmark datasets, with an MPJPE of 38.0mm
and P-MPJPE of 31.9mm on Human3.6M, and an MPJPE of 15.9mm on MPI-INF-3DHP.
Furthermore, visual experiments on public datasets and in-the-wild videos
demonstrate the robustness and generalization capabilities of our approach.
|
2504.01790 | Sveinung Ohrem | Sveinung Johan Ohrem, Bent Haugal{\o}kken, Eleni Kelasidi | SOLAQUA: SINTEF Ocean Large Aquaculture Robotics Dataset | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This paper presents a dataset gathered with an underwater robot in a
sea-based aquaculture setting. Data was gathered from an operational fish farm
and includes data from sensors such as the Waterlinked A50 DVL, the Nortek
Nucleus 1000 DVL, Sonardyne Micro Ranger 2 USBL, Sonoptix Multibeam Sonar, mono
and stereo cameras, and vehicle sensor data such as power usage, IMU, pressure,
temperature, and more. Data acquisition is performed during both manual and
autonomous traversal of the net pen structure. The collected vision data is of
undamaged nets with some fish and marine growth presence, and it is expected
that both the research community and the aquaculture industry will benefit
greatly from the utilization of the proposed SOLAQUA dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:58:16 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ohrem",
"Sveinung Johan",
""
],
[
"Haugaløkken",
"Bent",
""
],
[
"Kelasidi",
"Eleni",
""
]
] | TITLE: SOLAQUA: SINTEF Ocean Large Aquaculture Robotics Dataset
ABSTRACT: This paper presents a dataset gathered with an underwater robot in a
sea-based aquaculture setting. Data was gathered from an operational fish farm
and includes data from sensors such as the Waterlinked A50 DVL, the Nortek
Nucleus 1000 DVL, Sonardyne Micro Ranger 2 USBL, Sonoptix Multibeam Sonar, mono
and stereo cameras, and vehicle sensor data such as power usage, IMU, pressure,
temperature, and more. Data acquisition is performed during both manual and
autonomous traversal of the net pen structure. The collected vision data is of
undamaged nets with some fish and marine growth presence, and it is expected
that both the research community and the aquaculture industry will benefit
greatly from the utilization of the proposed SOLAQUA dataset.
|
2504.01792 | Limeng Qiao | Limeng Qiao, Yiyang Gan, Bairui Wang, Jie Qin, Shuang Xu, Siqi Yang,
Lin Ma | UniViTAR: Unified Vision Transformer with Native Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional Vision Transformer simplifies visual modeling by standardizing
input resolutions, often disregarding the variability of natural visual data
and compromising spatial-contextual fidelity. While preliminary explorations
have superficially investigated native resolution modeling, existing approaches
still lack systematic analysis from a visual representation perspective. To
bridge this gap, we introduce UniViTAR, a family of homogeneous vision
foundation models tailored for the unified visual modality and native-resolution
scenario in the multimodal era. Our framework first conducts architectural
upgrades to the vanilla paradigm by integrating multiple advanced components.
Building upon these improvements, a progressive training paradigm is
introduced, which strategically combines two core mechanisms: (1) resolution
curriculum learning, transitioning from fixed-resolution pretraining to native
resolution tuning, thereby leveraging ViT's inherent adaptability to
variable-length sequences, and (2) visual modality adaptation via inter-batch
image-video switching, which balances computational efficiency with enhanced
temporal reasoning. In parallel, a hybrid training framework further synergizes
sigmoid-based contrastive loss with feature distillation from a frozen teacher
model, thereby accelerating early-stage convergence. Finally, trained
exclusively on public datasets, extensive experiments across multiple model
scales from 0.3B to 1B demonstrate its effectiveness.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:59:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Qiao",
"Limeng",
""
],
[
"Gan",
"Yiyang",
""
],
[
"Wang",
"Bairui",
""
],
[
"Qin",
"Jie",
""
],
[
"Xu",
"Shuang",
""
],
[
"Yang",
"Siqi",
""
],
[
"Ma",
"Lin",
""
]
] | TITLE: UniViTAR: Unified Vision Transformer with Native Resolution
ABSTRACT: Conventional Vision Transformer simplifies visual modeling by standardizing
input resolutions, often disregarding the variability of natural visual data
and compromising spatial-contextual fidelity. While preliminary explorations
have superficially investigated native resolution modeling, existing approaches
still lack systematic analysis from a visual representation perspective. To
bridge this gap, we introduce UniViTAR, a family of homogeneous vision
foundation models tailored for the unified visual modality and native-resolution
scenario in the multimodal era. Our framework first conducts architectural
upgrades to the vanilla paradigm by integrating multiple advanced components.
Building upon these improvements, a progressive training paradigm is
introduced, which strategically combines two core mechanisms: (1) resolution
curriculum learning, transitioning from fixed-resolution pretraining to native
resolution tuning, thereby leveraging ViT's inherent adaptability to
variable-length sequences, and (2) visual modality adaptation via inter-batch
image-video switching, which balances computational efficiency with enhanced
temporal reasoning. In parallel, a hybrid training framework further synergizes
sigmoid-based contrastive loss with feature distillation from a frozen teacher
model, thereby accelerating early-stage convergence. Finally, trained
exclusively on public datasets, extensive experiments across multiple model
scales from 0.3B to 1B demonstrate its effectiveness.
|
2504.01803 | Javier Pastor-Galindo | Felipe S\'anchez Gonz\'alez, Javier Pastor-Galindo, Jos\'e A.
Ruip\'erez-Valiente | DISINFOX: an open-source threat exchange platform serving intelligence
on disinformation and influence operations | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | This paper introduces DISINFOX, an open-source threat intelligence exchange
platform for the structured collection, management, and dissemination of
disinformation incidents and influence operations. Analysts can upload and
correlate information manipulation and interference incidents, while clients
can access and analyze the data through an interactive web interface or
programmatically via a public API. This facilitates integration with other
vendors, providing a unified view of cybersecurity and disinformation events.
The solution is fully containerized using Docker, comprising a web-based
frontend for user interaction, a backend REST API for managing core
functionalities, and a public API for structured data retrieval, enabling
seamless integration with existing Cyber Threat Intelligence (CTI) workflows.
In particular, DISINFOX models the incidents through DISARM Tactics,
Techniques, and Procedures (TTPs), a MITRE ATT&CK-like framework for
disinformation, with a custom data model based on the Structured Threat
Information eXpression (STIX2) standard.
As an open-source solution, DISINFOX provides a reproducible and extensible
hub for researchers, analysts, and policymakers seeking to enhance the
detection, investigation, and mitigation of disinformation threats. The
intelligence generated from a custom dataset has been tested and utilized by a
local instance of OpenCTI, a mature CTI platform, via a custom-built connector,
validating the platform with the exchange of more than 100 disinformation
incidents.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 15:11:43 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"González",
"Felipe Sánchez",
""
],
[
"Pastor-Galindo",
"Javier",
""
],
[
"Ruipérez-Valiente",
"José A.",
""
]
] | TITLE: DISINFOX: an open-source threat exchange platform serving intelligence
on disinformation and influence operations
ABSTRACT: This paper introduces DISINFOX, an open-source threat intelligence exchange
platform for the structured collection, management, and dissemination of
disinformation incidents and influence operations. Analysts can upload and
correlate information manipulation and interference incidents, while clients
can access and analyze the data through an interactive web interface or
programmatically via a public API. This facilitates integration with other
vendors, providing a unified view of cybersecurity and disinformation events.
The solution is fully containerized using Docker, comprising a web-based
frontend for user interaction, a backend REST API for managing core
functionalities, and a public API for structured data retrieval, enabling
seamless integration with existing Cyber Threat Intelligence (CTI) workflows.
In particular, DISINFOX models the incidents through DISARM Tactics,
Techniques, and Procedures (TTPs), a MITRE ATT&CK-like framework for
disinformation, with a custom data model based on the Structured Threat
Information eXpression (STIX2) standard.
As an open-source solution, DISINFOX provides a reproducible and extensible
hub for researchers, analysts, and policymakers seeking to enhance the
detection, investigation, and mitigation of disinformation threats. The
intelligence generated from a custom dataset has been tested and utilized by a
local instance of OpenCTI, a mature CTI platform, via a custom-built connector,
validating the platform with the exchange of more than 100 disinformation
incidents.
|
2504.01805 | Kun Ouyang | Kun Ouyang | Spatial-R1: Enhancing MLLMs in Video Spatial Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enhancing the spatial reasoning capabilities of Multi-modal Large Language
Models (MLLMs) for video understanding is crucial yet challenging. We present
Spatial-R1, a targeted approach involving two key contributions: the curation
of SR, a new video spatial reasoning dataset from ScanNet with automatically
generated QA pairs across seven task types, and the application of
Task-Specific Group Relative Policy Optimization (GRPO) for fine-tuning. By
training the Qwen2.5-VL-7B-Instruct model on SR using GRPO, Spatial-R1
significantly advances performance on the VSI-Bench benchmark, achieving a
7.4% gain over the baseline and outperforming strong contemporary models. This
work validates the effectiveness of specialized data curation and optimization
techniques for improving complex spatial reasoning in video MLLMs.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 15:12:17 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ouyang",
"Kun",
""
]
] | TITLE: Spatial-R1: Enhancing MLLMs in Video Spatial Reasoning
ABSTRACT: Enhancing the spatial reasoning capabilities of Multi-modal Large Language
Models (MLLMs) for video understanding is crucial yet challenging. We present
Spatial-R1, a targeted approach involving two key contributions: the curation
of SR, a new video spatial reasoning dataset from ScanNet with automatically
generated QA pairs across seven task types, and the application of
Task-Specific Group Relative Policy Optimization (GRPO) for fine-tuning. By
training the Qwen2.5-VL-7B-Instruct model on SR using GRPO, Spatial-R1
significantly advances performance on the VSI-Bench benchmark, achieving a
7.4% gain over the baseline and outperforming strong contemporary models. This
work validates the effectiveness of specialized data curation and optimization
techniques for improving complex spatial reasoning in video MLLMs.
|
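Editor's note: the record above fine-tunes with Group Relative Policy Optimization (GRPO). Its core group-relative advantage, standardizing rewards within a group of sampled responses to the same prompt, can be sketched as follows; this is the generic formulation, not the Spatial-R1 training code.

    # Sketch: per-response advantage in GRPO, (r_i - mean(r)) / (std(r) + eps).
    import numpy as np

    def grpo_advantages(group_rewards, eps=1e-6):
        r = np.asarray(group_rewards, dtype=float)
        return (r - r.mean()) / (r.std() + eps)

    print(grpo_advantages([1.0, 0.0, 0.0, 1.0, 1.0]))  # correct answers get positive advantage

Responses that beat their own group's average are reinforced, which removes the need for a separate learned value baseline.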
2504.01833 | Sumuk Shashidhar | Sumuk Shashidhar, Cl\'ementine Fourrier, Alina Lozovskia, Thomas Wolf,
Gokhan Tur, Dilek Hakkani-T\"ur | YourBench: Easy Custom Evaluation Sets for Everyone | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Evaluating large language models (LLMs) effectively remains a critical
bottleneck, as traditional static benchmarks suffer from saturation and
contamination, while human evaluations are costly and slow. This hinders timely
or domain-specific assessment, crucial for real-world applications. We
introduce YourBench, a novel, open-source framework that addresses these
limitations by enabling dynamic, automated generation of reliable, up-to-date,
and domain-tailored benchmarks cheaply and without manual annotation, directly
from user-provided documents. We demonstrate its efficacy by replicating 7
diverse MMLU subsets using minimal source text, achieving this for under 15 USD
in total inference costs while perfectly preserving the relative model
performance rankings (Spearman Rho = 1) observed on the original benchmark. To
ensure that YourBench generates data grounded in provided input instead of
relying on posterior parametric knowledge in models, we also introduce
Tempora-0325, a novel dataset of over 7K diverse documents, published
exclusively after March 2025. Our comprehensive analysis spans 26 SoTA models
from 7 major families across varying scales (3-671B parameters) to validate the
quality of generated evaluations through rigorous algorithmic checks (e.g.,
citation grounding) and human assessments. We release the YourBench library,
the Tempora-0325 dataset, 150k+ question-answer pairs based on Tempora and all
evaluation and inference traces to facilitate reproducible research and empower
the community to generate bespoke benchmarks on demand, fostering more relevant
and trustworthy LLM evaluation.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 15:40:24 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shashidhar",
"Sumuk",
""
],
[
"Fourrier",
"Clémentine",
""
],
[
"Lozovskia",
"Alina",
""
],
[
"Wolf",
"Thomas",
""
],
[
"Tur",
"Gokhan",
""
],
[
"Hakkani-Tür",
"Dilek",
""
]
] | TITLE: YourBench: Easy Custom Evaluation Sets for Everyone
ABSTRACT: Evaluating large language models (LLMs) effectively remains a critical
bottleneck, as traditional static benchmarks suffer from saturation and
contamination, while human evaluations are costly and slow. This hinders timely
or domain-specific assessment, crucial for real-world applications. We
introduce YourBench, a novel, open-source framework that addresses these
limitations by enabling dynamic, automated generation of reliable, up-to-date,
and domain-tailored benchmarks cheaply and without manual annotation, directly
from user-provided documents. We demonstrate its efficacy by replicating 7
diverse MMLU subsets using minimal source text, achieving this for under 15 USD
in total inference costs while perfectly preserving the relative model
performance rankings (Spearman Rho = 1) observed on the original benchmark. To
ensure that YourBench generates data grounded in provided input instead of
relying on posterior parametric knowledge in models, we also introduce
Tempora-0325, a novel dataset of over 7K diverse documents, published
exclusively after March 2025. Our comprehensive analysis spans 26 SoTA models
from 7 major families across varying scales (3-671B parameters) to validate the
quality of generated evaluations through rigorous algorithmic checks (e.g.,
citation grounding) and human assessments. We release the YourBench library,
the Tempora-0325 dataset, 150k+ question-answer pairs based on Tempora and all
evaluation and inference traces to facilitate reproducible research and empower
the community to generate bespoke benchmarks on demand, fostering more relevant
and trustworthy LLM evaluation.
|
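Editor's note: the "Spearman Rho = 1" claim above is a rank-preservation check between model scores on the original benchmark and on the YourBench-style replica. A minimal version of that check with SciPy (the scores here are invented for illustration):

    # Sketch: absolute scores differ, but the model ranking is identical -> rho = 1.
    from scipy.stats import spearmanr

    original = {"model_a": 71.2, "model_b": 65.4, "model_c": 58.9}
    replica  = {"model_a": 48.1, "model_b": 41.7, "model_c": 33.0}  # same order, new scale

    models = sorted(original)
    rho, _ = spearmanr([original[m] for m in models], [replica[m] for m in models])
    print(rho)  # 1.0 -> relative model ranking perfectly preserved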
2504.01838 | Nusrat Munia | Nusrat Munia and Abdullah-Al-Zubaer Imran | Prompting Medical Vision-Language Models to Mitigate Diagnosis Bias by
Generating Realistic Dermoscopic Images | Paper accepted at International Symposium on Biomedical Imaging (ISBI
2025) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence (AI) in skin disease diagnosis has improved
significantly, but a major concern is that these models frequently show biased
performance across subgroups, especially regarding sensitive attributes such as
skin color. To address these issues, we propose a novel generative AI-based
framework, namely, Dermatology Diffusion Transformer (DermDiT), which leverages
text prompts generated via Vision Language Models and multimodal text-image
learning to generate new dermoscopic images. We utilize large vision language
models to generate accurate and proper prompts for each dermoscopic image which
helps to generate synthetic images to improve the representation of
underrepresented groups (patient, disease, etc.) in highly imbalanced datasets
for clinical diagnoses. Our extensive experimentation shows that large
vision language models provide much more insightful representations, which
enable DermDiT to generate high-quality images. Our code is available at
https://github.com/Munia03/DermDiT
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 15:44:12 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Munia",
"Nusrat",
""
],
[
"Imran",
"Abdullah-Al-Zubaer",
""
]
] | TITLE: Prompting Medical Vision-Language Models to Mitigate Diagnosis Bias by
Generating Realistic Dermoscopic Images
ABSTRACT: Artificial Intelligence (AI) in skin disease diagnosis has improved
significantly, but a major concern is that these models frequently show biased
performance across subgroups, especially regarding sensitive attributes such as
skin color. To address these issues, we propose a novel generative AI-based
framework, namely, Dermatology Diffusion Transformer (DermDiT), which leverages
text prompts generated via Vision Language Models and multimodal text-image
learning to generate new dermoscopic images. We utilize large vision language
models to generate accurate and proper prompts for each dermoscopic image which
helps to generate synthetic images to improve the representation of
underrepresented groups (patient, disease, etc.) in highly imbalanced datasets
for clinical diagnoses. Our extensive experimentation shows that large
vision language models provide much more insightful representations, which
enable DermDiT to generate high-quality images. Our code is available at
https://github.com/Munia03/DermDiT
|
2504.01850 | Ali Al-Kaswan | Ali Al-Kaswan, Sebastian Deatc, Beg\"um Ko\c{c}, Arie van Deursen,
Maliheh Izadi | Code Red! On the Harmfulness of Applying Off-the-shelf Large Language
Models to Programming Tasks | FSE'25 Technical Track | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Nowadays, developers increasingly rely on solutions powered by Large Language
Models (LLM) to assist them with their coding tasks. This makes it crucial to
align these tools with human values to prevent malicious misuse. In this paper,
we propose a comprehensive framework for assessing the potential harmfulness of
LLMs within the software engineering domain. We begin by developing a taxonomy
of potentially harmful software engineering scenarios and subsequently, create
a dataset of prompts based on this taxonomy. To systematically assess the
responses, we design and validate an automatic evaluator that classifies the
outputs of a variety of LLMs both open-source and closed-source models, as well
as general-purpose and code-specific LLMs. Furthermore, we investigate the
impact of model size, architecture family, and alignment strategies on their
tendency to generate harmful content. The results show significant disparities
in the alignment of various LLMs for harmlessness. We find that some models and
model families, such as Openhermes, are more harmful than others and that
code-specific models do not perform better than their general-purpose
counterparts. Notably, some fine-tuned models perform significantly worse than
their base models due to their design choices. On the other hand, we find that
larger models tend to be more helpful and are less likely to respond with
harmful information. These results highlight the importance of targeted
alignment strategies tailored to the unique challenges of software engineering
tasks and provide a foundation for future work in this critical area.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:00:14 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Al-Kaswan",
"Ali",
""
],
[
"Deatc",
"Sebastian",
""
],
[
"Koç",
"Begüm",
""
],
[
"van Deursen",
"Arie",
""
],
[
"Izadi",
"Maliheh",
""
]
] | TITLE: Code Red! On the Harmfulness of Applying Off-the-shelf Large Language
Models to Programming Tasks
ABSTRACT: Nowadays, developers increasingly rely on solutions powered by Large Language
Models (LLM) to assist them with their coding tasks. This makes it crucial to
align these tools with human values to prevent malicious misuse. In this paper,
we propose a comprehensive framework for assessing the potential harmfulness of
LLMs within the software engineering domain. We begin by developing a taxonomy
of potentially harmful software engineering scenarios and subsequently, create
a dataset of prompts based on this taxonomy. To systematically assess the
responses, we design and validate an automatic evaluator that classifies the
outputs of a variety of LLMs both open-source and closed-source models, as well
as general-purpose and code-specific LLMs. Furthermore, we investigate the
impact of model size, architecture family, and alignment strategies on their
tendency to generate harmful content. The results show significant disparities
in the alignment of various LLMs for harmlessness. We find that some models and
model families, such as Openhermes, are more harmful than others and that
code-specific models do not perform better than their general-purpose
counterparts. Notably, some fine-tuned models perform significantly worse than
their base models due to their design choices. On the other hand, we find that
larger models tend to be more helpful and are less likely to respond with
harmful information. These results highlight the importance of targeted
alignment strategies tailored to the unique challenges of software engineering
tasks and provide a foundation for future work in this critical area.
|
2504.01857 | Zhiwei Yu | Zhiwei Yu, Tuo Li, Changhong Wang, Hui Chen, Lang Zhou | Cross-Lingual Consistency: A Novel Inference Framework for Advancing
Reasoning in Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chain-of-thought (CoT) has emerged as a critical mechanism for enhancing
reasoning capabilities in large language models (LLMs), with self-consistency
demonstrating notable promise in boosting performance. However, inherent
linguistic biases in multilingual training corpora frequently cause semantic
drift and logical inconsistencies, especially in sub-10B parameter LLMs
handling complex inference tasks. To overcome these constraints, we propose the
Cross-Lingual Consistency (CLC) framework, an innovative inference paradigm
that integrates multilingual reasoning paths through majority voting to elevate
LLMs' reasoning capabilities. Empirical evaluations on the CMATH dataset reveal
CLC's superiority over the conventional self-consistency method, delivering
9.5%, 6.5%, and 6.0% absolute accuracy gains for DeepSeek-Math-7B-Instruct,
Qwen2.5-Math-7B-Instruct, and Gemma2-9B-Instruct respectively. Expanding CLC's
linguistic scope to 11 diverse languages yields two synergistic benefits: 1)
neutralizing linguistic biases in multilingual training corpora through
multilingual ensemble voting, 2) escaping monolingual reasoning traps by
exploring the broader multilingual solution space. These dual benefits
empirically enable more globally optimal reasoning paths compared to
monolingual self-consistency baselines, as evidenced by the 4.1%-18.5% accuracy
gains using Gemma2-9B-Instruct on the MGSM dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:09:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Yu",
"Zhiwei",
""
],
[
"Li",
"Tuo",
""
],
[
"Wang",
"Changhong",
""
],
[
"Chen",
"Hui",
""
],
[
"Zhou",
"Lang",
""
]
] | TITLE: Cross-Lingual Consistency: A Novel Inference Framework for Advancing
Reasoning in Large Language Models
ABSTRACT: Chain-of-thought (CoT) has emerged as a critical mechanism for enhancing
reasoning capabilities in large language models (LLMs), with self-consistency
demonstrating notable promise in boosting performance. However, inherent
linguistic biases in multilingual training corpora frequently cause semantic
drift and logical inconsistencies, especially in sub-10B parameter LLMs
handling complex inference tasks. To overcome these constraints, we propose the
Cross-Lingual Consistency (CLC) framework, an innovative inference paradigm
that integrates multilingual reasoning paths through majority voting to elevate
LLMs' reasoning capabilities. Empirical evaluations on the CMATH dataset reveal
CLC's superiority over the conventional self-consistency method, delivering
9.5%, 6.5%, and 6.0% absolute accuracy gains for DeepSeek-Math-7B-Instruct,
Qwen2.5-Math-7B-Instruct, and Gemma2-9B-Instruct respectively. Expanding CLC's
linguistic scope to 11 diverse languages yields two synergistic benefits: 1)
neutralizing linguistic biases in multilingual training corpora through
multilingual ensemble voting, 2) escaping monolingual reasoning traps by
exploring the broader multilingual solution space. These dual benefits
empirically enable more globally optimal reasoning paths compared to
monolingual self-consistency baselines, as evidenced by the 4.1%-18.5% accuracy
gains using Gemma2-9B-Instruct on the MGSM dataset.
|
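To make the majority-voting scheme described in the CLC abstract above concrete, here is a minimal Python sketch. It assumes hypothetical helpers -- translate(), generate(), and extract_answer() -- since the abstract does not specify the underlying model API; the language list and sample counts are likewise illustrative.

from collections import Counter

def cross_lingual_consistency(question, translate, generate, extract_answer,
                              languages=("en", "zh", "es"), samples_per_lang=3):
    # Sample chain-of-thought completions for the question rendered in each
    # language, then majority-vote the extracted final answers across paths.
    votes = Counter()
    for lang in languages:
        prompt = translate(question, lang)  # hypothetical translation helper
        for _ in range(samples_per_lang):
            completion = generate(prompt)   # hypothetical LLM call
            votes[extract_answer(completion)] += 1
    answer, _ = votes.most_common(1)[0]
    return answer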
2504.01861 | Yeong Gwang Son | Yeong Gwang Son, Seunghwan Um, Juyong Hong, Tat Hieu Bui, and Hyouk
Ryeol Choi | Corner-Grasp: Multi-Action Grasp Detection and Active Gripper Adaptation
for Grasping in Cluttered Environments | 11 pages, 14 figures | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic grasping is an essential capability, playing a critical role in
enabling robots to physically interact with their surroundings. Despite
extensive research, challenges remain due to the diverse shapes and properties
of target objects, inaccuracies in sensing, and potential collisions with the
environment. In this work, we propose a method for effectively grasping in
cluttered bin-picking environments where these challenges intersect. We utilize
a multi-functional gripper that combines both suction and finger grasping to
handle a wide range of objects. We also present an active gripper adaptation
strategy to minimize collisions between the gripper hardware and the
surrounding environment by actively leveraging the reciprocating suction cup
and reconfigurable finger motion. To fully utilize the gripper's capabilities,
we built a neural network that detects suction and finger grasp points from a
single input RGB-D image. This network is trained using a large-scale
synthetic dataset generated from simulation. In addition to this, we propose an
efficient approach to constructing a real-world dataset that facilitates grasp
point detection on various objects with diverse characteristics. Experimental
results show that the proposed method can grasp objects in cluttered
bin-picking scenarios and prevent collisions with environmental constraints
such as a corner of the bin. Our proposed method demonstrated its effectiveness
in the 9th Robotic Grasping and Manipulation Competition (RGMC) held at ICRA
2024.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:12:28 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Son",
"Yeong Gwang",
""
],
[
"Um",
"Seunghwan",
""
],
[
"Hong",
"Juyong",
""
],
[
"Bui",
"Tat Hieu",
""
],
[
"Choi",
"Hyouk Ryeol",
""
]
] | TITLE: Corner-Grasp: Multi-Action Grasp Detection and Active Gripper Adaptation
for Grasping in Cluttered Environments
ABSTRACT: Robotic grasping is an essential capability, playing a critical role in
enabling robots to physically interact with their surroundings. Despite
extensive research, challenges remain due to the diverse shapes and properties
of target objects, inaccuracies in sensing, and potential collisions with the
environment. In this work, we propose a method for effectively grasping in
cluttered bin-picking environments where these challenges intersect. We utilize
a multi-functional gripper that combines both suction and finger grasping to
handle a wide range of objects. We also present an active gripper adaptation
strategy to minimize collisions between the gripper hardware and the
surrounding environment by actively leveraging the reciprocating suction cup
and reconfigurable finger motion. To fully utilize the gripper's capabilities,
we built a neural network that detects suction and finger grasp points from a
single input RGB-D image. This network is trained using a large-scale
synthetic dataset generated from simulation. In addition to this, we propose an
efficient approach to constructing a real-world dataset that facilitates grasp
point detection on various objects with diverse characteristics. Experimental
results show that the proposed method can grasp objects in cluttered
bin-picking scenarios and prevent collisions with environmental constraints
such as a corner of the bin. Our proposed method demonstrated its effectiveness
in the 9th Robotic Grasping and Manipulation Competition (RGMC) held at ICRA
2024.
|
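The Corner-Grasp abstract above describes a network that proposes both suction and finger grasps; a downstream policy still has to pick one. The selection rule and candidate format below are invented for illustration only -- the paper's actual decision logic is not given in the abstract.

def select_grasp(candidates):
    # candidates: list of (action, score, collision_free) tuples, one per
    # grasp proposal; prefer the highest-scoring collision-free action,
    # whether it is a suction or a finger grasp.
    feasible = [c for c in candidates if c[2]]
    return max(feasible, key=lambda c: c[1]) if feasible else None

print(select_grasp([("suction", 0.81, True), ("finger", 0.93, False),
                    ("finger", 0.77, True)]))  # ('suction', 0.81, True)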
2504.01863 | Mark Smucker | Mark D. Smucker and Houmaan Chamani | Extending MovieLens-32M to Provide New Evaluation Objectives | Our extension to MovieLens-32M is available for researchers at
https://uwaterlooir.github.io/datasets/ml-32m-extension | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Offline evaluation of recommender systems has traditionally treated the
problem as a machine learning problem. In the classic case of recommending
movies, where the user has provided explicit ratings of which movies they like
and don't like, each user's ratings are split into test and train sets, and the
evaluation task becomes to predict the held out test data using the training
data. This machine learning style of evaluation makes the objective to
recommend the movies that a user has watched and rated highly, which is not the
same task as helping the user find movies that they would enjoy if they watched
them. This mismatch in objective between evaluation and task is a compromise to
avoid the cost of asking a user to evaluate recommendations by watching each
movie. As a resource available for download, we offer an extension to the
MovieLens-32M dataset that provides for new evaluation objectives. Our primary
objective is to predict the movies that a user would be interested in watching,
i.e. predict their watchlist. To construct this extension, we recruited
MovieLens users, collected their profiles, made recommendations with a diverse
set of algorithms, pooled the recommendations, and had the users assess the
pools. Notably, we found that the traditional machine learning style of
evaluation ranks the Popular algorithm, which recommends movies based on total
number of ratings in the system, in the middle of the twenty-two recommendation
runs we used to build the pools. In contrast, when we rank the runs by users'
interest in watching movies, we find that recommending popular movies as a
recommendation algorithm becomes one of the worst-performing runs. It appears
that by asking users to assess their personal recommendations, we can alleviate
the popularity bias issues created by using information retrieval effectiveness
measures for the evaluation of recommender systems.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:15:46 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Smucker",
"Mark D.",
""
],
[
"Chamani",
"Houmaan",
""
]
] | TITLE: Extending MovieLens-32M to Provide New Evaluation Objectives
ABSTRACT: Offline evaluation of recommender systems has traditionally treated the
problem as a machine learning problem. In the classic case of recommending
movies, where the user has provided explicit ratings of which movies they like
and don't like, each user's ratings are split into test and train sets, and the
evaluation task becomes to predict the held out test data using the training
data. This machine learning style of evaluation makes the objective to
recommend the movies that a user has watched and rated highly, which is not the
same task as helping the user find movies that they would enjoy if they watched
them. This mismatch in objective between evaluation and task is a compromise to
avoid the cost of asking a user to evaluate recommendations by watching each
movie. As a resource available for download, we offer an extension to the
MovieLens-32M dataset that provides for new evaluation objectives. Our primary
objective is to predict the movies that a user would be interested in watching,
i.e., predict their watchlist. To construct this extension, we recruited
MovieLens users, collected their profiles, made recommendations with a diverse
set of algorithms, pooled the recommendations, and had the users assess the
pools. Notably, we found that the traditional machine learning style of
evaluation ranks the Popular algorithm, which recommends movies based on total
number of ratings in the system, in the middle of the twenty-two recommendation
runs we used to build the pools. In contrast, when we rank the runs by users'
interest in watching movies, we find that recommending popular movies as a
recommendation algorithm becomes one of the worst-performing runs. It appears
that by asking users to assess their personal recommendations, we can alleviate
the popularity bias issues created by using information retrieval effectiveness
measures for the evaluation of recommender systems.
|
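The pooling methodology in the MovieLens-32M extension abstract above mirrors classic IR evaluation: union the top results of many runs and have users assess the pool. A minimal sketch with made-up run contents:

def pool_recommendations(runs, depth=10):
    # Union the top-`depth` movies from every run to form the pool that
    # users are asked to assess; each run maps to a ranked list of movie ids.
    pool = set()
    for ranked_movies in runs.values():
        pool.update(ranked_movies[:depth])
    return pool

runs = {
    "popular": ["m1", "m2", "m3"],
    "item_knn": ["m3", "m7", "m9"],
    "mf": ["m2", "m9", "m11"],
}
print(sorted(pool_recommendations(runs, depth=2)))  # ['m1', 'm2', 'm3', 'm7', 'm9']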
2504.01875 | Ben Keslaki | Ben Keslaki | Architect Your Landscape Approach (AYLA) for Optimizations in Deep
Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Stochastic Gradient Descent (SGD) and its variants, such as ADAM, are
foundational to deep learning optimization, adjusting model parameters using
fixed or adaptive learning rates based on loss function gradients. However,
these methods often face challenges in balancing adaptability and efficiency in
non-convex, high-dimensional settings. This paper introduces AYLA, a novel
optimization technique that enhances training dynamics through loss function
transformations. By applying a tunable power-law transformation, AYLA preserves
critical points while scaling loss values to amplify gradient sensitivity,
accelerating convergence. We further propose a dynamic (effective) learning
rate that adapts to the transformed loss, improving optimization efficiency.
Empirical tests on finding the minimum of a synthetic non-convex polynomial, a
non-convex curve-fitting dataset, and digit classification (MNIST) demonstrate
that AYLA surpasses SGD and ADAM in convergence speed and stability. This
approach redefines the loss landscape for better optimization outcomes,
offering a promising advancement for deep neural networks; the approach can be
applied to any optimization method and potentially improve its performance.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:31:39 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Keslaki",
"Ben",
""
]
] | TITLE: Architect Your Landscape Approach (AYLA) for Optimizations in Deep
Learning
ABSTRACT: Stochastic Gradient Descent (SGD) and its variants, such as ADAM, are
foundational to deep learning optimization, adjusting model parameters using
fixed or adaptive learning rates based on loss function gradients. However,
these methods often face challenges in balancing adaptability and efficiency in
non-convex, high-dimensional settings. This paper introduces AYLA, a novel
optimization technique that enhances training dynamics through loss function
transformations. By applying a tunable power-law transformation, AYLA preserves
critical points while scaling loss values to amplify gradient sensitivity,
accelerating convergence. We further propose a dynamic (effective) learning
rate that adapts to the transformed loss, improving optimization efficiency.
Empirical tests on finding the minimum of a synthetic non-convex polynomial, a
non-convex curve-fitting dataset, and digit classification (MNIST) demonstrate
that AYLA surpasses SGD and ADAM in convergence speed and stability. This
approach redefines the loss landscape for better optimization outcomes,
offering a promising advancement for deep neural networks; the approach can be
applied to any optimization method and potentially improve its performance.
|
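One way to read the AYLA abstract above is as a loss-transformation wrapper around a standard optimizer. The PyTorch sketch below assumes the transformation is a simple power law, loss**p, applied to a non-negative loss; the paper's exact transformation and effective-learning-rate schedule may differ.

import torch

def ayla_style_step(model, loss_fn, optimizer, inputs, targets, power=2.0):
    # Backpropagate through loss**power instead of loss. Since
    # d(loss**p)/dloss = p * loss**(p-1), gradients are rescaled by a
    # loss-dependent factor (an implicit effective learning rate), while
    # the minimizers of a non-negative loss are preserved.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.pow(power).backward()
    optimizer.step()
    return loss.item()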
2504.01879 | Tushar Kataria | Abhilash Shankarampeta, Harsh Mahajan, Tushar Kataria, Dan Roth, Vivek
Gupta | TransientTables: Evaluating LLMs' Reasoning on Temporally Evolving
Semi-structured Tables | 19 Pages. 21 Tables, 1 figure | null | null | null | cs.CL cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans continuously make new discoveries, and understanding the temporal sequence
of events leading to these breakthroughs is essential for advancing science and
society. This ability to reason over time allows us to identify future steps
and understand the effects of financial and political decisions on our lives.
However, large language models (LLMs) are typically trained on static datasets,
limiting their ability to perform effective temporal reasoning. To assess the
temporal reasoning capabilities of LLMs, we present the TRANSIENTTABLES
dataset, which comprises 3,971 questions derived from over 14,000 tables,
spanning 1,238 entities across multiple time periods. We introduce a
template-based question-generation pipeline that harnesses LLMs to refine both
templates and questions. Additionally, we establish baseline results using
state-of-the-art LLMs to create a benchmark. We also introduce novel modeling
strategies centered around task decomposition, enhancing LLM performance.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:34:43 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Shankarampeta",
"Abhilash",
""
],
[
"Mahajan",
"Harsh",
""
],
[
"Kataria",
"Tushar",
""
],
[
"Roth",
"Dan",
""
],
[
"Gupta",
"Vivek",
""
]
] | TITLE: TransientTables: Evaluating LLMs' Reasoning on Temporally Evolving
Semi-structured Tables
ABSTRACT: Humans continuously make new discoveries, and understanding the temporal sequence
of events leading to these breakthroughs is essential for advancing science and
society. This ability to reason over time allows us to identify future steps
and understand the effects of financial and political decisions on our lives.
However, large language models (LLMs) are typically trained on static datasets,
limiting their ability to perform effective temporal reasoning. To assess the
temporal reasoning capabilities of LLMs, we present the TRANSIENTTABLES
dataset, which comprises 3,971 questions derived from over 14,000 tables,
spanning 1,238 entities across multiple time periods. We introduce a
template-based question-generation pipeline that harnesses LLMs to refine both
templates and questions. Additionally, we establish baseline results using
state-of-the-art LLMs to create a benchmark. We also introduce novel modeling
strategies centered around task decomposition, enhancing LLM performance.
|
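A template-based question-generation pipeline like the one named in the TransientTables abstract above can be pictured in a couple of lines; the template text and table row here are invented for illustration (the paper additionally uses LLMs to refine both templates and questions).

def fill_template(template, row):
    # Instantiate a question template with values drawn from one table row.
    return template.format(**row)

row = {"metric": "revenue", "entity": "Acme Corp", "year_a": 2019, "year_b": 2021}
template = "How did the {metric} of {entity} change between {year_a} and {year_b}?"
print(fill_template(template, row))
# How did the revenue of Acme Corp change between 2019 and 2021?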
2504.01882 | Diego Cajaraville-Aboy | Diego Cajaraville-Aboy, Marta Moure-Garrido, Carlos Beis-Penedo,
Carlos Garcia-Rubio, Rebeca P. D\'iaz-Redondo, Celeste Campo, Ana
Fern\'andez-Vilas, and Manuel Fern\'andez-Veiga | CO-DEFEND: Continuous Decentralized Federated Learning for Secure
DoH-Based Threat Detection | 15 pages, 8 figures, 4 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of DNS over HTTPS (DoH) tunneling by an attacker to hide malicious
activity within encrypted DNS traffic poses a serious threat to network
security, as it allows malicious actors to bypass traditional monitoring and
intrusion detection systems while evading detection by conventional traffic
analysis techniques. Machine Learning (ML) techniques can be used to detect DoH
tunnels; however, their effectiveness relies on large datasets containing both
benign and malicious traffic. Sharing such datasets across entities is
challenging due to privacy concerns. In this work, we propose CO-DEFEND
(Continuous Decentralized Federated Learning for Secure DoH-Based Threat
Detection), a Decentralized Federated Learning (DFL) framework that enables
multiple entities to collaboratively train a classification machine learning
model while preserving data privacy and enhancing resilience against single
points of failure. The proposed DFL framework, which is scalable and
privacy-preserving, is based on a federation process that allows multiple
entities to train their local models online using incoming DoH flows in real
time as they are processed by the entity. In addition, we adapt four classical
machine learning algorithms, Support Vector Machines (SVM), Logistic Regression
(LR), Decision Trees (DT), and Random Forest (RF), for federated scenarios,
comparing their results with more computationally complex alternatives such as
neural networks. We compare our proposed method by using the dataset
CIRA-CIC-DoHBrw-2020 with existing machine learning approaches to demonstrate
its effectiveness in detecting malicious DoH tunnels and the benefits it
brings.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:40:01 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cajaraville-Aboy",
"Diego",
""
],
[
"Moure-Garrido",
"Marta",
""
],
[
"Beis-Penedo",
"Carlos",
""
],
[
"Garcia-Rubio",
"Carlos",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Campo",
"Celeste",
""
],
[
"Fernández-Vilas",
"Ana",
""
],
[
"Fernández-Veiga",
"Manuel",
""
]
] | TITLE: CO-DEFEND: Continuous Decentralized Federated Learning for Secure
DoH-Based Threat Detection
ABSTRACT: The use of DNS over HTTPS (DoH) tunneling by an attacker to hide malicious
activity within encrypted DNS traffic poses a serious threat to network
security, as it allows malicious actors to bypass traditional monitoring and
intrusion detection systems while evading detection by conventional traffic
analysis techniques. Machine Learning (ML) techniques can be used to detect DoH
tunnels; however, their effectiveness relies on large datasets containing both
benign and malicious traffic. Sharing such datasets across entities is
challenging due to privacy concerns. In this work, we propose CO-DEFEND
(Continuous Decentralized Federated Learning for Secure DoH-Based Threat
Detection), a Decentralized Federated Learning (DFL) framework that enables
multiple entities to collaboratively train a classification machine learning
model while preserving data privacy and enhancing resilience against single
points of failure. The proposed DFL framework, which is scalable and
privacy-preserving, is based on a federation process that allows multiple
entities to train their local models online using incoming DoH flows in real
time as they are processed by the entity. In addition, we adapt four classical
machine learning algorithms, Support Vector Machines (SVM), Logistic Regression
(LR), Decision Trees (DT), and Random Forest (RF), for federated scenarios,
comparing their results with more computationally complex alternatives such as
neural networks. We compare our proposed method by using the dataset
CIRA-CIC-DoHBrw-2020 with existing machine learning approaches to demonstrate
its effectiveness in detecting malicious DoH tunnels and the benefits it
brings.
|
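As a rough illustration of the CO-DEFEND setting above -- classical classifiers trained online at each entity, with decentralized parameter sharing -- consider this scikit-learn sketch. The gossip-style averaging, synthetic features, and all-to-all topology are assumptions for illustration, not the paper's protocol.

import numpy as np
from sklearn.linear_model import SGDClassifier

def gossip_average(models):
    # Replace every peer's parameters with the peer average -- a minimal
    # stand-in for decentralized aggregation without a central server.
    coef = np.mean([m.coef_ for m in models], axis=0)
    intercept = np.mean([m.intercept_ for m in models], axis=0)
    for m in models:
        m.coef_, m.intercept_ = coef.copy(), intercept.copy()

rng = np.random.default_rng(0)
peers = [SGDClassifier(loss="log_loss") for _ in range(3)]
for m in peers:
    X = rng.normal(size=(64, 10))        # synthetic local DoH-flow features
    y = rng.integers(0, 2, size=64)      # benign=0 / malicious=1 labels
    m.partial_fit(X, y, classes=[0, 1])  # online training on incoming flows
gossip_average(peers)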
2504.01901 | Haochen Wang | Haochen Wang and Yucheng Zhao and Tiancai Wang and Haoqiang Fan and
Xiangyu Zhang and Zhaoxiang Zhang | Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness | null | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://creativecommons.org/licenses/by/4.0/ | The rapid development of Large Multimodal Models (LMMs) for 2D images and
videos has spurred efforts to adapt these models for interpreting 3D scenes.
However, the absence of large-scale 3D vision-language datasets has posed a
significant obstacle. To address this issue, typical approaches focus on
injecting 3D awareness into 2D LMMs by designing 3D input-level scene
representations. This work provides a new perspective. We introduce
reconstructive visual instruction tuning with 3D-awareness (Ross3D), which
integrates 3D-aware visual supervision into the training procedure.
Specifically, it incorporates cross-view and global-view reconstruction. The
former requires reconstructing masked views by aggregating overlapping
information from other views. The latter aims to aggregate information from all
available views to recover Bird's-Eye-View images, contributing to a
comprehensive overview of the entire scene. Empirically, Ross3D achieves
state-of-the-art performance across various 3D scene understanding benchmarks.
More importantly, our semi-supervised experiments demonstrate significant
potential in leveraging large amounts of unlabeled 3D vision-only data.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:59:55 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Haochen",
""
],
[
"Zhao",
"Yucheng",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Fan",
"Haoqiang",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Zhang",
"Zhaoxiang",
""
]
] | TITLE: Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness
ABSTRACT: The rapid development of Large Multimodal Models (LMMs) for 2D images and
videos has spurred efforts to adapt these models for interpreting 3D scenes.
However, the absence of large-scale 3D vision-language datasets has posed a
significant obstacle. To address this issue, typical approaches focus on
injecting 3D awareness into 2D LMMs by designing 3D input-level scene
representations. This work provides a new perspective. We introduce
reconstructive visual instruction tuning with 3D-awareness (Ross3D), which
integrates 3D-aware visual supervision into the training procedure.
Specifically, it incorporates cross-view and global-view reconstruction. The
former requires reconstructing masked views by aggregating overlapping
information from other views. The latter aims to aggregate information from all
available views to recover Bird's-Eye-View images, contributing to a
comprehensive overview of the entire scene. Empirically, Ross3D achieves
state-of-the-art performance across various 3D scene understanding benchmarks.
More importantly, our semi-supervised experiments demonstrate significant
potential in leveraging large amounts of unlabeled 3D vision-only data.
|
2504.01903 | Zijun Wang | Zijun Wang, Haoqin Tu, Yuhan Wang, Juncheng Wu, Jieru Mei, Brian R.
Bartoldson, Bhavya Kailkhura, Cihang Xie | STAR-1: Safer Alignment of Reasoning LLMs with 1K Data | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces STAR-1, a high-quality, just-1k-scale safety dataset
specifically designed for large reasoning models (LRMs) like DeepSeek-R1. Built
on three core principles -- diversity, deliberative reasoning, and rigorous
filtering -- STAR-1 aims to address the critical need for safety alignment in
LRMs. Specifically, we begin by integrating existing open-source safety
datasets from diverse sources. Then, we curate safety policies to generate
policy-grounded deliberative reasoning samples. Lastly, we apply a GPT-4o-based
safety scoring system to select training examples aligned with best practices.
Experimental results show that fine-tuning LRMs with STAR-1 leads to an average
40% improvement in safety performance across four benchmarks, while only
incurring a marginal decrease (e.g., an average of 1.1%) in reasoning ability
measured across five reasoning tasks. Extensive ablation studies further
validate the importance of our design principles in constructing STAR-1 and
analyze its efficacy across both LRMs and traditional LLMs. Our project page is
https://ucsc-vlaa.github.io/STAR-1.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:04:04 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Wang",
"Zijun",
""
],
[
"Tu",
"Haoqin",
""
],
[
"Wang",
"Yuhan",
""
],
[
"Wu",
"Juncheng",
""
],
[
"Mei",
"Jieru",
""
],
[
"Bartoldson",
"Brian R.",
""
],
[
"Kailkhura",
"Bhavya",
""
],
[
"Xie",
"Cihang",
""
]
] | TITLE: STAR-1: Safer Alignment of Reasoning LLMs with 1K Data
ABSTRACT: This paper introduces STAR-1, a high-quality, just-1k-scale safety dataset
specifically designed for large reasoning models (LRMs) like DeepSeek-R1. Built
on three core principles -- diversity, deliberative reasoning, and rigorous
filtering -- STAR-1 aims to address the critical need for safety alignment in
LRMs. Specifically, we begin by integrating existing open-source safety
datasets from diverse sources. Then, we curate safety policies to generate
policy-grounded deliberative reasoning samples. Lastly, we apply a GPT-4o-based
safety scoring system to select training examples aligned with best practices.
Experimental results show that fine-tuning LRMs with STAR-1 leads to an average
40% improvement in safety performance across four benchmarks, while only
incurring a marginal decrease (e.g., an average of 1.1%) in reasoning ability
measured across five reasoning tasks. Extensive ablation studies further
validate the importance of our design principles in constructing STAR-1 and
analyze its efficacy across both LRMs and traditional LLMs. Our project page is
https://ucsc-vlaa.github.io/STAR-1.
|
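The final curation stage in the STAR-1 abstract above reduces to thresholding judge-assigned scores. In the sketch below, score_fn is a hypothetical stand-in for the GPT-4o-based safety scoring system, and the scale and threshold are invented; only the filtering pattern itself is taken from the abstract.

def select_by_safety_score(samples, score_fn, threshold=8):
    # Keep only candidate training samples whose safety score clears the bar.
    return [s for s in samples if score_fn(s) >= threshold]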
2504.01916 | Mothilal Asokan | Mothilal Asokan, Kebin Wu, Fatima Albreiki | FineLIP: Extending CLIP's Reach via Fine-Grained Alignment with Longer
Text Inputs | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | As a pioneering vision-language model, CLIP (Contrastive Language-Image
Pre-training) has achieved significant success across various domains and a
wide range of downstream vision-language tasks. However, the text encoders in
popular CLIP models are limited to processing only 77 text tokens, which
constrains their ability to effectively handle longer, detail-rich captions.
Additionally, CLIP models often struggle to effectively capture detailed visual
and textual information, which hampers their performance on tasks that require
fine-grained analysis. To address these limitations, we present a novel
approach, \textbf{FineLIP}, that extends the capabilities of CLIP. FineLIP
enhances cross-modal text-image mapping by incorporating \textbf{Fine}-grained
alignment with \textbf{L}onger text input within the CL\textbf{IP}-style
framework. FineLIP first extends the positional embeddings to handle longer
text, followed by the dynamic aggregation of local image and text tokens. The
aggregated results are then used to enforce fine-grained token-to-token
cross-modal alignment. We validate our model on datasets with long, detailed
captions across two tasks: zero-shot cross-modal retrieval and text-to-image
generation. Quantitative and qualitative experimental results demonstrate the
effectiveness of FineLIP, outperforming existing state-of-the-art approaches.
Furthermore, comprehensive ablation studies validate the benefits of key design
elements within FineLIP.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:19:59 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Asokan",
"Mothilal",
""
],
[
"Wu",
"Kebin",
""
],
[
"Albreiki",
"Fatima",
""
]
] | TITLE: FineLIP: Extending CLIP's Reach via Fine-Grained Alignment with Longer
Text Inputs
ABSTRACT: As a pioneering vision-language model, CLIP (Contrastive Language-Image
Pre-training) has achieved significant success across various domains and a
wide range of downstream vision-language tasks. However, the text encoders in
popular CLIP models are limited to processing only 77 text tokens, which
constrains their ability to effectively handle longer, detail-rich captions.
Additionally, CLIP models often struggle to effectively capture detailed visual
and textual information, which hampers their performance on tasks that require
fine-grained analysis. To address these limitations, we present a novel
approach, \textbf{FineLIP}, that extends the capabilities of CLIP. FineLIP
enhances cross-modal text-image mapping by incorporating \textbf{Fine}-grained
alignment with \textbf{L}onger text input within the CL\textbf{IP}-style
framework. FineLIP first extends the positional embeddings to handle longer
text, followed by the dynamic aggregation of local image and text tokens. The
aggregated results are then used to enforce fine-grained token-to-token
cross-modal alignment. We validate our model on datasets with long, detailed
captions across two tasks: zero-shot cross-modal retrieval and text-to-image
generation. Quantitative and qualitative experimental results demonstrate the
effectiveness of FineLIP, outperforming existing state-of-the-art approaches.
Furthermore, comprehensive ablation studies validate the benefits of key design
elements within FineLIP.
|
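The 77-token limit mentioned in the FineLIP abstract above comes from CLIP's fixed positional-embedding table. One common way to accept longer inputs is to interpolate that table, sketched below in PyTorch; whether FineLIP extends the embeddings this way is not stated in the abstract, so treat this purely as an illustration.

import torch
import torch.nn.functional as F

def extend_positional_embeddings(pos_emb, new_len):
    # Stretch a (seq_len, dim) positional-embedding table to new_len rows by
    # linear interpolation along the sequence axis.
    emb = pos_emb.T.unsqueeze(0)  # (seq_len, dim) -> (1, dim, seq_len)
    emb = F.interpolate(emb, size=new_len, mode="linear", align_corners=True)
    return emb.squeeze(0).T       # back to (new_len, dim)

clip_pos = torch.randn(77, 512)  # stand-in for CLIP's learned table
longer = extend_positional_embeddings(clip_pos, 248)
print(longer.shape)              # torch.Size([248, 512])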
2504.01921 | Harsh Vardhan | Harsh Vardhan, Xiaofan Yu, Tajana Rosing, Arya Mazumdar | Client Selection in Federated Learning with Data Heterogeneity and
Network Latencies | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) is a distributed machine learning paradigm where
multiple clients conduct local training based on their private data, then the
updated models are sent to a central server for global aggregation. The
practical convergence of FL is challenged by multiple factors, with the primary
hurdle being the heterogeneity among clients. This heterogeneity manifests as
data heterogeneity concerning local data distribution and latency heterogeneity
during model transmission to the server. While prior research has introduced
various efficient client selection methods to alleviate the negative impacts of
either of these heterogeneities individually, efficient methods to handle
real-world settings where both these heterogeneities exist simultaneously do
not exist. In this paper, we propose two novel theoretically optimal client
selection schemes that can handle both these heterogeneities. Our methods
involve solving, every round, simple optimization problems obtained by minimizing
the theoretical runtime to convergence. Empirical evaluations on 9 datasets
with non-iid data distributions, 2 practical delay distributions, and
non-convex neural network models demonstrate that our algorithms are at least
competitive with, and up to 20 times better than, the best existing baselines.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:31:15 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Vardhan",
"Harsh",
""
],
[
"Yu",
"Xiaofan",
""
],
[
"Rosing",
"Tajana",
""
],
[
"Mazumdar",
"Arya",
""
]
] | TITLE: Client Selection in Federated Learning with Data Heterogeneity and
Network Latencies
ABSTRACT: Federated learning (FL) is a distributed machine learning paradigm where
multiple clients conduct local training based on their private data, then the
updated models are sent to a central server for global aggregation. The
practical convergence of FL is challenged by multiple factors, with the primary
hurdle being the heterogeneity among clients. This heterogeneity manifests as
data heterogeneity concerning local data distribution and latency heterogeneity
during model transmission to the server. While prior research has introduced
various efficient client selection methods to alleviate the negative impacts of
either of these heterogeneities individually, efficient methods to handle
real-world settings where both these heterogeneities exist simultaneously do
not exist. In this paper, we propose two novel theoretically optimal client
selection schemes that can handle both these heterogeneities. Our methods
involve solving, every round, simple optimization problems obtained by minimizing
the theoretical runtime to convergence. Empirical evaluations on 9 datasets
with non-iid data distributions, 2 practical delay distributions, and
non-convex neural network models demonstrate that our algorithms are at least
competitive with, and up to 20 times better than, the best existing baselines.
|
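For intuition about the two heterogeneities in the client-selection abstract above, here is a deliberately naive greedy score -- local data per unit of latency. The paper instead solves a principled per-round optimization minimizing the theoretical runtime to convergence, so this sketch is only a stand-in with synthetic numbers.

def select_clients(latencies, data_sizes, k):
    # Score each client by how much data it contributes per second of delay,
    # then keep the k best -- one crude way to trade off both heterogeneities.
    scores = {c: data_sizes[c] / latencies[c] for c in latencies}
    return sorted(scores, key=scores.get, reverse=True)[:k]

latencies = {"a": 0.2, "b": 1.5, "c": 0.4, "d": 0.9}     # seconds (synthetic)
data_sizes = {"a": 500, "b": 8000, "c": 1200, "d": 600}  # local sample counts
print(select_clients(latencies, data_sizes, k=2))        # ['b', 'c'] here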
2504.01922 | Zhaoyang Cao | Zhaoyang Cao, John Nguyen, Reza Zafarani | Is Less Really More? Fake News Detection with Limited Information | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | The threat that online fake news and misinformation pose to democracy,
justice, public confidence, and especially to vulnerable populations, has led
to a sharp increase in the need for fake news detection and intervention.
Whether multi-modal or pure text-based, most fake news detection methods depend
on textual analysis of entire articles. However, these fake news detection
methods come with certain limitations. For instance, fake news detection
methods that rely on full text can be computationally inefficient, demand large
amounts of training data to achieve competitive accuracy, and may lack
robustness across different datasets. This is because fake news datasets have
strong variations in terms of the level and types of information they provide;
where some can include large paragraphs of text with images and metadata,
others can be a few short sentences. Perhaps if one could only use minimal
information to detect fake news, fake news detection methods could become more
robust and resilient to the lack of information. We aim to overcome these
limitations by detecting fake news using systematically selected, limited
information that is both effective and capable of delivering robust, promising
performance. We propose a framework called SLIM (Systematically-selected Limited
Information) for fake news detection. In SLIM, we quantify the amount of
information by introducing information-theoretic measures. SLIM leverages
limited information to achieve performance in fake news detection comparable to
that of state-of-the-art obtained using the full text. Furthermore, by
combining various types of limited information, SLIM can perform even better
while significantly reducing the quantity of information required for training
compared to state-of-the-art language model-based fake news detection
techniques.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:32:37 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cao",
"Zhaoyang",
""
],
[
"Nguyen",
"John",
""
],
[
"Zafarani",
"Reza",
""
]
] | TITLE: Is Less Really More? Fake News Detection with Limited Information
ABSTRACT: The threat that online fake news and misinformation pose to democracy,
justice, public confidence, and especially to vulnerable populations, has led
to a sharp increase in the need for fake news detection and intervention.
Whether multi-modal or pure text-based, most fake news detection methods depend
on textual analysis of entire articles. However, these fake news detection
methods come with certain limitations. For instance, fake news detection
methods that rely on full text can be computationally inefficient, demand large
amounts of training data to achieve competitive accuracy, and may lack
robustness across different datasets. This is because fake news datasets have
strong variations in terms of the level and types of information they provide;
where some can include large paragraphs of text with images and metadata,
others can be a few short sentences. Perhaps if one could only use minimal
information to detect fake news, fake news detection methods could become more
robust and resilient to the lack of information. We aim to overcome these
limitations by detecting fake news using systematically selected, limited
information that is both effective and capable of delivering robust, promising
performance. We propose a framework called SLIM (Systematically-selected Limited
Information) for fake news detection. In SLIM, we quantify the amount of
information by introducing information-theoretic measures. SLIM leverages
limited information to achieve performance in fake news detection comparable to
that of state-of-the-art obtained using the full text. Furthermore, by
combining various types of limited information, SLIM can perform even better
while significantly reducing the quantity of information required for training
compared to state-of-the-art language model-based fake news detection
techniques.
|
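The SLIM abstract above quantifies "amount of information" with information-theoretic measures. The specific measures are not given, so the sketch below uses plain Shannon entropy over whitespace tokens as an illustrative proxy, applied to made-up article fields.

import math
from collections import Counter

def shannon_entropy(tokens):
    # Bits per token of a token sequence -- one simple information measure.
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

fields = {
    "headline": "Miracle cure doctors don't want you to know about",
    "first_sentence": "A viral post claims a common spice reverses aging.",
}
for name, text in fields.items():
    print(name, round(shannon_entropy(text.lower().split()), 2))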
2504.01925 | Haykel Snoussi | Haykel Snoussi and Davood Karimi | Equivariant Spherical CNNs for Accurate Fiber Orientation Distribution
Estimation in Neonatal Diffusion MRI with Reduced Acquisition Time | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Early and accurate assessment of brain microstructure using diffusion
Magnetic Resonance Imaging (dMRI) is crucial for identifying neurodevelopmental
disorders in neonates, but remains challenging due to low signal-to-noise ratio
(SNR), motion artifacts, and ongoing myelination. In this study, we propose a
rotationally equivariant Spherical Convolutional Neural Network (sCNN)
framework tailored for neonatal dMRI. We predict the Fiber Orientation
Distribution (FOD) from multi-shell dMRI signals acquired with a reduced set of
gradient directions (30% of the full protocol), enabling faster and more
cost-effective acquisitions. We train and evaluate the performance of our sCNN
using real data from 43 neonatal dMRI datasets provided by the Developing Human
Connectome Project (dHCP). Our results demonstrate that the sCNN achieves
significantly lower mean squared error (MSE) and higher angular correlation
coefficient (ACC) compared to a Multi-Layer Perceptron (MLP) baseline,
indicating improved accuracy in FOD estimation. Furthermore, tractography
results based on the sCNN-predicted FODs show improved anatomical plausibility,
coverage, and coherence compared to those from the MLP. These findings
highlight that sCNNs, with their inherent rotational equivariance, offer a
promising approach for accurate and clinically efficient dMRI analysis, paving
the way for improved diagnostic capabilities and characterization of early
brain development.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:36:51 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Snoussi",
"Haykel",
""
],
[
"Karimi",
"Davood",
""
]
] | TITLE: Equivariant Spherical CNNs for Accurate Fiber Orientation Distribution
Estimation in Neonatal Diffusion MRI with Reduced Acquisition Time
ABSTRACT: Early and accurate assessment of brain microstructure using diffusion
Magnetic Resonance Imaging (dMRI) is crucial for identifying neurodevelopmental
disorders in neonates, but remains challenging due to low signal-to-noise ratio
(SNR), motion artifacts, and ongoing myelination. In this study, we propose a
rotationally equivariant Spherical Convolutional Neural Network (sCNN)
framework tailored for neonatal dMRI. We predict the Fiber Orientation
Distribution (FOD) from multi-shell dMRI signals acquired with a reduced set of
gradient directions (30% of the full protocol), enabling faster and more
cost-effective acquisitions. We train and evaluate the performance of our sCNN
using real data from 43 neonatal dMRI datasets provided by the Developing Human
Connectome Project (dHCP). Our results demonstrate that the sCNN achieves
significantly lower mean squared error (MSE) and higher angular correlation
coefficient (ACC) compared to a Multi-Layer Perceptron (MLP) baseline,
indicating improved accuracy in FOD estimation. Furthermore, tractography
results based on the sCNN-predicted FODs show improved anatomical plausibility,
coverage, and coherence compared to those from the MLP. These findings
highlight that sCNNs, with their inherent rotational equivariance, offer a
promising approach for accurate and clinically efficient dMRI analysis, paving
the way for improved diagnostic capabilities and characterization of early
brain development.
|
2504.01930 | Washington Cunha | Washington Cunha, Leonardo Rocha, Marcos Andr\'e Gon\c{c}alves | A thorough benchmark of automatic text classification: From traditional
approaches to large language models | 7 pages, 2 figures, 3 tables | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automatic text classification (ATC) has experienced remarkable advancements
in the past decade, best exemplified by recent small and large language models
(SLMs and LLMs), built on Transformer architectures. Despite recent
effectiveness improvements, a comprehensive cost-benefit analysis investigating
whether the effectiveness gains of these recent approaches compensate for their
much higher costs when compared to more traditional text classification
approaches such as SVMs and Logistic Regression is still missing in the
literature. In this context, this work's main contributions are twofold: (i) we
provide a scientifically sound comparative analysis of the cost-benefit of
twelve traditional and recent ATC solutions including five open LLMs, and (ii)
a large benchmark comprising {22 datasets}, including sentiment analysis and
topic classification, with their (train-validation-test) partitions based on
folded cross-validation procedures, along with documentation and code. The
release of code, data, and documentation enables the community to replicate
experiments and advance the field in a more scientifically sound manner. Our
comparative experimental results indicate that LLMs outperform traditional
approaches (up to 26%-7.1% on average) and SLMs (up to 4.9%-1.9% on average) in
terms of effectiveness. However, LLMs incur significantly higher computational
costs due to fine-tuning, being, on average 590x and 8.5x slower than
traditional methods and SLMs, respectively. Results suggest the following
recommendations: (1) LLMs for applications that require the best possible
effectiveness and can afford the costs; (2) traditional methods such as
Logistic Regression and SVM for resource-limited applications or those that
cannot afford the cost of tuning large LLMs; and (3) SLMs like Roberta for
near-optimal effectiveness-efficiency trade-off.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:40:08 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Cunha",
"Washington",
""
],
[
"Rocha",
"Leonardo",
""
],
[
"Gonçalves",
"Marcos André",
""
]
] | TITLE: A thorough benchmark of automatic text classification: From traditional
approaches to large language models
ABSTRACT: Automatic text classification (ATC) has experienced remarkable advancements
in the past decade, best exemplified by recent small and large language models
(SLMs and LLMs), built on Transformer architectures. Despite recent
effectiveness improvements, a comprehensive cost-benefit analysis investigating
whether the effectiveness gains of these recent approaches compensate for their
much higher costs when compared to more traditional text classification
approaches such as SVMs and Logistic Regression is still missing in the
literature. In this context, this work's main contributions are twofold: (i) we
provide a scientifically sound comparative analysis of the cost-benefit of
twelve traditional and recent ATC solutions including five open LLMs, and (ii)
a large benchmark comprising {22 datasets}, including sentiment analysis and
topic classification, with their (train-validation-test) partitions based on
folded cross-validation procedures, along with documentation and code. The
release of code, data, and documentation enables the community to replicate
experiments and advance the field in a more scientifically sound manner. Our
comparative experimental results indicate that LLMs outperform traditional
approaches (up to 26%-7.1% on average) and SLMs (up to 4.9%-1.9% on average) in
terms of effectiveness. However, LLMs incur significantly higher computational
costs due to fine-tuning, being, on average 590x and 8.5x slower than
traditional methods and SLMs, respectively. Results suggest the following
recommendations: (1) LLMs for applications that require the best possible
effectiveness and can afford the costs; (2) traditional methods such as
Logistic Regression and SVM for resource-limited applications or those that
cannot afford the cost of tuning large LLMs; and (3) SLMs like Roberta for
near-optimal effectiveness-efficiency trade-off.
|
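The cost-benefit protocol in the benchmark abstract above amounts to reporting effectiveness next to training cost for each classifier. A toy version with synthetic features follows; real runs would use the 22 text datasets and folded cross-validation rather than make_classification.

import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in {"logreg": LogisticRegression(max_iter=1000),
                    "svm": LinearSVC()}.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)                      # cost side: wall-clock training
    print(name, round(model.score(X_te, y_te), 3),  # benefit side: accuracy
          f"{time.perf_counter() - t0:.2f}s")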
2504.01943 | Wasi Uddin Ahmad | Wasi Uddin Ahmad, Sean Narenthiran, Somshubra Majumdar, Aleksander
Ficek, Siddhartha Jain, Jocelyn Huang, Vahid Noroozi, Boris Ginsburg | OpenCodeReasoning: Advancing Data Distillation for Competitive Coding | Work in progress | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Since the advent of reasoning-based large language models, many have found
great success in distilling reasoning capabilities into student models. Such
techniques have significantly bridged the gap between reasoning and standard
LLMs on coding tasks. Despite this, much of the progress on distilling
reasoning models remains locked behind proprietary datasets or lacks details on
data curation, filtering and subsequent training. To address this, we construct
a superior supervised fine-tuning (SFT) dataset that we use to achieve
state-of-the-art coding capability results in models of various sizes. Our
distilled models use only SFT to achieve 61.8% on LiveCodeBench and 24.6% on
CodeContests, surpassing alternatives trained with reinforcement learning. We
then perform analysis on the data sources used to construct our dataset, the
impact of code execution filtering, and the importance of instruction/solution
diversity. We observe that execution filtering negatively affected benchmark
accuracy, leading us to prioritize instruction diversity over solution
correctness. Finally, we also analyze the token efficiency and reasoning
patterns utilized by these models. We will open-source these datasets and
distilled models to the community.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:50:31 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Narenthiran",
"Sean",
""
],
[
"Majumdar",
"Somshubra",
""
],
[
"Ficek",
"Aleksander",
""
],
[
"Jain",
"Siddhartha",
""
],
[
"Huang",
"Jocelyn",
""
],
[
"Noroozi",
"Vahid",
""
],
[
"Ginsburg",
"Boris",
""
]
] | TITLE: OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
ABSTRACT: Since the advent of reasoning-based large language models, many have found
great success in distilling reasoning capabilities into student models. Such
techniques have significantly bridged the gap between reasoning and standard
LLMs on coding tasks. Despite this, much of the progress on distilling
reasoning models remains locked behind proprietary datasets or lacks details on
data curation, filtering and subsequent training. To address this, we construct
a superior supervised fine-tuning (SFT) dataset that we use to achieve
state-of-the-art coding capability results in models of various sizes. Our
distilled models use only SFT to achieve 61.8% on LiveCodeBench and 24.6% on
CodeContests, surpassing alternatives trained with reinforcement learning. We
then perform analysis on the data sources used to construct our dataset, the
impact of code execution filtering, and the importance of instruction/solution
diversity. We observe that execution filtering negatively affected benchmark
accuracy, leading us to prioritize instruction diversity over solution
correctness. Finally, we also analyze the token efficiency and reasoning
patterns utilized by these models. We will open-source these datasets and
distilled models to the community.
|
2504.01947 | Daniel Becking | Daniel Becking, Ingo Friese, Karsten M\"uller, Thomas Buchholz, Mandy
Galkow-Schneider, Wojciech Samek, Detlev Marpe | Efficient Federated Learning Tiny Language Models for Mobile Network
Feature Prediction | Accepted at 2025 EuCNC & 6G Summit Poster Session | null | null | null | cs.LG cs.AI cs.DC eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In telecommunications, Autonomous Networks (ANs) automatically adjust
configurations based on specific requirements (e.g., bandwidth) and available
resources. These networks rely on continuous monitoring and intelligent
mechanisms for self-optimization, self-repair, and self-protection, nowadays
enhanced by Neural Networks (NNs) to enable predictive modeling and pattern
recognition. Here, Federated Learning (FL) allows multiple AN cells - each
equipped with NNs - to collaboratively train models while preserving data
privacy. However, FL requires frequent transmission of large neural data and
thus an efficient, standardized compression strategy for reliable
communication. To address this, we investigate NNCodec, a Fraunhofer
implementation of the ISO/IEC Neural Network Coding (NNC) standard, within a
novel FL framework that integrates tiny language models (TLMs) for various
mobile network feature prediction (e.g., ping, SNR or band frequency). Our
experimental results on the Berlin V2X dataset demonstrate that NNCodec
achieves transparent compression (i.e., negligible performance loss) while
reducing communication overhead to below 1%, showing the effectiveness of
combining NNC with FL in collaboratively learned autonomous mobile networks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:54:06 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Becking",
"Daniel",
""
],
[
"Friese",
"Ingo",
""
],
[
"Müller",
"Karsten",
""
],
[
"Buchholz",
"Thomas",
""
],
[
"Galkow-Schneider",
"Mandy",
""
],
[
"Samek",
"Wojciech",
""
],
[
"Marpe",
"Detlev",
""
]
] | TITLE: Efficient Federated Learning Tiny Language Models for Mobile Network
Feature Prediction
ABSTRACT: In telecommunications, Autonomous Networks (ANs) automatically adjust
configurations based on specific requirements (e.g., bandwidth) and available
resources. These networks rely on continuous monitoring and intelligent
mechanisms for self-optimization, self-repair, and self-protection, nowadays
enhanced by Neural Networks (NNs) to enable predictive modeling and pattern
recognition. Here, Federated Learning (FL) allows multiple AN cells - each
equipped with NNs - to collaboratively train models while preserving data
privacy. However, FL requires frequent transmission of large neural data and
thus an efficient, standardized compression strategy for reliable
communication. To address this, we investigate NNCodec, a Fraunhofer
implementation of the ISO/IEC Neural Network Coding (NNC) standard, within a
novel FL framework that integrates tiny language models (TLMs) for various
mobile network feature prediction (e.g., ping, SNR or band frequency). Our
experimental results on the Berlin V2X dataset demonstrate that NNCodec
achieves transparent compression (i.e., negligible performance loss) while
reducing communication overhead to below 1%, showing the effectiveness of
combining NNC with FL in collaboratively learned autonomous mobile networks.
|
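The communication savings reported in the NNCodec abstract above come from compressing neural updates before transmission. NNC employs far richer coding tools, but the NumPy sketch below of a plain uniform quantizer shows why compressed updates shrink federated-learning traffic so sharply.

import numpy as np

def quantize_update(delta, num_bits=8):
    # Map a float32 update tensor onto signed integers with a shared scale;
    # only the int8 payload plus one float need to be transmitted.
    max_abs = float(np.abs(delta).max())
    scale = max_abs / (2 ** (num_bits - 1) - 1) if max_abs > 0 else 1.0
    return np.round(delta / scale).astype(np.int8), scale

def dequantize_update(q, scale):
    return q.astype(np.float32) * scale

delta = np.random.default_rng(1).normal(scale=0.01, size=1000).astype(np.float32)
q, s = quantize_update(delta)
err = float(np.abs(dequantize_update(q, s) - delta).max())
print(q.nbytes, "bytes vs", delta.nbytes, "bytes; max error", err)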
2504.01951 | Ciro Beneduce | Massimiliano Luca, Ciro Beneduce, Bruno Lepri, Jacopo Staiano | The LLM Wears Prada: Analysing Gender Bias and Stereotypes through
Online Shopping Data | null | null | null | null | cs.AI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the wide and cross-domain adoption of Large Language Models, it becomes
crucial to assess to what extent the statistical correlations in training
data, which underlie their impressive performance, hide subtle and potentially
troubling biases. Gender bias in LLMs has been widely investigated from the
perspectives of work, hobbies, and emotions typically associated with a
specific gender. In this study, we introduce a novel perspective. We
investigate whether LLMs can predict an individual's gender based solely on
online shopping histories and whether these predictions are influenced by
gender biases and stereotypes. Using a dataset of historical online purchases
from users in the United States, we evaluate the ability of six LLMs to
classify gender and we then analyze their reasoning and products-gender
co-occurrences. Results indicate that while models can infer gender with
moderate accuracy, their decisions are often rooted in stereotypical
associations between product categories and gender. Furthermore, explicit
instructions to avoid bias reduce the certainty of model predictions, but do
not eliminate stereotypical patterns. Our findings highlight the persistent
nature of gender biases in LLMs and emphasize the need for robust
bias-mitigation strategies.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:56:08 GMT"
}
] | 2025-04-03T00:00:00 | [
[
"Luca",
"Massimiliano",
""
],
[
"Beneduce",
"Ciro",
""
],
[
"Lepri",
"Bruno",
""
],
[
"Staiano",
"Jacopo",
""
]
] | TITLE: The LLM Wears Prada: Analysing Gender Bias and Stereotypes through
Online Shopping Data
ABSTRACT: With the wide and cross-domain adoption of Large Language Models, it becomes
crucial to assess to what extent the statistical correlations in training
data, which underlie their impressive performance, hide subtle and potentially
troubling biases. Gender bias in LLMs has been widely investigated from the
perspectives of work, hobbies, and emotions typically associated with a
specific gender. In this study, we introduce a novel perspective. We
investigate whether LLMs can predict an individual's gender based solely on
online shopping histories and whether these predictions are influenced by
gender biases and stereotypes. Using a dataset of historical online purchases
from users in the United States, we evaluate the ability of six LLMs to
classify gender and we then analyze their reasoning and products-gender
co-occurrences. Results indicate that while models can infer gender with
moderate accuracy, their decisions are often rooted in stereotypical
associations between product categories and gender. Furthermore, explicit
instructions to avoid bias reduce the certainty of model predictions, but do
not eliminate stereotypical patterns. Our findings highlight the persistent
nature of gender biases in LLMs and emphasize the need for robust
bias-mitigation strategies.
|