id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.21825 | Younes MOUSSAOUI | Youn\`es Moussaoui (Nantes Univ - ECN, CHU Nantes), Diana Mateus
(Nantes Univ - ECN), Nasrin Taheri (CHU Nantes), Sa\"id Moussaoui (Nantes
Univ - ECN), Thomas Carlier (CHU Nantes), Simon Stute (CHU Nantes) | Implicit neural representations for end-to-end PET reconstruction | IEEE International Symposium on Biomedical Imaging, Apr 2025, Houston
(Texas), United States | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implicit neural representations (INRs) have demonstrated strong capabilities
in various medical imaging tasks, such as denoising, registration, and
segmentation, by representing images as continuous functions, allowing complex
details to be captured. For image reconstruction problems, INRs can also reduce
artifacts typically introduced by conventional reconstruction algorithms.
However, to the best of our knowledge, INRs have not been studied in the
context of PET reconstruction. In this paper, we propose an unsupervised PET
image reconstruction method based on the implicit SIREN neural network
architecture using sinusoidal activation functions. Our method incorporates a
forward projection model and a loss function adapted to perform PET image
reconstruction directly from sinograms, without the need for large training
datasets. The performance of the proposed approach was compared with that of
conventional penalized likelihood methods and deep image prior (DIP) based
reconstruction using brain phantom data and realistically simulated sinograms.
The results show that the INR-based approach can reconstruct high-quality
images with a simpler, more efficient model, offering improvements in PET image
reconstruction, particularly in terms of contrast, activity recovery, and
relative bias.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:30:53 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Moussaoui",
"Younès",
"",
"Nantes Univ - ECN, CHU Nantes"
],
[
"Mateus",
"Diana",
"",
"Nantes Univ - ECN"
],
[
"Taheri",
"Nasrin",
"",
"CHU Nantes"
],
[
"Moussaoui",
"Saïd",
"",
"Nantes\n Univ - ECN"
],
[
"Carlier",
"Thomas",
"",
"CHU Nantes"
],
[
"Stute",
"Simon",
"",
"CHU Nantes"
]
] | TITLE: Implicit neural representations for end-to-end PET reconstruction
ABSTRACT: Implicit neural representations (INRs) have demonstrated strong capabilities
in various medical imaging tasks, such as denoising, registration, and
segmentation, by representing images as continuous functions, allowing complex
details to be captured. For image reconstruction problems, INRs can also reduce
artifacts typically introduced by conventional reconstruction algorithms.
However, to the best of our knowledge, INRs have not been studied in the
context of PET reconstruction. In this paper, we propose an unsupervised PET
image reconstruction method based on the implicit SIREN neural network
architecture using sinusoidal activation functions. Our method incorporates a
forward projection model and a loss function adapted to perform PET image
reconstruction directly from sinograms, without the need for large training
datasets. The performance of the proposed approach was compared with that of
conventional penalized likelihood methods and deep image prior (DIP) based
reconstruction using brain phantom data and realistically simulated sinograms.
The results show that the INR-based approach can reconstruct high-quality
images with a simpler, more efficient model, offering improvements in PET image
reconstruction, particularly in terms of contrast, activity recovery, and
relative bias.
|
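The PET reconstruction entry above builds on the SIREN architecture, in which each layer applies a sinusoidal activation to a linear map of the input coordinates. The snippet below is a minimal generic SIREN-style layer in PyTorch, not the authors' implementation; the forward projection model and sinogram loss mentioned in the abstract are omitted.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """SIREN-style layer: a linear map followed by sin(omega_0 * x)."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, coords):
        return torch.sin(self.omega_0 * self.linear(coords))

# Map 2D pixel coordinates to activity values; in the paper's setting, a PET
# forward projector (not shown) would map the predicted image to a sinogram
# that is compared with the measured data in the loss.
inr = nn.Sequential(SineLayer(2, 256), SineLayer(256, 256), nn.Linear(256, 1))
coords = torch.rand(1024, 2) * 2 - 1   # coordinates normalised to [-1, 1]
activity = inr(coords)                 # predicted activity at those coordinates
```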
2503.21826 | Ludovic Tuncay | Ludovic Tuncay (IRIT-SAMoVA), Etienne Labb\'e (IRIT-SAMoVA), Thomas
Pellegrini (IRIT-SAMoVA, UT3) | Hierarchical Label Propagation: A Model-Size-Dependent Performance
Booster for AudioSet Tagging | null | ICASSP 2025 - 2025 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), Apr 2025, Hyderabad, India. pp.1-5 | 10.1109/ICASSP49660.2025.10888798 | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AudioSet is one of the most used and largest datasets in audio tagging,
containing about 2 million audio samples that are manually labeled with 527
event categories organized into an ontology. However, the annotations contain
inconsistencies, particularly where categories that should be labeled as
positive according to the ontology are frequently mislabeled as negative. To
address this issue, we apply Hierarchical Label Propagation (HLP), which
propagates labels up the ontology hierarchy, resulting in a mean increase in
positive labels per audio clip from 1.98 to 2.39 and affecting 109 out of the
527 classes. Our results demonstrate that HLP provides performance benefits
across various model architectures, including convolutional neural networks
(PANN's CNN6 and ConvNeXT) and transformers (PaSST), with smaller models
showing more improvements. Finally, on FSD50K, another widely used dataset,
models trained on AudioSet with HLP consistently outperformed those trained
without HLP. Our source code will be made available on GitHub.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 08:45:43 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Tuncay",
"Ludovic",
"",
"IRIT-SAMoVA"
],
[
"Labbé",
"Etienne",
"",
"IRIT-SAMoVA"
],
[
"Pellegrini",
"Thomas",
"",
"IRIT-SAMoVA, UT3"
]
] | TITLE: Hierarchical Label Propagation: A Model-Size-Dependent Performance
Booster for AudioSet Tagging
ABSTRACT: AudioSet is one of the most used and largest datasets in audio tagging,
containing about 2 million audio samples that are manually labeled with 527
event categories organized into an ontology. However, the annotations contain
inconsistencies, particularly where categories that should be labeled as
positive according to the ontology are frequently mislabeled as negative. To
address this issue, we apply Hierarchical Label Propagation (HLP), which
propagates labels up the ontology hierarchy, resulting in a mean increase in
positive labels per audio clip from 1.98 to 2.39 and affecting 109 out of the
527 classes. Our results demonstrate that HLP provides performance benefits
across various model architectures, including convolutional neural networks
(PANN's CNN6 and ConvNeXT) and transformers (PaSST), with smaller models
showing more improvements. Finally, on FSD50K, another widely used dataset,
models trained on AudioSet with HLP consistently outperformed those trained
without HLP. Our source code will be made available on GitHub.
|
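The Hierarchical Label Propagation step described above amounts to marking every ancestor of a positive class as positive. A small sketch of that idea follows; the class names and `parents` mapping are toy placeholders, not AudioSet's actual ontology.

```python
def propagate_labels(positives, parents):
    """HLP sketch: if a class is positive, mark all of its ontology ancestors
    as positive too.

    positives: set of class ids labelled positive for one audio clip
    parents:   dict mapping a class id to its list of parent ids (the ontology)
    """
    propagated = set(positives)
    stack = list(positives)
    while stack:
        node = stack.pop()
        for parent in parents.get(node, []):
            if parent not in propagated:
                propagated.add(parent)
                stack.append(parent)
    return propagated

# Toy ontology: "Dog bark" -> "Dog" -> "Animal"
parents = {"Dog bark": ["Dog"], "Dog": ["Animal"]}
print(propagate_labels({"Dog bark"}, parents))  # {'Dog bark', 'Dog', 'Animal'}
```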
2503.21827 | Mark Phil Pacot | Mark Phil Pacot, Jayno Juventud, Gleen Dalaorao | Hybrid Multi-Stage Learning Framework for Edge Detection: A Survey | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Edge detection remains a fundamental yet challenging task in computer vision,
especially under varying illumination, noise, and complex scene conditions.
This paper introduces a Hybrid Multi-Stage Learning Framework that integrates
Convolutional Neural Network (CNN) feature extraction with a Support Vector
Machine (SVM) classifier to improve edge localization and structural accuracy.
Unlike conventional end-to-end deep learning models, our approach decouples
feature representation and classification stages, enhancing robustness and
interpretability. Extensive experiments conducted on benchmark datasets such as
BSDS500 and NYUDv2 demonstrate that the proposed framework outperforms
traditional edge detectors and even recent learning-based methods in terms of
Optimal Dataset Scale (ODS) and Optimal Image Scale (OIS), while maintaining
competitive Average Precision (AP). Both qualitative and quantitative results
highlight enhanced performance on edge continuity, noise suppression, and
perceptual clarity achieved by our method. This work not only bridges classical
and deep learning paradigms but also sets a new direction for scalable,
interpretable, and high-quality edge detection solutions.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 13:06:31 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Pacot",
"Mark Phil",
""
],
[
"Juventud",
"Jayno",
""
],
[
"Dalaorao",
"Gleen",
""
]
] | TITLE: Hybrid Multi-Stage Learning Framework for Edge Detection: A Survey
ABSTRACT: Edge detection remains a fundamental yet challenging task in computer vision,
especially under varying illumination, noise, and complex scene conditions.
This paper introduces a Hybrid Multi-Stage Learning Framework that integrates
Convolutional Neural Network (CNN) feature extraction with a Support Vector
Machine (SVM) classifier to improve edge localization and structural accuracy.
Unlike conventional end-to-end deep learning models, our approach decouples
feature representation and classification stages, enhancing robustness and
interpretability. Extensive experiments conducted on benchmark datasets such as
BSDS500 and NYUDv2 demonstrate that the proposed framework outperforms
traditional edge detectors and even recent learning-based methods in terms of
Optimal Dataset Scale (ODS) and Optimal Image Scale (OIS), while maintaining
competitive Average Precision (AP). Both qualitative and quantitative results
highlight enhanced performance on edge continuity, noise suppression, and
perceptual clarity achieved by our method. This work not only bridges classical
and deep learning paradigms but also sets a new direction for scalable,
interpretable, and high-quality edge detection solutions.
|
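The framework in the entry above decouples feature extraction (CNN) from classification (SVM). Below is a minimal sketch of that two-stage split, with random arrays standing in for real CNN patch features; it is an assumed setup, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64))           # stand-in for CNN patch features
labels = rng.integers(0, 2, size=500)           # 1 = edge patch, 0 = non-edge

svm = LinearSVC(C=1.0).fit(features, labels)    # decoupled classification stage
edge_scores = svm.decision_function(features)   # would be thresholded into an edge map
```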
2503.21829 | Richard McKinley | Ivan Diaz, Florin Scherer, Yanik Berli, Roland Wiest, Helly Hammer,
Robert Hoepner, Alejandro Leon Betancourt, Piotr Radojewski, Richard McKinley | Learning from spatially inhomogenous data: resolution-adaptive
convolutions for multiple sclerosis lesion segmentation | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the setting of clinical imaging, differences between vendors, hospitals
and sequences can yield highly inhomogeneous imaging data. In MRI in
particular, voxel dimension, slice spacing and acquisition plane can vary
substantially. For clinical applications, therefore, algorithms must be trained
to handle data with various voxel resolutions. The usual strategy to deal with
heterogeneity of resolution is harmonization: resampling imaging data to a
common (usually isovoxel) resolution. This can lead to loss of fidelity arising
from interpolation artifacts out-of-plane and downsampling in-plane. We present
in this paper a network architecture designed to be able to learn directly from
spatially heterogeneous data, without resampling: a segmentation network based
on the e3nn framework that leverages a spherical harmonic, rather than
voxel-grid, parameterization of convolutional kernels, with a fixed physical
radius. Networks based on these kernels can be resampled to their input voxel
dimensions. We trained and tested our network on a publicly available dataset
assembled from three centres, and on an in-house dataset of Multiple Sclerosis
cases with a high degree of spatial inhomogeneity. We compared our approach to
a standard U-Net with two strategies for handling inhomogeneous data: training
directly on the data without resampling, and resampling to a common resolution
of 1mm isovoxels. We show that our network is able to learn from various
combinations of voxel sizes and outperforms classical U-Nets on 2D testing
cases and most 3D testing cases. This shows an ability to generalize well when
tested on image resolutions not seen during training. Our code can be found at:
http://github.com/SCAN-NRAD/e3nn\_U-Net.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:07:52 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Diaz",
"Ivan",
""
],
[
"Scherer",
"Florin",
""
],
[
"Berli",
"Yanik",
""
],
[
"Wiest",
"Roland",
""
],
[
"Hammer",
"Helly",
""
],
[
"Hoepner",
"Robert",
""
],
[
"Betancourt",
"Alejandro Leon",
""
],
[
"Radojewski",
"Piotr",
""
],
[
"McKinley",
"Richard",
""
]
] | TITLE: Learning from spatially inhomogenous data: resolution-adaptive
convolutions for multiple sclerosis lesion segmentation
ABSTRACT: In the setting of clinical imaging, differences between vendors, hospitals
and sequences can yield highly inhomogeneous imaging data. In MRI in
particular, voxel dimension, slice spacing and acquisition plane can vary
substantially. For clinical applications, therefore, algorithms must be trained
to handle data with various voxel resolutions. The usual strategy to deal with
heterogeneity of resolution is harmonization: resampling imaging data to a
common (usually isovoxel) resolution. This can lead to loss of fidelity arising
from interpolation artifacts out-of-plane and downsampling in-plane. We present
in this paper a network architecture designed to be able to learn directly from
spatially heterogeneous data, without resampling: a segmentation network based
on the e3nn framework that leverages a spherical harmonic, rather than
voxel-grid, parameterization of convolutional kernels, with a fixed physical
radius. Networks based on these kernels can be resampled to their input voxel
dimensions. We trained and tested our network on a publicly available dataset
assembled from three centres, and on an in-house dataset of Multiple Sclerosis
cases with a high degree of spatial inhomogeneity. We compared our approach to
a standard U-Net with two strategies for handling inhomogeneous data: training
directly on the data without resampling, and resampling to a common resolution
of 1mm isovoxels. We show that our network is able to learn from various
combinations of voxel sizes and outperforms classical U-Nets on 2D testing
cases and most 3D testing cases. This shows an ability to generalize well when
tested on image resolutions not seen during training. Our code can be found at:
http://github.com/SCAN-NRAD/e3nn\_U-Net.
|
2503.21834 | Haomin Yu | Haomin Yu, Tianyi Li, Kristian Torp, Christian S. Jensen | A Multi-Modal Knowledge-Enhanced Framework for Vessel Trajectory
Prediction | 8 pages, 5 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate vessel trajectory prediction facilitates improved navigational
safety, routing, and environmental protection. However, existing prediction
methods are challenged by the irregular sampling time intervals of the vessel
tracking data from the global AIS system and the complexity of vessel movement.
These aspects render model learning and generalization difficult. To address
these challenges and improve vessel trajectory prediction, we propose the
multi-modal knowledge-enhanced framework (MAKER) for vessel trajectory
prediction. To contend better with the irregular sampling time intervals, MAKER
features a Large language model-guided Knowledge Transfer (LKT) module that
leverages pre-trained language models to transfer trajectory-specific
contextual knowledge effectively. To enhance the ability to learn complex
trajectory patterns, MAKER incorporates a Knowledge-based Self-paced Learning
(KSL) module. This module employs kinematic knowledge to progressively
integrate complex patterns during training, allowing for adaptive learning and
enhanced generalization. Experimental results on two vessel trajectory datasets
show that MAKER can improve the prediction accuracy of state-of-the-art methods
by 12.08%-17.86%.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 00:01:35 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yu",
"Haomin",
""
],
[
"Li",
"Tianyi",
""
],
[
"Torp",
"Kristian",
""
],
[
"Jensen",
"Christian S.",
""
]
] | TITLE: A Multi-Modal Knowledge-Enhanced Framework for Vessel Trajectory
Prediction
ABSTRACT: Accurate vessel trajectory prediction facilitates improved navigational
safety, routing, and environmental protection. However, existing prediction
methods are challenged by the irregular sampling time intervals of the vessel
tracking data from the global AIS system and the complexity of vessel movement.
These aspects render model learning and generalization difficult. To address
these challenges and improve vessel trajectory prediction, we propose the
multi-modal knowledge-enhanced framework (MAKER) for vessel trajectory
prediction. To contend better with the irregular sampling time intervals, MAKER
features a Large language model-guided Knowledge Transfer (LKT) module that
leverages pre-trained language models to transfer trajectory-specific
contextual knowledge effectively. To enhance the ability to learn complex
trajectory patterns, MAKER incorporates a Knowledge-based Self-paced Learning
(KSL) module. This module employs kinematic knowledge to progressively
integrate complex patterns during training, allowing for adaptive learning and
enhanced generalization. Experimental results on two vessel trajectory datasets
show that MAKER can improve the prediction accuracy of state-of-the-art methods
by 12.08%-17.86%.
|
2503.21836 | Ran Wei | Ran Wei, ZhiXiong Lan, Qing Yan, Ning Song, Ming Lv, LongQing Ye | iMedImage Technical Report | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: Chromosome karyotype analysis is crucial for diagnosing
hereditary diseases, yet detecting structural abnormalities remains
challenging. While AI has shown promise in medical imaging, its effectiveness
varies across modalities. Leveraging advances in Foundation Models that
integrate multimodal medical imaging for robust feature extraction and accurate
diagnosis, we developed iMedImage, an end-to-end model for general medical
image recognition, demonstrating strong performance across multiple imaging
tasks, including chromosome abnormality detection. Materials and Methods: We
constructed a comprehensive medical image dataset encompassing multiple
modalities from common medical domains, including chromosome, cell, pathology,
ultrasound, X-ray, CT, and MRI images. Based on this dataset, we developed the
iMedImage model, which incorporates the following key features: (1) a unified
representation method for diverse modality inputs and medical imaging tasks;
(2) multi-level (case-level, image-level, patch-level) image recognition
capabilities enhanced by Chain of Thought (CoT) embedding and Mixture of
Experts (MoE) strategies. Results: The test set comprised data from 12
institutions across six regions in China, covering three mainstream scanning
devices, and included naturally distributed, unscreened abnormal cases. On this
diverse dataset, the model achieved a fully automated chromosome analysis
workflow, including segmentation, karyotyping, and abnormality detection,
reaching a sensitivity of 92.75% and a specificity of 91.5%. Conclusion: We
propose iMedImage, an end-to-end foundation model for medical image analysis,
demonstrating its superior performance across various medical imaging tasks.
iMedImage provides clinicians with a precise imaging analysis tool and
contributes to improving diagnostic accuracy and disease screening.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 03:25:28 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wei",
"Ran",
""
],
[
"Lan",
"ZhiXiong",
""
],
[
"Yan",
"Qing",
""
],
[
"Song",
"Ning",
""
],
[
"Lv",
"Ming",
""
],
[
"Ye",
"LongQing",
""
]
] | TITLE: iMedImage Technical Report
ABSTRACT: Background: Chromosome karyotype analysis is crucial for diagnosing
hereditary diseases, yet detecting structural abnormalities remains
challenging. While AI has shown promise in medical imaging, its effectiveness
varies across modalities. Leveraging advances in Foundation Models that
integrate multimodal medical imaging for robust feature extraction and accurate
diagnosis, we developed iMedImage, an end-to-end model for general medical
image recognition, demonstrating strong performance across multiple imaging
tasks, including chromosome abnormality detection. Materials and Methods: We
constructed a comprehensive medical image dataset encompassing multiple
modalities from common medical domains, including chromosome, cell, pathology,
ultrasound, X-ray, CT, and MRI images. Based on this dataset, we developed the
iMedImage model, which incorporates the following key features: (1) a unified
representation method for diverse modality inputs and medical imaging tasks;
(2) multi-level (case-level, image-level, patch-level) image recognition
capabilities enhanced by Chain of Thought (CoT) embedding and Mixture of
Experts (MoE) strategies. Results: The test set comprised data from 12
institutions across six regions in China, covering three mainstream scanning
devices, and included naturally distributed, unscreened abnormal cases. On this
diverse dataset, the model achieved a fully automated chromosome analysis
workflow, including segmentation, karyotyping, and abnormality detection,
reaching a sensitivity of 92.75% and a specificity of 91.5%. Conclusion: We
propose iMedImage, an end-to-end foundation model for medical image analysis,
demonstrating its superior performance across various medical imaging tasks.
iMedImage provides clinicians with a precise imaging analysis tool and
contributes to improving diagnostic accuracy and disease screening.
|
2503.21841 | Jingtao Li | Jingtao Li, Yingyi Liu, Xinyu Wang, Yunning Peng, Chen Sun, Shaoyu
Wang, Zhendong Sun, Tian Ke, Xiao Jiang, Tangwei Lu, Anran Zhao, Yanfei Zhong | HyperFree: A Channel-adaptive and Tuning-free Foundation Model for
Hyperspectral Remote Sensing Imagery | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advanced interpretation of hyperspectral remote sensing images benefits many
precise Earth observation tasks. Recently, visual foundation models have
promoted remote sensing interpretation but concentrate on RGB and
multispectral images. Due to the varied hyperspectral channels, existing
foundation models would face an image-by-image tuning situation, imposing great
pressure on hardware and time resources. In this paper, we propose a
tuning-free hyperspectral foundation model called HyperFree, by adapting the
existing visual prompt engineering. To process varied channel numbers, we
design a learned weight dictionary covering full-spectrum from $0.4 \sim 2.5 \,
\mu\text{m}$, supporting to build the embedding layer dynamically. To make the
prompt design more tractable, HyperFree can generate multiple semantic-aware
masks for one prompt by treating feature distance as semantic-similarity. After
pre-training HyperFree on constructed large-scale high-resolution hyperspectral
images, HyperFree (1 prompt) has shown comparable results with specialized
models (5 shots) on 5 tasks and 11 datasets. Code and dataset are accessible at
https://rsidea.whu.edu.cn/hyperfree.htm.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:27:10 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Jingtao",
""
],
[
"Liu",
"Yingyi",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Peng",
"Yunning",
""
],
[
"Sun",
"Chen",
""
],
[
"Wang",
"Shaoyu",
""
],
[
"Sun",
"Zhendong",
""
],
[
"Ke",
"Tian",
""
],
[
"Jiang",
"Xiao",
""
],
[
"Lu",
"Tangwei",
""
],
[
"Zhao",
"Anran",
""
],
[
"Zhong",
"Yanfei",
""
]
] | TITLE: HyperFree: A Channel-adaptive and Tuning-free Foundation Model for
Hyperspectral Remote Sensing Imagery
ABSTRACT: Advanced interpretation of hyperspectral remote sensing images benefits many
precise Earth observation tasks. Recently, visual foundation models have
promoted remote sensing interpretation but concentrate on RGB and
multispectral images. Due to the varied hyperspectral channels, existing
foundation models would face an image-by-image tuning situation, imposing great
pressure on hardware and time resources. In this paper, we propose a
tuning-free hyperspectral foundation model called HyperFree, by adapting the
existing visual prompt engineering. To process varied channel numbers, we
design a learned weight dictionary covering full-spectrum from $0.4 \sim 2.5 \,
\mu\text{m}$, supporting to build the embedding layer dynamically. To make the
prompt design more tractable, HyperFree can generate multiple semantic-aware
masks for one prompt by treating feature distance as semantic-similarity. After
pre-training HyperFree on constructed large-scale high-resolution hyperspectral
images, HyperFree (1 prompt) has shown comparable results with specialized
models (5 shots) on 5 tasks and 11 datasets. Code and dataset are accessible at
https://rsidea.whu.edu.cn/hyperfree.htm.
|
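The channel-adaptive idea described above, building the embedding layer dynamically from a learned weight dictionary covering 0.4-2.5 um, could look roughly like the following. This is a speculative sketch based only on the abstract; the class and parameter names are invented, and the real HyperFree model may differ substantially.

```python
import torch
import torch.nn as nn

class SpectralEmbedding(nn.Module):
    """Hypothetical wavelength-indexed weight dictionary: one weight vector per
    wavelength bin across 0.4-2.5 um. At run time the entries nearest to the
    input bands are gathered, so images with any number of channels reuse the
    same embedding parameters."""
    def __init__(self, embed_dim=64, num_bins=211):
        super().__init__()
        self.bins = torch.linspace(0.4, 2.5, num_bins)            # wavelength grid (um)
        self.dictionary = nn.Parameter(torch.randn(num_bins, embed_dim) * 0.02)

    def forward(self, image, wavelengths):
        # image: (B, C, H, W); wavelengths: (C,) band centres in micrometres
        idx = torch.argmin((wavelengths[:, None] - self.bins[None, :]).abs(), dim=1)
        weight = self.dictionary[idx]                              # (C, embed_dim)
        return torch.einsum("bchw,ce->behw", image, weight)        # channel-adaptive embedding

emb = SpectralEmbedding()
x = torch.randn(2, 5, 32, 32)                                      # a 5-band hyperspectral patch
lams = torch.tensor([0.45, 0.55, 0.65, 0.86, 1.61])
out = emb(x, lams)                                                  # (2, 64, 32, 32)
```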
2503.21843 | Hang Xiao | Hanyu Liu, Siyao Li, Ying Yu, Yixuan Jiang, Hang Xiao, Jingxi Long,
Haotian Tang | CMD-HAR: Cross-Modal Disentanglement for Wearable Human Activity
Recognition | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human Activity Recognition (HAR) is a fundamental technology for numerous
human-centered intelligent applications. Although deep learning methods have
been utilized to accelerate feature extraction, issues such as multimodal data
mixing, activity heterogeneity, and complex model deployment remain largely
unresolved. The aim of this paper is to address issues such as multimodal data
mixing, activity heterogeneity, and complex model deployment in sensor-based
human activity recognition. We propose a spatiotemporal attention modal
decomposition alignment fusion strategy to tackle the problem of the mixed
distribution of sensor data. Key discriminative features of activities are
captured through cross-modal spatio-temporal disentangled representation, and
gradient modulation is combined to alleviate data heterogeneity. In addition, a
wearable deployment simulation system is constructed. We conducted experiments
on a large number of public datasets, demonstrating the effectiveness of the
model.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 15:21:49 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liu",
"Hanyu",
""
],
[
"Li",
"Siyao",
""
],
[
"Yu",
"Ying",
""
],
[
"Jiang",
"Yixuan",
""
],
[
"Xiao",
"Hang",
""
],
[
"Long",
"Jingxi",
""
],
[
"Tang",
"Haotian",
""
]
] | TITLE: CMD-HAR: Cross-Modal Disentanglement for Wearable Human Activity
Recognition
ABSTRACT: Human Activity Recognition (HAR) is a fundamental technology for numerous
human-centered intelligent applications. Although deep learning methods have
been utilized to accelerate feature extraction, issues such as multimodal data
mixing, activity heterogeneity, and complex model deployment remain largely
unresolved. The aim of this paper is to address issues such as multimodal data
mixing, activity heterogeneity, and complex model deployment in sensor-based
human activity recognition. We propose a spatiotemporal attention modal
decomposition alignment fusion strategy to tackle the problem of the mixed
distribution of sensor data. Key discriminative features of activities are
captured through cross-modal spatio-temporal disentangled representation, and
gradient modulation is combined to alleviate data heterogeneity. In addition, a
wearable deployment simulation system is constructed. We conducted experiments
on a large number of public datasets, demonstrating the effectiveness of the
model.
|
2503.21846 | Giovanni Perin | Yesmine Abdennadher, Giovanni Perin, Riccardo Mazzieri, Jacopo
Pegoraro, Michele Rossi | LightSNN: Lightweight Architecture Search for Sparse and Accurate
Spiking Neural Networks | 6 pages, 3 figures, 2 tables. Submitted to conference | null | null | null | cs.NE cs.AI eess.SP | http://creativecommons.org/licenses/by/4.0/ | Spiking Neural Networks (SNNs) are highly regarded for their energy
efficiency, inherent activation sparsity, and suitability for real-time
processing in edge devices. However, most current SNN methods adopt
architectures resembling traditional artificial neural networks (ANNs), leading
to suboptimal performance when applied to SNNs. While SNNs excel in energy
efficiency, they have been associated with lower accuracy levels than
traditional ANNs when utilizing conventional architectures. In response, in
this work we present LightSNN, a rapid and efficient Neural Network
Architecture Search (NAS) technique specifically tailored for SNNs that
autonomously leverages the most suitable architecture, striking a good balance
between accuracy and efficiency by enforcing sparsity. Based on the spiking NAS
network (SNASNet) framework, a cell-based search space including backward
connections is utilized to build our training-free pruning-based NAS mechanism.
Our technique assesses diverse spike activation patterns across different data
samples using a sparsity-aware Hamming distance fitness evaluation. Thorough
experiments are conducted on both static (CIFAR10 and CIFAR100) and
neuromorphic datasets (DVS128-Gesture). Our LightSNN model achieves
state-of-the-art results on CIFAR10 and CIFAR100, improves performance on
DVS128Gesture by 4.49%, and significantly reduces search time, most notably
offering a 98x speedup over SNASNet and running 30% faster than the best
existing method on DVS128Gesture.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:38:13 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Abdennadher",
"Yesmine",
""
],
[
"Perin",
"Giovanni",
""
],
[
"Mazzieri",
"Riccardo",
""
],
[
"Pegoraro",
"Jacopo",
""
],
[
"Rossi",
"Michele",
""
]
] | TITLE: LightSNN: Lightweight Architecture Search for Sparse and Accurate
Spiking Neural Networks
ABSTRACT: Spiking Neural Networks (SNNs) are highly regarded for their energy
efficiency, inherent activation sparsity, and suitability for real-time
processing in edge devices. However, most current SNN methods adopt
architectures resembling traditional artificial neural networks (ANNs), leading
to suboptimal performance when applied to SNNs. While SNNs excel in energy
efficiency, they have been associated with lower accuracy levels than
traditional ANNs when utilizing conventional architectures. In response, in
this work we present LightSNN, a rapid and efficient Neural Network
Architecture Search (NAS) technique specifically tailored for SNNs that
autonomously leverages the most suitable architecture, striking a good balance
between accuracy and efficiency by enforcing sparsity. Based on the spiking NAS
network (SNASNet) framework, a cell-based search space including backward
connections is utilized to build our training-free pruning-based NAS mechanism.
Our technique assesses diverse spike activation patterns across different data
samples using a sparsity-aware Hamming distance fitness evaluation. Thorough
experiments are conducted on both static (CIFAR10 and CIFAR100) and
neuromorphic datasets (DVS128-Gesture). Our LightSNN model achieves
state-of-the-art results on CIFAR10 and CIFAR100, improves performance on
DVS128Gesture by 4.49%, and significantly reduces search time, most notably
offering a 98x speedup over SNASNet and running 30% faster than the best
existing method on DVS128Gesture.
|
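The training-free fitness evaluation mentioned above compares binary spike patterns across input samples with a sparsity-aware Hamming distance. The sketch below shows one plausible form of such a score; the exact weighting used by LightSNN is not given in the abstract, so this is an assumption.

```python
import numpy as np

def hamming_fitness(spikes, sparsity_weight=0.5):
    """Training-free fitness sketch (assumed form, not the paper's exact score):
    reward candidates whose binary spike patterns differ across input samples
    (mean pairwise Hamming distance) while favouring sparse activation overall.

    spikes: (num_samples, num_neurons) array of 0/1 spike indicators.
    """
    n = spikes.shape[0]
    diffs = [np.mean(spikes[i] != spikes[j]) for i in range(n) for j in range(i + 1, n)]
    diversity = float(np.mean(diffs))          # how different the patterns are
    sparsity = 1.0 - float(spikes.mean())      # fraction of silent neuron-sample pairs
    return diversity + sparsity_weight * sparsity

candidate = (np.random.rand(8, 128) < 0.2).astype(np.uint8)  # sparse random patterns
print(hamming_fitness(candidate))
```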
2503.21847 | Yong Xie | Yong Xie, Yunlian Sun, Hongwen Zhang, Yebin Liu, Jinhui Tang | ReCoM: Realistic Co-Speech Motion Generation with Recurrent Embedded
Transformer | 8 pages, 6 figures, Project Page:
https://yong-xie-xy.github.io/ReCoM/ | null | null | null | cs.GR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ReCoM, an efficient framework for generating high-fidelity and
generalizable human body motions synchronized with speech. The core innovation
lies in the Recurrent Embedded Transformer (RET), which integrates Dynamic
Embedding Regularization (DER) into a Vision Transformer (ViT) core
architecture to explicitly model co-speech motion dynamics. This architecture
enables joint spatial-temporal dependency modeling, thereby enhancing gesture
naturalness and fidelity through coherent motion synthesis. To enhance model
robustness, we incorporate the proposed DER strategy, which equips the model
with dual capabilities of noise resistance and cross-domain generalization,
thereby improving the naturalness and fluency of zero-shot motion generation
for unseen speech inputs. To mitigate inherent limitations of autoregressive
inference, including error accumulation and limited self-correction, we propose
an iterative reconstruction inference (IRI) strategy. IRI refines motion
sequences via cyclic pose reconstruction, driven by two key components: (1)
classifier-free guidance improves distribution alignment between generated and
real gestures without auxiliary supervision, and (2) a temporal smoothing
process eliminates abrupt inter-frame transitions while ensuring kinematic
continuity. Extensive experiments on benchmark datasets validate ReCoM's
effectiveness, achieving state-of-the-art performance across metrics. Notably,
it reduces the Fr\'echet Gesture Distance (FGD) from 18.70 to 2.48,
demonstrating an 86.7% improvement in motion realism. Our project page is
https://yong-xie-xy.github.io/ReCoM/.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:39:40 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xie",
"Yong",
""
],
[
"Sun",
"Yunlian",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Liu",
"Yebin",
""
],
[
"Tang",
"Jinhui",
""
]
] | TITLE: ReCoM: Realistic Co-Speech Motion Generation with Recurrent Embedded
Transformer
ABSTRACT: We present ReCoM, an efficient framework for generating high-fidelity and
generalizable human body motions synchronized with speech. The core innovation
lies in the Recurrent Embedded Transformer (RET), which integrates Dynamic
Embedding Regularization (DER) into a Vision Transformer (ViT) core
architecture to explicitly model co-speech motion dynamics. This architecture
enables joint spatial-temporal dependency modeling, thereby enhancing gesture
naturalness and fidelity through coherent motion synthesis. To enhance model
robustness, we incorporate the proposed DER strategy, which equips the model
with dual capabilities of noise resistance and cross-domain generalization,
thereby improving the naturalness and fluency of zero-shot motion generation
for unseen speech inputs. To mitigate inherent limitations of autoregressive
inference, including error accumulation and limited self-correction, we propose
an iterative reconstruction inference (IRI) strategy. IRI refines motion
sequences via cyclic pose reconstruction, driven by two key components: (1)
classifier-free guidance improves distribution alignment between generated and
real gestures without auxiliary supervision, and (2) a temporal smoothing
process eliminates abrupt inter-frame transitions while ensuring kinematic
continuity. Extensive experiments on benchmark datasets validate ReCoM's
effectiveness, achieving state-of-the-art performance across metrics. Notably,
it reduces the Fr\'echet Gesture Distance (FGD) from 18.70 to 2.48,
demonstrating an 86.7% improvement in motion realism. Our project page is
https://yong-xie-xy.github.io/ReCoM/.
|
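The iterative reconstruction inference described above combines classifier-free guidance with temporal smoothing. The sketch below shows one refinement step under assumed details: the `model` callable, guidance weight `w`, and blend factor are placeholders, not the authors' components or values.

```python
import torch

def refine_step(model, motion, speech, w=2.5, blend=0.15):
    """One refinement step in the spirit of the abstract (details assumed):
    classifier-free guidance mixes conditional and unconditional predictions,
    then neighbouring frames are blended to keep the motion temporally smooth."""
    cond = model(motion, speech)              # speech-conditioned prediction
    uncond = model(motion, None)              # unconditional prediction
    guided = uncond + w * (cond - uncond)     # classifier-free guidance
    smoothed = guided.clone()
    smoothed[1:-1] = (1 - blend) * guided[1:-1] + 0.5 * blend * (guided[:-2] + guided[2:])
    return smoothed

# Toy stand-in model: any callable (motion, condition) -> motion-shaped tensor.
toy_model = lambda m, s: m + (0.1 if s is not None else 0.0)
poses = torch.zeros(16, 24)                   # 16 frames of 24-D pose parameters
refined = refine_step(toy_model, poses, speech=torch.ones(8))
```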
2503.21848 | Jonathan Attard | Jonathan Attard, Dylan Seychell | Comparative Analysis of Image, Video, and Audio Classifiers for
Automated News Video Segmentation | Preprint for paper in CAI 2025, 7 pages, 5 tables, 3 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | News videos require efficient content organisation and retrieval systems, but
their unstructured nature poses significant challenges for automated
processing. This paper presents a comprehensive comparative analysis of image,
video, and audio classifiers for automated news video segmentation. This work
presents the development and evaluation of multiple deep learning approaches,
including ResNet, ViViT, AST, and multimodal architectures, to classify five
distinct segment types: advertisements, stories, studio scenes, transitions,
and visualisations. Using a custom-annotated dataset of 41 news videos
comprising 1,832 scene clips, our experiments demonstrate that image-based
classifiers achieve superior performance (84.34\% accuracy) compared to more
complex temporal models. Notably, the ResNet architecture outperformed
state-of-the-art video classifiers while requiring significantly fewer
computational resources. Binary classification models achieved high accuracy
for transitions (94.23\%) and advertisements (92.74\%). These findings advance
the understanding of effective architectures for news video segmentation and
provide practical insights for implementing automated content organisation
systems in media applications. These include media archiving, personalised
content delivery, and intelligent video search.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 16:42:50 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Attard",
"Jonathan",
""
],
[
"Seychell",
"Dylan",
""
]
] | TITLE: Comparative Analysis of Image, Video, and Audio Classifiers for
Automated News Video Segmentation
ABSTRACT: News videos require efficient content organisation and retrieval systems, but
their unstructured nature poses significant challenges for automated
processing. This paper presents a comprehensive comparative analysis of image,
video, and audio classifiers for automated news video segmentation. This work
presents the development and evaluation of multiple deep learning approaches,
including ResNet, ViViT, AST, and multimodal architectures, to classify five
distinct segment types: advertisements, stories, studio scenes, transitions,
and visualisations. Using a custom-annotated dataset of 41 news videos
comprising 1,832 scene clips, our experiments demonstrate that image-based
classifiers achieve superior performance (84.34\% accuracy) compared to more
complex temporal models. Notably, the ResNet architecture outperformed
state-of-the-art video classifiers while requiring significantly fewer
computational resources. Binary classification models achieved high accuracy
for transitions (94.23\%) and advertisements (92.74\%). These findings advance
the understanding of effective architectures for news video segmentation and
provide practical insights for implementing automated content organisation
systems in media applications. These include media archiving, personalised
content delivery, and intelligent video search.
|
2503.21860 | Kailin Li | Kailin Li, Puhao Li, Tengyu Liu, Yuyang Li, Siyuan Huang | ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via
Residual Learning | Accepted to CVPR 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human hands play a central role in interacting, motivating increasing
research in dexterous robotic manipulation. Data-driven embodied AI algorithms
demand precise, large-scale, human-like manipulation sequences, which are
challenging to obtain with conventional reinforcement learning or real-world
teleoperation. To address this, we introduce ManipTrans, a novel two-stage
method for efficiently transferring human bimanual skills to dexterous robotic
hands in simulation. ManipTrans first pre-trains a generalist trajectory
imitator to mimic hand motion, then fine-tunes a specific residual module under
interaction constraints, enabling efficient learning and accurate execution of
complex bimanual tasks. Experiments show that ManipTrans surpasses
state-of-the-art methods in success rate, fidelity, and efficiency. Leveraging
ManipTrans, we transfer multiple hand-object datasets to robotic hands,
creating DexManipNet, a large-scale dataset featuring previously unexplored
tasks like pen capping and bottle unscrewing. DexManipNet comprises 3.3K
episodes of robotic manipulation and is easily extensible, facilitating further
policy training for dexterous hands and enabling real-world deployments.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 17:50:30 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Kailin",
""
],
[
"Li",
"Puhao",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Li",
"Yuyang",
""
],
[
"Huang",
"Siyuan",
""
]
] | TITLE: ManipTrans: Efficient Dexterous Bimanual Manipulation Transfer via
Residual Learning
ABSTRACT: Human hands play a central role in interacting, motivating increasing
research in dexterous robotic manipulation. Data-driven embodied AI algorithms
demand precise, large-scale, human-like manipulation sequences, which are
challenging to obtain with conventional reinforcement learning or real-world
teleoperation. To address this, we introduce ManipTrans, a novel two-stage
method for efficiently transferring human bimanual skills to dexterous robotic
hands in simulation. ManipTrans first pre-trains a generalist trajectory
imitator to mimic hand motion, then fine-tunes a specific residual module under
interaction constraints, enabling efficient learning and accurate execution of
complex bimanual tasks. Experiments show that ManipTrans surpasses
state-of-the-art methods in success rate, fidelity, and efficiency. Leveraging
ManipTrans, we transfer multiple hand-object datasets to robotic hands,
creating DexManipNet, a large-scale dataset featuring previously unexplored
tasks like pen capping and bottle unscrewing. DexManipNet comprises 3.3K
episodes of robotic manipulation and is easily extensible, facilitating further
policy training for dexterous hands and enabling real-world deployments.
|
2503.21888 | Tharindu Kumarage | Zeyad Alghamdi, Tharindu Kumarage, Garima Agrawal, Mansooreh Karami,
Ibrahim Almuteb, Huan Liu | RedditESS: A Mental Health Social Support Interaction Dataset --
Understanding Effective Social Support to Refine AI-Driven Support Tools | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Effective mental health support is crucial for alleviating psychological
distress. While large language model (LLM)-based assistants have shown promise
in mental health interventions, existing research often defines "effective"
support primarily in terms of empathetic acknowledgments, overlooking other
essential dimensions such as informational guidance, community validation, and
tangible coping strategies. To address this limitation and better understand
what constitutes effective support, we introduce RedditESS, a novel real-world
dataset derived from Reddit posts, including supportive comments and original
posters' follow-up responses. Grounded in established social science theories,
we develop an ensemble labeling mechanism to annotate supportive comments as
effective or not and perform qualitative assessments to ensure the reliability
of the annotations. Additionally, we demonstrate the practical utility of
RedditESS by using it to guide LLM alignment toward generating more
context-sensitive and genuinely helpful supportive responses. By broadening the
understanding of effective support, our study paves the way for advanced
AI-driven mental health interventions.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:03:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Alghamdi",
"Zeyad",
""
],
[
"Kumarage",
"Tharindu",
""
],
[
"Agrawal",
"Garima",
""
],
[
"Karami",
"Mansooreh",
""
],
[
"Almuteb",
"Ibrahim",
""
],
[
"Liu",
"Huan",
""
]
] | TITLE: RedditESS: A Mental Health Social Support Interaction Dataset --
Understanding Effective Social Support to Refine AI-Driven Support Tools
ABSTRACT: Effective mental health support is crucial for alleviating psychological
distress. While large language model (LLM)-based assistants have shown promise
in mental health interventions, existing research often defines "effective"
support primarily in terms of empathetic acknowledgments, overlooking other
essential dimensions such as informational guidance, community validation, and
tangible coping strategies. To address this limitation and better understand
what constitutes effective support, we introduce RedditESS, a novel real-world
dataset derived from Reddit posts, including supportive comments and original
posters' follow-up responses. Grounded in established social science theories,
we develop an ensemble labeling mechanism to annotate supportive comments as
effective or not and perform qualitative assessments to ensure the reliability
of the annotations. Additionally, we demonstrate the practical utility of
RedditESS by using it to guide LLM alignment toward generating more
context-sensitive and genuinely helpful supportive responses. By broadening the
understanding of effective support, our study paves the way for advanced
AI-driven mental health interventions.
|
2503.21889 | Patrice Bechard | Patrice Bechard, Chao Wang, Amirhossein Abaskohi, Juan Rodriguez,
Christopher Pal, David Vazquez, Spandana Gella, Sai Rajeswar, Perouz
Taslakian | StarFlow: Generating Structured Workflow Outputs From Sketch Images | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Workflows are a fundamental component of automation in enterprise platforms,
enabling the orchestration of tasks, data processing, and system integrations.
Despite being widely used, building workflows can be complex, often requiring
manual configuration through low-code platforms or visual programming tools. To
simplify this process, we explore the use of generative foundation models,
particularly vision-language models (VLMs), to automatically generate
structured workflows from visual inputs. Translating hand-drawn sketches or
computer-generated diagrams into executable workflows is challenging due to the
ambiguity of free-form drawings, variations in diagram styles, and the
difficulty of inferring execution logic from visual elements. To address this,
we introduce StarFlow, a framework for generating structured workflow outputs
from sketches using vision-language models. We curate a diverse dataset of
workflow diagrams -- including synthetic, manually annotated, and real-world
samples -- to enable robust training and evaluation. We finetune and benchmark
multiple vision-language models, conducting a series of ablation studies to
analyze the strengths and limitations of our approach. Our results show that
finetuning significantly enhances structured workflow generation, outperforming
large vision-language models on this task.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:04:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Bechard",
"Patrice",
""
],
[
"Wang",
"Chao",
""
],
[
"Abaskohi",
"Amirhossein",
""
],
[
"Rodriguez",
"Juan",
""
],
[
"Pal",
"Christopher",
""
],
[
"Vazquez",
"David",
""
],
[
"Gella",
"Spandana",
""
],
[
"Rajeswar",
"Sai",
""
],
[
"Taslakian",
"Perouz",
""
]
] | TITLE: StarFlow: Generating Structured Workflow Outputs From Sketch Images
ABSTRACT: Workflows are a fundamental component of automation in enterprise platforms,
enabling the orchestration of tasks, data processing, and system integrations.
Despite being widely used, building workflows can be complex, often requiring
manual configuration through low-code platforms or visual programming tools. To
simplify this process, we explore the use of generative foundation models,
particularly vision-language models (VLMs), to automatically generate
structured workflows from visual inputs. Translating hand-drawn sketches or
computer-generated diagrams into executable workflows is challenging due to the
ambiguity of free-form drawings, variations in diagram styles, and the
difficulty of inferring execution logic from visual elements. To address this,
we introduce StarFlow, a framework for generating structured workflow outputs
from sketches using vision-language models. We curate a diverse dataset of
workflow diagrams -- including synthetic, manually annotated, and real-world
samples -- to enable robust training and evaluation. We finetune and benchmark
multiple vision-language models, conducting a series of ablation studies to
analyze the strengths and limitations of our approach. Our results show that
finetuning significantly enhances structured workflow generation, outperforming
large vision-language models on this task.
|
2503.21893 | Constantino \'Alvarez Casado | Taufiq Ahmed, Abhishek Kumar, Constantino \'Alvarez Casado, Anlan
Zhang, Tuomo H\"anninen, Lauri Loven, Miguel Bordallo L\'opez, Sasu Tarkoma | Exponentially Weighted Instance-Aware Repeat Factor Sampling for
Long-Tailed Object Detection Model Training in Unmanned Aerial Vehicles
Surveillance Scenarios | 6 pages, 2 figures, 9 tables, 6 formulas, conference paper | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection models often struggle with class imbalance, where rare
categories appear significantly less frequently than common ones. Existing
sampling-based rebalancing strategies, such as Repeat Factor Sampling (RFS) and
Instance-Aware Repeat Factor Sampling (IRFS), mitigate this issue by adjusting
sample frequencies based on image and instance counts. However, these methods
are based on linear adjustments, which limit their effectiveness in long-tailed
distributions. This work introduces Exponentially Weighted Instance-Aware
Repeat Factor Sampling (E-IRFS), an extension of IRFS that applies exponential
scaling to better differentiate between rare and frequent classes. E-IRFS
adjusts sampling probabilities using an exponential function applied to the
geometric mean of image and instance frequencies, ensuring a more adaptive
rebalancing strategy. We evaluate E-IRFS on a dataset derived from the
Fireman-UAV-RGBT Dataset and four additional public datasets, using YOLOv11
object detection models to identify fire, smoke, people and lakes in emergency
scenarios. The results show that E-IRFS improves detection performance by 22\%
over the baseline and outperforms RFS and IRFS, particularly for rare
categories. The analysis also highlights that E-IRFS has a stronger effect on
lightweight models with limited capacity, as these models rely more on data
sampling strategies to address class imbalance. The findings demonstrate that
E-IRFS improves rare object detection in resource-constrained environments,
making it a suitable solution for real-time applications such as UAV-based
emergency monitoring.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:09:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ahmed",
"Taufiq",
""
],
[
"Kumar",
"Abhishek",
""
],
[
"Casado",
"Constantino Álvarez",
""
],
[
"Zhang",
"Anlan",
""
],
[
"Hänninen",
"Tuomo",
""
],
[
"Loven",
"Lauri",
""
],
[
"López",
"Miguel Bordallo",
""
],
[
"Tarkoma",
"Sasu",
""
]
] | TITLE: Exponentially Weighted Instance-Aware Repeat Factor Sampling for
Long-Tailed Object Detection Model Training in Unmanned Aerial Vehicles
Surveillance Scenarios
ABSTRACT: Object detection models often struggle with class imbalance, where rare
categories appear significantly less frequently than common ones. Existing
sampling-based rebalancing strategies, such as Repeat Factor Sampling (RFS) and
Instance-Aware Repeat Factor Sampling (IRFS), mitigate this issue by adjusting
sample frequencies based on image and instance counts. However, these methods
are based on linear adjustments, which limit their effectiveness in long-tailed
distributions. This work introduces Exponentially Weighted Instance-Aware
Repeat Factor Sampling (E-IRFS), an extension of IRFS that applies exponential
scaling to better differentiate between rare and frequent classes. E-IRFS
adjusts sampling probabilities using an exponential function applied to the
geometric mean of image and instance frequencies, ensuring a more adaptive
rebalancing strategy. We evaluate E-IRFS on a dataset derived from the
Fireman-UAV-RGBT Dataset and four additional public datasets, using YOLOv11
object detection models to identify fire, smoke, people and lakes in emergency
scenarios. The results show that E-IRFS improves detection performance by 22\%
over the baseline and outperforms RFS and IRFS, particularly for rare
categories. The analysis also highlights that E-IRFS has a stronger effect on
lightweight models with limited capacity, as these models rely more on data
sampling strategies to address class imbalance. The findings demonstrate that
E-IRFS improves rare object detection in resource-constrained environments,
making it a suitable solution for real-time applications such as UAV-based
emergency monitoring.
|
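The abstract above describes E-IRFS as applying an exponential function to the geometric mean of image-level and instance-level class frequencies. Below is a hedged sketch of such a repeat factor, using the familiar RFS threshold form as a base; the exact functional form and constants in the paper may differ.

```python
import math

def e_irfs_repeat_factor(img_freq, inst_freq, thresh=0.001, alpha=1.0):
    """Sketch of the repeat factor suggested by the abstract (exact form assumed):
    combine image-level and instance-level frequencies via their geometric mean,
    then apply an exponential to amplify repetition of rare classes.

    img_freq:  fraction of images containing the class
    inst_freq: fraction of instances belonging to the class
    """
    f = math.sqrt(img_freq * inst_freq)                  # geometric mean (as in IRFS)
    linear = max(1.0, math.sqrt(thresh / f))             # RFS-style linear factor
    return max(1.0, math.exp(alpha * (linear - 1.0)))    # exponential re-weighting

for f_img, f_inst in [(0.5, 0.6), (0.0008, 0.0004), (0.0002, 0.0005)]:
    # common classes stay at 1.0; rare classes are repeated increasingly often
    print(f_img, f_inst, round(e_irfs_repeat_factor(f_img, f_inst), 2))
```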
2503.21902 | Hamed Babaei Giglou | Hamed Babaei Giglou, Jennifer D'Souza, Oliver Karras, and S\"oren Auer | OntoAligner: A Comprehensive Modular and Robust Python Toolkit for
Ontology Alignment | 18 pages, 3 figures. Accepted for the ESWC 2025 Resource Track | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Ontology Alignment (OA) is fundamental for achieving semantic
interoperability across diverse knowledge systems. We present OntoAligner, a
comprehensive, modular, and robust Python toolkit for ontology alignment,
designed to address current limitations with existing tools faced by
practitioners. Existing tools are limited in scalability, modularity, and ease
of integration with recent AI advances. OntoAligner provides a flexible
architecture integrating existing lightweight OA techniques such as fuzzy
matching but goes beyond by supporting contemporary methods with
retrieval-augmented generation and large language models for OA. The framework
prioritizes extensibility, enabling researchers to integrate custom alignment
algorithms and datasets. This paper details the design principles,
architecture, and implementation of the OntoAligner, demonstrating its utility
through benchmarks on standard OA tasks. Our evaluation highlights
OntoAligner's ability to handle large-scale ontologies efficiently with few
lines of code while delivering high alignment quality. By making OntoAligner
open-source, we aim to provide a resource that fosters innovation and
collaboration within the OA community, empowering researchers and practitioners
with a toolkit for reproducible OA research and real-world applications.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:28:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Giglou",
"Hamed Babaei",
""
],
[
"D'Souza",
"Jennifer",
""
],
[
"Karras",
"Oliver",
""
],
[
"Auer",
"Sören",
""
]
] | TITLE: OntoAligner: A Comprehensive Modular and Robust Python Toolkit for
Ontology Alignment
ABSTRACT: Ontology Alignment (OA) is fundamental for achieving semantic
interoperability across diverse knowledge systems. We present OntoAligner, a
comprehensive, modular, and robust Python toolkit for ontology alignment,
designed to address current limitations with existing tools faced by
practitioners. Existing tools are limited in scalability, modularity, and ease
of integration with recent AI advances. OntoAligner provides a flexible
architecture integrating existing lightweight OA techniques such as fuzzy
matching but goes beyond by supporting contemporary methods with
retrieval-augmented generation and large language models for OA. The framework
prioritizes extensibility, enabling researchers to integrate custom alignment
algorithms and datasets. This paper details the design principles,
architecture, and implementation of the OntoAligner, demonstrating its utility
through benchmarks on standard OA tasks. Our evaluation highlights
OntoAligner's ability to handle large-scale ontologies efficiently with few
lines of code while delivering high alignment quality. By making OntoAligner
open-source, we aim to provide a resource that fosters innovation and
collaboration within the OA community, empowering researchers and practitioners
with a toolkit for reproducible OA research and real-world applications.
|
2503.21904 | Zhiwei Yang | Zhiwei Yang, Chen Gao, Jing Liu, Peng Wu, Guansong Pang, Mike Zheng
Shou | AssistPDA: An Online Video Surveillance Assistant for Video Anomaly
Prediction, Detection, and Analysis | 13 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancements in large language models (LLMs) have spurred growing
interest in LLM-based video anomaly detection (VAD). However, existing
approaches predominantly focus on video-level anomaly question answering or
offline detection, ignoring the real-time nature essential for practical VAD
applications. To bridge this gap and facilitate the practical deployment of
LLM-based VAD, we introduce AssistPDA, the first online video anomaly
surveillance assistant that unifies video anomaly prediction, detection, and
analysis (VAPDA) within a single framework. AssistPDA enables real-time
inference on streaming videos while supporting interactive user engagement.
Notably, we introduce a novel event-level anomaly prediction task, enabling
proactive anomaly forecasting before anomalies fully unfold. To enhance the
ability to model intricate spatiotemporal relationships in anomaly events, we
propose a Spatio-Temporal Relation Distillation (STRD) module. STRD transfers
the long-term spatiotemporal modeling capabilities of vision-language models
(VLMs) from offline settings to real-time scenarios. Thus it equips AssistPDA
with a robust understanding of complex temporal dependencies and long-sequence
memory. Additionally, we construct VAPDA-127K, the first large-scale benchmark
designed for VLM-based online VAPDA. Extensive experiments demonstrate that
AssistPDA outperforms existing offline VLM-based approaches, setting a new
state-of-the-art for real-time VAPDA. Our dataset and code will be open-sourced
to facilitate further research in the community.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:30:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yang",
"Zhiwei",
""
],
[
"Gao",
"Chen",
""
],
[
"Liu",
"Jing",
""
],
[
"Wu",
"Peng",
""
],
[
"Pang",
"Guansong",
""
],
[
"Shou",
"Mike Zheng",
""
]
] | TITLE: AssistPDA: An Online Video Surveillance Assistant for Video Anomaly
Prediction, Detection, and Analysis
ABSTRACT: The rapid advancements in large language models (LLMs) have spurred growing
interest in LLM-based video anomaly detection (VAD). However, existing
approaches predominantly focus on video-level anomaly question answering or
offline detection, ignoring the real-time nature essential for practical VAD
applications. To bridge this gap and facilitate the practical deployment of
LLM-based VAD, we introduce AssistPDA, the first online video anomaly
surveillance assistant that unifies video anomaly prediction, detection, and
analysis (VAPDA) within a single framework. AssistPDA enables real-time
inference on streaming videos while supporting interactive user engagement.
Notably, we introduce a novel event-level anomaly prediction task, enabling
proactive anomaly forecasting before anomalies fully unfold. To enhance the
ability to model intricate spatiotemporal relationships in anomaly events, we
propose a Spatio-Temporal Relation Distillation (STRD) module. STRD transfers
the long-term spatiotemporal modeling capabilities of vision-language models
(VLMs) from offline settings to real-time scenarios. Thus it equips AssistPDA
with a robust understanding of complex temporal dependencies and long-sequence
memory. Additionally, we construct VAPDA-127K, the first large-scale benchmark
designed for VLM-based online VAPDA. Extensive experiments demonstrate that
AssistPDA outperforms existing offline VLM-based approaches, setting a new
state-of-the-art for real-time VAPDA. Our dataset and code will be open-sourced
to facilitate further research in the community.
|
2503.21910 | Karima Kadaoui | Karima Kadaoui and Hanin Atwany and Hamdan Al-Ali and Abdelrahman
Mohamed and Ali Mekky and Sergei Tilga and Natalia Fedorova and Ekaterina
Artemova and Hanan Aldarmaki and Yova Kementchedjhieva | JEEM: Vision-Language Understanding in Four Arabic Dialects | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce JEEM, a benchmark designed to evaluate Vision-Language Models
(VLMs) on visual understanding across four Arabic-speaking countries: Jordan,
The Emirates, Egypt, and Morocco. JEEM includes the tasks of image captioning
and visual question answering, and features culturally rich and regionally
diverse content. This dataset aims to assess the ability of VLMs to generalize
across dialects and accurately interpret cultural elements in visual contexts.
In an evaluation of five prominent open-source Arabic VLMs and GPT-4V, we find
that the Arabic VLMs consistently underperform, struggling with both visual
understanding and dialect-specific generation. While GPT-4V ranks best in this
comparison, the model's linguistic competence varies across dialects, and its
visual understanding capabilities lag behind. This underscores the need for
more inclusive models and the value of culturally-diverse evaluation paradigms.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:41:21 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Kadaoui",
"Karima",
""
],
[
"Atwany",
"Hanin",
""
],
[
"Al-Ali",
"Hamdan",
""
],
[
"Mohamed",
"Abdelrahman",
""
],
[
"Mekky",
"Ali",
""
],
[
"Tilga",
"Sergei",
""
],
[
"Fedorova",
"Natalia",
""
],
[
"Artemova",
"Ekaterina",
""
],
[
"Aldarmaki",
"Hanan",
""
],
[
"Kementchedjhieva",
"Yova",
""
]
] | TITLE: JEEM: Vision-Language Understanding in Four Arabic Dialects
ABSTRACT: We introduce JEEM, a benchmark designed to evaluate Vision-Language Models
(VLMs) on visual understanding across four Arabic-speaking countries: Jordan,
The Emirates, Egypt, and Morocco. JEEM includes the tasks of image captioning
and visual question answering, and features culturally rich and regionally
diverse content. This dataset aims to assess the ability of VLMs to generalize
across dialects and accurately interpret cultural elements in visual contexts.
In an evaluation of five prominent open-source Arabic VLMs and GPT-4V, we find
that the Arabic VLMs consistently underperform, struggling with both visual
understanding and dialect-specific generation. While GPT-4V ranks best in this
comparison, the model's linguistic competence varies across dialects, and its
visual understanding capabilities lag behind. This underscores the need for
more inclusive models and the value of culturally-diverse evaluation paradigms.
|
2503.21911 | Sayed Muddashir Hossain | Sayed Muddashir Hossain, Simon Ostermann, Patrick Gebhard, Cord
Benecke, Josef van Genabith and Philipp M\"uller | AutoPsyC: Automatic Recognition of Psychodynamic Conflicts from
Semi-structured Interviews with Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Psychodynamic conflicts are persistent, often unconscious themes that shape a
person's behaviour and experiences. Accurate diagnosis of psychodynamic
conflicts is crucial for effective patient treatment and is commonly done via
long, manually scored semi-structured interviews. Existing automated solutions
for psychiatric diagnosis tend to focus on the recognition of broad disorder
categories such as depression, and it is unclear to what extent psychodynamic
conflicts, which even the patient themselves may not have conscious access to,
could be automatically recognised from conversation. In this paper, we propose
AutoPsyC, the first method for recognising the presence and significance of
psychodynamic conflicts from full-length Operationalized Psychodynamic
Diagnostics (OPD) interviews using Large Language Models (LLMs). Our approach
combines recent advances in parameter-efficient fine-tuning and
Retrieval-Augmented Generation (RAG) with a summarisation strategy to
effectively process entire 90-minute-long conversations. In evaluations on a
dataset of 141 diagnostic interviews, we show that AutoPsyC consistently
outperforms all baselines and ablation conditions on the recognition of four
highly relevant psychodynamic conflicts.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 18:41:35 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hossain",
"Sayed Muddashir",
""
],
[
"Ostermann",
"Simon",
""
],
[
"Gebhard",
"Patrick",
""
],
[
"Benecke",
"Cord",
""
],
[
"van Genabith",
"Josef",
""
],
[
"Müller",
"Philipp",
""
]
] | TITLE: AutoPsyC: Automatic Recognition of Psychodynamic Conflicts from
Semi-structured Interviews with Large Language Models
ABSTRACT: Psychodynamic conflicts are persistent, often unconscious themes that shape a
person's behaviour and experiences. Accurate diagnosis of psychodynamic
conflicts is crucial for effective patient treatment and is commonly done via
long, manually scored semi-structured interviews. Existing automated solutions
for psychiatric diagnosis tend to focus on the recognition of broad disorder
categories such as depression, and it is unclear to what extent psychodynamic
conflicts, which even the patient themselves may not have conscious access to,
could be automatically recognised from conversation. In this paper, we propose
AutoPsyC, the first method for recognising the presence and significance of
psychodynamic conflicts from full-length Operationalized Psychodynamic
Diagnostics (OPD) interviews using Large Language Models (LLMs). Our approach
combines recent advances in parameter-efficient fine-tuning and
Retrieval-Augmented Generation (RAG) with a summarisation strategy to
effectively process entire 90-minute-long conversations. In evaluations on a
dataset of 141 diagnostic interviews, we show that AutoPsyC consistently
outperforms all baselines and ablation conditions on the recognition of four
highly relevant psychodynamic conflicts.
|
2503.21927 | Deshan Sumanathilaka Mr | Sahan Hewage Wewelwala, T.G.D.K. Sumanathilaka | Hybrid Emotion Recognition: Enhancing Customer Interactions Through
Acoustic and Textual Analysis | 5 pages, 1 figure, 2 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This research presents a hybrid emotion recognition system integrating
advanced Deep Learning, Natural Language Processing (NLP), and Large Language
Models (LLMs) to analyze audio and textual data for enhancing customer
interactions in contact centers. By combining acoustic features with textual
sentiment analysis, the system achieves nuanced emotion detection, addressing
the limitations of traditional approaches in understanding complex emotional
states. Leveraging LSTM and CNN models for audio analysis and DistilBERT for
textual evaluation, the methodology accommodates linguistic and cultural
variations while ensuring real-time processing. Rigorous testing on diverse
datasets demonstrates the system's robustness and accuracy, highlighting its
potential to transform customer service by enabling personalized, empathetic
interactions and improving operational efficiency. This research establishes a
foundation for more intelligent and human-centric digital communication,
redefining customer service standards.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 19:13:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wewelwala",
"Sahan Hewage",
""
],
[
"Sumanathilaka",
"T. G. D. K.",
""
]
] | TITLE: Hybrid Emotion Recognition: Enhancing Customer Interactions Through
Acoustic and Textual Analysis
ABSTRACT: This research presents a hybrid emotion recognition system integrating
advanced Deep Learning, Natural Language Processing (NLP), and Large Language
Models (LLMs) to analyze audio and textual data for enhancing customer
interactions in contact centers. By combining acoustic features with textual
sentiment analysis, the system achieves nuanced emotion detection, addressing
the limitations of traditional approaches in understanding complex emotional
states. Leveraging LSTM and CNN models for audio analysis and DistilBERT for
textual evaluation, the methodology accommodates linguistic and cultural
variations while ensuring real-time processing. Rigorous testing on diverse
datasets demonstrates the system's robustness and accuracy, highlighting its
potential to transform customer service by enabling personalized, empathetic
interactions and improving operational efficiency. This research establishes a
foundation for more intelligent and human-centric digital communication,
redefining customer service standards.
|
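The record above describes a late-fusion design: a CNN/LSTM acoustic branch combined with a DistilBERT-style text branch. The following is a minimal PyTorch sketch of that fusion idea, not the authors' implementation; the layer sizes, the six-class output, and the use of a precomputed 768-dimensional sentence embedding as the text input are illustrative assumptions.

```python
# Illustrative late-fusion sketch: CNN+LSTM over a log-mel spectrogram fused with
# a precomputed text embedding (e.g., from DistilBERT) for emotion classification.
import torch
import torch.nn as nn

class HybridEmotionNet(nn.Module):
    def __init__(self, n_mels=64, text_dim=768, hidden=128, n_classes=6):
        super().__init__()
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.audio_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, mel, text_emb):
        # mel: (B, n_mels, T) log-mel spectrogram; text_emb: (B, text_dim) sentence embedding
        a = self.audio_cnn(mel).transpose(1, 2)      # (B, T, hidden)
        _, (h, _) = self.audio_lstm(a)               # final hidden state: (1, B, hidden)
        fused = torch.cat([h[-1], self.text_proj(text_emb)], dim=-1)
        return self.classifier(fused)                # emotion logits

logits = HybridEmotionNet()(torch.randn(2, 64, 200), torch.randn(2, 768))
```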
2503.21956 | Taqwa Alhadidi | Taqwa I. Alhadidi, Asmaa Alazmi, Shadi Jaradat, Ahmed Jaber, Huthaifa
Ashqar, Mohammed Elhenawy | Enhancing Pavement Crack Classification with Bidirectional Cascaded
Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pavement distress, such as cracks and potholes, is a significant issue
affecting road safety and maintenance. In this study, we present the
implementation and evaluation of Bidirectional Cascaded Neural Networks (BCNNs)
for the classification of pavement crack images following image augmentation.
We classified pavement cracks into three main categories: linear cracks,
potholes, and fatigue cracks on an enhanced dataset utilizing U-Net 50 for
image augmentation. The augmented dataset comprised 599 images. Our proposed
BCNN model was designed to leverage both forward and backward information
flows, with detection accuracy enhanced by its cascaded structure wherein each
layer progressively refines the output of the preceding one. Our model achieved
an overall accuracy of 87%, with precision, recall, and F1-score measures
indicating high effectiveness across the categories. For fatigue cracks, the
model recorded a precision of 0.87, recall of 0.83, and F1-score of 0.85 on 205
images. Linear cracks were detected with a precision of 0.81, recall of 0.89,
and F1-score of 0.85 on 205 images, and potholes with a precision of 0.96,
recall of 0.90, and F1-score of 0.93 on 189 images. The macro and weighted
average of precision, recall, and F1-score were identical at 0.88, confirming
the BCNN's excellent performance in classifying complex pavement crack
patterns. This research demonstrates the potential of BCNNs to significantly
enhance the accuracy and reliability of pavement distress classification,
resulting in more effective and efficient pavement maintenance and management
systems.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 20:08:15 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Alhadidi",
"Taqwa I.",
""
],
[
"Alazmi",
"Asmaa",
""
],
[
"Jaradat",
"Shadi",
""
],
[
"Jaber",
"Ahmed",
""
],
[
"Ashqar",
"Huthaifa",
""
],
[
"Elhenawy",
"Mohammed",
""
]
] | TITLE: Enhancing Pavement Crack Classification with Bidirectional Cascaded
Neural Networks
ABSTRACT: Pavement distress, such as cracks and potholes, is a significant issue
affecting road safety and maintenance. In this study, we present the
implementation and evaluation of Bidirectional Cascaded Neural Networks (BCNNs)
for the classification of pavement crack images following image augmentation.
We classified pavement cracks into three main categories: linear cracks,
potholes, and fatigue cracks on an enhanced dataset utilizing U-Net 50 for
image augmentation. The augmented dataset comprised 599 images. Our proposed
BCNN model was designed to leverage both forward and backward information
flows, with detection accuracy enhanced by its cascaded structure wherein each
layer progressively refines the output of the preceding one. Our model achieved
an overall accuracy of 87%, with precision, recall, and F1-score measures
indicating high effectiveness across the categories. For fatigue cracks, the
model recorded a precision of 0.87, recall of 0.83, and F1-score of 0.85 on 205
images. Linear cracks were detected with a precision of 0.81, recall of 0.89,
and F1-score of 0.85 on 205 images, and potholes with a precision of 0.96,
recall of 0.90, and F1-score of 0.93 on 189 images. The macro and weighted
average of precision, recall, and F1-score were identical at 0.88, confirming
the BCNN's excellent performance in classifying complex pavement crack
patterns. This research demonstrates the potential of BCNNs to significantly
enhance the accuracy and reliability of pavement distress classification,
resulting in more effective and efficient pavement maintenance and management
systems.
|
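The abstract above names a Bidirectional Cascaded Neural Network but does not spell out its layout. The sketch below is one plausible reading, assuming "bidirectional cascade" means stacked stages that refine class logits in a forward and then a backward sweep over precomputed image features; the feature dimension, stage count, and residual refinement are assumptions, not the paper's architecture.

```python
# Minimal cascaded-refinement sketch over precomputed image features.
import torch
import torch.nn as nn

class CascadedStage(nn.Module):
    def __init__(self, feat_dim=256, n_classes=3):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(feat_dim + n_classes, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, n_classes))
    def forward(self, feats, prev_logits):
        # Each stage refines the logits produced by the preceding stage.
        return prev_logits + self.block(torch.cat([feats, prev_logits], dim=-1))

class BidirectionalCascade(nn.Module):
    def __init__(self, feat_dim=256, n_classes=3, n_stages=3):
        super().__init__()
        self.stem = nn.Linear(feat_dim, n_classes)
        self.fwd = nn.ModuleList([CascadedStage(feat_dim, n_classes) for _ in range(n_stages)])
        self.bwd = nn.ModuleList([CascadedStage(feat_dim, n_classes) for _ in range(n_stages)])
    def forward(self, feats):
        logits = self.stem(feats)
        for stage in self.fwd:              # forward sweep
            logits = stage(feats, logits)
        for stage in reversed(self.bwd):    # backward sweep
            logits = stage(feats, logits)
        return logits                        # linear crack / pothole / fatigue crack

out = BidirectionalCascade()(torch.randn(4, 256))
```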
2503.21964 | Yanting Yang | Yanting Yang, Xiaoxiao Li | NeuroLIP: Interpretable and Fair Cross-Modal Alignment of fMRI and
Phenotypic Text | null | null | null | null | cs.LG q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Integrating functional magnetic resonance imaging (fMRI) connectivity data
with phenotypic textual descriptors (e.g., disease label, demographic data)
holds significant potential to advance our understanding of neurological
conditions. However, existing cross-modal alignment methods often lack
interpretability and risk introducing biases by encoding sensitive attributes
together with diagnostic-related features. In this work, we propose NeuroLIP, a
novel cross-modal contrastive learning framework. We introduce text
token-conditioned attention (TTCA) and cross-modal alignment via localized
tokens (CALT) to align the brain region-level embeddings with each disease-related
phenotypic token. It improves interpretability via token-level attention maps,
revealing brain region-disease associations. To mitigate bias, we propose a
loss for sensitive attribute disentanglement that maximizes the attention
distance between disease tokens and sensitive attribute tokens, reducing
unintended correlations in downstream predictions. Additionally, we incorporate
a negative gradient technique that reverses the sign of CALT loss on sensitive
attributes, further discouraging the alignment of these features. Experiments
on neuroimaging datasets (ABIDE and ADHD-200) demonstrate NeuroLIP's
superiority in terms of fairness metrics while maintaining the overall best
standard metric performance. Qualitative visualization of attention maps
highlights neuroanatomical patterns aligned with diagnostic characteristics,
validated by the neuroscientific literature. Our work advances the development
of transparent and equitable neuroimaging AI.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 20:22:42 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yang",
"Yanting",
""
],
[
"Li",
"Xiaoxiao",
""
]
] | TITLE: NeuroLIP: Interpretable and Fair Cross-Modal Alignment of fMRI and
Phenotypic Text
ABSTRACT: Integrating functional magnetic resonance imaging (fMRI) connectivity data
with phenotypic textual descriptors (e.g., disease label, demographic data)
holds significant potential to advance our understanding of neurological
conditions. However, existing cross-modal alignment methods often lack
interpretability and risk introducing biases by encoding sensitive attributes
together with diagnostic-related features. In this work, we propose NeuroLIP, a
novel cross-modal contrastive learning framework. We introduce text
token-conditioned attention (TTCA) and cross-modal alignment via localized
tokens (CALT) to align the brain region-level embeddings with each disease-related
phenotypic token. It improves interpretability via token-level attention maps,
revealing brain region-disease associations. To mitigate bias, we propose a
loss for sensitive attribute disentanglement that maximizes the attention
distance between disease tokens and sensitive attribute tokens, reducing
unintended correlations in downstream predictions. Additionally, we incorporate
a negative gradient technique that reverses the sign of CALT loss on sensitive
attributes, further discouraging the alignment of these features. Experiments
on neuroimaging datasets (ABIDE and ADHD-200) demonstrate NeuroLIP's
superiority in terms of fairness metrics while maintaining the overall best
standard metric performance. Qualitative visualization of attention maps
highlights neuroanatomical patterns aligned with diagnostic characteristics,
validated by the neuroscientific literature. Our work advances the development
of transparent and equitable neuroimaging AI.
|
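To make the bias-mitigation idea above concrete, here is a toy loss that pushes the attention a model places on sensitive-attribute tokens away from the attention placed on disease tokens. The (batch, regions) shapes, the L1 distance, and the sign convention are illustrative assumptions rather than NeuroLIP's actual formulation.

```python
# Toy attention-distance disentanglement loss: minimizing it maximizes the
# separation between disease-token and sensitive-attribute-token attention maps.
import torch
import torch.nn.functional as F

def attention_distance_loss(disease_attn, sensitive_attn):
    """disease_attn, sensitive_attn: (B, R) attention over R brain regions."""
    d = F.normalize(disease_attn, p=1, dim=-1)
    s = F.normalize(sensitive_attn, p=1, dim=-1)
    return -(d - s).abs().sum(dim=-1).mean()   # negative L1 distance

loss = attention_distance_loss(torch.rand(8, 116), torch.rand(8, 116))
```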
2503.21969 | Yuan Meng | Yuan Meng, Xiangtong Yao, Haihui Ye, Yirui Zhou, Shengqiang Zhang,
Zhenshan Bing, Alois Knoll | Data-Agnostic Robotic Long-Horizon Manipulation with
Vision-Language-Guided Closed-Loop Feedback | initial upload 8 page | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advances in language-conditioned robotic manipulation have leveraged
imitation and reinforcement learning to enable robots to execute tasks from
human commands. However, these methods often suffer from limited
generalization, adaptability, and the lack of large-scale specialized datasets,
unlike data-rich domains such as computer vision, making long-horizon task
execution challenging. To address these gaps, we introduce DAHLIA, a
data-agnostic framework for language-conditioned long-horizon robotic
manipulation, leveraging large language models (LLMs) for real-time task
planning and execution. DAHLIA employs a dual-tunnel architecture, where an
LLM-powered planner collaborates with co-planners to decompose tasks and
generate executable plans, while a reporter LLM provides closed-loop feedback,
enabling adaptive re-planning and ensuring task recovery from potential
failures. Moreover, DAHLIA integrates chain-of-thought (CoT) in task reasoning
and temporal abstraction for efficient action execution, enhancing traceability
and robustness. Our framework demonstrates state-of-the-art performance across
diverse long-horizon tasks, achieving strong generalization in both simulated
and real-world scenarios. Videos and code are available at
https://ghiara.github.io/DAHLIA/.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 20:32:58 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Meng",
"Yuan",
""
],
[
"Yao",
"Xiangtong",
""
],
[
"Ye",
"Haihui",
""
],
[
"Zhou",
"Yirui",
""
],
[
"Zhang",
"Shengqiang",
""
],
[
"Bing",
"Zhenshan",
""
],
[
"Knoll",
"Alois",
""
]
] | TITLE: Data-Agnostic Robotic Long-Horizon Manipulation with
Vision-Language-Guided Closed-Loop Feedback
ABSTRACT: Recent advances in language-conditioned robotic manipulation have leveraged
imitation and reinforcement learning to enable robots to execute tasks from
human commands. However, these methods often suffer from limited
generalization, adaptability, and the lack of large-scale specialized datasets,
unlike data-rich domains such as computer vision, making long-horizon task
execution challenging. To address these gaps, we introduce DAHLIA, a
data-agnostic framework for language-conditioned long-horizon robotic
manipulation, leveraging large language models (LLMs) for real-time task
planning and execution. DAHLIA employs a dual-tunnel architecture, where an
LLM-powered planner collaborates with co-planners to decompose tasks and
generate executable plans, while a reporter LLM provides closed-loop feedback,
enabling adaptive re-planning and ensuring task recovery from potential
failures. Moreover, DAHLIA integrates chain-of-thought (CoT) in task reasoning
and temporal abstraction for efficient action execution, enhancing traceability
and robustness. Our framework demonstrates state-of-the-art performance across
diverse long-horizon tasks, achieving strong generalization in both simulated
and real-world scenarios. Videos and code are available at
https://ghiara.github.io/DAHLIA/.
|
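The closed-loop planner/reporter interaction described above can be sketched as a simple control loop. Everything here is schematic: `call_llm` and `execute_step` are placeholders for an LLM client and a robot executor, and the prompts and retry policy are illustrative, not the paper's.

```python
# Schematic planner/reporter closed loop with re-planning on failure.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def run_task(instruction: str, execute_step, max_replans: int = 3) -> bool:
    for _ in range(max_replans):
        plan = call_llm(f"Decompose into executable robot steps:\n{instruction}")
        failures = []
        for step in plan.splitlines():
            ok, observation = execute_step(step)          # robot executes one sub-task
            report = call_llm(f"Step: {step}\nObservation: {observation}\nDid it succeed?")
            if not ok or report.strip().lower().startswith("no"):
                failures.append(f"{step} failed: {observation}")
                break
        if not failures:
            return True                                    # all steps succeeded
        # Feed the reporter's findings back into the next planning round.
        instruction = f"{instruction}\nPrevious failures:\n" + "\n".join(failures)
    return False
```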
2503.21971 | Armin Abdollahi | Armin Abdollahi and Mehdi Kamal and Massoud Pedram | RocketPPA: Ultra-Fast LLM-Based PPA Estimator at Code-Level Abstraction | null | null | null | null | cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | Large language models have recently transformed hardware design, yet bridging
the gap between code synthesis and PPA (power, performance, and area)
estimation remains a challenge. In this work, we introduce a novel framework
that leverages a dataset of 21k thoroughly cleaned and synthesizable Verilog
modules, each annotated with detailed power, delay, and area metrics. By
employing chain-of-thought techniques, we automatically debug and curate this
dataset to ensure high fidelity in downstream applications. We then fine-tune
CodeLlama using LoRA-based parameter-efficient methods, framing the task as a
regression problem to accurately predict PPA metrics from Verilog code.
Furthermore, we augment our approach with a mixture-of-experts
architecture, integrating both LoRA and an additional MLP expert layer, to
further refine predictions. Experimental results demonstrate significant
improvements: power estimation accuracy is enhanced by 5.9% at a 20% error
threshold and by 7.2% at a 10% threshold, delay estimation improves by 5.1% and
3.9%, and area estimation sees gains of 4% and 7.9% for the 20% and 10%
thresholds, respectively. Notably, the incorporation of the mixture-of-experts
module contributes an additional 3--4% improvement across these tasks. Our
results establish a new benchmark for PPA-aware Verilog generation,
highlighting the effectiveness of our integrated dataset and modeling
strategies for next-generation EDA workflows.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 20:35:09 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Abdollahi",
"Armin",
""
],
[
"Kamal",
"Mehdi",
""
],
[
"Pedram",
"Massoud",
""
]
] | TITLE: RocketPPA: Ultra-Fast LLM-Based PPA Estimator at Code-Level Abstraction
ABSTRACT: Large language models have recently transformed hardware design, yet bridging
the gap between code synthesis and PPA (power, performance, and area)
estimation remains a challenge. In this work, we introduce a novel framework
that leverages a dataset of 21k thoroughly cleaned and synthesizable Verilog
modules, each annotated with detailed power, delay, and area metrics. By
employing chain-of-thought techniques, we automatically debug and curate this
dataset to ensure high fidelity in downstream applications. We then fine-tune
CodeLlama using LoRA-based parameter-efficient methods, framing the task as a
regression problem to accurately predict PPA metrics from Verilog code.
Furthermore, we augment our approach with a mixture-of-experts
architecture, integrating both LoRA and an additional MLP expert layer, to
further refine predictions. Experimental results demonstrate significant
improvements: power estimation accuracy is enhanced by 5.9% at a 20% error
threshold and by 7.2% at a 10% threshold, delay estimation improves by 5.1% and
3.9%, and area estimation sees gains of 4% and 7.9% for the 20% and 10%
thresholds, respectively. Notably, the incorporation of the mixture-of-experts
module contributes an additional 3--4% improvement across these tasks. Our
results establish a new benchmark for PPA-aware Verilog generation,
highlighting the effectiveness of our integrated dataset and modeling
strategies for next-generation EDA workflows.
|
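A hedged sketch of the general recipe the abstract describes: LoRA-adapting a code LLM and reading out PPA values with a regression head. It uses standard Hugging Face `transformers` and `peft` calls; the checkpoint name, LoRA hyperparameters, mean pooling, and three-way output (power, delay, area) are illustrative choices, and the mixture-of-experts refinement is omitted.

```python
# Sketch: LoRA-adapted code LLM with a regression head for PPA prediction.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

class PPARegressor(nn.Module):
    def __init__(self, base_name="codellama/CodeLlama-7b-hf"):
        super().__init__()
        backbone = AutoModel.from_pretrained(base_name)
        lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                          target_modules=["q_proj", "v_proj"])
        self.backbone = get_peft_model(backbone, lora)          # only LoRA params train
        self.head = nn.Linear(backbone.config.hidden_size, 3)   # power, delay, area

    def forward(self, input_ids, attention_mask):
        h = self.backbone(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state  # (B, T, H)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (h * mask).sum(1) / mask.sum(1).clamp(min=1)   # mean over valid tokens
        return self.head(pooled)

# Training would tokenize Verilog source with AutoTokenizer.from_pretrained(base_name)
# and minimize an MSE or Huber loss against the (power, delay, area) labels.
```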
2503.21991 | Hang Zhou | Hang Zhou, Xinxin Zuo, Rui Ma, Li Cheng | BOOTPLACE: Bootstrapped Object Placement with Detection Transformers | CVPR 2025. Project page: https://ryanhangzhou.github.io/bootplace/ ,
code: https://github.com/RyanHangZhou/BOOTPLACE | null | null | null | cs.CV cs.AI cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the copy-paste image-to-image composition problem
with a focus on object placement learning. Prior methods have leveraged
generative models to reduce the reliance on dense supervision. However, this
often limits their capacity to model complex data distributions. Alternatively,
transformer networks with a sparse contrastive loss have been explored, but
their over-relaxed regularization often leads to imprecise object placement. We
introduce BOOTPLACE, a novel paradigm that formulates object placement as a
placement-by-detection problem. Our approach begins by identifying suitable
regions of interest for object placement. This is achieved by training a
specialized detection transformer on object-subtracted backgrounds, enhanced
with multi-object supervisions. It then semantically associates each target
compositing object with detected regions based on their complementary
characteristics. Through a bootstrapped training approach applied to randomly
object-subtracted images, our model enforces meaningful placements through
extensive paired data augmentation. Experimental results on established
benchmarks demonstrate BOOTPLACE's superior performance in object
repositioning, markedly surpassing state-of-the-art baselines on Cityscapes and
OPA datasets with notable improvements in IOU scores. Additional ablation
studies further showcase the compositionality and generalizability of our
approach, supported by user study evaluations.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 21:21:20 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhou",
"Hang",
""
],
[
"Zuo",
"Xinxin",
""
],
[
"Ma",
"Rui",
""
],
[
"Cheng",
"Li",
""
]
] | TITLE: BOOTPLACE: Bootstrapped Object Placement with Detection Transformers
ABSTRACT: In this paper, we tackle the copy-paste image-to-image composition problem
with a focus on object placement learning. Prior methods have leveraged
generative models to reduce the reliance on dense supervision. However, this
often limits their capacity to model complex data distributions. Alternatively,
transformer networks with a sparse contrastive loss have been explored, but
their over-relaxed regularization often leads to imprecise object placement. We
introduce BOOTPLACE, a novel paradigm that formulates object placement as a
placement-by-detection problem. Our approach begins by identifying suitable
regions of interest for object placement. This is achieved by training a
specialized detection transformer on object-subtracted backgrounds, enhanced
with multi-object supervisions. It then semantically associates each target
compositing object with detected regions based on their complementary
characteristics. Through a bootstrapped training approach applied to randomly
object-subtracted images, our model enforces meaningful placements through
extensive paired data augmentation. Experimental results on established
benchmarks demonstrate BOOTPLACE's superior performance in object
repositioning, markedly surpassing state-of-the-art baselines on Cityscapes and
OPA datasets with notable improvements in IOU scores. Additional ablation
studies further showcase the compositionality and generalizability of our
approach, supported by user study evaluations.
|
2503.22005 | Junyoung Kim | Heejin Kook, Junyoung Kim, Seongmin Park, Jongwuk Lee | Empowering Retrieval-based Conversational Recommendation with
Contrasting User Preferences | NAACL 2025 | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Conversational recommender systems (CRSs) are designed to suggest the target
item that the user is likely to prefer through multi-turn conversations. Recent
studies stress that capturing sentiments in user conversations improves
recommendation accuracy. However, they employ a single user representation,
which may fail to distinguish between contrasting user intentions, such as
likes and dislikes, potentially leading to suboptimal performance. To this end,
we propose a novel conversational recommender model, called COntrasting user
pReference expAnsion and Learning (CORAL). Firstly, CORAL extracts the user's
hidden preferences through contrasting preference expansion using the reasoning
capacity of the LLMs. Based on the potential preference, CORAL explicitly
differentiates the contrasting preferences and incorporates them into the
recommendation process via preference-aware learning. Extensive experiments
show that CORAL significantly outperforms existing methods on three benchmark
datasets, improving Recall@10 by up to 99.72%. The code and datasets are
available at https://github.com/kookeej/CORAL
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 21:45:49 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Kook",
"Heejin",
""
],
[
"Kim",
"Junyoung",
""
],
[
"Park",
"Seongmin",
""
],
[
"Lee",
"Jongwuk",
""
]
] | TITLE: Empowering Retrieval-based Conversational Recommendation with
Contrasting User Preferences
ABSTRACT: Conversational recommender systems (CRSs) are designed to suggest the target
item that the user is likely to prefer through multi-turn conversations. Recent
studies stress that capturing sentiments in user conversations improves
recommendation accuracy. However, they employ a single user representation,
which may fail to distinguish between contrasting user intentions, such as
likes and dislikes, potentially leading to suboptimal performance. To this end,
we propose a novel conversational recommender model, called COntrasting user
pReference expAnsion and Learning (CORAL). Firstly, CORAL extracts the user's
hidden preferences through contrasting preference expansion using the reasoning
capacity of the LLMs. Based on the potential preference, CORAL explicitly
differentiates the contrasting preferences and incorporates them into the
recommendation process via preference-aware learning. Extensive experiments
show that CORAL significantly outperforms existing methods on three benchmark
datasets, improving Recall@10 by up to 99.72%. The code and datasets are
available at https://github.com/kookeej/CORAL
|
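The contrasting-preference idea above can be illustrated with a simple scoring rule that keeps separate "like" and "dislike" user vectors. The cosine-similarity form and the `alpha` weight are assumptions for illustration; CORAL's actual preference-aware learning is more involved.

```python
# Score candidate items against contrasting user representations (likes vs. dislikes).
import torch
import torch.nn.functional as F

def score_items(like_emb, dislike_emb, item_embs, alpha=0.5):
    """like_emb, dislike_emb: (D,); item_embs: (N, D). Higher score = better match."""
    like_sim = F.cosine_similarity(item_embs, like_emb.unsqueeze(0), dim=-1)
    dislike_sim = F.cosine_similarity(item_embs, dislike_emb.unsqueeze(0), dim=-1)
    return like_sim - alpha * dislike_sim    # reward matches to likes, penalize dislikes

scores = score_items(torch.randn(64), torch.randn(64), torch.randn(100, 64))
top10 = scores.topk(10).indices
```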
2503.22006 | Marc Felix Brinner | Marc Brinner, Tarek Al Mustafa, Sina Zarrie{\ss} | Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to
Leverage Ontologies, and How to Do Without Them | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | We investigate the use of LLM-generated data for continual pretraining of
encoder models in specialized domains with limited training data, using the
scientific domain of invasion biology as a case study. To this end, we leverage
domain-specific ontologies by enriching them with LLM-generated data and
pretraining the encoder model as an ontology-informed embedding model for
concept definitions. To evaluate the effectiveness of this method, we compile a
benchmark specifically designed for assessing model performance in invasion
biology. After demonstrating substantial improvements over standard LLM
pretraining, we investigate the feasibility of applying the proposed approach
to domains without comprehensive ontologies by substituting ontological
concepts with concepts automatically extracted from a small corpus of
scientific abstracts and establishing relationships between concepts through
distributional statistics. Our results demonstrate that this automated approach
achieves comparable performance using only a small set of scientific abstracts,
resulting in a fully automated pipeline for enhancing domain-specific
understanding of small encoder models that is especially suited for application
in low-resource settings and achieves performance comparable to masked language
modeling pretraining on much larger datasets.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 21:51:24 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Brinner",
"Marc",
""
],
[
"Mustafa",
"Tarek Al",
""
],
[
"Zarrieß",
"Sina",
""
]
] | TITLE: Enhancing Domain-Specific Encoder Models with LLM-Generated Data: How to
Leverage Ontologies, and How to Do Without Them
ABSTRACT: We investigate the use of LLM-generated data for continual pretraining of
encoder models in specialized domains with limited training data, using the
scientific domain of invasion biology as a case study. To this end, we leverage
domain-specific ontologies by enriching them with LLM-generated data and
pretraining the encoder model as an ontology-informed embedding model for
concept definitions. To evaluate the effectiveness of this method, we compile a
benchmark specifically designed for assessing model performance in invasion
biology. After demonstrating substantial improvements over standard LLM
pretraining, we investigate the feasibility of applying the proposed approach
to domains without comprehensive ontologies by substituting ontological
concepts with concepts automatically extracted from a small corpus of
scientific abstracts and establishing relationships between concepts through
distributional statistics. Our results demonstrate that this automated approach
achieves comparable performance using only a small set of scientific abstracts,
resulting in a fully automated pipeline for enhancing domain-specific
understanding of small encoder models that is especially suited for application
in low-resource settings and achieves performance comparable to masked language
modeling pretraining on much larger datasets.
|
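As a concrete reference for the ontology-informed embedding step described above, here is a standard in-batch contrastive (InfoNCE-style) loss over concept/definition embedding pairs; the temperature and the symmetric formulation are common defaults rather than the paper's exact setup.

```python
# In-batch contrastive loss pairing each concept with its (LLM-generated) definition.
import torch
import torch.nn.functional as F

def concept_definition_infonce(concept_emb, definition_emb, temperature=0.05):
    c = F.normalize(concept_emb, dim=-1)
    d = F.normalize(definition_emb, dim=-1)
    logits = c @ d.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(c.size(0))          # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = concept_definition_infonce(torch.randn(16, 384), torch.randn(16, 384))
```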
2503.22015 | Ali Zafari | Ali Zafari, Xi Chen, Shirin Jalali | DeCompress: Denoising via Neural Compression | null | null | null | null | eess.IV cs.CV cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Learning-based denoising algorithms achieve state-of-the-art performance
across various denoising tasks. However, training such models relies on access
to large training datasets consisting of clean and noisy image pairs. On the
other hand, in many imaging applications, such as microscopy, collecting ground
truth images is often infeasible. To address this challenge, researchers have
recently developed algorithms that can be trained without requiring access to
ground truth data. However, training such models remains computationally
challenging and still requires access to a large number of noisy training samples. In this
work, inspired by compression-based denoising and recent advances in neural
compression, we propose a new compression-based denoising algorithm, which we
name DeCompress, that i) does not require access to ground truth images, ii)
does not require access to large training dataset - only a single noisy image
is sufficient, iii) is robust to overfitting, and iv) achieves superior
performance compared with zero-shot or unsupervised learning-based denoisers.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 22:05:30 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zafari",
"Ali",
""
],
[
"Chen",
"Xi",
""
],
[
"Jalali",
"Shirin",
""
]
] | TITLE: DeCompress: Denoising via Neural Compression
ABSTRACT: Learning-based denoising algorithms achieve state-of-the-art performance
across various denoising tasks. However, training such models relies on access
to large training datasets consisting of clean and noisy image pairs. On the
other hand, in many imaging applications, such as microscopy, collecting ground
truth images is often infeasible. To address this challenge, researchers have
recently developed algorithms that can be trained without requiring access to
ground truth data. However, training such models remains computationally
challenging and still requires access to a large number of noisy training samples. In this
work, inspired by compression-based denoising and recent advances in neural
compression, we propose a new compression-based denoising algorithm, which we
name DeCompress, that i) does not require access to ground truth images, ii)
does not require access to a large training dataset - only a single noisy image
is sufficient, iii) is robust to overfitting, and iv) achieves superior
performance compared with zero-shot or unsupervised learning-based denoisers.
|
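To illustrate the compression-as-denoising principle behind the abstract above (not the DeCompress algorithm itself), the sketch below fits a deliberately low-capacity convolutional autoencoder to a single noisy image and uses its reconstruction as the denoised estimate; the architecture, bottleneck width, and step count are arbitrary illustrative choices.

```python
# Single-image, compression-style denoising: limited model capacity resists fitting noise.
import torch
import torch.nn as nn

def compress_denoise(noisy, bottleneck=8, steps=500, lr=1e-3):
    """noisy: (1, 1, H, W) tensor in [0, 1]."""
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, bottleneck, 3, stride=2, padding=1), nn.ReLU(),    # low-rate code
        nn.ConvTranspose2d(bottleneck, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(noisy) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return model(noisy).detach()

denoised = compress_denoise(torch.rand(1, 1, 64, 64))
```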
2503.22019 | Earl Ranario | Earl Ranario, Lars Lundqvist, Heesup Yun, Brian N. Bailey, J. Mason
Earles | AGILE: A Diffusion-Based Attention-Guided Image and Label Translation
for Efficient Cross-Domain Plant Trait Identification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semantically consistent cross-domain image translation facilitates the
generation of training data by transferring labels across different domains,
making it particularly useful for plant trait identification in agriculture.
However, existing generative models struggle to maintain object-level accuracy
when translating images between domains, especially when domain gaps are
significant. In this work, we introduce AGILE (Attention-Guided Image and Label
Translation for Efficient Cross-Domain Plant Trait Identification), a
diffusion-based framework that leverages optimized text embeddings and
attention guidance to semantically constrain image translation. AGILE utilizes
pretrained diffusion models and publicly available agricultural datasets to
improve the fidelity of translated images while preserving critical object
semantics. Our approach optimizes text embeddings to strengthen the
correspondence between source and target images and guides attention maps
during the denoising process to control object placement. We evaluate AGILE on
cross-domain plant datasets and demonstrate its effectiveness in generating
semantically accurate translated images. Quantitative experiments show that
AGILE enhances object detection performance in the target domain while
maintaining realism and consistency. Compared to prior image translation
methods, AGILE achieves superior semantic alignment, particularly in
challenging cases where objects vary significantly or domain gaps are
substantial.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 22:20:15 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Ranario",
"Earl",
""
],
[
"Lundqvist",
"Lars",
""
],
[
"Yun",
"Heesup",
""
],
[
"Bailey",
"Brian N.",
""
],
[
"Earles",
"J. Mason",
""
]
] | TITLE: AGILE: A Diffusion-Based Attention-Guided Image and Label Translation
for Efficient Cross-Domain Plant Trait Identification
ABSTRACT: Semantically consistent cross-domain image translation facilitates the
generation of training data by transferring labels across different domains,
making it particularly useful for plant trait identification in agriculture.
However, existing generative models struggle to maintain object-level accuracy
when translating images between domains, especially when domain gaps are
significant. In this work, we introduce AGILE (Attention-Guided Image and Label
Translation for Efficient Cross-Domain Plant Trait Identification), a
diffusion-based framework that leverages optimized text embeddings and
attention guidance to semantically constrain image translation. AGILE utilizes
pretrained diffusion models and publicly available agricultural datasets to
improve the fidelity of translated images while preserving critical object
semantics. Our approach optimizes text embeddings to strengthen the
correspondence between source and target images and guides attention maps
during the denoising process to control object placement. We evaluate AGILE on
cross-domain plant datasets and demonstrate its effectiveness in generating
semantically accurate translated images. Quantitative experiments show that
AGILE enhances object detection performance in the target domain while
maintaining realism and consistency. Compared to prior image translation
methods, AGILE achieves superior semantic alignment, particularly in
challenging cases where objects vary significantly or domain gaps are
substantial.
|
2503.22035 | Isabella Loaiza | Isabella Loaiza and Roberto Rigobon | The Limits of AI in Financial Services | null | null | null | null | cs.CY q-fin.GN | http://creativecommons.org/licenses/by/4.0/ | AI is transforming industries, raising concerns about job displacement and
decision-making reliability. AI, as a universal approximation function, excels
in data-driven tasks but struggles with small datasets, subjective
probabilities, and contexts requiring human judgment, relationships, and
ethics. The EPOCH framework highlights five irreplaceable human capabilities:
Empathy, Presence, Opinion, Creativity, and Hope. These attributes are vital in
financial services for trust, inclusion, innovation, and consumer experience.
Although AI improves efficiency in risk management and compliance, it will not
eliminate jobs but redefine them, similar to how ATMs reshaped bank tellers'
roles. The challenge is ensuring professionals adapt, leveraging AI's strengths
while preserving essential human capabilities.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 23:04:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Loaiza",
"Isabella",
""
],
[
"Rigobon",
"Roberto",
""
]
] | TITLE: The Limits of AI in Financial Services
ABSTRACT: AI is transforming industries, raising concerns about job displacement and
decision-making reliability. AI, as a universal approximation function, excels
in data-driven tasks but struggles with small datasets, subjective
probabilities, and contexts requiring human judgment, relationships, and
ethics. The EPOCH framework highlights five irreplaceable human capabilities:
Empathy, Presence, Opinion, Creativity, and Hope. These attributes are vital in
financial services for trust, inclusion, innovation, and consumer experience.
Although AI improves efficiency in risk management and compliance, it will not
eliminate jobs but redefine them, similar to how ATMs reshaped bank tellers'
roles. The challenge is ensuring professionals adapt, leveraging AI's strengths
while preserving essential human capabilities.
|
2503.22038 | Yunting Yin | Ngoc Tuong Vy Nguyen, Felix D Childress, Yunting Yin | Debate-Driven Multi-Agent LLMs for Phishing Email Detection | Accepted to the 13th International Symposium on Digital Forensics and
Security (ISDFS 2025) | null | null | null | cs.MA cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phishing attacks remain a critical cybersecurity threat. Attackers constantly
refine their methods, making phishing emails harder to detect. Traditional
detection methods, including rule-based systems and supervised machine learning
models, either rely on predefined patterns like blacklists, which can be
bypassed with slight modifications, or require large datasets for training and
still can generate false positives and false negatives. In this work, we
propose a multi-agent large language model (LLM) prompting technique that
simulates debates among agents to detect whether the content presented in an
email is phishing. Our approach uses two LLM agents to present arguments for or
against the classification task, with a judge agent adjudicating the final
verdict based on the quality of reasoning provided. This debate mechanism
enables the models to critically analyze contextual cues and deceptive patterns
in text, which leads to improved classification accuracy. The proposed
framework is evaluated on multiple phishing email datasets, and the results
demonstrate that mixed-agent configurations consistently outperform homogeneous
configurations.
Results also show that the debate structure itself is sufficient to yield
accurate decisions without extra prompting strategies.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 23:18:14 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Nguyen",
"Ngoc Tuong Vy",
""
],
[
"Childress",
"Felix D",
""
],
[
"Yin",
"Yunting",
""
]
] | TITLE: Debate-Driven Multi-Agent LLMs for Phishing Email Detection
ABSTRACT: Phishing attacks remain a critical cybersecurity threat. Attackers constantly
refine their methods, making phishing emails harder to detect. Traditional
detection methods, including rule-based systems and supervised machine learning
models, either rely on predefined patterns like blacklists, which can be
bypassed with slight modifications, or require large datasets for training and
still can generate false positives and false negatives. In this work, we
propose a multi-agent large language model (LLM) prompting technique that
simulates debates among agents to detect whether the content presented in an
email is phishing. Our approach uses two LLM agents to present arguments for or
against the classification task, with a judge agent adjudicating the final
verdict based on the quality of reasoning provided. This debate mechanism
enables the models to critically analyze contextual cues and deceptive patterns
in text, which leads to improved classification accuracy. The proposed
framework is evaluated on multiple phishing email datasets, and the results
demonstrate that mixed-agent configurations consistently outperform homogeneous
configurations.
Results also show that the debate structure itself is sufficient to yield
accurate decisions without extra prompting strategies.
|
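The debate-then-judge protocol described above can be outlined as a short loop. The `ask` function stands in for any chat-completion client, and the prompts, round count, and verdict parsing are illustrative, not the paper's.

```python
# Schematic multi-agent debate for phishing classification with a judge verdict.
def ask(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM client")

def debate_phishing(email: str, pro_model: str, con_model: str, judge_model: str,
                    rounds: int = 2) -> str:
    transcript = []
    for r in range(rounds):
        history = "\n".join(transcript)
        pro = ask(pro_model, f"Argue this email IS phishing.\nEmail:\n{email}\n"
                             f"Debate so far:\n{history}")
        con = ask(con_model, f"Argue this email is NOT phishing.\nEmail:\n{email}\n"
                             f"Debate so far:\n{history}")
        transcript += [f"[pro-{r}] {pro}", f"[con-{r}] {con}"]
    verdict = ask(judge_model, "Based only on the arguments below, answer "
                               "'phishing' or 'legitimate'.\n" + "\n".join(transcript))
    return verdict.strip().lower()
```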
2503.22049 | Jinze Wang | Jinze Wang, Tiehua Zhang, Lu Zhang, Yang Bai, Xin Li, Jiong Jin | HyperMAN: Hypergraph-enhanced Meta-learning Adaptive Network for Next
POI Recommendation | null | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Next Point-of-Interest (POI) recommendation aims to predict users' next
locations by leveraging historical check-in sequences. Although existing
methods have shown promising results, they often struggle to capture complex
high-order relationships and effectively adapt to diverse user behaviors,
particularly when addressing the cold-start issue. To address these challenges,
we propose Hypergraph-enhanced Meta-learning Adaptive Network (HyperMAN), a
novel framework that integrates heterogeneous hypergraph modeling with a
difficulty-aware meta-learning mechanism for next POI recommendation.
Specifically, three types of heterogeneous hyperedges are designed to capture
high-order relationships: user visit behaviors at specific times (temporal
behavioral hyperedge), spatial correlations among POIs (spatial functional
hyperedge), and user long-term preferences (user preference hyperedge).
Furthermore, a diversity-aware meta-learning mechanism is introduced to
dynamically adjust learning strategies, considering users' behavioral diversity.
Extensive experiments on real-world datasets demonstrate that HyperMAN achieves
superior performance, effectively addressing cold-start challenges and
significantly enhancing recommendation accuracy.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 23:58:57 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Jinze",
""
],
[
"Zhang",
"Tiehua",
""
],
[
"Zhang",
"Lu",
""
],
[
"Bai",
"Yang",
""
],
[
"Li",
"Xin",
""
],
[
"Jin",
"Jiong",
""
]
] | TITLE: HyperMAN: Hypergraph-enhanced Meta-learning Adaptive Network for Next
POI Recommendation
ABSTRACT: Next Point-of-Interest (POI) recommendation aims to predict users' next
locations by leveraging historical check-in sequences. Although existing
methods have shown promising results, they often struggle to capture complex
high-order relationships and effectively adapt to diverse user behaviors,
particularly when addressing the cold-start issue. To address these challenges,
we propose Hypergraph-enhanced Meta-learning Adaptive Network (HyperMAN), a
novel framework that integrates heterogeneous hypergraph modeling with a
difficulty-aware meta-learning mechanism for next POI recommendation.
Specifically, three types of heterogeneous hyperedges are designed to capture
high-order relationships: user visit behaviors at specific times (temporal
behavioral hyperedge), spatial correlations among POIs (spatial functional
hyperedge), and user long-term preferences (user preference hyperedge).
Furthermore, a diversity-aware meta-learning mechanism is introduced to
dynamically adjust learning strategies, considering users' behavioral diversity.
Extensive experiments on real-world datasets demonstrate that HyperMAN achieves
superior performance, effectively addressing cold-start challenges and
significantly enhancing recommendation accuracy.
|
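The three hyperedge types above can be made concrete as an incidence-matrix construction. The grouping rules (same-hour buckets, a distance radius, full visit histories) and the shared user/POI node indexing are assumptions for illustration only.

```python
# Toy heterogeneous hypergraph incidence matrix with three hyperedge types.
import numpy as np

def build_incidence(checkins, n_nodes, dist, radius=0.5):
    """checkins: list of (user, poi, hour); users and POIs share indices 0..n_nodes-1."""
    hyperedges = []
    # 1) Temporal behavioral hyperedges: a user's POIs visited in the same hour bucket.
    for (u, h) in {(u, h) for u, _, h in checkins}:
        hyperedges.append({u} | {p for uu, p, hh in checkins if uu == u and hh == h})
    # 2) Spatial functional hyperedges: POIs within `radius` of an anchor POI.
    pois = sorted({p for _, p, _ in checkins})
    for p in pois:
        hyperedges.append({p} | {q for q in pois if q != p and dist(p, q) < radius})
    # 3) User preference hyperedges: all POIs a user has ever visited.
    for u in {u for u, _, _ in checkins}:
        hyperedges.append({u} | {p for uu, p, _ in checkins if uu == u})
    H = np.zeros((n_nodes, len(hyperedges)))
    for j, edge in enumerate(hyperedges):
        H[list(edge), j] = 1.0
    return H    # node-by-hyperedge incidence matrix
```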
2503.22050 | Tai An | Tai An, Weiqiang Huang, Da Xu, Qingyuan He, Jiacheng Hu, Yujia Lou | A Deep Learning Framework for Boundary-Aware Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a fundamental task in computer vision, semantic segmentation is widely
applied in fields such as autonomous driving, remote sensing image analysis,
and medical image processing. In recent years, Transformer-based segmentation
methods have demonstrated strong performance in global feature modeling.
However, they still struggle with blurred target boundaries and insufficient
recognition of small targets. To address these issues, this study proposes a
Mask2Former-based semantic segmentation algorithm incorporating a boundary
enhancement feature bridging module (BEFBM). The goal is to improve target
boundary accuracy and segmentation consistency. Built upon the Mask2Former
framework, this method constructs a boundary-aware feature map and introduces a
feature bridging mechanism. This enables effective cross-scale feature fusion,
enhancing the model's ability to focus on target boundaries. Experiments on the
Cityscapes dataset demonstrate that, compared to mainstream segmentation
methods, the proposed approach achieves significant improvements in metrics
such as mIOU, mDICE, and mRecall. It also exhibits superior boundary retention
in complex scenes. Visual analysis further confirms the model's advantages in
fine-grained regions. Future research will focus on optimizing computational
efficiency and exploring its potential in other high-precision segmentation
tasks.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 00:00:08 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"An",
"Tai",
""
],
[
"Huang",
"Weiqiang",
""
],
[
"Xu",
"Da",
""
],
[
"He",
"Qingyuan",
""
],
[
"Hu",
"Jiacheng",
""
],
[
"Lou",
"Yujia",
""
]
] | TITLE: A Deep Learning Framework for Boundary-Aware Semantic Segmentation
ABSTRACT: As a fundamental task in computer vision, semantic segmentation is widely
applied in fields such as autonomous driving, remote sensing image analysis,
and medical image processing. In recent years, Transformer-based segmentation
methods have demonstrated strong performance in global feature modeling.
However, they still struggle with blurred target boundaries and insufficient
recognition of small targets. To address these issues, this study proposes a
Mask2Former-based semantic segmentation algorithm incorporating a boundary
enhancement feature bridging module (BEFBM). The goal is to improve target
boundary accuracy and segmentation consistency. Built upon the Mask2Former
framework, this method constructs a boundary-aware feature map and introduces a
feature bridging mechanism. This enables effective cross-scale feature fusion,
enhancing the model's ability to focus on target boundaries. Experiments on the
Cityscapes dataset demonstrate that, compared to mainstream segmentation
methods, the proposed approach achieves significant improvements in metrics
such as mIOU, mDICE, and mRecall. It also exhibits superior boundary retention
in complex scenes. Visual analysis further confirms the model's advantages in
fine-grained regions. Future research will focus on optimizing computational
efficiency and exploring its potential in other high-precision segmentation
tasks.
|
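Boundary awareness of the kind discussed above is often implemented as an auxiliary, boundary-weighted loss. The sketch below derives a boundary mask from the ground-truth labels with a max-pooling erosion trick and up-weights the cross-entropy there; it illustrates the general idea, not the paper's BEFBM module.

```python
# Boundary-weighted cross-entropy for semantic segmentation.
import torch
import torch.nn.functional as F

def boundary_weighted_ce(logits, labels, boundary_weight=3.0, kernel=3):
    """logits: (B, C, H, W); labels: (B, H, W) integer class map."""
    onehot = F.one_hot(labels, logits.shape[1]).permute(0, 3, 1, 2).float()
    pad = kernel // 2
    eroded = -F.max_pool2d(-onehot, kernel, stride=1, padding=pad)      # erosion
    boundary = ((onehot - eroded).sum(1) > 0).float()                   # 1 on class edges
    weights = 1.0 + boundary_weight * boundary
    ce = F.cross_entropy(logits, labels, reduction="none")              # per-pixel loss
    return (weights * ce).mean()

loss = boundary_weighted_ce(torch.randn(2, 19, 64, 64), torch.randint(0, 19, (2, 64, 64)))
```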
2503.22052 | Jan Hurtado | Jan Hurtado, Joao P. Maia, Cesar A. Sierra-Franco, and Alberto Raposo | Improving the generalization of deep learning models in the segmentation
of mammography images | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Mammography stands as the main screening method for detecting breast cancer
early, enhancing treatment success rates. The segmentation of landmark
structures in mammography images can aid medical assessment in the
evaluation of cancer risk and image acquisition adequacy. We introduce a
series of data-centric strategies aimed at enriching the training data for deep
learning-based segmentation of landmark structures. Our approach involves
augmenting the training samples through annotation-guided image intensity
manipulation and style transfer to achieve better generalization than standard
training procedures. These augmentations are applied in a balanced manner to
ensure the model learns to process a diverse range of images generated by
different vendor equipment while retaining its efficacy on the original data.
We present extensive numerical and visual results that demonstrate the superior
generalization capabilities of our methods when compared to the standard
training. For this evaluation, we consider a large dataset that includes
mammography images generated by different vendor equipments. Further, we
present complementary results that show both the strengths and limitations of
our methods across various scenarios. The accuracy and robustness demonstrated
in the experiments suggest that our method is well-suited for integration into
clinical practice.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 00:11:00 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hurtado",
"Jan",
""
],
[
"Maia",
"Joao P.",
""
],
[
"Sierra-Franco",
"Cesar A.",
""
],
[
"Raposo",
"Alberto",
""
]
] | TITLE: Improving the generalization of deep learning models in the segmentation
of mammography images
ABSTRACT: Mammography stands as the main screening method for detecting breast cancer
early, enhancing treatment success rates. The segmentation of landmark
structures in mammography images can aid medical assessment in the
evaluation of cancer risk and image acquisition adequacy. We introduce a
series of data-centric strategies aimed at enriching the training data for deep
learning-based segmentation of landmark structures. Our approach involves
augmenting the training samples through annotation-guided image intensity
manipulation and style transfer to achieve better generalization than standard
training procedures. These augmentations are applied in a balanced manner to
ensure the model learns to process a diverse range of images generated by
different vendor equipment while retaining its efficacy on the original data.
We present extensive numerical and visual results that demonstrate the superior
generalization capabilities of our methods when compared to the standard
training. For this evaluation, we consider a large dataset that includes
mammography images generated by different vendor equipment. Further, we
present complementary results that show both the strengths and limitations of
our methods across various scenarios. The accuracy and robustness demonstrated
in the experiments suggest that our method is well-suited for integration into
clinical practice.
|
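A minimal sketch of the balanced intensity-manipulation part of the augmentation strategy above; the transform list, parameter ranges, and uniform sampling are illustrative assumptions, and the annotation-guided manipulation and style transfer described in the abstract are not reproduced here.

```python
# Balanced intensity augmentation: each image gets one uniformly sampled manipulation.
import numpy as np

def random_intensity_augment(img, rng=np.random.default_rng()):
    """img: float array in [0, 1] (a mammogram); returns one augmented copy."""
    choice = rng.integers(3)                        # balanced choice of manipulation
    if choice == 0:                                 # gamma adjustment
        return np.clip(img ** rng.uniform(0.6, 1.6), 0, 1)
    if choice == 1:                                 # linear contrast stretch
        lo, hi = np.quantile(img, [0.02, 0.98])
        return np.clip((img - lo) / max(hi - lo, 1e-6), 0, 1)
    noise = rng.normal(0, 0.01, img.shape)          # mild noise, mimics detector variation
    return np.clip(img + noise, 0, 1)

augmented = random_intensity_augment(np.random.rand(256, 256))
```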
2503.22060 | Ukcheol Shin | Ukcheol Shin, Jinsun Park | Deep Depth Estimation from Thermal Image: Dataset, Benchmark, and
Challenges | MS^2 dataset:
https://sites.google.com/view/multi-spectral-stereo-dataset, Source code:
https://github.com/UkcheolShin/SupDepth4Thermal | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Achieving robust and accurate spatial perception under adverse weather and
lighting conditions is crucial for the high-level autonomy of self-driving
vehicles and robots. However, existing perception algorithms relying on the
visible spectrum are highly affected by weather and lighting conditions. A
long-wave infrared camera (i.e., thermal imaging camera) can be a potential
solution to achieve high-level robustness. However, the absence of large-scale
datasets and standardized benchmarks remains a significant bottleneck to
progress in active research for robust visual perception from thermal images.
To this end, this manuscript provides a large-scale Multi-Spectral Stereo
(MS$^2$) dataset that consists of stereo RGB, stereo NIR, stereo thermal,
stereo LiDAR data, and GNSS/IMU information along with semi-dense depth ground
truth. MS$^2$ dataset includes 162K synchronized multi-modal data pairs
captured across diverse locations (e.g., urban city, residential area, campus,
and highway road) at different times (e.g., morning, daytime, and nighttime)
and under various weather conditions (e.g., clear-sky, cloudy, and rainy).
Secondly, we conduct a thorough evaluation of monocular and stereo depth
estimation networks across RGB, NIR, and thermal modalities to establish
standardized benchmark results on MS$^2$ depth test sets (e.g., day, night, and
rainy). Lastly, we provide in-depth analyses and discuss the challenges
revealed by the benchmark results, such as the performance variability for each
modality under adverse conditions, domain shift between different sensor
modalities, and potential research direction for thermal perception. Our
dataset and source code are publicly available at
https://sites.google.com/view/multi-spectral-stereo-dataset and
https://github.com/UkcheolShin/SupDepth4Thermal.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 00:46:55 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Shin",
"Ukcheol",
""
],
[
"Park",
"Jinsun",
""
]
] | TITLE: Deep Depth Estimation from Thermal Image: Dataset, Benchmark, and
Challenges
ABSTRACT: Achieving robust and accurate spatial perception under adverse weather and
lighting conditions is crucial for the high-level autonomy of self-driving
vehicles and robots. However, existing perception algorithms relying on the
visible spectrum are highly affected by weather and lighting conditions. A
long-wave infrared camera (i.e., thermal imaging camera) can be a potential
solution to achieve high-level robustness. However, the absence of large-scale
datasets and standardized benchmarks remains a significant bottleneck to
progress in active research for robust visual perception from thermal images.
To this end, this manuscript provides a large-scale Multi-Spectral Stereo
(MS$^2$) dataset that consists of stereo RGB, stereo NIR, stereo thermal,
stereo LiDAR data, and GNSS/IMU information along with semi-dense depth ground
truth. MS$^2$ dataset includes 162K synchronized multi-modal data pairs
captured across diverse locations (e.g., urban city, residential area, campus,
and highway road) at different times (e.g., morning, daytime, and nighttime)
and under various weather conditions (e.g., clear-sky, cloudy, and rainy).
Secondly, we conduct a thorough evaluation of monocular and stereo depth
estimation networks across RGB, NIR, and thermal modalities to establish
standardized benchmark results on MS$^2$ depth test sets (e.g., day, night, and
rainy). Lastly, we provide in-depth analyses and discuss the challenges
revealed by the benchmark results, such as the performance variability for each
modality under adverse conditions, domain shift between different sensor
modalities, and potential research direction for thermal perception. Our
dataset and source code are publicly available at
https://sites.google.com/view/multi-spectral-stereo-dataset and
https://github.com/UkcheolShin/SupDepth4Thermal.
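For readers reproducing such a depth benchmark, a generic sketch of the standard monocular-depth error metrics (AbsRel, RMSE, delta < 1.25) is shown below; the depth-range clipping values are assumptions, not the MS$^2$ evaluation protocol.

```python
# Generic monocular-depth error metrics (not the MS^2 evaluation code).
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, min_d=0.5, max_d=80.0):
    """AbsRel, RMSE and the delta<1.25 accuracy on valid ground-truth pixels."""
    mask = (gt > min_d) & (gt < max_d)
    pred, gt = pred[mask], gt[mask]
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    delta1 = float(np.mean(np.maximum(pred / gt, gt / pred) < 1.25))
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}

if __name__ == "__main__":
    gt = np.random.uniform(1.0, 60.0, size=(192, 640))
    pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)
    print(depth_metrics(pred, gt))
```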
|
2503.22069 | Ekansh Chauhan | Ekansh Chauhan, Anila Sharma, Amit Sharma, Vikas Nishadham, Asha
Ghughtyal, Ankur Kumar, Gurudutt Gupta, Anurag Mehta, C.V. Jawahar, P.K.
Vinod | Contrasting Low and High-Resolution Features for HER2 Scoring using Deep
Learning | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Breast cancer, the most common malignancy among women, requires precise
detection and classification for effective treatment. Immunohistochemistry
(IHC) biomarkers like HER2, ER, and PR are critical for identifying breast
cancer subtypes. However, traditional IHC classification relies on
pathologists' expertise, making it labor-intensive and subject to significant
inter-observer variability. To address these challenges, this study introduces
the India Pathology Breast Cancer Dataset (IPD-Breast), comprising 1,272 IHC
slides (HER2, ER, and PR) aimed at automating receptor status classification.
The primary focus is on developing predictive models for HER2 3-way
classification (0, Low, High) to enhance prognosis. Evaluation of multiple deep
learning models revealed that an end-to-end ConvNeXt network utilizing
low-resolution IHC images achieved an AUC, F1, and accuracy of 91.79%, 83.52%,
and 83.56%, respectively, for 3-way classification, outperforming patch-based
methods by over 5.35% in F1 score. This study highlights the potential of
simple yet effective deep learning techniques to significantly improve accuracy
and reproducibility in breast cancer classification, supporting their
integration into clinical workflows for better patient outcomes.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 01:24:08 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Chauhan",
"Ekansh",
""
],
[
"Sharma",
"Anila",
""
],
[
"Sharma",
"Amit",
""
],
[
"Nishadham",
"Vikas",
""
],
[
"Ghughtyal",
"Asha",
""
],
[
"Kumar",
"Ankur",
""
],
[
"Gupta",
"Gurudutt",
""
],
[
"Mehta",
"Anurag",
""
],
[
"Jawahar",
"C. V.",
""
],
[
"Vinod",
"P. K.",
""
]
] | TITLE: Contrasting Low and High-Resolution Features for HER2 Scoring using Deep
Learning
ABSTRACT: Breast cancer, the most common malignancy among women, requires precise
detection and classification for effective treatment. Immunohistochemistry
(IHC) biomarkers like HER2, ER, and PR are critical for identifying breast
cancer subtypes. However, traditional IHC classification relies on
pathologists' expertise, making it labor-intensive and subject to significant
inter-observer variability. To address these challenges, this study introduces
the India Pathology Breast Cancer Dataset (IPD-Breast), comprising 1,272 IHC
slides (HER2, ER, and PR) aimed at automating receptor status classification.
The primary focus is on developing predictive models for HER2 3-way
classification (0, Low, High) to enhance prognosis. Evaluation of multiple deep
learning models revealed that an end-to-end ConvNeXt network utilizing
low-resolution IHC images achieved an AUC, F1, and accuracy of 91.79%, 83.52%,
and 83.56%, respectively, for 3-way classification, outperforming patch-based
methods by over 5.35% in F1 score. This study highlights the potential of
simple yet effective deep learning techniques to significantly improve accuracy
and reproducibility in breast cancer classification, supporting their
integration into clinical workflows for better patient outcomes.
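A minimal sketch of a 3-way (0 / Low / High) classifier on a ConvNeXt backbone, using torchvision's ConvNeXt-Tiny as a stand-in; the exact backbone variant, input resolution, and pretraining used in the paper are not specified here.

```python
# Illustrative 3-way HER2 classifier on torchvision's ConvNeXt-Tiny backbone.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny

def build_her2_model(num_classes: int = 3) -> nn.Module:
    model = convnext_tiny(weights=None)               # or ImageNet weights
    in_features = model.classifier[2].in_features
    model.classifier[2] = nn.Linear(in_features, num_classes)  # 0 / Low / High
    return model

if __name__ == "__main__":
    model = build_her2_model()
    x = torch.randn(2, 3, 224, 224)                   # low-resolution IHC crops
    print(model(x).shape)                             # torch.Size([2, 3])
```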
|
2503.22079 | Mengmeng Jing | Kunshan Yang, Wenwei Luo, Yuguo Hu, Jiafu Yan, Mengmeng Jing and Lin
Zuo | A Semantic-Enhanced Heterogeneous Graph Learning Method for Flexible
Objects Recognition | Accepted by ICME 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Flexible objects recognition remains a significant challenge due to their
inherently diverse shapes and sizes, translucent attributes, and subtle
inter-class differences. Graph-based models, such as graph convolution networks
and graph vision models, are promising in flexible objects recognition due to
their ability to capture variable relations within the flexible objects.
These methods, however, often focus on global visual relationships or fail to
align semantic and visual information. To alleviate these limitations, we
propose a semantic-enhanced heterogeneous graph learning method. First, an
adaptive scanning module is employed to extract discriminative semantic
context, facilitating the matching of flexible objects with varying shapes and
sizes while aligning semantic and visual nodes to enhance cross-modal feature
correlation. Second, a heterogeneous graph generation module aggregates global
visual and local semantic node features, improving the recognition of flexible
objects. Additionally, we introduce the FSCW, a large-scale flexible dataset
curated from existing sources. We validate our method through extensive
experiments on flexible datasets (FDA and FSCW), and challenge benchmarks
(CIFAR-100 and ImageNet-Hard), demonstrating competitive performance.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 01:55:43 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yang",
"Kunshan",
""
],
[
"Luo",
"Wenwei",
""
],
[
"Hu",
"Yuguo",
""
],
[
"Yan",
"Jiafu",
""
],
[
"Jing",
"Mengmeng",
""
],
[
"Zuo",
"Lin",
""
]
] | TITLE: A Semantic-Enhanced Heterogeneous Graph Learning Method for Flexible
Objects Recognition
ABSTRACT: Flexible objects recognition remains a significant challenge due to their
inherently diverse shapes and sizes, translucent attributes, and subtle
inter-class differences. Graph-based models, such as graph convolution networks
and graph vision models, are promising in flexible objects recognition due to
their ability to capture variable relations within the flexible objects.
These methods, however, often focus on global visual relationships or fail to
align semantic and visual information. To alleviate these limitations, we
propose a semantic-enhanced heterogeneous graph learning method. First, an
adaptive scanning module is employed to extract discriminative semantic
context, facilitating the matching of flexible objects with varying shapes and
sizes while aligning semantic and visual nodes to enhance cross-modal feature
correlation. Second, a heterogeneous graph generation module aggregates global
visual and local semantic node features, improving the recognition of flexible
objects. Additionally, we introduce the FSCW, a large-scale flexible dataset
curated from existing sources. We validate our method through extensive
experiments on flexible datasets (FDA and FSCW), and challenge benchmarks
(CIFAR-100 and ImageNet-Hard), demonstrating competitive performance.
|
2503.22081 | Ziyue Huang | Ziyue Huang, Hongxi Yan, Qiqi Zhan, Shuai Yang, Mingming Zhang,
Chenkai Zhang, YiMing Lei, Zeming Liu, Qingjie Liu and Yunhong Wang | A Survey on Remote Sensing Foundation Models: From Vision to
Multimodality | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement of remote sensing foundation models, particularly
vision and multimodal models, has significantly enhanced the capabilities of
intelligent geospatial data interpretation. These models combine various data
modalities, such as optical, radar, and LiDAR imagery, with textual and
geographic information, enabling more comprehensive analysis and understanding
of remote sensing data. The integration of multiple modalities allows for
improved performance in tasks like object detection, land cover classification,
and change detection, which are often challenged by the complex and
heterogeneous nature of remote sensing data. However, despite these
advancements, several challenges remain. The diversity in data types, the need
for large-scale annotated datasets, and the complexity of multimodal fusion
techniques pose significant obstacles to the effective deployment of these
models. Moreover, the computational demands of training and fine-tuning
multimodal models require significant resources, further complicating their
practical application in remote sensing image interpretation tasks. This paper
provides a comprehensive review of the state-of-the-art in vision and
multimodal foundation models for remote sensing, focusing on their
architecture, training methods, datasets and application scenarios. We discuss
the key challenges these models face, such as data alignment, cross-modal
transfer learning, and scalability, while also identifying emerging research
directions aimed at overcoming these limitations. Our goal is to provide a
clear understanding of the current landscape of remote sensing foundation
models and inspire future research that can push the boundaries of what these
models can achieve in real-world applications. The list of resources collected
by the paper can be found at
https://github.com/IRIP-BUAA/A-Review-for-remote-sensing-vision-language-models.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 01:57:35 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Huang",
"Ziyue",
""
],
[
"Yan",
"Hongxi",
""
],
[
"Zhan",
"Qiqi",
""
],
[
"Yang",
"Shuai",
""
],
[
"Zhang",
"Mingming",
""
],
[
"Zhang",
"Chenkai",
""
],
[
"Lei",
"YiMing",
""
],
[
"Liu",
"Zeming",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Yunhong",
""
]
] | TITLE: A Survey on Remote Sensing Foundation Models: From Vision to
Multimodality
ABSTRACT: The rapid advancement of remote sensing foundation models, particularly
vision and multimodal models, has significantly enhanced the capabilities of
intelligent geospatial data interpretation. These models combine various data
modalities, such as optical, radar, and LiDAR imagery, with textual and
geographic information, enabling more comprehensive analysis and understanding
of remote sensing data. The integration of multiple modalities allows for
improved performance in tasks like object detection, land cover classification,
and change detection, which are often challenged by the complex and
heterogeneous nature of remote sensing data. However, despite these
advancements, several challenges remain. The diversity in data types, the need
for large-scale annotated datasets, and the complexity of multimodal fusion
techniques pose significant obstacles to the effective deployment of these
models. Moreover, the computational demands of training and fine-tuning
multimodal models require significant resources, further complicating their
practical application in remote sensing image interpretation tasks. This paper
provides a comprehensive review of the state-of-the-art in vision and
multimodal foundation models for remote sensing, focusing on their
architecture, training methods, datasets and application scenarios. We discuss
the key challenges these models face, such as data alignment, cross-modal
transfer learning, and scalability, while also identifying emerging research
directions aimed at overcoming these limitations. Our goal is to provide a
clear understanding of the current landscape of remote sensing foundation
models and inspire future research that can push the boundaries of what these
models can achieve in real-world applications. The list of resources collected
by the paper can be found at
https://github.com/IRIP-BUAA/A-Review-for-remote-sensing-vision-language-models.
|
2503.22087 | Seokha Moon | Seokha Moon, Janghyun Baek, Giseop Kim, Jinkyu Kim, Sunwook Choi | Mitigating Trade-off: Stream and Query-guided Aggregation for Efficient
and Effective 3D Occupancy Prediction | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | 3D occupancy prediction has emerged as a key perception task for autonomous
driving, as it reconstructs 3D environments to provide a comprehensive scene
understanding. Recent studies focus on integrating spatiotemporal information
obtained from past observations to improve prediction accuracy, using a
multi-frame fusion approach that processes multiple past frames together.
However, these methods struggle with a trade-off between efficiency and
accuracy, which significantly limits their practicality. To mitigate this
trade-off, we propose StreamOcc, a novel framework that aggregates
spatio-temporal information in a stream-based manner. StreamOcc consists of two
key components: (i) Stream-based Voxel Aggregation, which effectively
accumulates past observations while minimizing computational costs, and (ii)
Query-guided Aggregation, which recurrently aggregates instance-level features
of dynamic objects into corresponding voxel features, refining fine-grained
details of dynamic objects. Experiments on the Occ3D-nuScenes dataset show that
StreamOcc achieves state-of-the-art performance in real-time settings, while
reducing memory usage by more than 50% compared to previous methods.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 02:05:53 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Moon",
"Seokha",
""
],
[
"Baek",
"Janghyun",
""
],
[
"Kim",
"Giseop",
""
],
[
"Kim",
"Jinkyu",
""
],
[
"Choi",
"Sunwook",
""
]
] | TITLE: Mitigating Trade-off: Stream and Query-guided Aggregation for Efficient
and Effective 3D Occupancy Prediction
ABSTRACT: 3D occupancy prediction has emerged as a key perception task for autonomous
driving, as it reconstructs 3D environments to provide a comprehensive scene
understanding. Recent studies focus on integrating spatiotemporal information
obtained from past observations to improve prediction accuracy, using a
multi-frame fusion approach that processes multiple past frames together.
However, these methods struggle with a trade-off between efficiency and
accuracy, which significantly limits their practicality. To mitigate this
trade-off, we propose StreamOcc, a novel framework that aggregates
spatio-temporal information in a stream-based manner. StreamOcc consists of two
key components: (i) Stream-based Voxel Aggregation, which effectively
accumulates past observations while minimizing computational costs, and (ii)
Query-guided Aggregation, which recurrently aggregates instance-level features
of dynamic objects into corresponding voxel features, refining fine-grained
details of dynamic objects. Experiments on the Occ3D-nuScenes dataset show that
StreamOcc achieves state-of-the-art performance in real-time settings, while
reducing memory usage by more than 50% compared to previous methods.
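A heavily simplified stand-in for stream-based voxel aggregation as described above: a persistent voxel memory is updated frame by frame rather than re-fusing a window of past frames. The exponential-moving-average update and all shapes are assumptions for illustration, not StreamOcc's actual modules.

```python
# Simplified stand-in for stream-based voxel aggregation: a persistent voxel
# memory blended with each incoming frame (EMA update assumed for illustration).
import torch

class StreamingVoxelBuffer:
    def __init__(self, grid=(100, 100, 8), channels=16, momentum=0.8):
        self.state = torch.zeros(channels, *grid)     # persistent voxel memory
        self.momentum = momentum

    def update(self, current_voxels: torch.Tensor) -> torch.Tensor:
        """Fuse the current frame's voxel features into the running state
        instead of re-fusing a whole window of past frames at every step."""
        self.state = self.momentum * self.state + (1 - self.momentum) * current_voxels
        return self.state

if __name__ == "__main__":
    buf = StreamingVoxelBuffer()
    for _ in range(3):                                # three streamed frames
        fused = buf.update(torch.randn(16, 100, 100, 8))
    print(fused.shape)
```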
|
2503.22092 | Dina Albassam | Dina Albassam, Adam Cross, and Chengxiang Zhai | Leveraging LLMs for Predicting Unknown Diagnoses from Clinical Notes | 19 pages, 3 figures, 5 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Electronic Health Records (EHRs) often lack explicit links between
medications and diagnoses, making clinical decision-making and research more
difficult. Even when links exist, diagnosis lists may be incomplete, especially
during early patient visits. Discharge summaries tend to provide more complete
information, which can help infer accurate diagnoses, especially with the help
of large language models (LLMs). This study investigates whether LLMs can
predict implicitly mentioned diagnoses from clinical notes and link them to
corresponding medications. We address two research questions: (1) Does majority
voting across diverse LLM configurations outperform the best single
configuration in diagnosis prediction? (2) How sensitive is majority voting
accuracy to LLM hyperparameters such as temperature, top-p, and summary length?
To evaluate, we created a new dataset of 240 expert-annotated
medication-diagnosis pairs from 20 MIMIC-IV notes. Using GPT-3.5 Turbo, we ran
18 prompting configurations across short and long summary lengths, generating
8568 test cases. Results show that majority voting achieved 75 percent
accuracy, outperforming the best single configuration at 66 percent. No single
hyperparameter setting dominated, but combining deterministic, balanced, and
exploratory strategies improved performance. Shorter summaries generally led to
higher accuracy. In conclusion, ensemble-style majority voting with diverse LLM
configurations improves diagnosis prediction in EHRs and offers a promising
method to link medications and diagnoses in clinical texts.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 02:15:57 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Albassam",
"Dina",
""
],
[
"Cross",
"Adam",
""
],
[
"Zhai",
"Chengxiang",
""
]
] | TITLE: Leveraging LLMs for Predicting Unknown Diagnoses from Clinical Notes
ABSTRACT: Electronic Health Records (EHRs) often lack explicit links between
medications and diagnoses, making clinical decision-making and research more
difficult. Even when links exist, diagnosis lists may be incomplete, especially
during early patient visits. Discharge summaries tend to provide more complete
information, which can help infer accurate diagnoses, especially with the help
of large language models (LLMs). This study investigates whether LLMs can
predict implicitly mentioned diagnoses from clinical notes and link them to
corresponding medications. We address two research questions: (1) Does majority
voting across diverse LLM configurations outperform the best single
configuration in diagnosis prediction? (2) How sensitive is majority voting
accuracy to LLM hyperparameters such as temperature, top-p, and summary length?
To evaluate, we created a new dataset of 240 expert-annotated
medication-diagnosis pairs from 20 MIMIC-IV notes. Using GPT-3.5 Turbo, we ran
18 prompting configurations across short and long summary lengths, generating
8568 test cases. Results show that majority voting achieved 75 percent
accuracy, outperforming the best single configuration at 66 percent. No single
hyperparameter setting dominated, but combining deterministic, balanced, and
exploratory strategies improved performance. Shorter summaries generally led to
higher accuracy. In conclusion, ensemble-style majority voting with diverse LLM
configurations improves diagnosis prediction in EHRs and offers a promising
method to link medications and diagnoses in clinical texts.
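A minimal sketch of the ensemble-style majority-voting step over diagnosis predictions from multiple LLM prompting configurations; the example diagnoses below are invented for illustration.

```python
# Majority voting over diagnosis predictions from several LLM configurations.
from collections import Counter

def majority_vote(predictions: list) -> str:
    """Return the most frequent predicted diagnosis (case-insensitive)."""
    counts = Counter(p.strip().lower() for p in predictions)
    label, _ = counts.most_common(1)[0]
    return label

if __name__ == "__main__":
    # e.g., six configurations differing in temperature, top-p and summary length
    preds = ["Type 2 diabetes", "type 2 diabetes", "hypertension",
             "Type 2 Diabetes", "type 2 diabetes", "hyperlipidemia"]
    print(majority_vote(preds))                       # -> "type 2 diabetes"
```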
|
2503.22093 | Ximing Wen | Ximing Wen, Mallika Mainali, Anik Sen | How Well Can Vison-Language Models Understand Humans' Intention? An
Open-ended Theory of Mind Question Evaluation Benchmark | 2 pages, accepted by ToM@AAAI25 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Vision Language Models (VLMs) have demonstrated strong reasoning capabilities
in Visual Question Answering (VQA) tasks. However, their ability to perform
Theory of Mind (ToM) tasks such as accurately inferring human intentions,
beliefs, and other mental states remains underexplored. In this work, we
propose an open-ended question framework to comprehensively evaluate VLMs'
performance across diverse categories of ToM tasks. We curated and annotated a
benchmark dataset composed of 30 images. We then assessed the performance of
four VLMs of varying sizes on this dataset. Our experimental results show that
the GPT-4 model outperformed all others, with only one smaller model,
GPT-4o-mini, achieving comparable performance. Additionally, we observed that
VLMs often struggle to accurately infer intentions in complex scenarios such as
bullying or cheating. Moreover, our findings also reveal that smaller models
can sometimes infer correct intentions despite relying on incorrect visual
cues.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 02:26:32 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wen",
"Ximing",
""
],
[
"Mainali",
"Mallika",
""
],
[
"Sen",
"Anik",
""
]
] | TITLE: How Well Can Vison-Language Models Understand Humans' Intention? An
Open-ended Theory of Mind Question Evaluation Benchmark
ABSTRACT: Vision Language Models (VLMs) have demonstrated strong reasoning capabilities
in Visual Question Answering (VQA) tasks. However, their ability to perform
Theory of Mind (ToM) tasks such as accurately inferring human intentions,
beliefs, and other mental states remains underexplored. In this work, we
propose an open-ended question framework to comprehensively evaluate VLMs'
performance across diverse categories of ToM tasks. We curated and annotated a
benchmark dataset composed of 30 images. We then assessed the performance of
four VLMs of varying sizes on this dataset. Our experimental results show that
the GPT-4 model outperformed all others, with only one smaller model,
GPT-4o-mini, achieving comparable performance. Additionally, we observed that
VLMs often struggle to accurately infer intentions in complex scenarios such as
bullying or cheating. Moreover, our findings also reveal that smaller models
can sometimes infer correct intentions despite relying on incorrect visual
cues.
|
2503.22097 | Haoyan Xu | Haoyan Xu, Zhengtao Yao, Yushun Dong, Ziyi Wang, Ryan A. Rossi,
Mengyuan Li, Yue Zhao | Few-Shot Graph Out-of-Distribution Detection with LLMs | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Existing methods for graph out-of-distribution (OOD) detection typically
depend on training graph neural network (GNN) classifiers using a substantial
amount of labeled in-distribution (ID) data. However, acquiring high-quality
labeled nodes in text-attributed graphs (TAGs) is challenging and costly due to
their complex textual and structural characteristics. Large language models
(LLMs), known for their powerful zero-shot capabilities in textual tasks, show
promise but struggle to naturally capture the critical structural information
inherent to TAGs, limiting their direct effectiveness.
To address these challenges, we propose LLM-GOOD, a general framework that
effectively combines the strengths of LLMs and GNNs to enhance data efficiency
in graph OOD detection. Specifically, we first leverage LLMs' strong zero-shot
capabilities to filter out likely OOD nodes, significantly reducing the human
annotation burden. To minimize the usage and cost of the LLM, we employ it only
to annotate a small subset of unlabeled nodes. We then train a lightweight GNN
filter using these noisy labels, enabling efficient predictions of ID status
for all other unlabeled nodes by leveraging both textual and structural
information. After obtaining node embeddings from the GNN filter, we can apply
informativeness-based methods to select the most valuable nodes for precise
human annotation. Finally, we train the target ID classifier using these
accurately annotated ID nodes. Extensive experiments on four real-world TAG
datasets demonstrate that LLM-GOOD significantly reduces human annotation costs
and outperforms state-of-the-art baselines in terms of both ID classification
accuracy and OOD detection performance.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 02:37:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xu",
"Haoyan",
""
],
[
"Yao",
"Zhengtao",
""
],
[
"Dong",
"Yushun",
""
],
[
"Wang",
"Ziyi",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Li",
"Mengyuan",
""
],
[
"Zhao",
"Yue",
""
]
] | TITLE: Few-Shot Graph Out-of-Distribution Detection with LLMs
ABSTRACT: Existing methods for graph out-of-distribution (OOD) detection typically
depend on training graph neural network (GNN) classifiers using a substantial
amount of labeled in-distribution (ID) data. However, acquiring high-quality
labeled nodes in text-attributed graphs (TAGs) is challenging and costly due to
their complex textual and structural characteristics. Large language models
(LLMs), known for their powerful zero-shot capabilities in textual tasks, show
promise but struggle to naturally capture the critical structural information
inherent to TAGs, limiting their direct effectiveness.
To address these challenges, we propose LLM-GOOD, a general framework that
effectively combines the strengths of LLMs and GNNs to enhance data efficiency
in graph OOD detection. Specifically, we first leverage LLMs' strong zero-shot
capabilities to filter out likely OOD nodes, significantly reducing the human
annotation burden. To minimize the usage and cost of the LLM, we employ it only
to annotate a small subset of unlabeled nodes. We then train a lightweight GNN
filter using these noisy labels, enabling efficient predictions of ID status
for all other unlabeled nodes by leveraging both textual and structural
information. After obtaining node embeddings from the GNN filter, we can apply
informativeness-based methods to select the most valuable nodes for precise
human annotation. Finally, we train the target ID classifier using these
accurately annotated ID nodes. Extensive experiments on four real-world TAG
datasets demonstrate that LLM-GOOD significantly reduces human annotation costs
and outperforms state-of-the-art baselines in terms of both ID classification
accuracy and OOD detection performance.
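A small sketch of an informativeness-based selection step of the kind described above: rank unlabeled nodes by the predictive entropy of the noisy-label-trained GNN filter and send the most uncertain ones for precise human annotation. This is generic code, not the LLM-GOOD implementation.

```python
# Generic entropy-based node selection from the GNN filter's predictions.
import torch
import torch.nn.functional as F

def select_informative_nodes(logits: torch.Tensor, budget: int) -> torch.Tensor:
    """logits: (num_nodes, num_classes) from the lightweight GNN filter.
    Returns indices of the most uncertain nodes to send for human annotation."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return torch.topk(entropy, k=budget).indices

if __name__ == "__main__":
    fake_logits = torch.randn(1000, 5)                # 1000 nodes, 5 ID classes
    print(select_informative_nodes(fake_logits, budget=20))
```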
|
2503.22115 | Qimeng Liu | Yazhou Zhang, Qimeng Liu, Qiuchi Li, Peng Zhang, Jing Qin | Beyond Single-Sentence Prompts: Upgrading Value Alignment Benchmarks
with Dialogues and Stories | null | null | null | null | cs.CL cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating the value alignment of large language models (LLMs) has
traditionally relied on single-sentence adversarial prompts, which directly
probe models with ethically sensitive or controversial questions. However, with
the rapid advancements in AI safety techniques, models have become increasingly
adept at circumventing these straightforward tests, limiting their
effectiveness in revealing underlying biases and ethical stances. To address
this limitation, we propose an upgraded value alignment benchmark that moves
beyond single-sentence prompts by incorporating multi-turn dialogues and
narrative-based scenarios. This approach enhances the stealth and adversarial
nature of the evaluation, making it more robust against superficial safeguards
implemented in modern LLMs. We design and implement a dataset that includes
conversational traps and ethically ambiguous storytelling, systematically
assessing LLMs' responses in more nuanced and context-rich settings.
Experimental results demonstrate that this enhanced methodology can effectively
expose latent biases that remain undetected in traditional single-shot
evaluations. Our findings highlight the necessity of contextual and dynamic
testing for value alignment in LLMs, paving the way for more sophisticated and
realistic assessments of AI ethics and safety.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 03:31:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Yazhou",
""
],
[
"Liu",
"Qimeng",
""
],
[
"Li",
"Qiuchi",
""
],
[
"Zhang",
"Peng",
""
],
[
"Qin",
"Jing",
""
]
] | TITLE: Beyond Single-Sentence Prompts: Upgrading Value Alignment Benchmarks
with Dialogues and Stories
ABSTRACT: Evaluating the value alignment of large language models (LLMs) has
traditionally relied on single-sentence adversarial prompts, which directly
probe models with ethically sensitive or controversial questions. However, with
the rapid advancements in AI safety techniques, models have become increasingly
adept at circumventing these straightforward tests, limiting their
effectiveness in revealing underlying biases and ethical stances. To address
this limitation, we propose an upgraded value alignment benchmark that moves
beyond single-sentence prompts by incorporating multi-turn dialogues and
narrative-based scenarios. This approach enhances the stealth and adversarial
nature of the evaluation, making it more robust against superficial safeguards
implemented in modern LLMs. We design and implement a dataset that includes
conversational traps and ethically ambiguous storytelling, systematically
assessing LLMs' responses in more nuanced and context-rich settings.
Experimental results demonstrate that this enhanced methodology can effectively
expose latent biases that remain undetected in traditional single-shot
evaluations. Our findings highlight the necessity of contextual and dynamic
testing for value alignment in LLMs, paving the way for more sophisticated and
realistic assessments of AI ethics and safety.
|
2503.22120 | Protyay Dey | Protyay Dey and Rejoy Chakraborty and Abhilasha S. Jadhav and Kapil
Rana and Gaurav Sharma and Puneet Goyal | Camera Model Identification with SPAIR-Swin and Entropy based
Non-Homogeneous Patches | 10 pages, 5 figures | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Source camera model identification (SCMI) plays a pivotal role in image
forensics with applications including authenticity verification and copyright
protection. For identifying the camera model used to capture a given image, we
propose SPAIR-Swin, a novel model combining a modified spatial attention
mechanism and inverted residual block (SPAIR) with a Swin Transformer.
SPAIR-Swin effectively captures both global and local features, enabling robust
identification of artifacts such as noise patterns that are particularly
effective for SCMI. Additionally, unlike conventional methods focusing on
homogeneous patches, we propose a patch selection strategy for SCMI that
emphasizes high-entropy regions rich in patterns and textures. Extensive
evaluations on four benchmark SCMI datasets demonstrate that SPAIR-Swin
outperforms existing methods, achieving patch-level accuracies of 99.45%,
98.39%, 99.45%, and 97.46% and image-level accuracies of 99.87%, 99.32%, 100%,
and 98.61% on the Dresden, Vision, Forchheim, and Socrates datasets,
respectively. Our findings highlight that high-entropy patches, which contain
high-frequency information such as edge sharpness, noise, and compression
artifacts, are more favorable in improving SCMI accuracy. Code will be made
available upon request.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 03:47:28 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Dey",
"Protyay",
""
],
[
"Chakraborty",
"Rejoy",
""
],
[
"Jadhav",
"Abhilasha S.",
""
],
[
"Rana",
"Kapil",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Goyal",
"Puneet",
""
]
] | TITLE: Camera Model Identification with SPAIR-Swin and Entropy based
Non-Homogeneous Patches
ABSTRACT: Source camera model identification (SCMI) plays a pivotal role in image
forensics with applications including authenticity verification and copyright
protection. For identifying the camera model used to capture a given image, we
propose SPAIR-Swin, a novel model combining a modified spatial attention
mechanism and inverted residual block (SPAIR) with a Swin Transformer.
SPAIR-Swin effectively captures both global and local features, enabling robust
identification of artifacts such as noise patterns that are particularly
effective for SCMI. Additionally, unlike conventional methods focusing on
homogeneous patches, we propose a patch selection strategy for SCMI that
emphasizes high-entropy regions rich in patterns and textures. Extensive
evaluations on four benchmark SCMI datasets demonstrate that SPAIR-Swin
outperforms existing methods, achieving patch-level accuracies of 99.45%,
98.39%, 99.45%, and 97.46% and image-level accuracies of 99.87%, 99.32%, 100%,
and 98.61% on the Dresden, Vision, Forchheim, and Socrates datasets,
respectively. Our findings highlight that high-entropy patches, which contain
high-frequency information such as edge sharpness, noise, and compression
artifacts, are more favorable in improving SCMI accuracy. Code will be made
available upon request.
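A minimal sketch of entropy-based patch selection: tile the image, score each patch by its grey-level Shannon entropy, and keep the top-k highest-entropy patches. Patch size and k are arbitrary here, and the code is not the SPAIR-Swin pipeline itself.

```python
# Generic entropy-based patch selection (not the SPAIR-Swin pipeline).
import numpy as np

def patch_entropy(patch: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the patch's grey-level histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def top_entropy_patches(img: np.ndarray, patch: int = 64, k: int = 8):
    """Tile the image and keep the k patches with the highest entropy."""
    h, w = img.shape
    patches, scores = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch]
            patches.append(p)
            scores.append(patch_entropy(p))
    order = np.argsort(scores)[::-1][:k]
    return [patches[i] for i in order]

if __name__ == "__main__":
    gray = (np.random.rand(512, 512) * 255).astype(np.uint8)
    selected = top_entropy_patches(gray)
    print(len(selected), selected[0].shape)
```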
|
2503.22121 | Tharun Anand | Tharun Anand, Siva Sankar, Pravin Nair | Detecting Localized Deepfake Manipulations Using Action Unit-Guided
Video Representations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With rapid advancements in generative modeling, deepfake techniques are
increasingly narrowing the gap between real and synthetic videos, raising
serious privacy and security concerns. Beyond traditional face swapping and
reenactment, an emerging trend in recent state-of-the-art deepfake generation
methods involves localized edits such as subtle manipulations of specific
facial features like raising eyebrows, altering eye shapes, or modifying mouth
expressions. These fine-grained manipulations pose a significant challenge for
existing detection models, which struggle to capture such localized variations.
To the best of our knowledge, this work presents the first detection approach
explicitly designed to generalize to localized edits in deepfake videos by
leveraging spatiotemporal representations guided by facial action units. Our
method leverages a cross-attention-based fusion of representations learned from
pretext tasks like random masking and action unit detection, to create an
embedding that effectively encodes subtle, localized changes. Comprehensive
evaluations across multiple deepfake generation methods demonstrate that our
approach, despite being trained solely on the traditional FF+ dataset, sets a
new benchmark in detecting recent deepfake-generated videos with fine-grained
local edits, achieving a $20\%$ improvement in accuracy over current
state-of-the-art detection methods. Additionally, our method delivers
competitive performance on standard datasets, highlighting its robustness and
generalization across diverse types of local and global forgeries.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 03:49:00 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Anand",
"Tharun",
""
],
[
"Sankar",
"Siva",
""
],
[
"Nair",
"Pravin",
""
]
] | TITLE: Detecting Localized Deepfake Manipulations Using Action Unit-Guided
Video Representations
ABSTRACT: With rapid advancements in generative modeling, deepfake techniques are
increasingly narrowing the gap between real and synthetic videos, raising
serious privacy and security concerns. Beyond traditional face swapping and
reenactment, an emerging trend in recent state-of-the-art deepfake generation
methods involves localized edits such as subtle manipulations of specific
facial features like raising eyebrows, altering eye shapes, or modifying mouth
expressions. These fine-grained manipulations pose a significant challenge for
existing detection models, which struggle to capture such localized variations.
To the best of our knowledge, this work presents the first detection approach
explicitly designed to generalize to localized edits in deepfake videos by
leveraging spatiotemporal representations guided by facial action units. Our
method leverages a cross-attention-based fusion of representations learned from
pretext tasks like random masking and action unit detection, to create an
embedding that effectively encodes subtle, localized changes. Comprehensive
evaluations across multiple deepfake generation methods demonstrate that our
approach, despite being trained solely on the traditional FF+ dataset, sets a
new benchmark in detecting recent deepfake-generated videos with fine-grained
local edits, achieving a $20\%$ improvement in accuracy over current
state-of-the-art detection methods. Additionally, our method delivers
competitive performance on standard datasets, highlighting its robustness and
generalization across diverse types of local and global forgeries.
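A compact sketch of cross-attention-based fusion between two temporal video representations (e.g., masked-reconstruction features and action-unit features); the dimensions, head count, and residual-plus-norm wiring are assumptions, not the paper's architecture.

```python
# Illustrative cross-attention fusion of two temporal feature streams.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, masked_feats: torch.Tensor, au_feats: torch.Tensor):
        """masked_feats attend to action-unit features (both: B x T x dim)."""
        fused, _ = self.attn(query=masked_feats, key=au_feats, value=au_feats)
        return self.norm(masked_feats + fused)        # residual + layer norm

if __name__ == "__main__":
    fusion = CrossAttentionFusion()
    a, b = torch.randn(2, 16, 256), torch.randn(2, 16, 256)
    print(fusion(a, b).shape)                         # torch.Size([2, 16, 256])
```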
|
2503.22125 | Ivan Beleacov | Ivan Beleacov | Semantic segmentation for building houses from wooden cubes | 10 pages, 6 figures, 2 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Automated construction is one of the most promising areas that can improve
efficiency, reduce costs and minimize errors in the process of building
construction. In this paper, a comparative analysis of three neural network
models for semantic segmentation, U-Net(light), LinkNet and PSPNet, is
performed. Two specialized datasets with images of houses built from wooden
cubes were created for the experiments. The first dataset contains 4 classes
(background, foundation, walls, roof) and is designed for basic model
evaluation, while the second dataset includes 44 classes where each cube is
labeled as a separate object. The models were trained with the same
hyperparameters and their accuracy was evaluated using MeanIoU and F1 Score
metrics. According to the results obtained, U-Net(light) showed the best
performance with 78% MeanIoU and 87% F1 Score on the first dataset and 17% and
25% respectively on the second dataset. The poor results on the second dataset
are due to the limited amount of data, the complexity of the partitioning and
the imbalance of classes, making it difficult to accurately select individual
cubes. In addition, overtraining was observed in all experiments, manifested by
high accuracy on the training dataset and its significant decrease on the
validation dataset. The present work is the basis for the development of
algorithms for automatic generation of staged building plans, which can be
further scaled to design complete buildings. Future research is planned to
extend the datasets and apply methods to combat overfitting (L1/L2
regularization, Early Stopping). The next stage of work will be the development
of algorithms for automatic generation of a step-by-step plan for building
houses from cubes using manipulators. Index Terms: Deep Learning, Computer
vision, CNN, Semantic segmentation, Construction materials.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 03:58:12 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Beleacov",
"Ivan",
""
]
] | TITLE: Semantic segmentation for building houses from wooden cubes
ABSTRACT: Automated construction is one of the most promising areas that can improve
efficiency, reduce costs and minimize errors in the process of building
construction. In this paper, a comparative analysis of three neural network
models for semantic segmentation, U-Net(light), LinkNet and PSPNet, is
performed. Two specialized datasets with images of houses built from wooden
cubes were created for the experiments. The first dataset contains 4 classes
(background, foundation, walls, roof) and is designed for basic model
evaluation, while the second dataset includes 44 classes where each cube is
labeled as a separate object. The models were trained with the same
hyperparameters and their accuracy was evaluated using MeanIoU and F1 Score
metrics. According to the results obtained, U-Net(light) showed the best
performance with 78% MeanIoU and 87% F1 Score on the first dataset and 17% and
25% respectively on the second dataset. The poor results on the second dataset
are due to the limited amount of data, the complexity of the partitioning and
the imbalance of classes, making it difficult to accurately select individual
cubes. In addition, overtraining was observed in all experiments, manifested by
high accuracy on the training dataset and its significant decrease on the
validation dataset. The present work is the basis for the development of
algorithms for automatic generation of staged building plans, which can be
further scaled to design complete buildings. Future research is planned to
extend the datasets and apply methods to combat overfitting (L1/L2
regularization, Early Stopping). The next stage of work will be the development
of algorithms for automatic generation of a step-by-step plan for building
houses from cubes using manipulators. Index Terms: Deep Learning, Computer
vision, CNN, Semantic segmentation, Construction materials.
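A small generic sketch of the two reported metrics, MeanIoU and (macro) F1, computed from a confusion matrix over predicted and ground-truth class maps; it is not the authors' evaluation script.

```python
# Generic MeanIoU and macro-F1 from a confusion matrix (not the authors' script).
import numpy as np

def confusion_matrix(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    idx = gt.astype(int) * num_classes + pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes ** 2).reshape(
        num_classes, num_classes)

def mean_iou_and_f1(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    cm = confusion_matrix(pred, gt, num_classes).astype(float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp                          # predicted but wrong
    fn = cm.sum(axis=1) - tp                          # missed ground truth
    iou = tp / np.maximum(tp + fp + fn, 1e-9)
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-9)
    return iou.mean(), f1.mean()

if __name__ == "__main__":
    gt = np.random.randint(0, 4, size=(128, 128))     # 4-class first dataset
    pred = np.random.randint(0, 4, size=(128, 128))
    miou, f1 = mean_iou_and_f1(pred, gt, num_classes=4)
    print(f"MeanIoU={miou:.3f}  F1={f1:.3f}")
```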
|
2503.22132 | Kanta Tachibana | Toma Masaki and Kanta Tachibana | Long-Term Electricity Demand Prediction Using Non-negative Tensor
Factorization and Genetic Algorithm-Driven Temporal Modeling | 17 pages, 9 figures, 10 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study proposes a novel framework for long-term electricity demand
prediction based solely on historical consumption data, without relying on
external variables such as temperature or economic indicators. The method
combines Non-negative Tensor Factorization (NTF) to extract low-dimensional
temporal features from multi-way electricity usage data, with a Genetic
Algorithm that optimizes the hyperparameters of time series models applied to
the latent annual factors. We model the dataset as a third-order tensor
spanning electric utilities, industrial sectors, and years, and apply canonical
polyadic decomposition under non-negativity constraints. The annual component
is forecasted using autoregressive models, with hyperparameter tuning guided by
the prediction error or reconstruction accuracy on a validation set.
Comparative experiments using real-world electricity data from Japan
demonstrate that the proposed method achieves lower mean squared error than
baseline approaches without tensor decomposition or evolutionary optimization.
Moreover, we find that reducing the model's degrees of freedom via tensor
decomposition improves generalization performance, and that initialization
sensitivity in NTF can be mitigated through multiple runs or ensemble
strategies. These findings suggest that the proposed framework offers an
interpretable, flexible, and scalable approach to long-term electricity demand
prediction and can be extended to other structured time series forecasting
tasks.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:05:00 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Masaki",
"Toma",
""
],
[
"Tachibana",
"Kanta",
""
]
] | TITLE: Long-Term Electricity Demand Prediction Using Non-negative Tensor
Factorization and Genetic Algorithm-Driven Temporal Modeling
ABSTRACT: This study proposes a novel framework for long-term electricity demand
prediction based solely on historical consumption data, without relying on
external variables such as temperature or economic indicators. The method
combines Non-negative Tensor Factorization (NTF) to extract low-dimensional
temporal features from multi-way electricity usage data, with a Genetic
Algorithm that optimizes the hyperparameters of time series models applied to
the latent annual factors. We model the dataset as a third-order tensor
spanning electric utilities, industrial sectors, and years, and apply canonical
polyadic decomposition under non-negativity constraints. The annual component
is forecasted using autoregressive models, with hyperparameter tuning guided by
the prediction error or reconstruction accuracy on a validation set.
Comparative experiments using real-world electricity data from Japan
demonstrate that the proposed method achieves lower mean squared error than
baseline approaches without tensor decomposition or evolutionary optimization.
Moreover, we find that reducing the model's degrees of freedom via tensor
decomposition improves generalization performance, and that initialization
sensitivity in NTF can be mitigated through multiple runs or ensemble
strategies. These findings suggest that the proposed framework offers an
interpretable, flexible, and scalable approach to long-term electricity demand
prediction and can be extended to other structured time series forecasting
tasks.
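A condensed sketch of the pipeline described above, assuming the tensorly and statsmodels libraries: non-negative CP decomposition of a (utility x sector x year) tensor followed by an autoregressive forecast of the latent annual factors. The genetic-algorithm hyperparameter search is omitted and the AR lag order is fixed purely for illustration.

```python
# Condensed sketch: non-negative CP decomposition + AR forecast of annual factors.
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
demand = rng.random((10, 6, 15))                      # utilities x sectors x years

weights, factors = non_negative_parafac(tl.tensor(demand), rank=3, n_iter_max=200)
utility_f, sector_f, year_f = [np.asarray(f) for f in factors]

# Forecast each latent annual component one year ahead (fixed lag for brevity;
# the paper tunes such hyperparameters with a genetic algorithm).
next_year = np.array([
    AutoReg(year_f[:, r], lags=2).fit().forecast(steps=1)[0]
    for r in range(year_f.shape[1])
])

# Reconstruct the predicted (utility x sector) demand slice for the next year.
pred_slice = np.einsum("ur,sr,r->us", utility_f, sector_f,
                       np.asarray(weights) * next_year)
print(pred_slice.shape)                               # (10, 6)
```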
|
2503.22134 | Costain Nachuma | Costain Nachuma, Md Mosharaf Hossan, Asif Kamal Turzo, Minhaz F.
Zibran | Decoding Dependency Risks: A Quantitative Study of Vulnerabilities in
the Maven Ecosystem | 5 pages, 4 figures,2 tables, Submitted to the 2025 Mining Software
Repositories (MSR) conference | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | This study investigates vulnerabilities within the Maven ecosystem by
analyzing a comprehensive dataset of 14,459,139 releases. Our analysis reveals
the most critical weaknesses that pose significant threats to developers and
their projects as they look to streamline their development tasks through code
reuse. We show risky weaknesses, those unique to Maven, and emphasize those
becoming increasingly dangerous over time. Furthermore, we reveal how
vulnerabilities subtly propagate, impacting 31.39% of the 635,003 latest
releases through direct dependencies and 62.89% through transitive
dependencies. Our findings suggest that improper handling of input and
mismanagement of resources pose the most risk. Additionally, Insufficient
session-ID length in J2EE configuration and no throttling while allocating
resources uniquely threaten the Maven ecosystem. We also find that weaknesses
related to improper authentication and managing sensitive data without
encryption have quickly gained prominence in recent years. These findings
emphasize the need for proactive strategies to mitigate security risks in the
Maven ecosystem.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:16:46 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Nachuma",
"Costain",
""
],
[
"Hossan",
"Md Mosharaf",
""
],
[
"Turzo",
"Asif Kamal",
""
],
[
"Zibran",
"Minhaz F.",
""
]
] | TITLE: Decoding Dependency Risks: A Quantitative Study of Vulnerabilities in
the Maven Ecosystem
ABSTRACT: This study investigates vulnerabilities within the Maven ecosystem by
analyzing a comprehensive dataset of 14,459,139 releases. Our analysis reveals
the most critical weaknesses that pose significant threats to developers and
their projects as they look to streamline their development tasks through code
reuse. We show risky weaknesses, those unique to Maven, and emphasize those
becoming increasingly dangerous over time. Furthermore, we reveal how
vulnerabilities subtly propagate, impacting 31.39% of the 635,003 latest
releases through direct dependencies and 62.89% through transitive
dependencies. Our findings suggest that improper handling of input and
mismanagement of resources pose the most risk. Additionally, Insufficient
session-ID length in J2EE configuration and no throttling while allocating
resources uniquely threaten the Maven ecosystem. We also find that weaknesses
related to improper authentication and managing sensitive data without
encryption have quickly gained prominence in recent years. These findings
emphasize the need for proactive strategies to mitigate security risks in the
Maven ecosystem.
|
2503.22137 | Syrine Belakaria | Syrine Belakaria, Joshua Kazdan, Charles Marx, Chris Cundy, Willie
Neiswanger, Sanmi Koyejo, Barbara E. Engelhardt, and Stefano Ermon | Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reinforcement learning from human feedback (RLHF) has become a cornerstone of
the training and alignment pipeline for large language models (LLMs). Recent
advances, such as direct preference optimization (DPO), have simplified the
preference learning step. However, collecting preference data remains a
challenging and costly process, often requiring expert annotation. This cost
can be mitigated by carefully selecting the data points presented for
annotation. In this work, we propose an active learning approach to efficiently
select prompt and preference pairs using a risk assessment strategy based on
the Sharpe Ratio. To address the challenge of unknown preferences prior to
annotation, our method evaluates the gradients of all potential preference
annotations to assess their impact on model updates. These gradient-based
evaluations enable risk assessment of data points regardless of the annotation
outcome. By leveraging the DPO loss derivations, we derive a closed-form
expression for computing these Sharpe ratios on a per-tuple basis, ensuring our
approach remains both tractable and computationally efficient. We also
introduce two variants of our method, each making different assumptions about
prior information. Experimental results demonstrate that our method outperforms
the baseline by up to 5% in win rates against the chosen completion with
limited human preference data across several language models and real-world
datasets.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:22:53 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Belakaria",
"Syrine",
""
],
[
"Kazdan",
"Joshua",
""
],
[
"Marx",
"Charles",
""
],
[
"Cundy",
"Chris",
""
],
[
"Neiswanger",
"Willie",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Engelhardt",
"Barbara E.",
""
],
[
"Ermon",
"Stefano",
""
]
] | TITLE: Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF
ABSTRACT: Reinforcement learning from human feedback (RLHF) has become a cornerstone of
the training and alignment pipeline for large language models (LLMs). Recent
advances, such as direct preference optimization (DPO), have simplified the
preference learning step. However, collecting preference data remains a
challenging and costly process, often requiring expert annotation. This cost
can be mitigated by carefully selecting the data points presented for
annotation. In this work, we propose an active learning approach to efficiently
select prompt and preference pairs using a risk assessment strategy based on
the Sharpe Ratio. To address the challenge of unknown preferences prior to
annotation, our method evaluates the gradients of all potential preference
annotations to assess their impact on model updates. These gradient-based
evaluations enable risk assessment of data points regardless of the annotation
outcome. By leveraging the DPO loss derivations, we derive a closed-form
expression for computing these Sharpe ratios on a per-tuple basis, ensuring our
approach remains both tractable and computationally efficient. We also
introduce two variants of our method, each making different assumptions about
prior information. Experimental results demonstrate that our method outperforms
the baseline by up to 5% in win rates against the chosen completion with
limited human preference data across several language models and real-world
datasets.
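An illustrative sketch only: score each candidate preference tuple by a Sharpe-ratio-like statistic (mean over standard deviation) of the estimated gradient impacts of its two possible annotations, then pick the top-scoring tuples for labeling. This is not the paper's closed-form DPO-based expression.

```python
# Illustrative Sharpe-ratio scoring over the two possible annotation outcomes.
import numpy as np

def sharpe_scores(impact_if_a: np.ndarray, impact_if_b: np.ndarray,
                  eps: float = 1e-8) -> np.ndarray:
    """Each input has shape (num_candidates,): estimated gradient impact if the
    annotator prefers completion A (resp. B). Higher score = better reward/risk."""
    stacked = np.stack([impact_if_a, impact_if_b], axis=0)
    return stacked.mean(axis=0) / (stacked.std(axis=0) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, b = rng.random(100), rng.random(100)
    top = np.argsort(sharpe_scores(a, b))[::-1][:10]  # 10 tuples sent for labeling
    print(top)
```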
|
2503.22138 | Changchang Sun | Changchang Sun and Gaowen Liu and Charles Fleming and Yan Yan | Enhancing Dance-to-Music Generation via Negative Conditioning Latent
Diffusion Model | null | null | null | null | cs.SD cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional diffusion models have gained increasing attention since their
impressive results for cross-modal synthesis, where the strong alignment
between conditioning input and generated output can be achieved by training a
time-conditioned U-Net augmented with a cross-attention mechanism. In this paper,
we focus on the problem of generating music synchronized with rhythmic visual
cues of the given dance video. Considering that bi-directional guidance is more
beneficial for training a diffusion model, we propose to enhance the quality of
generated music and its synchronization with dance videos by adopting both
positive rhythmic information and negative ones (PN-Diffusion) as conditions,
where dual diffusion and reverse processes are devised. Specifically, to train
a sequential multi-modal U-Net structure, PN-Diffusion consists of a noise
prediction objective for positive conditioning and an additional noise
prediction objective for negative conditioning. To accurately define and select
both positive and negative conditioning, we ingeniously utilize temporal
correlations in dance videos, capturing positive and negative rhythmic cues by
playing them forward and backward, respectively. Through subjective and
objective evaluations of input-output correspondence in terms of dance-music
beat alignment and the quality of generated music, experimental results on the
AIST++ and TikTok dance video datasets demonstrate that our model outperforms
SOTA dance-to-music generation models.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:23:03 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Sun",
"Changchang",
""
],
[
"Liu",
"Gaowen",
""
],
[
"Fleming",
"Charles",
""
],
[
"Yan",
"Yan",
""
]
] | TITLE: Enhancing Dance-to-Music Generation via Negative Conditioning Latent
Diffusion Model
ABSTRACT: Conditional diffusion models have gained increasing attention since their
impressive results for cross-modal synthesis, where the strong alignment
between conditioning input and generated output can be achieved by training a
time-conditioned U-Net augmented with a cross-attention mechanism. In this paper,
we focus on the problem of generating music synchronized with rhythmic visual
cues of the given dance video. Considering that bi-directional guidance is more
beneficial for training a diffusion model, we propose to enhance the quality of
generated music and its synchronization with dance videos by adopting both
positive rhythmic information and negative ones (PN-Diffusion) as conditions,
where dual diffusion and reverse processes are devised. Specifically, to train
a sequential multi-modal U-Net structure, PN-Diffusion consists of a noise
prediction objective for positive conditioning and an additional noise
prediction objective for negative conditioning. To accurately define and select
both positive and negative conditioning, we ingeniously utilize temporal
correlations in dance videos, capturing positive and negative rhythmic cues by
playing them forward and backward, respectively. Through subjective and
objective evaluations of input-output correspondence in terms of dance-music
beat alignment and the quality of generated music, experimental results on the
AIST++ and TikTok dance video datasets demonstrate that our model outperforms
SOTA dance-to-music generation models.
|
2503.22140 | Chang Cai | Chang Cai, Xiaojun Yuan, Ying-Jun Angela Zhang | Score-Based Turbo Message Passing for Plug-and-Play Compressive Image
Recovery | null | null | null | null | eess.IV cs.CV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Message passing algorithms have been tailored for compressive imaging
applications by plugging in different types of off-the-shelf image denoisers.
These off-the-shelf denoisers mostly rely on some generic or hand-crafted
priors for denoising. Due to their insufficient accuracy in capturing the true
image prior, these methods often fail to produce satisfactory results,
especially in largely underdetermined scenarios. On the other hand, score-based
generative modeling offers a promising way to accurately characterize the
sophisticated image distribution. In this paper, by exploiting the close
relation between score-based modeling and empirical Bayes-optimal denoising, we
devise a message passing framework that integrates a score-based minimum mean
squared error (MMSE) denoiser for compressive image recovery. This framework is
firmly rooted in Bayesian formalism, in which state evolution (SE) equations
accurately predict its asymptotic performance. Experiments on the FFHQ dataset
demonstrate that our method strikes a significantly better
performance-complexity tradeoff than conventional message passing, regularized
linear regression, and score-based posterior sampling baselines. Remarkably,
our method typically requires less than 20 neural function evaluations (NFEs)
to converge.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:30:58 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Cai",
"Chang",
""
],
[
"Yuan",
"Xiaojun",
""
],
[
"Zhang",
"Ying-Jun Angela",
""
]
] | TITLE: Score-Based Turbo Message Passing for Plug-and-Play Compressive Image
Recovery
ABSTRACT: Message passing algorithms have been tailored for compressive imaging
applications by plugging in different types of off-the-shelf image denoisers.
These off-the-shelf denoisers mostly rely on some generic or hand-crafted
priors for denoising. Due to their insufficient accuracy in capturing the true
image prior, these methods often fail to produce satisfactory results,
especially in largely underdetermined scenarios. On the other hand, score-based
generative modeling offers a promising way to accurately characterize the
sophisticated image distribution. In this paper, by exploiting the close
relation between score-based modeling and empirical Bayes-optimal denoising, we
devise a message passing framework that integrates a score-based minimum mean
squared error (MMSE) denoiser for compressive image recovery. This framework is
firmly rooted in Bayesian formalism, in which state evolution (SE) equations
accurately predict its asymptotic performance. Experiments on the FFHQ dataset
demonstrate that our method strikes a significantly better
performance-complexity tradeoff than conventional message passing, regularized
linear regression, and score-based posterior sampling baselines. Remarkably,
our method typically requires less than 20 neural function evaluations (NFEs)
to converge.
|
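The abstract above links score-based modeling to empirical Bayes-optimal (MMSE) denoising. A standard way to express that link is Tweedie's formula, x_hat = r + sigma^2 * score(r). The sketch below plugs such a denoiser into a generic plug-and-play loop; the actual turbo message passing updates, extrinsic messages, and state evolution analysis of the paper are not reproduced, and the step size and noise estimate here are my own simplifications.

```python
import numpy as np

def tweedie_mmse_denoiser(r, sigma, score_fn):
    """Empirical-Bayes (Tweedie) MMSE estimate of x from r = x + N(0, sigma^2 I):
    E[x | r] = r + sigma^2 * grad_r log p(r)."""
    return r + sigma**2 * score_fn(r, sigma)

def plug_and_play_recovery(y, A, score_fn, n_iters=20, step=1.0):
    """Very simplified plug-and-play loop for y = A x + noise: a gradient step on the
    data term followed by the score-based MMSE denoiser. A stand-in for the turbo
    message passing structure, not a reimplementation of it."""
    x = A.T @ y
    for _ in range(n_iters):
        r = x - step * A.T @ (A @ x - y)                       # enforce data consistency
        sigma = np.linalg.norm(A @ x - y) / np.sqrt(len(y))    # crude effective-noise estimate
        x = tweedie_mmse_denoiser(r, sigma, score_fn)          # Bayes-optimal denoising step
    return x
```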
2503.22143 | Sungyu Jeong | Sungyu Jeong, Won Joon Choi, Junung Choi, Anik Biswas, and Byungsub
Kim | A Self-Supervised Learning of a Foundation Model for Analog Layout
Design Automation | 8 pages, 11 figures | null | null | null | eess.SP cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a UNet-based foundation model and its self-supervised learning
method to address two key challenges: 1) lack of qualified annotated analog
layout data, and 2) excessive variety in analog layout design tasks. For
self-supervised learning, we propose random patch sampling and random masking
techniques to automatically obtain sufficient training data from a small
unannotated layout dataset. The obtained data are greatly augmented, less
biased, equally sized, and contain enough information to cover a wide variety
of qualified layout patterns. By pre-training with the obtained data, the
proposed foundation model can learn implicit general knowledge on layout
patterns so that it can be fine-tuned for various downstream layout tasks with
small task-specific datasets. Fine-tuning provides an efficient and
consolidated methodology for diverse downstream tasks, reducing the enormous
human effort to develop a model per task separately. In experiments, the
foundation model was pre-trained using 324,000 samples obtained from 6
silicon-proven, manually designed analog circuits, and then fine-tuned for
the five example downstream tasks: generating contacts, vias, dummy fingers,
N-wells, and metal routings. The fine-tuned models successfully performed these
tasks for more than one thousand unseen layout inputs, generating DRC/LVS-clean
layouts for 96.6% of samples. Compared with training the model from scratch for
the metal routing task, fine-tuning required only 1/8 of the data to achieve
the same dice score of 0.95. With the same data, fine-tuning achieved a 90%
lower validation loss and a 40% higher benchmark score than training from
scratch.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:37:33 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jeong",
"Sungyu",
""
],
[
"Choi",
"Won Joon",
""
],
[
"Choi",
"Junung",
""
],
[
"Biswas",
"Anik",
""
],
[
"Kim",
"Byungsub",
""
]
] | TITLE: A Self-Supervised Learning of a Foundation Model for Analog Layout
Design Automation
ABSTRACT: We propose a UNet-based foundation model and its self-supervised learning
method to address two key challenges: 1) lack of qualified annotated analog
layout data, and 2) excessive variety in analog layout design tasks. For
self-supervised learning, we propose random patch sampling and random masking
techniques to automatically obtain sufficient training data from a small
unannotated layout dataset. The obtained data are greatly augmented, less
biased, equally sized, and contain enough information to cover a wide variety
of qualified layout patterns. By pre-training with the obtained data, the
proposed foundation model can learn implicit general knowledge on layout
patterns so that it can be fine-tuned for various downstream layout tasks with
small task-specific datasets. Fine-tuning provides an efficient and
consolidated methodology for diverse downstream tasks, reducing the enormous
human effort to develop a model per task separately. In experiments, the
foundation model was pre-trained using 324,000 samples obtained from 6
silicon-proven, manually designed analog circuits, and then fine-tuned for
the five example downstream tasks: generating contacts, vias, dummy fingers,
N-wells, and metal routings. The fine-tuned models successfully performed these
tasks for more than one thousand unseen layout inputs, generating DRC/LVS-clean
layouts for 96.6% of samples. Compared with training the model from scratch for
the metal routing task, fine-tuning required only 1/8 of the data to achieve
the same dice score of 0.95. With the same data, fine-tuning achieved a 90%
lower validation loss and a 40% higher benchmark score than training from
scratch.
|
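The self-supervised recipe in the record above rests on random patch sampling and random masking of unannotated layouts. The sketch below shows one plausible form of that data pipeline; the patch size, mask ratio, zero-filling of masked pixels, and the (masked_input, target) pairing for reconstruction-style pretraining are assumptions rather than the paper's exact settings.

```python
import numpy as np

def sample_masked_patches(layout, n_patches=8, patch_size=128, mask_ratio=0.4, rng=None):
    """Cut random, equally sized patches from one unannotated layout image and hide a
    random fraction of each patch, yielding (masked_input, target) pairs for
    reconstruction-style self-supervised pretraining."""
    rng = np.random.default_rng(rng)
    h, w = layout.shape[:2]
    pairs = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patch = layout[y:y + patch_size, x:x + patch_size].copy()
        mask = rng.random((patch_size, patch_size)) < mask_ratio   # which pixels to hide
        masked = patch.copy()
        masked[mask] = 0                                           # zero out the masked pixels
        pairs.append((masked, patch))
    return pairs
```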
2503.22144 | Papa Abdou Karim Karou Diallo | Papa Abdou Karim Karou Diallo and Amal Zouaq | FRASE: Structured Representations for Generalizable SPARQL Query
Generation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Translating natural language questions into SPARQL queries enables Knowledge
Base querying for factual and up-to-date responses. However, existing datasets
for this task are predominantly template-based, leading models to learn
superficial mappings between question and query templates rather than
developing true generalization capabilities. As a result, models struggle when
encountering naturally phrased, template-free questions. This paper introduces
FRASE (FRAme-based Semantic Enhancement), a novel approach that leverages Frame
Semantic Role Labeling (FSRL) to address this limitation. We also present
LC-QuAD 3.0, a new dataset derived from LC-QuAD 2.0, in which each question is
enriched using FRASE through frame detection and the mapping of frame elements
to their arguments. We evaluate the impact of this approach through extensive
experiments on recent large language models (LLMs) under different fine-tuning
configurations. Our results demonstrate that integrating frame-based structured
representations consistently improves SPARQL generation performance,
particularly in challenging generalization scenarios when test questions
feature unseen templates (unknown template splits) and when they are all
naturally phrased (reformulated questions).
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:39:52 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Diallo",
"Papa Abdou Karim Karou",
""
],
[
"Zouaq",
"Amal",
""
]
] | TITLE: FRASE: Structured Representations for Generalizable SPARQL Query
Generation
ABSTRACT: Translating natural language questions into SPARQL queries enables Knowledge
Base querying for factual and up-to-date responses. However, existing datasets
for this task are predominantly template-based, leading models to learn
superficial mappings between question and query templates rather than
developing true generalization capabilities. As a result, models struggle when
encountering naturally phrased, template-free questions. This paper introduces
FRASE (FRAme-based Semantic Enhancement), a novel approach that leverages Frame
Semantic Role Labeling (FSRL) to address this limitation. We also present
LC-QuAD 3.0, a new dataset derived from LC-QuAD 2.0, in which each question is
enriched using FRASE through frame detection and the mapping of frame elements
to their arguments. We evaluate the impact of this approach through extensive
experiments on recent large language models (LLMs) under different fine-tuning
configurations. Our results demonstrate that integrating frame-based structured
representations consistently improves SPARQL generation performance,
particularly in challenging generalization scenarios when test questions
feature unseen templates (unknown template splits) and when they are all
naturally phrased (reformulated questions).
|
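FRASE, per the record above, enriches each question with detected frames and frame-element-to-argument mappings before SPARQL generation. The snippet below is a minimal sketch of that enrichment step only; the serialization format, the frame dictionary layout, and the example frame are hypothetical, and the FSRL frame-detection step itself is out of scope.

```python
def enrich_question_with_frames(question, frames):
    """Append detected frames and their frame-element -> argument mappings to the
    question text, producing a structured input for the SPARQL generator."""
    lines = [f"Question: {question}"]
    for frame in frames:
        lines.append(f"Frame: {frame['name']}")
        for element, argument in frame["elements"].items():
            lines.append(f"  {element}: {argument}")
    return "\n".join(lines)

if __name__ == "__main__":
    frames = [{"name": "Being_born",
               "elements": {"Child": "Barack Obama", "Place": "Honolulu"}}]
    print(enrich_question_with_frames("Where was Barack Obama born?", frames))
```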
2503.22145 | Tim Rolff | Tim Rolff, Jurik Karimian, Niklas Hypki, Susanne Schmidt, Markus
Lappe, Frank Steinicke | Tokenization of Gaze Data | null | null | null | null | cs.LG cs.CL cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A considerable part of the performance of today's large language models
(LLMs) and multimodal large language models (MLLMs) depends on their
tokenization strategies. While tokenizers are extensively researched for
textual and visual input, there is no research on tokenization strategies for
gaze data due to its nature. However, a corresponding tokenization strategy
would allow using the vision capabilities of pre-trained MLLMs for gaze data,
for example, through fine-tuning.
In this paper, we aim to close this research gap by analyzing five different
tokenizers for gaze data on three different datasets for the forecasting and
generation of gaze data through LLMs. We evaluate the
tokenizers regarding their reconstruction and compression abilities. Further,
we train an LLM for each tokenization strategy, measuring its generative and
predictive performance. Overall, we found that a quantile tokenizer outperforms
all others in predicting the gaze positions and k-means is best when predicting
gaze velocities.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 04:41:09 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Rolff",
"Tim",
""
],
[
"Karimian",
"Jurik",
""
],
[
"Hypki",
"Niklas",
""
],
[
"Schmidt",
"Susanne",
""
],
[
"Lappe",
"Markus",
""
],
[
"Steinicke",
"Frank",
""
]
] | TITLE: Tokenization of Gaze Data
ABSTRACT: A considerable part of the performance of today's large language models
(LLMs) and multimodal large language models (MLLMs) depends on their
tokenization strategies. While tokenizers are extensively researched for
textual and visual input, there is no research on tokenization strategies for
gaze data due to its nature. However, a corresponding tokenization strategy
would allow using the vision capabilities of pre-trained MLLMs for gaze data,
for example, through fine-tuning.
In this paper, we aim to close this research gap by analyzing five different
tokenizers for gaze data on three different datasets for the forecasting and
generation of gaze data through LLMs. We evaluate the
tokenizers regarding their reconstruction and compression abilities. Further,
we train an LLM for each tokenization strategy, measuring its generative and
predictive performance. Overall, we found that a quantile tokenizer outperforms
all others in predicting the gaze positions and k-means is best when predicting
gaze velocities.
|
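The record above names a quantile tokenizer as the best choice for predicting gaze positions. The sketch below shows one natural reading of such a tokenizer; the vocabulary size, per-coordinate fitting, and decoding to bin centers are assumptions of mine, not the paper's configuration.

```python
import numpy as np

class QuantileTokenizer:
    """Map continuous gaze coordinates to discrete tokens by binning values at the
    empirical quantiles of the training data, and decode tokens back to bin centers."""
    def __init__(self, n_tokens=256):
        self.n_tokens = n_tokens

    def fit(self, values):
        qs = np.linspace(0.0, 1.0, self.n_tokens + 1)
        self.edges = np.quantile(values, qs)                    # bin edges at empirical quantiles
        self.centers = 0.5 * (self.edges[:-1] + self.edges[1:]) # representative value per bin
        return self

    def encode(self, values):
        idx = np.searchsorted(self.edges, values, side="right") - 1
        return np.clip(idx, 0, self.n_tokens - 1)

    def decode(self, tokens):
        return self.centers[tokens]

# Usage: fit on training gaze x-coordinates, then measure reconstruction error.
gaze_x = np.random.default_rng(0).normal(0.5, 0.1, size=10_000)
tok = QuantileTokenizer(64).fit(gaze_x)
reconstruction_error = np.abs(tok.decode(tok.encode(gaze_x)) - gaze_x).mean()
```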
2503.22152 | Yuxuan Li | Yuxuan Li, Vijay Veerabadran, Michael L. Iuzzolino, Brett D. Roads,
Asli Celikyilmaz, Karl Ridgeway | EgoToM: Benchmarking Theory of Mind Reasoning from Egocentric Videos | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce EgoToM, a new video question-answering benchmark that extends
Theory-of-Mind (ToM) evaluation to egocentric domains. Using a causal ToM
model, we generate multi-choice video QA instances for the Ego4D dataset to
benchmark the ability to predict a camera wearer's goals, beliefs, and next
actions. We study the performance of both humans and state-of-the-art
multimodal large language models (MLLMs) on these three interconnected
inference problems. Our evaluation shows that MLLMs achieve close to
human-level accuracy on inferring goals from egocentric videos. However, MLLMs
(including the largest ones we tested with over 100B parameters) fall short of
human performance when inferring the camera wearers' in-the-moment belief
states and future actions that are most consistent with the unseen video
future. We believe that our results will shape the future design of an
important class of egocentric digital assistants which are equipped with a
reasonable model of the user's internal mental states.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 05:10:59 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Yuxuan",
""
],
[
"Veerabadran",
"Vijay",
""
],
[
"Iuzzolino",
"Michael L.",
""
],
[
"Roads",
"Brett D.",
""
],
[
"Celikyilmaz",
"Asli",
""
],
[
"Ridgeway",
"Karl",
""
]
] | TITLE: EgoToM: Benchmarking Theory of Mind Reasoning from Egocentric Videos
ABSTRACT: We introduce EgoToM, a new video question-answering benchmark that extends
Theory-of-Mind (ToM) evaluation to egocentric domains. Using a causal ToM
model, we generate multi-choice video QA instances for the Ego4D dataset to
benchmark the ability to predict a camera wearer's goals, beliefs, and next
actions. We study the performance of both humans and state-of-the-art
multimodal large language models (MLLMs) on these three interconnected
inference problems. Our evaluation shows that MLLMs achieve close to
human-level accuracy on inferring goals from egocentric videos. However, MLLMs
(including the largest ones we tested with over 100B parameters) fall short of
human performance when inferring the camera wearers' in-the-moment belief
states and future actions that are most consistent with the unseen video
future. We believe that our results will shape the future design of an
important class of egocentric digital assistants which are equipped with a
reasonable model of the user's internal mental states.
|
2503.22154 | Jae-Young Yim | Jae-Young Yim, Dongwook Kim, Jae-Young Sim | Permutation-Invariant and Orientation-Aware Dataset Distillation for 3D
Point Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We should collect large amount of data to train deep neural networks for
various applications. Recently, the dataset distillation for images and texts
has been attracting a lot of attention, that reduces the original dataset to a
synthetic dataset while preserving essential task-relevant information.
However, 3D point clouds distillation is almost unexplored due to the
challenges of unordered structures of points. In this paper, we propose a novel
distribution matching-based dataset distillation method for 3D point clouds
that jointly optimizes the geometric structures of the synthetic dataset as well as
the orientations of synthetic models. To ensure the consistent feature
alignment between different 3D point cloud models, we devise a permutation
invariant distribution matching loss with the sorted feature vectors. We also
employ learnable rotation angles to transform each synthetic model according to
the optimal orientation best representing the original feature distribution.
Extensive experimental results on four widely used benchmark datasets,
including ModelNet10, ModelNet40, ShapeNet, and ScanObjectNN, demonstrate that
the proposed method consistently outperforms the existing methods.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 05:15:22 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yim",
"Jae-Young",
""
],
[
"Kim",
"Dongwook",
""
],
[
"Sim",
"Jae-Young",
""
]
] | TITLE: Permutation-Invariant and Orientation-Aware Dataset Distillation for 3D
Point Clouds
ABSTRACT: We need to collect large amounts of data to train deep neural networks for
various applications. Recently, dataset distillation for images and texts,
which reduces the original dataset to a synthetic dataset while preserving
essential task-relevant information, has been attracting a lot of attention.
However, 3D point cloud distillation remains almost unexplored due to the
challenge posed by the unordered structure of points. In this paper, we propose a novel
distribution matching-based dataset distillation method for 3D point clouds
that jointly optimizes the geometric structures of the synthetic dataset as well as
the orientations of synthetic models. To ensure the consistent feature
alignment between different 3D point cloud models, we devise a permutation
invariant distribution matching loss with the sorted feature vectors. We also
employ learnable rotation angles to transform each synthetic model according to
the optimal orientation best representing the original feature distribution.
Extensive experimental results on four widely used benchmark datasets,
including ModelNet10, ModelNet40, ShapeNet, and ScanObjectNN, demonstrate that
the proposed method consistently outperforms the existing methods.
|
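The record above describes a permutation-invariant distribution matching loss built on sorted feature vectors, plus learnable rotation angles for each synthetic model. The sketch below illustrates both ingredients in a simplified form; the feature layout (B, N, C), the assumption of equal point counts, the mean-matching form of the loss, and the z-axis-only rotation are my simplifications, not the paper's formulation.

```python
import torch

def sorted_feature_matching_loss(real_feats, syn_feats):
    """Permutation-invariant distribution matching: sort each feature channel along
    the point axis before comparing real and synthetic statistics, so the loss does
    not depend on point ordering. Shapes (B, N, C) with equal N are assumed."""
    real_sorted, _ = torch.sort(real_feats, dim=1)
    syn_sorted, _ = torch.sort(syn_feats, dim=1)
    return ((real_sorted.mean(dim=0) - syn_sorted.mean(dim=0)) ** 2).mean()

def rotate_z(points, angle):
    """Rotate a synthetic point cloud (N, 3) around the z-axis by a learnable angle."""
    c, s = torch.cos(angle).squeeze(), torch.sin(angle).squeeze()
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    rot = torch.stack([torch.stack([c, -s, zero]),
                       torch.stack([s,  c, zero]),
                       torch.stack([zero, zero, one])])
    return points @ rot.T
```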
2503.22163 | Seong-Hyeon Hwang | Seong-Hyeon Hwang, Minsu Kim, Steven Euijong Whang | T-CIL: Temperature Scaling using Adversarial Perturbation for
Calibration in Class-Incremental Learning | Accepted to CVPR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study model confidence calibration in class-incremental learning, where
models learn from sequential tasks with different class sets. While existing
works primarily focus on accuracy, maintaining calibrated confidence has been
largely overlooked. Unfortunately, most post-hoc calibration techniques are not
designed to work with the limited memories of old-task data typical in
class-incremental learning, as retaining a sufficient validation set would be
impractical. Thus, we propose T-CIL, a novel temperature scaling approach for
class-incremental learning without a validation set for old tasks, that
leverages adversarially perturbed exemplars from memory. Directly using
exemplars is inadequate for temperature optimization, since they are already
used for training. The key idea of T-CIL is to perturb exemplars more strongly
for old tasks than for the new task by adjusting the perturbation direction
based on feature distance, with the single magnitude determined using the
new-task validation set. This strategy makes the perturbation magnitude
computed from the new task also applicable to old tasks, leveraging the
tendency that the accuracy of old tasks is lower than that of the new task. We
empirically show that T-CIL significantly outperforms various baselines in
terms of calibration on real datasets and can be integrated with existing
class-incremental learning techniques with minimal impact on accuracy.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:02:34 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Hwang",
"Seong-Hyeon",
""
],
[
"Kim",
"Minsu",
""
],
[
"Whang",
"Steven Euijong",
""
]
] | TITLE: T-CIL: Temperature Scaling using Adversarial Perturbation for
Calibration in Class-Incremental Learning
ABSTRACT: We study model confidence calibration in class-incremental learning, where
models learn from sequential tasks with different class sets. While existing
works primarily focus on accuracy, maintaining calibrated confidence has been
largely overlooked. Unfortunately, most post-hoc calibration techniques are not
designed to work with the limited memories of old-task data typical in
class-incremental learning, as retaining a sufficient validation set would be
impractical. Thus, we propose T-CIL, a novel temperature scaling approach for
class-incremental learning without a validation set for old tasks, that
leverages adversarially perturbed exemplars from memory. Directly using
exemplars is inadequate for temperature optimization, since they are already
used for training. The key idea of T-CIL is to perturb exemplars more strongly
for old tasks than for the new task by adjusting the perturbation direction
based on feature distance, with the single magnitude determined using the
new-task validation set. This strategy makes the perturbation magnitude
computed from the new task also applicable to old tasks, leveraging the
tendency that the accuracy of old tasks is lower than that of the new task. We
empirically show that T-CIL significantly outperforms various baselines in
terms of calibration on real datasets and can be integrated with existing
class-incremental learning techniques with minimal impact on accuracy.
|
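T-CIL, as summarized above, calibrates a single temperature on adversarially perturbed memory exemplars. The sketch below only shows the surrounding mechanics under my own assumptions: a plain FGSM-style perturbation and a per-sample magnitude that is simply larger for old-task exemplars. The paper's direction adjustment based on feature distance and its selection of the magnitude from the new-task validation set are not implemented here.

```python
import torch
import torch.nn.functional as F

def perturb(model, x, y, eps):
    """FGSM-style adversarial perturbation of an exemplar batch with magnitude eps."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def tune_temperature(model, exemplars_x, exemplars_y, eps_per_sample, steps=200, lr=0.01):
    """Fit one temperature by minimizing NLL on perturbed exemplars; eps_per_sample
    stands in for 'perturb old-task exemplars more strongly than new-task ones'."""
    model.eval()
    x_adv = torch.cat([perturb(model, x.unsqueeze(0), y.unsqueeze(0), e)
                       for x, y, e in zip(exemplars_x, exemplars_y, eps_per_sample)])
    with torch.no_grad():
        logits = model(x_adv)
    log_t = torch.zeros(1, requires_grad=True)          # optimize log-temperature for positivity
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), exemplars_y)
        loss.backward()
        opt.step()
    return log_t.exp().item()
```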
2503.22165 | Zhanke Zhou | Zhanke Zhou, Zhaocheng Zhu, Xuan Li, Mikhail Galkin, Xiao Feng, Sanmi
Koyejo, Jian Tang, Bo Han | Landscape of Thoughts: Visualizing the Reasoning Process of Large
Language Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous applications of large language models (LLMs) rely on their ability
to perform step-by-step reasoning. However, the reasoning behavior of LLMs
remains poorly understood, posing challenges to research, development, and
safety. To address this gap, we introduce the landscape of thoughts, the first
visualization tool for users to inspect the reasoning paths of chain-of-thought
and its derivatives on any multi-choice dataset. Specifically, we represent the
states in a reasoning path as feature vectors that quantify their distances to
all answer choices. These features are then visualized in two-dimensional plots
using t-SNE. Qualitative and quantitative analysis with the landscape of
thoughts effectively distinguishes between strong and weak models, correct and
incorrect answers, as well as different reasoning tasks. It also uncovers
undesirable reasoning patterns, such as low consistency and high uncertainty.
Additionally, users can adapt our tool to a model that predicts the property
they observe. We showcase this advantage by adapting our tool to a lightweight
verifier that evaluates the correctness of reasoning paths. The code is
publicly available at: https://github.com/tmlr-group/landscape-of-thoughts.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:09:51 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhou",
"Zhanke",
""
],
[
"Zhu",
"Zhaocheng",
""
],
[
"Li",
"Xuan",
""
],
[
"Galkin",
"Mikhail",
""
],
[
"Feng",
"Xiao",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Tang",
"Jian",
""
],
[
"Han",
"Bo",
""
]
] | TITLE: Landscape of Thoughts: Visualizing the Reasoning Process of Large
Language Models
ABSTRACT: Numerous applications of large language models (LLMs) rely on their ability
to perform step-by-step reasoning. However, the reasoning behavior of LLMs
remains poorly understood, posing challenges to research, development, and
safety. To address this gap, we introduce the landscape of thoughts, the first
visualization tool for users to inspect the reasoning paths of chain-of-thought
and its derivatives on any multi-choice dataset. Specifically, we represent the
states in a reasoning path as feature vectors that quantify their distances to
all answer choices. These features are then visualized in two-dimensional plots
using t-SNE. Qualitative and quantitative analysis with the landscape of
thoughts effectively distinguishes between strong and weak models, correct and
incorrect answers, as well as different reasoning tasks. It also uncovers
undesirable reasoning patterns, such as low consistency and high uncertainty.
Additionally, users can adapt our tool to a model that predicts the property
they observe. We showcase this advantage by adapting our tool to a lightweight
verifier that evaluates the correctness of reasoning paths. The code is
publicly available at: https://github.com/tmlr-group/landscape-of-thoughts.
|
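The landscape-of-thoughts record above represents each reasoning state by its distances to all answer choices and then projects these features with t-SNE. The sketch below mirrors that two-step recipe; the Euclidean distance, the embedding inputs, and the t-SNE settings are assumptions, and the repository linked in the abstract is the authoritative implementation.

```python
import numpy as np
from sklearn.manifold import TSNE

def state_features(state_embeddings, choice_embeddings):
    """Represent each intermediate reasoning state by its distances to the embeddings
    of all answer choices (one feature per choice)."""
    diffs = state_embeddings[:, None, :] - choice_embeddings[None, :, :]
    return np.linalg.norm(diffs, axis=-1)                 # shape (n_states, n_choices)

def landscape_2d(state_embeddings, choice_embeddings, perplexity=30):
    """Project the distance features to 2D with t-SNE for plotting the 'landscape'.
    Note: perplexity must be smaller than the number of states."""
    feats = state_features(state_embeddings, choice_embeddings)
    return TSNE(n_components=2, perplexity=perplexity, init="random").fit_transform(feats)
```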
2503.22166 | Junhong Lin | Song Wang, Junhong Lin, Xiaojie Guo, Julian Shun, Jundong Li, Yada Zhu | Reasoning of Large Language Models over Knowledge Graphs with
Super-Relations | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | While large language models (LLMs) have made significant progress in
processing and reasoning over knowledge graphs, current methods suffer from a
high non-retrieval rate. This limitation reduces the accuracy of answering
questions based on these graphs. Our analysis reveals that the combination of
greedy search and forward reasoning is a major contributor to this issue. To
overcome these challenges, we introduce the concept of super-relations, which
enables both forward and backward reasoning by summarizing and connecting
various relational paths within the graph. This holistic approach not only
expands the search space, but also significantly improves retrieval efficiency.
In this paper, we propose the ReKnoS framework, which aims to Reason over
Knowledge Graphs with Super-Relations. Our framework's key advantages include
the inclusion of multiple relation paths through super-relations, enhanced
forward and backward reasoning capabilities, and increased efficiency in
querying LLMs. These enhancements collectively lead to a substantial
improvement in the successful retrieval rate and overall reasoning performance.
We conduct extensive experiments on nine real-world datasets to evaluate
ReKnoS, and the results demonstrate the superior performance of ReKnoS over
existing state-of-the-art baselines, with an average accuracy gain of 2.92%.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:11:04 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Song",
""
],
[
"Lin",
"Junhong",
""
],
[
"Guo",
"Xiaojie",
""
],
[
"Shun",
"Julian",
""
],
[
"Li",
"Jundong",
""
],
[
"Zhu",
"Yada",
""
]
] | TITLE: Reasoning of Large Language Models over Knowledge Graphs with
Super-Relations
ABSTRACT: While large language models (LLMs) have made significant progress in
processing and reasoning over knowledge graphs, current methods suffer from a
high non-retrieval rate. This limitation reduces the accuracy of answering
questions based on these graphs. Our analysis reveals that the combination of
greedy search and forward reasoning is a major contributor to this issue. To
overcome these challenges, we introduce the concept of super-relations, which
enables both forward and backward reasoning by summarizing and connecting
various relational paths within the graph. This holistic approach not only
expands the search space, but also significantly improves retrieval efficiency.
In this paper, we propose the ReKnoS framework, which aims to Reason over
Knowledge Graphs with Super-Relations. Our framework's key advantages include
the inclusion of multiple relation paths through super-relations, enhanced
forward and backward reasoning capabilities, and increased efficiency in
querying LLMs. These enhancements collectively lead to a substantial
improvement in the successful retrieval rate and overall reasoning performance.
We conduct extensive experiments on nine real-world datasets to evaluate
ReKnoS, and the results demonstrate the superior performance of ReKnoS over
existing state-of-the-art baselines, with an average accuracy gain of 2.92%.
|
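The ReKnoS record above summarizes relational paths into super-relations that support both forward and backward traversal. The toy index below illustrates that idea only; the grouping of relations into super-relations is assumed to be given (e.g., proposed by an LLM), and the data structures and example graph are hypothetical.

```python
from collections import defaultdict

def build_super_relations(triples, grouping):
    """Group individual relations into named super-relations so a query can expand one
    super-relation into all member relations, traversable forward (head -> tail) and
    backward (tail -> head)."""
    members = defaultdict(set)
    for relation, super_rel in grouping.items():
        members[super_rel].add(relation)
    forward, backward = defaultdict(set), defaultdict(set)
    for head, relation, tail in triples:
        for super_rel, rels in members.items():
            if relation in rels:
                forward[(head, super_rel)].add(tail)
                backward[(tail, super_rel)].add(head)
    return forward, backward

# Usage with a toy graph and an assumed relation-to-super-relation grouping.
triples = [("Paris", "capital_of", "France"), ("Lyon", "located_in", "France")]
grouping = {"capital_of": "geo_containment", "located_in": "geo_containment"}
forward_index, backward_index = build_super_relations(triples, grouping)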
2503.22171 | Ziyin Zeng | Min Cao, ZiYin Zeng, YuXin Lu, Mang Ye, Dong Yi and Jinqiao Wang | An Empirical Study of Validating Synthetic Data for Text-Based Person
Retrieval | 20 pages,13 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Data plays a pivotal role in Text-Based Person Retrieval (TBPR) research.
The mainstream research paradigm necessitates real-world person images with
manual textual annotations for training models, which is privacy-sensitive and
labor-intensive. Several pioneering efforts explore synthetic data for TBPR but
still rely on real data, retaining the aforementioned issues and also resulting
in a lack of diversity in the synthetic datasets, thus impacting TBPR
performance. Moreover, these works tend to explore synthetic data for TBPR from
limited perspectives, leading to restricted exploration. In this
paper, we conduct an empirical study to explore the potential of synthetic data
for TBPR, highlighting three key aspects. (1) We propose an inter-class image
generation pipeline, in which an automatic prompt construction strategy is
introduced to guide generative Artificial Intelligence (AI) models in
generating various inter-class images without reliance on original data. (2) We
develop an intra-class image augmentation pipeline, in which the generative AI
models are applied to further edit the images for obtaining various intra-class
images. (3) Building upon the proposed pipelines and an automatic text
generation pipeline, we explore the effectiveness of synthetic data in diverse
scenarios through extensive experiments. Additionally, we experimentally
investigate various noise-robust learning strategies to mitigate the inherent
noise in synthetic data. We will release the code, along with the synthetic
large-scale dataset generated by our pipelines, which are expected to advance
practical TBPR research.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:18:15 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Cao",
"Min",
""
],
[
"Zeng",
"ZiYin",
""
],
[
"Lu",
"YuXin",
""
],
[
"Ye",
"Mang",
""
],
[
"Yi",
"Dong",
""
],
[
"Wang",
"Jinqiao",
""
]
] | TITLE: An Empirical Study of Validating Synthetic Data for Text-Based Person
Retrieval
ABSTRACT: Data plays a pivotal role in Text-Based Person Retrieval (TBPR) research.
The mainstream research paradigm necessitates real-world person images with
manual textual annotations for training models, which is privacy-sensitive and
labor-intensive. Several pioneering efforts explore synthetic data for TBPR but
still rely on real data, retaining the aforementioned issues and also resulting
in a lack of diversity in the synthetic datasets, thus impacting TBPR
performance. Moreover, these works tend to explore synthetic data for TBPR from
limited perspectives, leading to restricted exploration. In this
paper, we conduct an empirical study to explore the potential of synthetic data
for TBPR, highlighting three key aspects. (1) We propose an inter-class image
generation pipeline, in which an automatic prompt construction strategy is
introduced to guide generative Artificial Intelligence (AI) models in
generating various inter-class images without reliance on original data. (2) We
develop an intra-class image augmentation pipeline, in which the generative AI
models are applied to further edit the images for obtaining various intra-class
images. (3) Building upon the proposed pipelines and an automatic text
generation pipeline, we explore the effectiveness of synthetic data in diverse
scenarios through extensive experiments. Additionally, we experimentally
investigate various noise-robust learning strategies to mitigate the inherent
noise in synthetic data. We will release the code, along with the synthetic
large-scale dataset generated by our pipelines, which are expected to advance
practical TBPR research.
|
2503.22172 | Minho Park | Minho Park, Sunghyun Park, Jungsoo Lee, Hyojin Park, Kyuwoong Hwang,
Fatih Porikli, Jaegul Choo, Sungha Choi | Concept-Aware LoRA for Domain-Aligned Segmentation Dataset Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the challenge of data scarcity in semantic segmentation
by generating datasets through text-to-image (T2I) generation models, reducing
image acquisition and labeling costs. Segmentation dataset generation faces two
key challenges: 1) aligning generated samples with the target domain and 2)
producing informative samples beyond the training data. Fine-tuning T2I models
can help generate samples aligned with the target domain. However, it often
overfits and memorizes training data, limiting its ability to generate
diverse and well-aligned samples. To overcome these issues, we propose
Concept-Aware LoRA (CA-LoRA), a novel fine-tuning approach that selectively
identifies and updates only the weights associated with necessary concepts
(e.g., style or viewpoint) for domain alignment while preserving the pretrained
knowledge of the T2I model to produce informative samples. We demonstrate its
effectiveness in generating datasets for urban-scene segmentation,
outperforming baseline and state-of-the-art methods in in-domain (few-shot and
fully-supervised) settings, as well as in domain generalization tasks,
especially under challenging conditions such as adverse weather and varying
illumination, further highlighting its superiority.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:23:29 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Park",
"Minho",
""
],
[
"Park",
"Sunghyun",
""
],
[
"Lee",
"Jungsoo",
""
],
[
"Park",
"Hyojin",
""
],
[
"Hwang",
"Kyuwoong",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Choo",
"Jaegul",
""
],
[
"Choi",
"Sungha",
""
]
] | TITLE: Concept-Aware LoRA for Domain-Aligned Segmentation Dataset Generation
ABSTRACT: This paper addresses the challenge of data scarcity in semantic segmentation
by generating datasets through text-to-image (T2I) generation models, reducing
image acquisition and labeling costs. Segmentation dataset generation faces two
key challenges: 1) aligning generated samples with the target domain and 2)
producing informative samples beyond the training data. Fine-tuning T2I models
can help generate samples aligned with the target domain. However, it often
overfits and memorizes training data, limiting its ability to generate
diverse and well-aligned samples. To overcome these issues, we propose
Concept-Aware LoRA (CA-LoRA), a novel fine-tuning approach that selectively
identifies and updates only the weights associated with necessary concepts
(e.g., style or viewpoint) for domain alignment while preserving the pretrained
knowledge of the T2I model to produce informative samples. We demonstrate its
effectiveness in generating datasets for urban-scene segmentation,
outperforming baseline and state-of-the-art methods in in-domain (few-shot and
fully-supervised) settings, as well as in domain generalization tasks,
especially under challenging conditions such as adverse weather and varying
illumination, further highlighting its superiority.
|
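CA-LoRA, per the record above, freezes the pretrained T2I weights and updates only the low-rank adapters tied to the concepts needed for domain alignment. The layer below is a generic LoRA sketch in which a simple `active` flag stands in for that concept-aware selection; how CA-LoRA actually identifies concept-relevant weights is not shown, and the rank, scaling, and initialization are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update; `active` mimics concept-aware
    selection by enabling the update only for layers tied to the target concept."""
    def __init__(self, base: nn.Linear, rank=4, alpha=1.0, active=True):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # keep pretrained knowledge frozen
        self.active = active
        self.a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        out = self.base(x)
        if self.active:                                  # only concept-relevant layers adapt
            out = out + self.scale * (x @ self.a.T @ self.b.T)
        return out
```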
2503.22174 | Jialun Pei | Jialun Pei, Zhangjun Zhou, Diandian Guo, Zhixi Li, Jing Qin, Bo Du,
Pheng-Ann Heng | Synergistic Bleeding Region and Point Detection in Surgical Videos | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Intraoperative bleeding in laparoscopic surgery causes rapid obscuration of
the operative field, hindering the surgical process. Intelligent detection of
bleeding regions can quantify the blood loss to assist decision-making, while
locating the bleeding point helps surgeons quickly identify the source of
bleeding and achieve hemostasis in time. In this study, we first construct a
real-world surgical bleeding detection dataset, named SurgBlood, comprising
5,330 frames from 95 surgical video clips with bleeding region and point
annotations. Accordingly, we develop a dual-task synergistic online detector
called BlooDet, designed to perform simultaneous detection of bleeding regions
and points in surgical videos. Our framework embraces a dual-branch
bidirectional guidance design based on Segment Anything Model 2 (SAM 2). The
mask branch detects bleeding regions through adaptive edge and point prompt
embeddings, while the point branch leverages mask memory to induce bleeding
point memory modeling and captures the direction of bleed point movement
through inter-frame optical flow. By interactive guidance and prompts, the two
branches explore potential spatial-temporal relationships while leveraging
memory modeling from previous frames to infer the current bleeding condition.
Extensive experiments demonstrate that our approach outperforms other
counterparts on SurgBlood in both bleeding region and point detection tasks,
e.g., achieving 64.88% IoU for bleeding region detection and 83.69% PCK-10% for
bleeding point detection.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:27:55 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Pei",
"Jialun",
""
],
[
"Zhou",
"Zhangjun",
""
],
[
"Guo",
"Diandian",
""
],
[
"Li",
"Zhixi",
""
],
[
"Qin",
"Jing",
""
],
[
"Du",
"Bo",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] | TITLE: Synergistic Bleeding Region and Point Detection in Surgical Videos
ABSTRACT: Intraoperative bleeding in laparoscopic surgery causes rapid obscuration of
the operative field, hindering the surgical process. Intelligent detection of
bleeding regions can quantify the blood loss to assist decision-making, while
locating the bleeding point helps surgeons quickly identify the source of
bleeding and achieve hemostasis in time. In this study, we first construct a
real-world surgical bleeding detection dataset, named SurgBlood, comprising
5,330 frames from 95 surgical video clips with bleeding region and point
annotations. Accordingly, we develop a dual-task synergistic online detector
called BlooDet, designed to perform simultaneous detection of bleeding regions
and points in surgical videos. Our framework embraces a dual-branch
bidirectional guidance design based on Segment Anything Model 2 (SAM 2). The
mask branch detects bleeding regions through adaptive edge and point prompt
embeddings, while the point branch leverages mask memory to induce bleeding
point memory modeling and captures the direction of bleed point movement
through inter-frame optical flow. By interactive guidance and prompts, the two
branches explore potential spatial-temporal relationships while leveraging
memory modeling from previous frames to infer the current bleeding condition.
Extensive experiments demonstrate that our approach outperforms other
counterparts on SurgBlood in both bleeding region and point detection tasks,
e.g., achieving 64.88% IoU for bleeding region detection and 83.69% PCK-10% for
bleeding point detection.
|
2503.22176 | Anandakumar D | Bargava Subramanian, Naveen Kumarasami, Praveen Shastry, Kalyan
Sivasailam, Anandakumar D, Keerthana R, Mounigasri M, Abilaasha G, Kishore
Prasath Venkatesh | A Multi-Site Study on AI-Driven Pathology Detection and Osteoarthritis
Grading from Knee X-Ray | 15 pages, 2 figures | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Introduction: Bone health disorders like osteoarthritis and osteoporosis pose
major global health challenges, often leading to delayed diagnoses due to
limited diagnostic tools. This study presents an AI-powered system that
analyzes knee X-rays to detect key pathologies, including joint space
narrowing, sclerosis, osteophytes, tibial spikes, alignment issues, and soft
tissue anomalies. It also grades osteoarthritis severity, enabling timely,
personalized treatment.
Study Design: The research used 1.3 million knee X-rays from a multi-site
Indian clinical trial across government, private, and SME hospitals. The
dataset ensured diversity in demographics, imaging equipment, and clinical
settings. Rigorous annotation and preprocessing yielded high-quality training
datasets for pathology-specific models like ResNet15 for joint space narrowing
and DenseNet for osteoarthritis grading.
Performance: The AI system achieved strong diagnostic accuracy across diverse
imaging environments. Pathology-specific models excelled in precision, recall,
and NPV, validated using Mean Squared Error (MSE), Intersection over Union
(IoU), and Dice coefficient. Subgroup analyses across age, gender, and
manufacturer variations confirmed generalizability for real-world applications.
Conclusion: This scalable, cost-effective solution for bone health
diagnostics demonstrated robust performance in a multi-site trial. It holds
promise for widespread adoption, especially in resource-limited healthcare
settings, transforming bone health management and enabling proactive patient
care.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:41:22 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Subramanian",
"Bargava",
""
],
[
"Kumarasami",
"Naveen",
""
],
[
"Shastry",
"Praveen",
""
],
[
"Sivasailam",
"Kalyan",
""
],
[
"D",
"Anandakumar",
""
],
[
"R",
"Keerthana",
""
],
[
"M",
"Mounigasri",
""
],
[
"G",
"Abilaasha",
""
],
[
"Venkatesh",
"Kishore Prasath",
""
]
] | TITLE: A Multi-Site Study on AI-Driven Pathology Detection and Osteoarthritis
Grading from Knee X-Ray
ABSTRACT: Introduction: Bone health disorders like osteoarthritis and osteoporosis pose
major global health challenges, often leading to delayed diagnoses due to
limited diagnostic tools. This study presents an AI-powered system that
analyzes knee X-rays to detect key pathologies, including joint space
narrowing, sclerosis, osteophytes, tibial spikes, alignment issues, and soft
tissue anomalies. It also grades osteoarthritis severity, enabling timely,
personalized treatment.
Study Design: The research used 1.3 million knee X-rays from a multi-site
Indian clinical trial across government, private, and SME hospitals. The
dataset ensured diversity in demographics, imaging equipment, and clinical
settings. Rigorous annotation and preprocessing yielded high-quality training
datasets for pathology-specific models like ResNet15 for joint space narrowing
and DenseNet for osteoarthritis grading.
Performance: The AI system achieved strong diagnostic accuracy across diverse
imaging environments. Pathology-specific models excelled in precision, recall,
and NPV, validated using Mean Squared Error (MSE), Intersection over Union
(IoU), and Dice coefficient. Subgroup analyses across age, gender, and
manufacturer variations confirmed generalizability for real-world applications.
Conclusion: This scalable, cost-effective solution for bone health
diagnostics demonstrated robust performance in a multi-site trial. It holds
promise for widespread adoption, especially in resource-limited healthcare
settings, transforming bone health management and enabling proactive patient
care.
|
2503.22177 | Shuai Zhang | Shuai Zhang, Jinliang Wang, Sujith Konandetails, Xu Wang, Danail
Stoyanov, Evangelos B.Mazomenos | 3D Acetabular Surface Reconstruction from 2D Pre-operative X-ray Images
using SRVF Elastic Registration and Deformation Graph | 10 pages, 3 figures, conference | null | null | null | cs.RO cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Accurate and reliable selection of the appropriate acetabular cup size is
crucial for restoring joint biomechanics in total hip arthroplasty (THA). This
paper proposes a novel framework that integrates a square-root velocity function
(SRVF)-based elastic shape registration technique with an embedded deformation
(ED) graph approach to reconstruct the 3D articular surface of the acetabulum
by fusing multiple views of 2D pre-operative pelvic X-ray images and a
hemispherical surface model. The SRVF-based elastic registration establishes
2D-3D correspondences between the parametric hemispherical model and X-ray
images, and the ED framework incorporates the SRVF-derived correspondences as
constraints to optimize the 3D acetabular surface reconstruction using
nonlinear least-squares optimization. Validations using both simulation and
real patient datasets are performed to demonstrate the robustness and the
potential clinical value of the proposed algorithm. The reconstruction result
can assist surgeons in selecting the correct acetabular cup on the first
attempt in primary THA, minimising the need for revision surgery.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:47:32 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Shuai",
""
],
[
"Wang",
"Jinliang",
""
],
[
"Konandetails",
"Sujith",
""
],
[
"Wang",
"Xu",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Mazomenos",
"Evangelos B.",
""
]
] | TITLE: 3D Acetabular Surface Reconstruction from 2D Pre-operative X-ray Images
using SRVF Elastic Registration and Deformation Graph
ABSTRACT: Accurate and reliable selection of the appropriate acetabular cup size is
crucial for restoring joint biomechanics in total hip arthroplasty (THA). This
paper proposes a novel framework that integrates a square-root velocity function
(SRVF)-based elastic shape registration technique with an embedded deformation
(ED) graph approach to reconstruct the 3D articular surface of the acetabulum
by fusing multiple views of 2D pre-operative pelvic X-ray images and a
hemispherical surface model. The SRVF-based elastic registration establishes
2D-3D correspondences between the parametric hemispherical model and X-ray
images, and the ED framework incorporates the SRVF-derived correspondences as
constraints to optimize the 3D acetabular surface reconstruction using
nonlinear least-squares optimization. Validations using both simulation and
real patient datasets are performed to demonstrate the robustness and the
potential clinical value of the proposed algorithm. The reconstruction result
can assist surgeons in selecting the correct acetabular cup on the first
attempt in primary THA, minimising the need for revision surgery.
|
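The record above builds on the square-root velocity function, for which the standard definition is q(t) = c'(t) / sqrt(||c'(t)||). The sketch below computes the SRVF of a discretized curve and a plain L2 distance between SRVFs; the re-parameterization optimization, the 2D-3D correspondence search, and the ED-graph reconstruction of the paper are not included, and the simplified distance is my own shortcut.

```python
import numpy as np

def srvf(curve, t):
    """Square-root velocity function of a curve c(t): q(t) = c'(t) / sqrt(||c'(t)||).
    `curve` is an (N, d) array of points sampled at parameter values `t`."""
    deriv = np.gradient(curve, t, axis=0)
    speed = np.linalg.norm(deriv, axis=1, keepdims=True)
    return deriv / np.sqrt(np.maximum(speed, 1e-12))

def elastic_distance(curve1, curve2, t):
    """L2 distance between SRVFs (trapezoidal rule), a simplified stand-in for the
    full elastic shape distance that also optimizes over re-parameterizations."""
    q1, q2 = srvf(curve1, t), srvf(curve2, t)
    diff = np.sum((q1 - q2) ** 2, axis=1)
    integral = np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(t))
    return np.sqrt(integral)
```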
2503.22180 | Juwei Guan | Juwei Guan, Xiaolin Fang, Donghyun Kim, Haotian Gong, Tongxin Zhu,
Zhen Ling, Ming Yang | Knowledge Rectification for Camouflaged Object Detection: Unlocking
Insights from Low-Quality Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-quality data often suffer from insufficient image details, introducing an
extra implicit aspect of camouflage that complicates camouflaged object
detection (COD). Existing COD methods focus primarily on high-quality data,
overlooking the challenges posed by low-quality data, which leads to
significant performance degradation. Therefore, we propose KRNet, the first
framework explicitly designed for COD on low-quality data. KRNet presents a
Leader-Follower framework where the Leader extracts dual gold-standard
distributions: conditional and hybrid, from high-quality data to drive the
Follower in rectifying knowledge learned from low-quality data. The framework
further benefits from a cross-consistency strategy that improves the
rectification of these distributions and a time-dependent conditional encoder
that enriches the distribution diversity. Extensive experiments on benchmark
datasets demonstrate that KRNet outperforms state-of-the-art COD methods and
super-resolution-assisted COD approaches, proving its effectiveness in tackling
the challenges of low-quality data in COD.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 06:53:21 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Guan",
"Juwei",
""
],
[
"Fang",
"Xiaolin",
""
],
[
"Kim",
"Donghyun",
""
],
[
"Gong",
"Haotian",
""
],
[
"Zhu",
"Tongxin",
""
],
[
"Ling",
"Zhen",
""
],
[
"Yang",
"Ming",
""
]
] | TITLE: Knowledge Rectification for Camouflaged Object Detection: Unlocking
Insights from Low-Quality Data
ABSTRACT: Low-quality data often suffer from insufficient image details, introducing an
extra implicit aspect of camouflage that complicates camouflaged object
detection (COD). Existing COD methods focus primarily on high-quality data,
overlooking the challenges posed by low-quality data, which leads to
significant performance degradation. Therefore, we propose KRNet, the first
framework explicitly designed for COD on low-quality data. KRNet presents a
Leader-Follower framework where the Leader extracts dual gold-standard
distributions: conditional and hybrid, from high-quality data to drive the
Follower in rectifying knowledge learned from low-quality data. The framework
further benefits from a cross-consistency strategy that improves the
rectification of these distributions and a time-dependent conditional encoder
that enriches the distribution diversity. Extensive experiments on benchmark
datasets demonstrate that KRNet outperforms state-of-the-art COD methods and
super-resolution-assisted COD approaches, proving its effectiveness in tackling
the challenges of low-quality data in COD.
|
2503.22186 | Weicai Li | Weicai Li, Tiejun Lv, Wei Ni, Jingbo Zhao, Ekram Hossain, and H.
Vincent Poor | Route-and-Aggregate Decentralized Federated Learning Under Communication
Errors | 15 pages, 10 figures | null | null | null | cs.DC cs.NI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized federated learning (D-FL) allows clients to aggregate learning
models locally, offering flexibility and scalability. Existing D-FL methods use
gossip protocols, which are inefficient when not all nodes in the network are
D-FL clients. This paper puts forth a new D-FL strategy, termed
Route-and-Aggregate (R&A) D-FL, where participating clients exchange models
with their peers through established routes (as opposed to flooding) and
adaptively normalize their aggregation coefficients to compensate for
communication errors. The impact of routing and imperfect links on the
convergence of R&A D-FL is analyzed, revealing that convergence is minimized
when routes with the minimum end-to-end packet error rates are employed to
deliver models. Our analysis is experimentally validated through three image
classification tasks and two next-word prediction tasks, utilizing widely
recognized datasets and models. R&A D-FL outperforms the flooding-based D-FL
method in terms of training accuracy by 35% in our tested 10-client network,
and shows strong synergy between D-FL and networking. In another test with 10
D-FL clients, the training accuracy of R&A D-FL with communication errors
approaches that of the ideal C-FL without communication errors, as the number
of routing nodes (i.e., nodes that do not participate in the training of D-FL)
rises to 28.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:05:37 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Weicai",
""
],
[
"Lv",
"Tiejun",
""
],
[
"Ni",
"Wei",
""
],
[
"Zhao",
"Jingbo",
""
],
[
"Hossain",
"Ekram",
""
],
[
"Poor",
"H. Vincent",
""
]
] | TITLE: Route-and-Aggregate Decentralized Federated Learning Under Communication
Errors
ABSTRACT: Decentralized federated learning (D-FL) allows clients to aggregate learning
models locally, offering flexibility and scalability. Existing D-FL methods use
gossip protocols, which are inefficient when not all nodes in the network are
D-FL clients. This paper puts forth a new D-FL strategy, termed
Route-and-Aggregate (R&A) D-FL, where participating clients exchange models
with their peers through established routes (as opposed to flooding) and
adaptively normalize their aggregation coefficients to compensate for
communication errors. The impact of routing and imperfect links on the
convergence of R&A D-FL is analyzed, revealing that convergence is minimized
when routes with the minimum end-to-end packet error rates are employed to
deliver models. Our analysis is experimentally validated through three image
classification tasks and two next-word prediction tasks, utilizing widely
recognized datasets and models. R&A D-FL outperforms the flooding-based D-FL
method in terms of training accuracy by 35% in our tested 10-client network,
and shows strong synergy between D-FL and networking. In another test with 10
D-FL clients, the training accuracy of R&A D-FL with communication errors
approaches that of the ideal C-FL without communication errors, as the number
of routing nodes (i.e., nodes that do not participate in the training of D-FL)
rises to 28.
|
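The R&A D-FL record above has clients aggregate the model copies that arrive over their routes and adaptively renormalize the aggregation coefficients to compensate for communication errors. The sketch below is one plausible reading of that aggregation step; the specific compensation rule (dividing data-size weights by an estimated delivery probability) and the `None` convention for lost copies are assumptions, and the paper's route selection by minimum end-to-end packet error rate is not modeled.

```python
import numpy as np

def route_and_aggregate(local_models, data_sizes, delivery_probs):
    """Aggregate the model copies that actually arrived and renormalize the
    coefficients so missing or error-prone links do not bias the weighted average."""
    received, weights = [], []
    for model, n_k, p_k in zip(local_models, data_sizes, delivery_probs):
        if model is not None:                      # None marks a copy lost to link errors
            received.append(model)
            weights.append(n_k / max(p_k, 1e-6))   # up-weight clients on lossy routes (assumed rule)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                       # adaptive normalization of coefficients
    return sum(w * m for w, m in zip(weights, received))
```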
2503.22193 | Jiale Du | Yang Liu, Feixiang Liu, Jiale Du, Xinbo Gao, Jungong Han | Unbiased Max-Min Embedding Classification for Transductive Few-Shot
Learning: Clustering and Classification Are All You Need | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks and supervised learning have achieved
remarkable success in various fields but are limited by the need for large
annotated datasets. Few-shot learning (FSL) addresses this limitation by
enabling models to generalize from only a few labeled examples. Transductive
few-shot learning (TFSL) enhances FSL by leveraging both labeled and unlabeled
data, though it faces challenges like the hubness problem. To overcome these
limitations, we propose the Unbiased Max-Min Embedding Classification (UMMEC)
Method, which addresses the key challenges in few-shot learning through three
innovative contributions. First, we introduce a decentralized covariance matrix
to mitigate the hubness problem, ensuring a more uniform distribution of
embeddings. Second, our method combines local alignment and global uniformity
through adaptive weighting and nonlinear transformation, balancing intra-class
clustering with inter-class separation. Third, we employ a Variational Sinkhorn
Few-Shot Classifier to optimize the distances between samples and class
prototypes, enhancing classification accuracy and robustness. These combined
innovations allow the UMMEC method to achieve superior performance with minimal
labeled data. Our UMMEC method significantly improves classification
performance with minimal labeled data, advancing the state-of-the-art in TFSL.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:23:07 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Liu",
"Feixiang",
""
],
[
"Du",
"Jiale",
""
],
[
"Gao",
"Xinbo",
""
],
[
"Han",
"Jungong",
""
]
] | TITLE: Unbiased Max-Min Embedding Classification for Transductive Few-Shot
Learning: Clustering and Classification Are All You Need
ABSTRACT: Convolutional neural networks and supervised learning have achieved
remarkable success in various fields but are limited by the need for large
annotated datasets. Few-shot learning (FSL) addresses this limitation by
enabling models to generalize from only a few labeled examples. Transductive
few-shot learning (TFSL) enhances FSL by leveraging both labeled and unlabeled
data, though it faces challenges like the hubness problem. To overcome these
limitations, we propose the Unbiased Max-Min Embedding Classification (UMMEC)
Method, which addresses the key challenges in few-shot learning through three
innovative contributions. First, we introduce a decentralized covariance matrix
to mitigate the hubness problem, ensuring a more uniform distribution of
embeddings. Second, our method combines local alignment and global uniformity
through adaptive weighting and nonlinear transformation, balancing intra-class
clustering with inter-class separation. Third, we employ a Variational Sinkhorn
Few-Shot Classifier to optimize the distances between samples and class
prototypes, enhancing classification accuracy and robustness. These combined
innovations allow the UMMEC method to achieve superior performance with minimal
labeled data. Our UMMEC method significantly improves classification
performance with minimal labeled data, advancing the state-of-the-art in TFSL.
|
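The UMMEC record above uses a Sinkhorn-type classifier to optimize the transport between samples and class prototypes. The snippet below is a plain entropy-regularized Sinkhorn iteration only; the variational components, the decentralized covariance matrix, and the alignment/uniformity terms of the paper are not shown, and the regularization strength and uniform marginals are assumptions.

```python
import numpy as np

def sinkhorn_assignments(distances, reg=0.1, n_iters=100):
    """Entropy-regularized optimal transport between samples (rows) and class
    prototypes (columns); rows of the returned matrix act as soft class assignments.
    Uniform source and target marginals are assumed."""
    n, k = distances.shape
    K = np.exp(-distances / reg)
    u, v = np.ones(n) / n, np.ones(k) / k
    r, c = np.ones(n) / n, np.ones(k) / k          # target marginals
    for _ in range(n_iters):
        u = r / (K @ v)
        v = c / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    return plan / plan.sum(axis=1, keepdims=True)  # normalize rows to class probabilities
```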
2503.22197 | Jiale Du | Yang Liu, Xun Zhang, Jiale Du, Xinbo Gao, Jungong Han | Extremely Simple Out-of-distribution Detection for Audio-visual
Generalized Zero-shot Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot Learning(ZSL) attains knowledge transfer from seen classes to
unseen classes by exploring auxiliary category information, which is a
promising yet difficult research topic. In this field, Audio-Visual Generalized
Zero-Shot Learning~(AV-GZSL) has aroused researchers' great interest in which
intricate relations within triple modalities~(audio, video, and natural
language) render this task quite challenging but highly research-worthy.
However, both existing embedding-based and generative-based AV-GZSL methods
tend to suffer from domain shift problem a lot and we propose an extremely
simple Out-of-distribution~(OOD) detection based AV-GZSL method~(EZ-AVOOD) to
further mitigate bias problem by differentiating seen and unseen samples at the
initial beginning. EZ-AVOOD accomplishes effective seen-unseen separation by
exploiting the intrinsic discriminative information held in class-specific
logits and class-agnostic feature subspace without training an extra OOD
detector network. Followed by seen-unseen binary classification, we employ two
expert models to classify seen samples and unseen samples separately. Compared
to existing state-of-the-art methods, our model achieves superior ZSL and GZSL
performances on three audio-visual datasets and becomes the new SOTA, which
comprehensively demonstrates the effectiveness of the proposed EZ-AVOOD.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:28:56 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Zhang",
"Xun",
""
],
[
"Du",
"Jiale",
""
],
[
"Gao",
"Xinbo",
""
],
[
"Han",
"Jungong",
""
]
] | TITLE: Extremely Simple Out-of-distribution Detection for Audio-visual
Generalized Zero-shot Learning
ABSTRACT: Zero-shot Learning (ZSL) attains knowledge transfer from seen classes to
unseen classes by exploring auxiliary category information, which is a
promising yet difficult research topic. In this field, Audio-Visual Generalized
Zero-Shot Learning~(AV-GZSL) has aroused researchers' great interest in which
intricate relations within triple modalities~(audio, video, and natural
language) render this task quite challenging but highly research-worthy.
However, both existing embedding-based and generative-based AV-GZSL methods
tend to suffer severely from the domain shift problem, and we propose an
extremely simple Out-of-distribution~(OOD) detection based AV-GZSL
method~(EZ-AVOOD) to further mitigate the bias problem by differentiating seen
and unseen samples at the very beginning. EZ-AVOOD accomplishes effective
seen-unseen separation by
exploiting the intrinsic discriminative information held in class-specific
logits and class-agnostic feature subspace without training an extra OOD
detector network. Followed by seen-unseen binary classification, we employ two
expert models to classify seen samples and unseen samples separately. Compared
to existing state-of-the-art methods, our model achieves superior ZSL and GZSL
performances on three audio-visual datasets and becomes the new SOTA, which
comprehensively demonstrates the effectiveness of the proposed EZ-AVOOD.
|
2503.22199 | Yunhe Zhang | Long Gao, Yunhe Zhang, Langkun Chen, Yan Jiang, Weiying Xie, Yunsong
Li | Hyperspectral Adapter for Object Tracking based on Hyperspectral Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object tracking based on hyperspectral video attracts increasing attention due
to the rich material and motion information in hyperspectral videos. The
prevailing hyperspectral methods adapt pretrained RGB-based object tracking
networks for hyperspectral tasks by fine-tuning the entire network on
hyperspectral datasets, which achieves impressive results in challenging
scenarios. However, the performance of hyperspectral trackers is limited by the
loss of spectral information during the transformation, and fine-tuning the
entire pretrained network is inefficient for practical applications. To address
the issues, a new hyperspectral object tracking method, hyperspectral adapter
for tracking (HyA-T), is proposed in this work. The hyperspectral adapter for
the self-attention (HAS) and the hyperspectral adapter for the multilayer
perceptron (HAM) are proposed to generate the adaptation information and to
transfer the multi-head self-attention (MSA) module and the multilayer
perceptron (MLP) in the pretrained network for the hyperspectral object tracking
task by augmenting the adaptation information into the calculation of the MSA and
MLP. Additionally, the hyperspectral enhancement of input (HEI) is proposed to
augment the original spectral information into the input of the tracking
network. The proposed methods extract spectral information directly from the
hyperspectral images, which prevents the loss of spectral information.
Moreover, only the parameters in the proposed methods are fine-tuned, which is
more efficient than the existing methods. Extensive experiments were conducted
on four datasets with various spectral bands, verifying the effectiveness of the
proposed methods. The HyA-T achieves state-of-the-art performance on all the
datasets.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:31:48 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Gao",
"Long",
""
],
[
"Zhang",
"Yunhe",
""
],
[
"Chen",
"Langkun",
""
],
[
"Jiang",
"Yan",
""
],
[
"Xie",
"Weiying",
""
],
[
"Li",
"Yunsong",
""
]
] | TITLE: Hyperspectral Adapter for Object Tracking based on Hyperspectral Video
ABSTRACT: Object tracking based on hyperspectral video attracts increasing attention due
to the rich material and motion information in hyperspectral videos. The
prevailing hyperspectral methods adapt pretrained RGB-based object tracking
networks for hyperspectral tasks by fine-tuning the entire network on
hyperspectral datasets, which achieves impressive results in challenging
scenarios. However, the performance of hyperspectral trackers is limited by the
loss of spectral information during the transformation, and fine-tuning the
entire pretrained network is inefficient for practical applications. To address
the issues, a new hyperspectral object tracking method, hyperspectral adapter
for tracking (HyA-T), is proposed in this work. The hyperspectral adapter for
the self-attention (HAS) and the hyperspectral adapter for the multilayer
perceptron (HAM) are proposed to generate the adaptation information and to
transfer the multi-head self-attention (MSA) module and the multilayer
perceptron (MLP) in the pretrained network for the hyperspectral object tracking
task by augmenting the adaptation information into the calculation of the MSA and
MLP. Additionally, the hyperspectral enhancement of input (HEI) is proposed to
augment the original spectral information into the input of the tracking
network. The proposed methods extract spectral information directly from the
hyperspectral images, which prevents the loss of spectral information.
Moreover, only the parameters in the proposed methods are fine-tuned, which is
more efficient than the existing methods. Extensive experiments were conducted
on four datasets with various spectral bands, verifying the effectiveness of the
proposed methods. The HyA-T achieves state-of-the-art performance on all the
datasets.
|
2503.22200 | Xinhan Di | Haomin Zhang, Sizhe Shan, Haoyu Wang, Zihao Chen, Xiulong Liu, Chaofan
Ding, Xinhan Di | Enhance Generation Quality of Flow Matching V2A Model via Multi-Step
CoT-Like Guidance and Combined Preference Optimization | 10 pages, 4 figures | null | null | null | cs.SD cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating high-quality sound effects from videos and text prompts requires
precise alignment between visual and audio domains, both semantically and
temporally, along with step-by-step guidance for professional audio generation.
However, current state-of-the-art video-guided audio generation models often
fall short of producing high-quality audio for both general and specialized use
cases. To address this challenge, we introduce a multi-stage, multi-modal,
end-to-end generative framework with Chain-of-Thought-like (CoT-like) guidance
learning, termed Chain-of-Perform (CoP). First, we employ a transformer-based
network architecture designed to achieve CoP guidance, enabling the generation
of both general and professional audio. Second, we implement a multi-stage
training framework that follows step-by-step guidance to ensure the generation
of high-quality sound effects. Third, we develop a CoP multi-modal dataset,
guided by video, to support step-by-step sound effects generation. Evaluation
results highlight the advantages of the proposed multi-stage CoP generative
framework compared to the state-of-the-art models on a variety of datasets,
with FAD 0.79 to 0.74 (+6.33%), CLIP 16.12 to 17.70 (+9.80%) on VGGSound,
SI-SDR 1.98dB to 3.35dB (+69.19%), MOS 2.94 to 3.49 (+18.71%) on PianoYT-2h, and
SI-SDR 2.22dB to 3.21dB (+44.59%), MOS 3.07 to 3.42 (+11.40%) on Piano-10h.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:32:14 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Zhang",
"Haomin",
""
],
[
"Shan",
"Sizhe",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Chen",
"Zihao",
""
],
[
"Liu",
"Xiulong",
""
],
[
"Ding",
"Chaofan",
""
],
[
"Di",
"Xinhan",
""
]
] | TITLE: Enhance Generation Quality of Flow Matching V2A Model via Multi-Step
CoT-Like Guidance and Combined Preference Optimization
ABSTRACT: Creating high-quality sound effects from videos and text prompts requires
precise alignment between visual and audio domains, both semantically and
temporally, along with step-by-step guidance for professional audio generation.
However, current state-of-the-art video-guided audio generation models often
fall short of producing high-quality audio for both general and specialized use
cases. To address this challenge, we introduce a multi-stage, multi-modal,
end-to-end generative framework with Chain-of-Thought-like (CoT-like) guidance
learning, termed Chain-of-Perform (CoP). First, we employ a transformer-based
network architecture designed to achieve CoP guidance, enabling the generation
of both general and professional audio. Second, we implement a multi-stage
training framework that follows step-by-step guidance to ensure the generation
of high-quality sound effects. Third, we develop a CoP multi-modal dataset,
guided by video, to support step-by-step sound effects generation. Evaluation
results highlight the advantages of the proposed multi-stage CoP generative
framework compared to the state-of-the-art models on a variety of datasets,
with FAD 0.79 to 0.74 (+6.33%), CLIP 16.12 to 17.70 (+9.80%) on VGGSound,
SI-SDR 1.98dB to 3.35dB (+69.19%), MOS 2.94 to 3.49 (+18.71%) on PianoYT-2h, and
SI-SDR 2.22dB to 3.21dB (+44.59%), MOS 3.07 to 3.42 (+11.40%) on Piano-10h.
|
2503.22201 | Jaewoo Jeong | Jaewoo Jeong, Seohee Lee, Daehee Park, Giwon Lee, Kuk-Jin Yoon | Multi-modal Knowledge Distillation-based Human Trajectory Forecasting | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pedestrian trajectory forecasting is crucial in various applications such as
autonomous driving and mobile robot navigation. In such applications,
camera-based perception enables the extraction of additional modalities (human
pose, text) to enhance prediction accuracy. Indeed, we find that textual
descriptions play a crucial role in integrating additional modalities into a
unified understanding. However, online extraction of text requires the use of
VLM, which may not be feasible for resource-constrained systems. To address
this challenge, we propose a multi-modal knowledge distillation framework: a
student model with limited modality is distilled from a teacher model trained
with full range of modalities. The comprehensive knowledge of a teacher model
trained with trajectory, human pose, and text is distilled into a student model
using only trajectory or human pose as a sole supplement. In doing so, we
separately distill the core locomotion insights from intra-agent multi-modality
and inter-agent interaction. Our generalizable framework is validated with two
state-of-the-art models across three datasets on both ego-view (JRDB, SIT) and
BEV-view (ETH/UCY) setups, utilizing both annotated and VLM-generated text
captions. Distilled student models show consistent improvement in all
prediction metrics for both full and instantaneous observations, improving up
to ~13%. The code is available at https://github.com/Jaewoo97/KDTF.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:32:51 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Jeong",
"Jaewoo",
""
],
[
"Lee",
"Seohee",
""
],
[
"Park",
"Daehee",
""
],
[
"Lee",
"Giwon",
""
],
[
"Yoon",
"Kuk-Jin",
""
]
] | TITLE: Multi-modal Knowledge Distillation-based Human Trajectory Forecasting
ABSTRACT: Pedestrian trajectory forecasting is crucial in various applications such as
autonomous driving and mobile robot navigation. In such applications,
camera-based perception enables the extraction of additional modalities (human
pose, text) to enhance prediction accuracy. Indeed, we find that textual
descriptions play a crucial role in integrating additional modalities into a
unified understanding. However, online extraction of text requires the use of
VLM, which may not be feasible for resource-constrained systems. To address
this challenge, we propose a multi-modal knowledge distillation framework: a
student model with limited modality is distilled from a teacher model trained
with the full range of modalities. The comprehensive knowledge of a teacher model
trained with trajectory, human pose, and text is distilled into a student model
using only trajectory or human pose as a sole supplement. In doing so, we
separately distill the core locomotion insights from intra-agent multi-modality
and inter-agent interaction. Our generalizable framework is validated with two
state-of-the-art models across three datasets on both ego-view (JRDB, SIT) and
BEV-view (ETH/UCY) setups, utilizing both annotated and VLM-generated text
captions. Distilled student models show consistent improvement in all
prediction metrics for both full and instantaneous observations, improving up
to ~13%. The code is available at https://github.com/Jaewoo97/KDTF.
|
2503.22204 | Yiren Lu | Yiren Lu, Yunlai Zhou, Yiran Qiao, Chaoda Song, Tuo Liang, Jing Ma, Yu
Yin | Segment then Splat: A Unified Approach for 3D Open-Vocabulary
Segmentation based on Gaussian Splatting | Project page: https://vulab-ai.github.io/Segment-then-Splat/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary querying in 3D space is crucial for enabling more intelligent
perception in applications such as robotics, autonomous systems, and augmented
reality. However, most existing methods rely on 2D pixel-level parsing, leading
to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are
limited to static scenes and struggle with dynamic scenes due to the
complexities of motion modeling. In this paper, we propose Segment then Splat,
a 3D-aware open vocabulary segmentation approach for both static and dynamic
scenes based on Gaussian Splatting. Segment then Splat reverses the long
established approach of "segmentation after reconstruction" by dividing
Gaussians into distinct object sets before reconstruction. Once the
reconstruction is complete, the scene is naturally segmented into individual
objects, achieving true 3D segmentation. This approach not only eliminates
Gaussian-object misalignment issues in dynamic scenes but also accelerates the
optimization process, as it eliminates the need for learning a separate
language field. After optimization, a CLIP embedding is assigned to each object
to enable open-vocabulary querying. Extensive experiments on various datasets
demonstrate the effectiveness of our proposed method in both static and dynamic
scenarios.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:36:51 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Lu",
"Yiren",
""
],
[
"Zhou",
"Yunlai",
""
],
[
"Qiao",
"Yiran",
""
],
[
"Song",
"Chaoda",
""
],
[
"Liang",
"Tuo",
""
],
[
"Ma",
"Jing",
""
],
[
"Yin",
"Yu",
""
]
] | TITLE: Segment then Splat: A Unified Approach for 3D Open-Vocabulary
Segmentation based on Gaussian Splatting
ABSTRACT: Open-vocabulary querying in 3D space is crucial for enabling more intelligent
perception in applications such as robotics, autonomous systems, and augmented
reality. However, most existing methods rely on 2D pixel-level parsing, leading
to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are
limited to static scenes and struggle with dynamic scenes due to the
complexities of motion modeling. In this paper, we propose Segment then Splat,
a 3D-aware open vocabulary segmentation approach for both static and dynamic
scenes based on Gaussian Splatting. Segment then Splat reverses the long
established approach of "segmentation after reconstruction" by dividing
Gaussians into distinct object sets before reconstruction. Once the
reconstruction is complete, the scene is naturally segmented into individual
objects, achieving true 3D segmentation. This approach not only eliminates
Gaussian-object misalignment issues in dynamic scenes but also accelerates the
optimization process, as it eliminates the need for learning a separate
language field. After optimization, a CLIP embedding is assigned to each object
to enable open-vocabulary querying. Extensive experiments on various datasets
demonstrate the effectiveness of our proposed method in both static and dynamic
scenarios.
|
2503.22208 | Xinhan Di | Yunming Liang, Zihao Chen, Chaofan Ding, Xinhan Di | DeepSound-V1: Start to Think Step-by-Step in the Audio Generation from
Videos | 11 pages, 6 figures | null | null | null | cs.SD cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, high-quality, synchronized audio is synthesized from video and
optional text inputs using various multi-modal joint learning frameworks.
However, the precise alignment between the visual and generated audio domains
remains far from satisfactory. One key factor is the lack of sufficient
temporal and semantic alignment annotations in open-source video-audio and
text-audio benchmarks. Therefore, we propose a framework for audio generation
from videos, leveraging the internal chain-of-thought (CoT) of a multi-modal
large language model (MLLM) to enable step-by-step reasoning without requiring
additional annotations. Additionally, a corresponding multi-modal reasoning
dataset is constructed to facilitate the learning of initial reasoning in audio
generation. In the experiments, we demonstrate the effectiveness of the
proposed framework in reducing misalignment (voice-over) in generated audio and
achieving competitive performance compared to various state-of-the-art models.
The evaluation results show that the proposed method outperforms
state-of-the-art approaches across multiple metrics. Specifically, the
FD-PaSST indicator is reduced by up to 10.07%, the FD-PANNs indicator by up to
11.62%, and the FD-VGG indicator by up to 38.61%. Furthermore, the IS
indicator improves by up to 4.95%, the IB-score indicator increases by up to
6.39%, and the DeSync indicator is reduced by up to 0.89%.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:56:19 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liang",
"Yunming",
""
],
[
"Chen",
"Zihao",
""
],
[
"Ding",
"Chaofan",
""
],
[
"Di",
"Xinhan",
""
]
] | TITLE: DeepSound-V1: Start to Think Step-by-Step in the Audio Generation from
Videos
ABSTRACT: Currently, high-quality, synchronized audio is synthesized from video and
optional text inputs using various multi-modal joint learning frameworks.
However, the precise alignment between the visual and generated audio domains
remains far from satisfactory. One key factor is the lack of sufficient
temporal and semantic alignment annotations in open-source video-audio and
text-audio benchmarks. Therefore, we propose a framework for audio generation
from videos, leveraging the internal chain-of-thought (CoT) of a multi-modal
large language model (MLLM) to enable step-by-step reasoning without requiring
additional annotations. Additionally, a corresponding multi-modal reasoning
dataset is constructed to facilitate the learning of initial reasoning in audio
generation. In the experiments, we demonstrate the effectiveness of the
proposed framework in reducing misalignment (voice-over) in generated audio and
achieving competitive performance compared to various state-of-the-art models.
The evaluation results show that the proposed method outperforms
state-of-the-art approaches across multiple metrics. Specifically, the
FD-PaSST indicator is reduced by up to 10.07%, the FD-PANNs indicator by up to
11.62%, and the FD-VGG indicator by up to 38.61%. Furthermore, the IS
indicator improves by up to 4.95%, the IB-score indicator increases by up to
6.39%, and the DeSync indicator is reduced by up to 0.89%.
|
2503.22209 | Wonhyeok Choi | Wonhyeok Choi, Kyumin Hwang, Minwoo Choi, Kiljoon Han, Wonjoon Choi,
Mingyu Shin, Sunghoon Im | Intrinsic Image Decomposition for Robust Self-supervised Monocular Depth
Estimation on Reflective Surfaces | Accepted at AAAI 2025 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Self-supervised monocular depth estimation (SSMDE) has gained attention in
the field of deep learning as it estimates depth without requiring ground truth
depth maps. This approach typically uses a photometric consistency loss between
a synthesized image, generated from the estimated depth, and the original
image, thereby reducing the need for extensive dataset acquisition. However,
the conventional photometric consistency loss relies on the Lambertian
assumption, which often leads to significant errors when dealing with
reflective surfaces that deviate from this model. To address this limitation,
we propose a novel framework that incorporates intrinsic image decomposition
into SSMDE. Our method synergistically trains for both monocular depth
estimation and intrinsic image decomposition. The accurate depth estimation
facilitates multi-image consistency for intrinsic image decomposition by
aligning different view coordinate systems, while the decomposition process
identifies reflective areas and excludes corrupted gradients from the depth
training process. Furthermore, our framework introduces a pseudo-depth
generation and knowledge distillation technique to further enhance the
performance of the student model across both reflective and non-reflective
surfaces. Comprehensive evaluations on multiple datasets show that our approach
significantly outperforms existing SSMDE baselines in depth prediction,
especially on reflective surfaces.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:56:59 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Choi",
"Wonhyeok",
""
],
[
"Hwang",
"Kyumin",
""
],
[
"Choi",
"Minwoo",
""
],
[
"Han",
"Kiljoon",
""
],
[
"Choi",
"Wonjoon",
""
],
[
"Shin",
"Mingyu",
""
],
[
"Im",
"Sunghoon",
""
]
] | TITLE: Intrinsic Image Decomposition for Robust Self-supervised Monocular Depth
Estimation on Reflective Surfaces
ABSTRACT: Self-supervised monocular depth estimation (SSMDE) has gained attention in
the field of deep learning as it estimates depth without requiring ground truth
depth maps. This approach typically uses a photometric consistency loss between
a synthesized image, generated from the estimated depth, and the original
image, thereby reducing the need for extensive dataset acquisition. However,
the conventional photometric consistency loss relies on the Lambertian
assumption, which often leads to significant errors when dealing with
reflective surfaces that deviate from this model. To address this limitation,
we propose a novel framework that incorporates intrinsic image decomposition
into SSMDE. Our method synergistically trains for both monocular depth
estimation and intrinsic image decomposition. The accurate depth estimation
facilitates multi-image consistency for intrinsic image decomposition by
aligning different view coordinate systems, while the decomposition process
identifies reflective areas and excludes corrupted gradients from the depth
training process. Furthermore, our framework introduces a pseudo-depth
generation and knowledge distillation technique to further enhance the
performance of the student model across both reflective and non-reflective
surfaces. Comprehensive evaluations on multiple datasets show that our approach
significantly outperforms existing SSMDE baselines in depth prediction,
especially on reflective surfaces.
|
2503.22211 | Chongyu Wang | Congyu Wang, Mingjing Du, Xiang Jiang and Yongquan Dong | Fuzzy Cluster-Aware Contrastive Clustering for Time Series | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of unlabeled time series data, driven by the Internet of
Things (IoT), poses significant challenges in uncovering underlying patterns.
Traditional unsupervised clustering methods often fail to capture the complex
nature of time series data. Recent deep learning-based clustering approaches,
while effective, struggle with insufficient representation learning and the
integration of clustering objectives. To address these issues, we propose a
fuzzy cluster-aware contrastive clustering framework (FCACC) that jointly
optimizes representation learning and clustering.
Our approach introduces a novel three-view data augmentation strategy to
enhance feature extraction by leveraging various characteristics of time series
data. Additionally, we propose a cluster-aware hard negative sample generation
mechanism that dynamically constructs high-quality negative samples using
clustering structure information, thereby improving the model's discriminative
ability.
By leveraging fuzzy clustering, FCACC dynamically generates cluster
structures to guide the contrastive learning process, resulting in more
accurate clustering. Extensive experiments on 40 benchmark datasets show that
FCACC outperforms the selected baseline methods (eight in total), providing an
effective solution for unsupervised time series learning.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 07:59:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Congyu",
""
],
[
"Du",
"Mingjing",
""
],
[
"Jiang",
"Xiang",
""
],
[
"Dong",
"Yongquan",
""
]
] | TITLE: Fuzzy Cluster-Aware Contrastive Clustering for Time Series
ABSTRACT: The rapid growth of unlabeled time series data, driven by the Internet of
Things (IoT), poses significant challenges in uncovering underlying patterns.
Traditional unsupervised clustering methods often fail to capture the complex
nature of time series data. Recent deep learning-based clustering approaches,
while effective, struggle with insufficient representation learning and the
integration of clustering objectives. To address these issues, we propose a
fuzzy cluster-aware contrastive clustering framework (FCACC) that jointly
optimizes representation learning and clustering.
Our approach introduces a novel three-view data augmentation strategy to
enhance feature extraction by leveraging various characteristics of time series
data. Additionally, we propose a cluster-aware hard negative sample generation
mechanism that dynamically constructs high-quality negative samples using
clustering structure information, thereby improving the model's discriminative
ability.
By leveraging fuzzy clustering, FCACC dynamically generates cluster
structures to guide the contrastive learning process, resulting in more
accurate clustering. Extensive experiments on 40 benchmark datasets show that
FCACC outperforms the selected baseline methods (eight in total), providing an
effective solution for unsupervised time series learning.
|
2503.22223 | Shuang Wang | Shuang Wang, Ming Guo, Xuben Wang, Fei Deng, Lifeng Mao, Bin Wang and
Wenlong Gao | DREMnet: An Interpretable Denoising Framework for Semi-Airborne
Transient Electromagnetic Signal | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The semi-airborne transient electromagnetic method (SATEM) is capable of
conducting rapid surveys over large-scale and hard-to-reach areas. However, the
acquired signals are often contaminated by complex noise, which can compromise
the accuracy of subsequent inversion interpretations. Traditional denoising
techniques primarily rely on parameter selection strategies, which are
insufficient for processing field data in noisy environments. With the advent
of deep learning, various neural networks have been employed for SATEM signal
denoising. However, existing deep learning methods typically use single-mapping
learning approaches that struggle to effectively separate signal from noise.
These methods capture only partial information and lack interpretability. To
overcome these limitations, we propose an interpretable decoupled
representation learning framework, termed DREMnet, that disentangles data into
content and context factors, enabling robust and interpretable denoising in
complex conditions. To address the limitations of CNN and Transformer
architectures, we utilize the RWKV architecture for data processing and
introduce the Contextual-WKV mechanism, which allows unidirectional WKV to
perform bidirectional signal modeling. Our proposed Covering Embedding
technique retains the strong local perception of convolutional networks through
stacked embedding. Experimental results on test datasets demonstrate that the
DREMnet method outperforms existing techniques, with processed field data that
more accurately reflects the theoretical signal, offering improved
identification of subsurface electrical structures.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 08:13:23 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Wang",
"Shuang",
""
],
[
"Guo",
"Ming",
""
],
[
"Wang",
"Xuben",
""
],
[
"Deng",
"Fei",
""
],
[
"Mao",
"Lifeng",
""
],
[
"Wang",
"Bin",
""
],
[
"Gao",
"Wenlong",
""
]
] | TITLE: DREMnet: An Interpretable Denoising Framework for Semi-Airborne
Transient Electromagnetic Signal
ABSTRACT: The semi-airborne transient electromagnetic method (SATEM) is capable of
conducting rapid surveys over large-scale and hard-to-reach areas. However, the
acquired signals are often contaminated by complex noise, which can compromise
the accuracy of subsequent inversion interpretations. Traditional denoising
techniques primarily rely on parameter selection strategies, which are
insufficient for processing field data in noisy environments. With the advent
of deep learning, various neural networks have been employed for SATEM signal
denoising. However, existing deep learning methods typically use single-mapping
learning approaches that struggle to effectively separate signal from noise.
These methods capture only partial information and lack interpretability. To
overcome these limitations, we propose an interpretable decoupled
representation learning framework, termed DREMnet, that disentangles data into
content and context factors, enabling robust and interpretable denoising in
complex conditions. To address the limitations of CNN and Transformer
architectures, we utilize the RWKV architecture for data processing and
introduce the Contextual-WKV mechanism, which allows unidirectional WKV to
perform bidirectional signal modeling. Our proposed Covering Embedding
technique retains the strong local perception of convolutional networks through
stacked embedding. Experimental results on test datasets demonstrate that the
DREMnet method outperforms existing techniques, with processed field data that
more accurately reflects the theoretical signal, offering improved
identification of subsurface electrical structures.
|
2503.22227 | Qirui Li | Qirui Li and Rui Zong | CAT: A GPU-Accelerated FHE Framework with Its Application to
High-Precision Private Dataset Query | null | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an open-source GPU-accelerated fully homomorphic encryption
(FHE) framework CAT, which surpasses existing solutions in functionality and
efficiency. \emph{CAT} features a three-layer architecture: a foundation of
core math, a bridge of pre-computed elements and combined operations, and an
API-accessible layer of FHE operators. It utilizes techniques such as parallel
executed operations, well-defined layout patterns of cipher data, kernel
fusion/segmentation, and dual GPU pools to enhance the overall execution
efficiency. In addition, a memory management mechanism ensures server-side
suitability and prevents data leakage.
Based on our framework, we implement three widely used FHE schemes: CKKS,
BFV, and BGV. The results show that our implementation on Nvidia 4090 can
achieve up to 2173$\times$ speedup over CPU implementation and 1.25$\times$
over state-of-the-art GPU acceleration work for specific operations. What's
more, we offer a scenario validation with CKKS-based Privacy Database Queries,
achieving a 33$\times$ speedup over its CPU counterpart. All query tasks can
handle datasets up to $10^3$ rows on a single GPU within 1 second, using 2-5 GB
storage.
Our implementation has undergone extensive stability testing and can be
easily deployed on commercial GPUs. We hope that our work will significantly
advance the integration of state-of-the-art FHE algorithms into diverse
real-world systems by providing a robust, industry-ready, and open-source tool.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 08:20:18 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Qirui",
""
],
[
"Zong",
"Rui",
""
]
] | TITLE: CAT: A GPU-Accelerated FHE Framework with Its Application to
High-Precision Private Dataset Query
ABSTRACT: We introduce an open-source GPU-accelerated fully homomorphic encryption
(FHE) framework CAT, which surpasses existing solutions in functionality and
efficiency. \emph{CAT} features a three-layer architecture: a foundation of
core math, a bridge of pre-computed elements and combined operations, and an
API-accessible layer of FHE operators. It utilizes techniques such as parallel
executed operations, well-defined layout patterns of cipher data, kernel
fusion/segmentation, and dual GPU pools to enhance the overall execution
efficiency. In addition, a memory management mechanism ensures server-side
suitability and prevents data leakage.
Based on our framework, we implement three widely used FHE schemes: CKKS,
BFV, and BGV. The results show that our implementation on Nvidia 4090 can
achieve up to 2173$\times$ speedup over CPU implementation and 1.25$\times$
over state-of-the-art GPU acceleration work for specific operations. What's
more, we offer a scenario validation with CKKS-based Privacy Database Queries,
achieving a 33$\times$ speedup over its CPU counterpart. All query tasks can
handle datasets up to $10^3$ rows on a single GPU within 1 second, using 2-5 GB
storage.
Our implementation has undergone extensive stability testing and can be
easily deployed on commercial GPUs. We hope that our work will significantly
advance the integration of state-of-the-art FHE algorithms into diverse
real-world systems by providing a robust, industry-ready, and open-source tool.
|
2503.22251 | Guneet Mutreja | Guneet Mutreja, Ksenia Bittner | Efficient Building Roof Type Classification: A Domain-Specific
Self-Supervised Approach | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate classification of building roof types from aerial imagery is crucial
for various remote sensing applications, including urban planning, disaster
management, and infrastructure monitoring. However, this task is often hindered
by the limited availability of labeled data for supervised learning approaches.
To address this challenge, this paper investigates the effectiveness of
self-supervised learning with EfficientNet architectures, known for their
computational efficiency, for building roof type classification. We propose a
novel framework that incorporates a Convolutional Block Attention Module (CBAM)
to enhance the feature extraction capabilities of EfficientNet. Furthermore, we
explore the benefits of pretraining on a domain-specific dataset, the Aerial
Image Dataset (AID), compared to ImageNet pretraining. Our experimental results
demonstrate the superiority of our approach. Employing Simple Framework for
Contrastive Learning of Visual Representations (SimCLR) with EfficientNet-B3
and CBAM achieves a 95.5% accuracy on our validation set, matching the
performance of state-of-the-art transformer-based models while utilizing
significantly fewer parameters. We also provide a comprehensive evaluation on
two challenging test sets, demonstrating the generalization capability of our
method. Notably, our findings highlight the effectiveness of domain-specific
pretraining, consistently leading to higher accuracy compared to models
pretrained on the generic ImageNet dataset. Our work establishes
EfficientNet-based self-supervised learning as a computationally efficient and highly
effective approach for building roof type classification, particularly
beneficial in scenarios with limited labeled data.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:04:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Mutreja",
"Guneet",
""
],
[
"Bittner",
"Ksenia",
""
]
] | TITLE: Efficient Building Roof Type Classification: A Domain-Specific
Self-Supervised Approach
ABSTRACT: Accurate classification of building roof types from aerial imagery is crucial
for various remote sensing applications, including urban planning, disaster
management, and infrastructure monitoring. However, this task is often hindered
by the limited availability of labeled data for supervised learning approaches.
To address this challenge, this paper investigates the effectiveness of
self-supervised learning with EfficientNet architectures, known for their
computational efficiency, for building roof type classification. We propose a
novel framework that incorporates a Convolutional Block Attention Module (CBAM)
to enhance the feature extraction capabilities of EfficientNet. Furthermore, we
explore the benefits of pretraining on a domain-specific dataset, the Aerial
Image Dataset (AID), compared to ImageNet pretraining. Our experimental results
demonstrate the superiority of our approach. Employing Simple Framework for
Contrastive Learning of Visual Representations (SimCLR) with EfficientNet-B3
and CBAM achieves a 95.5% accuracy on our validation set, matching the
performance of state-of-the-art transformer-based models while utilizing
significantly fewer parameters. We also provide a comprehensive evaluation on
two challenging test sets, demonstrating the generalization capability of our
method. Notably, our findings highlight the effectiveness of domain-specific
pretraining, consistently leading to higher accuracy compared to models
pretrained on the generic ImageNet dataset. Our work establishes
EfficientNet-based self-supervised learning as a computationally efficient and highly
effective approach for building roof type classification, particularly
beneficial in scenarios with limited labeled data.
|
2503.22257 | Munib Mesinovic | Munib Mesinovic, Soheila Molaei, Peter Watkinson, Tingting Zhu | DynaGraph: Interpretable Multi-Label Prediction from EHRs via Dynamic
Graph Learning and Contrastive Augmentation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Learning from longitudinal electronic health records is limited if it does
not capture the temporal trajectories of the patient's state in a clinical
setting. Graph models allow us to capture the hidden dependencies of the
multivariate time-series when the graphs are constructed in a similar dynamic
manner. Previous dynamic graph models require a pre-defined and/or static graph
structure, which is unknown in most cases, or they only capture the spatial
relations between the features. Furthermore, in healthcare, the interpretability
of the model is an essential requirement to build trust with clinicians. In
addition to previously proposed attention mechanisms, there has not been an
interpretable dynamic graph framework for data from multivariate electronic
health records (EHRs). Here, we propose DynaGraph, an end-to-end interpretable
contrastive graph model that learns the dynamics of multivariate time-series
EHRs as part of optimisation. We validate our model in four real-world clinical
datasets, ranging from primary care to secondary care settings with broad
demographics, in challenging settings where tasks are imbalanced and
multi-labelled. Compared to state-of-the-art models, DynaGraph achieves
significant improvements in balanced accuracy and sensitivity over the nearest
complex competitors in time-series or dynamic graph modelling across three ICU
and one primary care datasets. Through a pseudo-attention approach to graph
construction, our model also indicates the importance of clinical covariates
over time, providing means for clinical validation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:13:30 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Mesinovic",
"Munib",
""
],
[
"Molaei",
"Soheila",
""
],
[
"Watkinson",
"Peter",
""
],
[
"Zhu",
"Tingting",
""
]
] | TITLE: DynaGraph: Interpretable Multi-Label Prediction from EHRs via Dynamic
Graph Learning and Contrastive Augmentation
ABSTRACT: Learning from longitudinal electronic health records is limited if it does
not capture the temporal trajectories of the patient's state in a clinical
setting. Graph models allow us to capture the hidden dependencies of the
multivariate time-series when the graphs are constructed in a similar dynamic
manner. Previous dynamic graph models require a pre-defined and/or static graph
structure, which is unknown in most cases, or they only capture the spatial
relations between the features. Furthermore, in healthcare, the interpretability
of the model is an essential requirement to build trust with clinicians. In
addition to previously proposed attention mechanisms, there has not been an
interpretable dynamic graph framework for data from multivariate electronic
health records (EHRs). Here, we propose DynaGraph, an end-to-end interpretable
contrastive graph model that learns the dynamics of multivariate time-series
EHRs as part of optimisation. We validate our model in four real-world clinical
datasets, ranging from primary care to secondary care settings with broad
demographics, in challenging settings where tasks are imbalanced and
multi-labelled. Compared to state-of-the-art models, DynaGraph achieves
significant improvements in balanced accuracy and sensitivity over the nearest
complex competitors in time-series or dynamic graph modelling across three ICU
and one primary care datasets. Through a pseudo-attention approach to graph
construction, our model also indicates the importance of clinical covariates
over time, providing means for clinical validation.
|
2503.22262 | Songsong Yu | Songsong Yu, Yuxin Chen, Zhongang Qi, Zeke Xie, Yifan Wang, Lijun
Wang, Ying Shan, Huchuan Lu | Mono2Stereo: A Benchmark and Empirical Study for Stereo Conversion | Accepted by CVPR 2025 Project webpage:
https://mono2stereo-bench.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid proliferation of 3D devices and the shortage of 3D content,
stereo conversion is attracting increasing attention. Recent works introduce
pretrained Diffusion Models (DMs) into this task. However, due to the scarcity
of large-scale training data and comprehensive benchmarks, the optimal
methodologies for employing DMs in stereo conversion and the accurate
evaluation of stereo effects remain largely unexplored. In this work, we
introduce the Mono2Stereo dataset, providing high-quality training data and
benchmark to support in-depth exploration of stereo conversion. With this
dataset, we conduct an empirical study that yields two primary findings. 1) The
differences between the left and right views are subtle, yet existing metrics
consider overall pixels, failing to concentrate on regions critical to stereo
effects. 2) Mainstream methods adopt either one-stage left-to-right generation
or warp-and-inpaint pipeline, facing challenges of degraded stereo effect and
image distortion respectively. Based on these findings, we introduce a new
evaluation metric, Stereo Intersection-over-Union, which prioritizes disparity
and achieves a high correlation with human judgments on stereo effect.
Moreover, we propose a strong baseline model, harmonizing the stereo effect and
image quality simultaneously, and notably surpassing current mainstream
methods. Our code and data will be open-sourced to promote further research in
stereo conversion. Our models are available at mono2stereo-bench.github.io.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:25:58 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yu",
"Songsong",
""
],
[
"Chen",
"Yuxin",
""
],
[
"Qi",
"Zhongang",
""
],
[
"Xie",
"Zeke",
""
],
[
"Wang",
"Yifan",
""
],
[
"Wang",
"Lijun",
""
],
[
"Shan",
"Ying",
""
],
[
"Lu",
"Huchuan",
""
]
] | TITLE: Mono2Stereo: A Benchmark and Empirical Study for Stereo Conversion
ABSTRACT: With the rapid proliferation of 3D devices and the shortage of 3D content,
stereo conversion is attracting increasing attention. Recent works introduce
pretrained Diffusion Models (DMs) into this task. However, due to the scarcity
of large-scale training data and comprehensive benchmarks, the optimal
methodologies for employing DMs in stereo conversion and the accurate
evaluation of stereo effects remain largely unexplored. In this work, we
introduce the Mono2Stereo dataset, providing high-quality training data and
benchmark to support in-depth exploration of stereo conversion. With this
dataset, we conduct an empirical study that yields two primary findings. 1) The
differences between the left and right views are subtle, yet existing metrics
consider overall pixels, failing to concentrate on regions critical to stereo
effects. 2) Mainstream methods adopt either one-stage left-to-right generation
or warp-and-inpaint pipeline, facing challenges of degraded stereo effect and
image distortion respectively. Based on these findings, we introduce a new
evaluation metric, Stereo Intersection-over-Union, which prioritizes disparity
and achieves a high correlation with human judgments on stereo effect.
Moreover, we propose a strong baseline model, harmonizing the stereo effect and
image quality simultaneously, and notably surpassing current mainstream
methods. Our code and data will be open-sourced to promote further research in
stereo conversion. Our models are available at mono2stereo-bench.github.io.
|
2503.22263 | Xitong Gao | Dongping Liao, Xitong Gao, Yabo Xu, Chengzhong Xu | FLIP: Towards Comprehensive and Reliable Evaluation of Federated Prompt
Learning | https://github.com/0-ml/flip | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The increasing emphasis on privacy and data security has driven the adoption
of federated learning, a decentralized approach to train machine learning
models without sharing raw data. Prompt learning, which fine-tunes prompt
embeddings of pretrained models, offers significant advantages in federated
settings by reducing computational costs and communication overheads while
leveraging the strong performance and generalization capabilities of
vision-language models such as CLIP. This paper addresses the intersection of
federated learning and prompt learning, particularly for vision-language
models. In this work, we introduce a comprehensive framework, named FLIP, to
evaluate federated prompt learning algorithms. FLIP assesses the performance of
8 state-of-the-art federated prompt learning methods across 4 federated
learning protocols and 12 open datasets, considering 6 distinct evaluation
scenarios. Our findings demonstrate that prompt learning maintains strong
generalization performance in both in-distribution and out-of-distribution
settings with minimal resource consumption. This work highlights the
effectiveness of federated prompt learning in environments characterized by
data scarcity, unseen classes, and cross-domain distributional shifts. We
open-source the code for all implemented algorithms in FLIP to facilitate
further research in this domain.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:27:20 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Liao",
"Dongping",
""
],
[
"Gao",
"Xitong",
""
],
[
"Xu",
"Yabo",
""
],
[
"Xu",
"Chengzhong",
""
]
] | TITLE: FLIP: Towards Comprehensive and Reliable Evaluation of Federated Prompt
Learning
ABSTRACT: The increasing emphasis on privacy and data security has driven the adoption
of federated learning, a decentralized approach to train machine learning
models without sharing raw data. Prompt learning, which fine-tunes prompt
embeddings of pretrained models, offers significant advantages in federated
settings by reducing computational costs and communication overheads while
leveraging the strong performance and generalization capabilities of
vision-language models such as CLIP. This paper addresses the intersection of
federated learning and prompt learning, particularly for vision-language
models. In this work, we introduce a comprehensive framework, named FLIP, to
evaluate federated prompt learning algorithms. FLIP assesses the performance of
8 state-of-the-art federated prompt learning methods across 4 federated
learning protocols and 12 open datasets, considering 6 distinct evaluation
scenarios. Our findings demonstrate that prompt learning maintains strong
generalization performance in both in-distribution and out-of-distribution
settings with minimal resource consumption. This work highlights the
effectiveness of federated prompt learning in environments characterized by
data scarcity, unseen classes, and cross-domain distributional shifts. We
open-source the code for all implemented algorithms in FLIP to facilitate
further research in this domain.
|
2503.22268 | Nan Huang | Nan Huang, Wenzhao Zheng, Chenfeng Xu, Kurt Keutzer, Shanghang Zhang,
Angjoo Kanazawa, Qianqian Wang | Segment Any Motion in Videos | CVPR 2025. Website: https://motion-seg.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Moving object segmentation is a crucial task for achieving a high-level
understanding of visual scenes and has numerous downstream applications. Humans
can effortlessly segment moving objects in videos. Previous work has largely
relied on optical flow to provide motion cues; however, this approach often
results in imperfect predictions due to challenges such as partial motion,
complex deformations, motion blur and background distractions. We propose a
novel approach for moving object segmentation that combines long-range
trajectory motion cues with DINO-based semantic features and leverages SAM2 for
pixel-level mask densification through an iterative prompting strategy. Our
model employs Spatio-Temporal Trajectory Attention and Motion-Semantic
Decoupled Embedding to prioritize motion while integrating semantic support.
Extensive testing on diverse datasets demonstrates state-of-the-art
performance, excelling in challenging scenarios and fine-grained segmentation
of multiple objects. Our code is available at https://motion-seg.github.io/.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:34:11 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Huang",
"Nan",
""
],
[
"Zheng",
"Wenzhao",
""
],
[
"Xu",
"Chenfeng",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Zhang",
"Shanghang",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Wang",
"Qianqian",
""
]
] | TITLE: Segment Any Motion in Videos
ABSTRACT: Moving object segmentation is a crucial task for achieving a high-level
understanding of visual scenes and has numerous downstream applications. Humans
can effortlessly segment moving objects in videos. Previous work has largely
relied on optical flow to provide motion cues; however, this approach often
results in imperfect predictions due to challenges such as partial motion,
complex deformations, motion blur and background distractions. We propose a
novel approach for moving object segmentation that combines long-range
trajectory motion cues with DINO-based semantic features and leverages SAM2 for
pixel-level mask densification through an iterative prompting strategy. Our
model employs Spatio-Temporal Trajectory Attention and Motion-Semantic
Decoupled Embedding to prioritize motion while integrating semantic support.
Extensive testing on diverse datasets demonstrates state-of-the-art
performance, excelling in challenging scenarios and fine-grained segmentation
of multiple objects. Our code is available at https://motion-seg.github.io/.
|
2503.22275 | Shivam Mehta | Shivam Mehta, Nebojsa Jojic, Hannes Gamper | Make Some Noise: Towards LLM audio reasoning and generation using sound
tokens | 5 pages, 2 figures, Accepted at ICASSP 2025 | null | 10.1109/ICASSP49660.2025.10888809 | null | eess.AS cs.AI cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating audio comprehension and generation into large language models
(LLMs) remains challenging due to the continuous nature of audio and the
resulting high sampling rates. Here, we introduce a novel approach that
combines Variational Quantization with Conditional Flow Matching to convert
audio into ultra-low bitrate discrete tokens of 0.23 kbps, allowing for seamless
integration with text tokens in LLMs. We fine-tuned a pretrained text-based LLM
using Low-Rank Adaptation (LoRA) to assess its effectiveness in achieving true
multimodal capabilities, i.e., audio comprehension and generation. Our
tokenizer outperforms a traditional VQ-VAE across various datasets with diverse
acoustic events. Despite the substantial loss of fine-grained details through
audio tokenization, our multimodal LLM trained with discrete tokens achieves
competitive results in audio comprehension with state-of-the-art methods,
though audio generation is poor. Our results highlight the need for larger,
more diverse datasets and improved evaluation metrics to advance multimodal LLM
performance.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:43:47 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Mehta",
"Shivam",
""
],
[
"Jojic",
"Nebojsa",
""
],
[
"Gamper",
"Hannes",
""
]
] | TITLE: Make Some Noise: Towards LLM audio reasoning and generation using sound
tokens
ABSTRACT: Integrating audio comprehension and generation into large language models
(LLMs) remains challenging due to the continuous nature of audio and the
resulting high sampling rates. Here, we introduce a novel approach that
combines Variational Quantization with Conditional Flow Matching to convert
audio into ultra-low bitrate discrete tokens of 0.23 kbps, allowing for seamless
integration with text tokens in LLMs. We fine-tuned a pretrained text-based LLM
using Low-Rank Adaptation (LoRA) to assess its effectiveness in achieving true
multimodal capabilities, i.e., audio comprehension and generation. Our
tokenizer outperforms a traditional VQ-VAE across various datasets with diverse
acoustic events. Despite the substantial loss of fine-grained details through
audio tokenization, our multimodal LLM trained with discrete tokens achieves
competitive results in audio comprehension with state-of-the-art methods,
though audio generation is poor. Our results highlight the need for larger,
more diverse datasets and improved evaluation metrics to advance multimodal LLM
performance.
|
2503.22276 | Torsten Sch\"on | Calvin Kammerlander, Viola Kolb, Marinus Luegmair, Lou Scheermann,
Maximilian Schmailzl, Marco Seufert, Jiayun Zhang, Denis Dalic, Torsten
Sch\"on | Machine Learning Models for Soil Parameter Prediction Based on
Satellite, Weather, Clay and Yield Data | This technical report is the documentation of a student project
collaboration between Technische Hochschule Ingolstadt and MI4People | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Efficient nutrient management and precise fertilization are essential for
advancing modern agriculture, particularly in regions striving to optimize crop
yields sustainably. The AgroLens project endeavors to address this challenge by
developing Machine Learning (ML)-based methodologies to predict soil nutrient
levels without reliance on laboratory tests. By leveraging state-of-the-art
techniques, the project lays a foundation for actionable insights to improve
agricultural productivity in resource-constrained areas, such as Africa. The
approach begins with the development of a robust European model using the LUCAS
Soil dataset and Sentinel-2 satellite imagery to estimate key soil properties,
including phosphorus, potassium, nitrogen, and pH levels. This model is then
enhanced by integrating supplementary features, such as weather data, harvest
rates, and Clay AI-generated embeddings. This report details the methodological
framework, data preprocessing strategies, and ML pipelines employed in this
project. Advanced algorithms, including Random Forests, Extreme Gradient
Boosting (XGBoost), and Fully Connected Neural Networks (FCNN), were
implemented and finetuned for precise nutrient prediction. Results showcase
robust model performance, with root mean square error values meeting stringent
accuracy thresholds. By establishing a reproducible and scalable pipeline for
soil nutrient prediction, this research paves the way for transformative
agricultural applications, including precision fertilization and improved
resource allocation in underresourced regions like Africa.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:44:32 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Kammerlander",
"Calvin",
""
],
[
"Kolb",
"Viola",
""
],
[
"Luegmair",
"Marinus",
""
],
[
"Scheermann",
"Lou",
""
],
[
"Schmailzl",
"Maximilian",
""
],
[
"Seufert",
"Marco",
""
],
[
"Zhang",
"Jiayun",
""
],
[
"Dalic",
"Denis",
""
],
[
"Schön",
"Torsten",
""
]
] | TITLE: Machine Learning Models for Soil Parameter Prediction Based on
Satellite, Weather, Clay and Yield Data
ABSTRACT: Efficient nutrient management and precise fertilization are essential for
advancing modern agriculture, particularly in regions striving to optimize crop
yields sustainably. The AgroLens project endeavors to address this challenge by
developing Machine Learning (ML)-based methodologies to predict soil nutrient
levels without reliance on laboratory tests. By leveraging state-of-the-art
techniques, the project lays a foundation for actionable insights to improve
agricultural productivity in resource-constrained areas, such as Africa. The
approach begins with the development of a robust European model using the LUCAS
Soil dataset and Sentinel-2 satellite imagery to estimate key soil properties,
including phosphorus, potassium, nitrogen, and pH levels. This model is then
enhanced by integrating supplementary features, such as weather data, harvest
rates, and Clay AI-generated embeddings. This report details the methodological
framework, data preprocessing strategies, and ML pipelines employed in this
project. Advanced algorithms, including Random Forests, Extreme Gradient
Boosting (XGBoost), and Fully Connected Neural Networks (FCNN), were
implemented and fine-tuned for precise nutrient prediction. Results showcase
robust model performance, with root mean square error values meeting stringent
accuracy thresholds. By establishing a reproducible and scalable pipeline for
soil nutrient prediction, this research paves the way for transformative
agricultural applications, including precision fertilization and improved
resource allocation in under-resourced regions like Africa.
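
As a minimal sketch of the tabular regression pipeline described above, the
Python snippet below fits an XGBoost regressor for one soil property and scores
it with RMSE; the file name, column names and hyperparameters are illustrative
assumptions rather than the project's actual configuration.

# Minimal sketch: predict one soil property from tabular features.
# File name, column names and hyperparameters are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

df = pd.read_csv("soil_features.csv")   # assumed merged LUCAS/Sentinel-2/weather table
X = df.drop(columns=["phosphorus"])     # predictors: spectral bands, weather, embeddings
y = df["phosphorus"]                    # target nutrient level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE: {rmse:.3f}")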
|
2503.22280 | Rrubaa Panchendrarajan | Rrubaa Panchendrarajan, Rub\'en M\'iguez, Arkaitz Zubiaga | MultiClaimNet: A Massively Multilingual Dataset of Fact-Checked Claim
Clusters | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In the context of fact-checking, claims are often repeated across various
platforms and in different languages, which can benefit from a process that
reduces this redundancy. While retrieving previously fact-checked claims has
been investigated as a solution, the growing number of unverified claims and
expanding size of fact-checked databases call for alternative, more efficient
solutions. A promising solution is to group claims that discuss the same
underlying facts into clusters to improve claim retrieval and validation.
However, research on claim clustering is hindered by the lack of suitable
datasets. To bridge this gap, we introduce \textit{MultiClaimNet}, a collection
of three multilingual claim cluster datasets containing claims in 86 languages
across diverse topics. Claim clusters are formed automatically from
claim-matching pairs with limited manual intervention. We leverage two existing
claim-matching datasets to form the smaller datasets within
\textit{MultiClaimNet}. To build the larger dataset, we propose and validate an
approach involving retrieval of approximate nearest neighbors to form candidate
claim pairs and an automated annotation of claim similarity using large
language models. This larger dataset contains 85.3K fact-checked claims written
in 78 languages. We further conduct extensive experiments using various
clustering techniques and sentence embedding models to establish baseline
performance. Our datasets and findings provide a strong foundation for scalable
claim clustering, contributing to efficient fact-checking pipelines.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:49:45 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Panchendrarajan",
"Rrubaa",
""
],
[
"Míguez",
"Rubén",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
] | TITLE: MultiClaimNet: A Massively Multilingual Dataset of Fact-Checked Claim
Clusters
ABSTRACT: In the context of fact-checking, claims are often repeated across various
platforms and in different languages, which can benefit from a process that
reduces this redundancy. While retrieving previously fact-checked claims has
been investigated as a solution, the growing number of unverified claims and
expanding size of fact-checked databases call for alternative, more efficient
solutions. A promising solution is to group claims that discuss the same
underlying facts into clusters to improve claim retrieval and validation.
However, research on claim clustering is hindered by the lack of suitable
datasets. To bridge this gap, we introduce \textit{MultiClaimNet}, a collection
of three multilingual claim cluster datasets containing claims in 86 languages
across diverse topics. Claim clusters are formed automatically from
claim-matching pairs with limited manual intervention. We leverage two existing
claim-matching datasets to form the smaller datasets within
\textit{MultiClaimNet}. To build the larger dataset, we propose and validate an
approach involving retrieval of approximate nearest neighbors to form candidate
claim pairs and an automated annotation of claim similarity using large
language models. This larger dataset contains 85.3K fact-checked claims written
in 78 languages. We further conduct extensive experiments using various
clustering techniques and sentence embedding models to establish baseline
performance. Our datasets and findings provide a strong foundation for scalable
claim clustering, contributing to efficient fact-checking pipelines.
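
A minimal sketch of the candidate-pair step described above is given below:
claims are embedded with a multilingual sentence encoder and paired via an
approximate nearest-neighbour index. The encoder name, index parameters and
value of k are assumptions for illustration, and the subsequent LLM-based
annotation of pairs is omitted.

# Minimal sketch: form candidate claim pairs via approximate nearest neighbours.
# Encoder choice, HNSW parameters and k are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

claims = [
    "The Eiffel Tower was built in 1889.",
    "La tour Eiffel a été construite en 1889.",
    "Vaccines cause autism.",
    "Les vaccins provoquent l'autisme.",
    "The Great Wall is visible from space.",
    "La Grande Muraille est visible depuis l'espace.",
]
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = encoder.encode(claims, normalize_embeddings=True).astype("float32")

index = faiss.IndexHNSWFlat(emb.shape[1], 32)      # approximate NN index
index.add(emb)
dists, nbrs = index.search(emb, 3)                 # 3 nearest neighbours per claim

candidate_pairs = {(min(i, int(j)), max(i, int(j)))
                   for i, row in enumerate(nbrs) for j in row if i != int(j)}
print(sorted(candidate_pairs))
# These pairs would then be labelled (e.g. by an LLM) before forming clusters.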
|
2503.22281 | Xuan Loc Pham | Xuan Loc Pham, Mathias Prokop, Bram van Ginneken, Alessa Hering | Divide to Conquer: A Field Decomposition Approach for Multi-Organ
Whole-Body CT Image Registration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image registration is an essential technique for the analysis of Computed
Tomography (CT) images in clinical practice. However, existing methodologies
are predominantly tailored to a specific organ of interest and often exhibit
lower performance on other organs, thus limiting their generalizability and
applicability. Multi-organ registration addresses these limitations, but the
simultaneous alignment of multiple organs with diverse shapes, sizes and
locations requires a highly complex deformation field with a multi-layer
composition of individual deformations. This study introduces a novel field
decomposition approach to address the high complexity of deformations in
multi-organ whole-body CT image registration. The proposed method is trained
and evaluated on a longitudinal dataset of 691 patients, each with two CT
images obtained at distinct time points. These scans fully encompass the
thoracic, abdominal, and pelvic regions. Two baseline registration methods are
selected for this study: one based on optimization techniques and another based
on deep learning. Experimental results demonstrate that the proposed approach
outperforms baseline methods in handling complex deformations in multi-organ
whole-body CT image registration.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 09:51:13 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Pham",
"Xuan Loc",
""
],
[
"Prokop",
"Mathias",
""
],
[
"van Ginneken",
"Bram",
""
],
[
"Hering",
"Alessa",
""
]
] | TITLE: Divide to Conquer: A Field Decomposition Approach for Multi-Organ
Whole-Body CT Image Registration
ABSTRACT: Image registration is an essential technique for the analysis of Computed
Tomography (CT) images in clinical practice. However, existing methodologies
are predominantly tailored to a specific organ of interest and often exhibit
lower performance on other organs, thus limiting their generalizability and
applicability. Multi-organ registration addresses these limitations, but the
simultaneous alignment of multiple organs with diverse shapes, sizes and
locations requires a highly complex deformation field with a multi-layer
composition of individual deformations. This study introduces a novel field
decomposition approach to address the high complexity of deformations in
multi-organ whole-body CT image registration. The proposed method is trained
and evaluated on a longitudinal dataset of 691 patients, each with two CT
images obtained at distinct time points. These scans fully encompass the
thoracic, abdominal, and pelvic regions. Two baseline registration methods are
selected for this study: one based on optimization techniques and another based
on deep learning. Experimental results demonstrate that the proposed approach
outperforms baseline methods in handling complex deformations in multi-organ
whole-body CT image registration.
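
For context on the "multi-layer composition of individual deformations"
mentioned above, the sketch below shows the generic rule for composing two
dense displacement fields, u(x) = u1(x) + u2(x + u1(x)), in 2D with NumPy; the
actual method is 3D, learned, and organ-aware, so this only illustrates the
composition operation itself.

# Minimal 2D sketch of composing two dense displacement fields.
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u1, u2):
    # u1, u2: arrays of shape (2, H, W) with row/col displacements.
    # Returns u with u(x) = u1(x) + u2(x + u1(x)).
    H, W = u1.shape[1:]
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    warped = np.stack([rows + u1[0], cols + u1[1]])        # x + u1(x)
    u2_at_warped = np.stack([
        map_coordinates(u2[c], warped, order=1, mode="nearest") for c in range(2)
    ])
    return u1 + u2_at_warped

u1 = np.zeros((2, 64, 64)); u1[0] += 1.5    # field 1: shift down by 1.5 px
u2 = np.zeros((2, 64, 64)); u2[1] += 2.0    # field 2: shift right by 2 px
print(compose(u1, u2)[:, 32, 32])           # -> [1.5 2. ]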
|
2503.22309 | Matej Grcic | Zakaria Laskar, Tomas Vojir, Matej Grcic, Iaroslav Melekhov, Shankar
Gangisettye, Juho Kannala, Jiri Matas, Giorgos Tolias, C.V. Jawahar | A Dataset for Semantic Segmentation in the Presence of Unknowns | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Before deployment in the real world, deep neural networks require thorough
evaluation of how they handle both knowns, inputs represented in the training
data, and unknowns (anomalies). This is especially important for scene
understanding tasks with safety critical applications, such as in autonomous
driving. Existing datasets allow evaluation of only knowns or unknowns - but
not both, which is required to establish "in the wild" suitability of deep
neural network models. To bridge this gap, we propose a novel anomaly
segmentation dataset, ISSU, that features a diverse set of anomaly inputs from
cluttered real-world environments. The dataset is twice as large as existing
anomaly segmentation datasets, and provides a training, validation and test set
for controlled in-domain evaluation. The test set consists of a static and
temporal part, with the latter comprised of videos. The dataset provides
annotations for both closed-set (knowns) and anomalies, enabling closed-set and
open-set evaluation. The dataset covers diverse conditions, such as domain and
cross-sensor shift, illumination variation and allows ablation of anomaly
detection methods with respect to these variations. Evaluation results of
current state-of-the-art methods confirm the need for improvements especially
in domain-generalization, small and large object segmentation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 10:31:01 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Laskar",
"Zakaria",
""
],
[
"Vojir",
"Tomas",
""
],
[
"Grcic",
"Matej",
""
],
[
"Melekhov",
"Iaroslav",
""
],
[
"Gangisettye",
"Shankar",
""
],
[
"Kannala",
"Juho",
""
],
[
"Matas",
"Jiri",
""
],
[
"Tolias",
"Giorgos",
""
],
[
"Jawahar",
"C. V.",
""
]
] | TITLE: A Dataset for Semantic Segmentation in the Presence of Unknowns
ABSTRACT: Before deployment in the real world, deep neural networks require thorough
evaluation of how they handle both knowns, inputs represented in the training
data, and unknowns (anomalies). This is especially important for scene
understanding tasks with safety critical applications, such as in autonomous
driving. Existing datasets allow evaluation of only knowns or unknowns - but
not both, which is required to establish "in the wild" suitability of deep
neural network models. To bridge this gap, we propose a novel anomaly
segmentation dataset, ISSU, that features a diverse set of anomaly inputs from
cluttered real-world environments. The dataset is twice as large as existing
anomaly segmentation datasets, and provides a training, validation and test set
for controlled in-domain evaluation. The test set consists of a static and
temporal part, with the latter comprised of videos. The dataset provides
annotations for both closed-set (knowns) and anomalies, enabling closed-set and
open-set evaluation. The dataset covers diverse conditions, such as domain and
cross-sensor shift, illumination variation and allows ablation of anomaly
detection methods with respect to these variations. Evaluation results of
current state-of-the-art methods confirm the need for improvements especially
in domain-generalization, small and large object segmentation.
|
2503.22328 | Shiming Wang | Yancong Lin, Shiming Wang, Liangliang Nan, Julian Kooij and Holger
Caesar | VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow | CVPR 2025. Code is available at
https://github.com/tudelft-iv/VoteFlow. Yancong Lin and Shiming Wang have
equal contributions | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene flow estimation aims to recover per-point motion from two adjacent
LiDAR scans. However, in real-world applications such as autonomous driving,
points rarely move independently of others, especially for nearby points
belonging to the same object, which often share the same motion. Incorporating
this locally rigid motion constraint has been a key challenge in
self-supervised scene flow estimation, which is often addressed by
post-processing or appending extra regularization. While these approaches are
able to improve the rigidity of predicted flows, they lack an architectural
inductive bias for local rigidity within the model structure, leading to
suboptimal learning efficiency and inferior performance. In contrast, we
enforce local rigidity with a lightweight add-on module in neural network
design, enabling end-to-end learning. We design a discretized voting space that
accommodates all possible translations and then identify the one shared by
nearby points by differentiable voting. Additionally, to ensure computational
efficiency, we operate on pillars rather than points and learn representative
features for voting per pillar. We plug the Voting Module into popular model
designs and evaluate its benefit on Argoverse 2 and Waymo datasets. We
outperform baseline works with only marginal compute overhead. Code is
available at https://github.com/tudelft-iv/VoteFlow.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:06:27 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Lin",
"Yancong",
""
],
[
"Wang",
"Shiming",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Kooij",
"Julian",
""
],
[
"Caesar",
"Holger",
""
]
] | TITLE: VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow
ABSTRACT: Scene flow estimation aims to recover per-point motion from two adjacent
LiDAR scans. However, in real-world applications such as autonomous driving,
points rarely move independently of others, especially for nearby points
belonging to the same object, which often share the same motion. Incorporating
this locally rigid motion constraint has been a key challenge in
self-supervised scene flow estimation, which is often addressed by
post-processing or appending extra regularization. While these approaches are
able to improve the rigidity of predicted flows, they lack an architectural
inductive bias for local rigidity within the model structure, leading to
suboptimal learning efficiency and inferior performance. In contrast, we
enforce local rigidity with a lightweight add-on module in neural network
design, enabling end-to-end learning. We design a discretized voting space that
accommodates all possible translations and then identify the one shared by
nearby points by differentiable voting. Additionally, to ensure computational
efficiency, we operate on pillars rather than points and learn representative
features for voting per pillar. We plug the Voting Module into popular model
designs and evaluate its benefit on Argoverse 2 and Waymo datasets. We
outperform baseline works with only marginal compute overhead. Code is
available at https://github.com/tudelft-iv/VoteFlow.
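
A minimal sketch of voting over a discretized translation space is given below:
each candidate translation on a grid is scored by how well it aligns two point
sets, and the best-scoring cell wins. The grid resolution, the use of raw
nearest-point distances, and the hard argmin are illustrative assumptions; the
actual module votes with learned per-pillar features and is differentiable.

# Minimal sketch: score candidate translations on a discrete grid and vote.
import numpy as np

def vote_translation(src, dst, r=2.0, step=0.25):
    # src, dst: (N, 2) and (M, 2) point sets. Returns the grid translation
    # that minimises the mean nearest-neighbour distance after shifting src.
    offsets = np.arange(-r, r + step, step)
    grid = np.stack(np.meshgrid(offsets, offsets, indexing="ij"), -1).reshape(-1, 2)
    costs = []
    for t in grid:                            # one "vote" per candidate translation
        d = np.linalg.norm((src + t)[:, None, :] - dst[None, :, :], axis=-1)
        costs.append(d.min(axis=1).mean())
    return grid[int(np.argmin(costs))]

rng = np.random.default_rng(0)
src = rng.uniform(-5, 5, size=(200, 2))
dst = src + np.array([1.0, -0.5])             # ground-truth rigid shift
print(vote_translation(src, dst))             # -> [ 1.  -0.5]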
|
2503.22338 | Stamos Katsigiannis | Shrikant Malviya, Pablo Arnau-Gonz\'alez, Miguel Arevalillo-Herr\'aez,
Stamos Katsigiannis | SKDU at De-Factify 4.0: Natural Language Features for AI-Generated
Text-Detection | De-Factify 4.0 Workshop at the 39th AAAI Conference on Artificial
Intelligence (AAAI 2025) | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The rapid advancement of large language models (LLMs) has introduced new
challenges in distinguishing human-written text from AI-generated content. In
this work, we explored a pipelined approach for AI-generated text detection
that includes a feature extraction step (i.e. prompt-based rewriting features
inspired by RAIDAR and content-based features derived from the NELA toolkit)
followed by a classification module. Comprehensive experiments were conducted
on the Defactify4.0 dataset, evaluating two tasks: binary classification to
differentiate human-written and AI-generated text, and multi-class
classification to identify the specific generative model used to generate the
input text. Our findings reveal that NELA features significantly outperform
RAIDAR features in both tasks, demonstrating their ability to capture nuanced
linguistic, stylistic, and content-based differences. Combining RAIDAR and NELA
features provided minimal improvement, highlighting the redundancy introduced
by less discriminative features. Among the classifiers tested, XGBoost emerged
as the most effective, leveraging the rich feature sets to achieve high
accuracy and generalisation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:25:05 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Malviya",
"Shrikant",
""
],
[
"Arnau-González",
"Pablo",
""
],
[
"Arevalillo-Herráez",
"Miguel",
""
],
[
"Katsigiannis",
"Stamos",
""
]
] | TITLE: SKDU at De-Factify 4.0: Natural Language Features for AI-Generated
Text-Detection
ABSTRACT: The rapid advancement of large language models (LLMs) has introduced new
challenges in distinguishing human-written text from AI-generated content. In
this work, we explored a pipelined approach for AI-generated text detection
that includes a feature extraction step (i.e. prompt-based rewriting features
inspired by RAIDAR and content-based features derived from the NELA toolkit)
followed by a classification module. Comprehensive experiments were conducted
on the Defactify4.0 dataset, evaluating two tasks: binary classification to
differentiate human-written and AI-generated text, and multi-class
classification to identify the specific generative model used to generate the
input text. Our findings reveal that NELA features significantly outperform
RAIDAR features in both tasks, demonstrating their ability to capture nuanced
linguistic, stylistic, and content-based differences. Combining RAIDAR and NELA
features provided minimal improvement, highlighting the redundancy introduced
by less discriminative features. Among the classifiers tested, XGBoost emerged
as the most effective, leveraging the rich feature sets to achieve high
accuracy and generalisation.
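
A minimal sketch of the pipelined setup described above follows; crude
hand-crafted stylistic features stand in for the NELA toolkit output, the
RAIDAR-style rewriting features are omitted, and the texts, labels and
classifier settings are illustrative assumptions only.

# Minimal sketch: stylistic stand-in features fed to an XGBoost classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

def style_features(text):
    words = text.split()
    return [
        len(words),                                            # length
        np.mean([len(w) for w in words]) if words else 0.0,    # mean word length
        sum(c in ".,;:!?" for c in text) / max(len(text), 1),  # punctuation rate
        len(set(words)) / max(len(words), 1),                  # type-token ratio
    ]

texts = ["some human-written text ...", "some AI-generated text ..."] * 50
labels = [0, 1] * 50                                           # 0 = human, 1 = AI
X = np.array([style_features(t) for t in texts])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))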
|
2503.22349 | Li-Heng Chen | Li-Heng Chen, Zi-Xin Zou, Chang Liu, Tianjiao Jing, Yan-Pei Cao,
Shi-Sheng Huang, Hongbo Fu and Hua Huang | GCRayDiffusion: Pose-Free Surface Reconstruction via Geometric
Consistent Ray Diffusion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate surface reconstruction from unposed images is crucial for efficient
3D object or scene creation. However, it remains challenging, particularly for
the joint camera pose estimation. Previous approaches have achieved impressive
pose-free surface reconstruction results in dense-view settings, but could
easily fail for sparse-view scenarios without sufficient visual overlap. In
this paper, we propose a new technique for pose-free surface reconstruction,
which follows triplane-based signed distance field (SDF) learning but
regularizes the learning by explicit points sampled from ray-based diffusion of
camera pose estimation. Our key contribution is a novel Geometric Consistent
Ray Diffusion model (GCRayDiffusion), where we represent camera poses as neural
bundle rays and regress the distribution of noisy rays via a diffusion model.
More importantly, we further condition the denoising process of GCRayDiffusion
using the triplane-based SDF of the entire scene, which provides effective 3D
consistent regularization to achieve multi-view consistent camera pose
estimation. Finally, we incorporate GCRayDiffusion into the triplane-based SDF
learning by introducing on-surface geometric regularization from the sampling
points of the neural bundle rays, which leads to highly accurate pose-free
surface reconstruction results even for sparse-view inputs. Extensive
evaluations on public datasets show that our GCRayDiffusion achieves more
accurate camera pose estimation than previous approaches, with geometrically
more consistent surface reconstruction results, especially given sparse-view
inputs.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:45:09 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Chen",
"Li-Heng",
""
],
[
"Zou",
"Zi-Xin",
""
],
[
"Liu",
"Chang",
""
],
[
"Jing",
"Tianjiao",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Huang",
"Shi-Sheng",
""
],
[
"Fu",
"Hongbo",
""
],
[
"Huang",
"Hua",
""
]
] | TITLE: GCRayDiffusion: Pose-Free Surface Reconstruction via Geometric
Consistent Ray Diffusion
ABSTRACT: Accurate surface reconstruction from unposed images is crucial for efficient
3D object or scene creation. However, it remains challenging, particularly for
the joint camera pose estimation. Previous approaches have achieved impressive
pose-free surface reconstruction results in dense-view settings, but could
easily fail for sparse-view scenarios without sufficient visual overlap. In
this paper, we propose a new technique for pose-free surface reconstruction,
which follows triplane-based signed distance field (SDF) learning but
regularizes the learning by explicit points sampled from ray-based diffusion of
camera pose estimation. Our key contribution is a novel Geometric Consistent
Ray Diffusion model (GCRayDiffusion), where we represent camera poses as neural
bundle rays and regress the distribution of noisy rays via a diffusion model.
More importantly, we further condition the denoising process of GCRayDiffusion
using the triplane-based SDF of the entire scene, which provides effective 3D
consistent regularization to achieve multi-view consistent camera pose
estimation. Finally, we incorporate GCRayDiffusion into the triplane-based SDF
learning by introducing on-surface geometric regularization from the sampling
points of the neural bundle rays, which leads to highly accurate pose-free
surface reconstruction results even for sparse-view inputs. Extensive
evaluations on public datasets show that our GCRayDiffusion achieves more
accurate camera pose estimation than previous approaches, with geometrically
more consistent surface reconstruction results, especially given sparse-view
inputs.
|
2503.22351 | Byeongjun Kwon | Byeongjun Kwon, Munchurl Kim | One Look is Enough: A Novel Seamless Patchwise Refinement for Zero-Shot
Monocular Depth Estimation Models on High-Resolution Images | Please visit our project page this
https://kaist-viclab.github.io/One-Look-is-Enough_site | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Zero-shot depth estimation (DE) models exhibit strong generalization
performance as they are trained on large-scale datasets. However, existing
models struggle with high-resolution images due to the discrepancy in image
resolutions of training (with smaller resolutions) and inference (for high
resolutions). Processing them at full resolution leads to decreased estimation
accuracy on depth with tremendous memory consumption, while downsampling to the
training resolution results in blurred edges in the estimated depth images.
Prevailing high-resolution depth estimation methods adopt a patch-based
approach, which introduces depth discontinuity issues when reassembling the
estimated depth patches and results in test-time inefficiency. Additionally, to
obtain fine-grained depth details, these methods rely on synthetic datasets due
to the real-world sparse ground truth depth, leading to poor generalizability.
To tackle these limitations, we propose Patch Refine Once (PRO), an efficient
and generalizable tile-based framework. Our PRO consists of two key components:
(i) Grouped Patch Consistency Training that enhances test-time efficiency while
mitigating the depth discontinuity problem by jointly processing four
overlapping patches and enforcing a consistency loss on their overlapping
regions within a single backpropagation step, and (ii) Bias Free Masking that
prevents the DE models from overfitting to dataset-specific biases, enabling
better generalization to real-world datasets even after training on synthetic
data. Zero-shot evaluation on Booster, ETH3D, Middlebury 2014, and NuScenes
demonstrates that our PRO can be well harmonized with existing DE models,
keeping their DE capabilities effective for the grid input of high-resolution
images with few depth discontinuities at the grid boundaries. Our PRO runs
fast at
inference time.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:46:50 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Kwon",
"Byeongjun",
""
],
[
"Kim",
"Munchurl",
""
]
] | TITLE: One Look is Enough: A Novel Seamless Patchwise Refinement for Zero-Shot
Monocular Depth Estimation Models on High-Resolution Images
ABSTRACT: Zero-shot depth estimation (DE) models exhibit strong generalization
performance as they are trained on large-scale datasets. However, existing
models struggle with high-resolution images due to the discrepancy in image
resolutions of training (with smaller resolutions) and inference (for high
resolutions). Processing them at full resolution leads to decreased estimation
accuracy on depth with tremendous memory consumption, while downsampling to the
training resolution results in blurred edges in the estimated depth images.
Prevailing high-resolution depth estimation methods adopt a patch-based
approach, which introduces depth discontinuity issues when reassembling the
estimated depth patches and results in test-time inefficiency. Additionally, to
obtain fine-grained depth details, these methods rely on synthetic datasets due
to the real-world sparse ground truth depth, leading to poor generalizability.
To tackle these limitations, we propose Patch Refine Once (PRO), an efficient
and generalizable tile-based framework. Our PRO consists of two key components:
(i) Grouped Patch Consistency Training that enhances test-time efficiency while
mitigating the depth discontinuity problem by jointly processing four
overlapping patches and enforcing a consistency loss on their overlapping
regions within a single backpropagation step, and (ii) Bias Free Masking that
prevents the DE models from overfitting to dataset-specific biases, enabling
better generalization to real-world datasets even after training on synthetic
data. Zero-shot evaluation on Booster, ETH3D, Middlebury 2014, and NuScenes
demonstrates that our PRO can be well harmonized with existing DE models,
keeping their DE capabilities effective for the grid input of high-resolution
images with few depth discontinuities at the grid boundaries. Our PRO runs
fast at
inference time.
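
A minimal sketch of a consistency loss on the overlapping region of two
adjacent depth patches is shown below; the patch layout, overlap width and L1
penalty are illustrative assumptions, whereas the actual Grouped Patch
Consistency Training processes four overlapping patches jointly within a
single backpropagation step.

# Minimal sketch: L1 consistency between the overlapping strips of two
# horizontally adjacent depth-patch predictions.
import torch

def overlap_consistency(depth_left, depth_right, overlap):
    # depth_left, depth_right: (B, 1, H, W) predictions of adjacent patches
    # that share a vertical strip of `overlap` pixels.
    strip_l = depth_left[..., -overlap:]      # right edge of the left patch
    strip_r = depth_right[..., :overlap]      # left edge of the right patch
    return (strip_l - strip_r).abs().mean()

left = torch.rand(2, 1, 256, 256, requires_grad=True)
right = torch.rand(2, 1, 256, 256, requires_grad=True)
loss = overlap_consistency(left, right, overlap=64)
loss.backward()                               # gradients flow into both patches
print(float(loss))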
|
2503.22353 | Yubo Li | Yubo Li, Yidi Miao, Xueying Ding, Ramayya Krishnan, Rema Padman | Firm or Fickle? Evaluating Large Language Models Consistency in
Sequential Interactions | 8 pages, 5 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have shown remarkable capabilities across
various tasks, but their deployment in high-stakes domains requires consistent
performance across multiple interaction rounds. This paper introduces a
comprehensive framework for evaluating and improving LLM response consistency,
making three key contributions. First, we propose a novel Position-Weighted
Consistency (PWC) score that captures both the importance of early-stage
stability and recovery patterns in multi-turn interactions. Second, we present
a carefully curated benchmark dataset spanning diverse domains and difficulty
levels, specifically designed to evaluate LLM consistency under various
challenging follow-up scenarios. Third, we introduce Confidence-Aware Response
Generation (CARG), a framework that significantly improves response stability
by incorporating model confidence signals into the generation process.
Empirical results demonstrate that CARG significantly improves response
stability without sacrificing accuracy, underscoring its potential for reliable
LLM deployment in critical applications.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:49:56 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Li",
"Yubo",
""
],
[
"Miao",
"Yidi",
""
],
[
"Ding",
"Xueying",
""
],
[
"Krishnan",
"Ramayya",
""
],
[
"Padman",
"Rema",
""
]
] | TITLE: Firm or Fickle? Evaluating Large Language Models Consistency in
Sequential Interactions
ABSTRACT: Large Language Models (LLMs) have shown remarkable capabilities across
various tasks, but their deployment in high-stakes domains requires consistent
performance across multiple interaction rounds. This paper introduces a
comprehensive framework for evaluating and improving LLM response consistency,
making three key contributions. First, we propose a novel Position-Weighted
Consistency (PWC) score that captures both the importance of early-stage
stability and recovery patterns in multi-turn interactions. Second, we present
a carefully curated benchmark dataset spanning diverse domains and difficulty
levels, specifically designed to evaluate LLM consistency under various
challenging follow-up scenarios. Third, we introduce Confidence-Aware Response
Generation (CARG), a framework that significantly improves response stability
by incorporating model confidence signals into the generation process.
Empirical results demonstrate that CARG significantly improves response
stability without sacrificing accuracy, underscoring its potential for reliable
LLM deployment in critical applications.
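
The abstract does not give the exact definition of the Position-Weighted
Consistency (PWC) score; the sketch below is one plausible reading in which
earlier rounds are weighted more heavily, and should not be taken as the
paper's formula.

# Illustrative only: a position-weighted average of per-round consistency
# flags, with earlier rounds weighted more heavily. Not the paper's definition.
def pwc_score(consistent, decay=0.8):
    # consistent: list of 0/1 (or [0, 1]) flags, one per interaction round.
    weights = [decay ** t for t in range(len(consistent))]
    return sum(w * c for w, c in zip(weights, consistent)) / sum(weights)

print(pwc_score([1, 0, 1, 1]), pwc_score([1, 1, 0, 1]))
# 0.73 < 0.78: the same single inconsistency costs more when it occurs earlier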
|
2503.22357 | Hadrien Reynaud | Hadrien Reynaud, Alberto Gomez, Paul Leeson, Qingjie Meng, Bernhard
Kainz | EchoFlow: A Foundation Model for Cardiac Ultrasound Image and Video
Generation | This work has been submitted to the IEEE for possible publication | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in deep learning have significantly enhanced medical image analysis,
yet the availability of large-scale medical datasets remains constrained by
patient privacy concerns. We present EchoFlow, a novel framework designed to
generate high-quality, privacy-preserving synthetic echocardiogram images and
videos. EchoFlow comprises four key components: an adversarial variational
autoencoder for defining an efficient latent representation of cardiac
ultrasound images, a latent image flow matching model for generating accurate
latent echocardiogram images, a latent re-identification model to ensure
privacy by filtering images anatomically, and a latent video flow matching
model for animating latent images into realistic echocardiogram videos
conditioned on ejection fraction. We rigorously evaluate our synthetic datasets
on the clinically relevant task of ejection fraction regression and
demonstrate, for the first time, that downstream models trained exclusively on
EchoFlow-generated synthetic datasets achieve performance parity with models
trained on real datasets. We release our models and synthetic datasets,
enabling broader, privacy-compliant research in medical ultrasound imaging at
https://huggingface.co/spaces/HReynaud/EchoFlow.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:51:59 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Reynaud",
"Hadrien",
""
],
[
"Gomez",
"Alberto",
""
],
[
"Leeson",
"Paul",
""
],
[
"Meng",
"Qingjie",
""
],
[
"Kainz",
"Bernhard",
""
]
] | TITLE: EchoFlow: A Foundation Model for Cardiac Ultrasound Image and Video
Generation
ABSTRACT: Advances in deep learning have significantly enhanced medical image analysis,
yet the availability of large-scale medical datasets remains constrained by
patient privacy concerns. We present EchoFlow, a novel framework designed to
generate high-quality, privacy-preserving synthetic echocardiogram images and
videos. EchoFlow comprises four key components: an adversarial variational
autoencoder for defining an efficient latent representation of cardiac
ultrasound images, a latent image flow matching model for generating accurate
latent echocardiogram images, a latent re-identification model to ensure
privacy by filtering images anatomically, and a latent video flow matching
model for animating latent images into realistic echocardiogram videos
conditioned on ejection fraction. We rigorously evaluate our synthetic datasets
on the clinically relevant task of ejection fraction regression and
demonstrate, for the first time, that downstream models trained exclusively on
EchoFlow-generated synthetic datasets achieve performance parity with models
trained on real datasets. We release our models and synthetic datasets,
enabling broader, privacy-compliant research in medical ultrasound imaging at
https://huggingface.co/spaces/HReynaud/EchoFlow.
|
2503.22359 | Jiahao Xia | Jiahao Xia, Min Xu, Wenjian Huang, Jianguo Zhang, Haimin Zhang,
Chunxia Xiao | Mitigating Knowledge Discrepancies among Multiple Datasets for
Task-agnostic Unified Face Alignment | 24 Pages, 9 Figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the similar structures of human faces, existing face alignment
methods cannot learn unified knowledge from multiple datasets with different
landmark annotations. The limited training samples in a single dataset commonly
result in fragile robustness in this field. To mitigate knowledge discrepancies
among different datasets and train a task-agnostic unified face alignment
(TUFA) framework, this paper presents a strategy to unify knowledge from
multiple datasets. Specifically, we calculate a mean face shape for each
dataset. To explicitly align these mean shapes on an interpretable plane based
on their semantics, each shape is then incorporated with a group of semantic
alignment embeddings. The 2D coordinates of these aligned shapes can be viewed
as the anchors of the plane. By encoding them into structure prompts and
further regressing the corresponding facial landmarks using image features, a
mapping from the plane to the target faces is finally established, which
unifies the learning target of different datasets. Consequently, multiple
datasets can be utilized to boost the generalization ability of the model. The
successful mitigation of discrepancies also enhances the efficiency of
knowledge transfer to a novel dataset and significantly boosts the performance
of few-shot face alignment. Additionally, the interpretable plane endows TUFA
with a task-agnostic characteristic, enabling it to locate landmarks unseen
during training in a zero-shot manner. Extensive experiments are carried out on
seven benchmarks and the results demonstrate an impressive improvement in face
alignment brought by knowledge discrepancies mitigation.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 11:59:27 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Xia",
"Jiahao",
""
],
[
"Xu",
"Min",
""
],
[
"Huang",
"Wenjian",
""
],
[
"Zhang",
"Jianguo",
""
],
[
"Zhang",
"Haimin",
""
],
[
"Xiao",
"Chunxia",
""
]
] | TITLE: Mitigating Knowledge Discrepancies among Multiple Datasets for
Task-agnostic Unified Face Alignment
ABSTRACT: Despite the similar structures of human faces, existing face alignment
methods cannot learn unified knowledge from multiple datasets with different
landmark annotations. The limited training samples in a single dataset commonly
result in fragile robustness in this field. To mitigate knowledge discrepancies
among different datasets and train a task-agnostic unified face alignment
(TUFA) framework, this paper presents a strategy to unify knowledge from
multiple datasets. Specifically, we calculate a mean face shape for each
dataset. To explicitly align these mean shapes on an interpretable plane based
on their semantics, each shape is then incorporated with a group of semantic
alignment embeddings. The 2D coordinates of these aligned shapes can be viewed
as the anchors of the plane. By encoding them into structure prompts and
further regressing the corresponding facial landmarks using image features, a
mapping from the plane to the target faces is finally established, which
unifies the learning target of different datasets. Consequently, multiple
datasets can be utilized to boost the generalization ability of the model. The
successful mitigation of discrepancies also enhances the efficiency of
knowledge transfer to a novel dataset and significantly boosts the performance
of few-shot face alignment. Additionally, the interpretable plane endows TUFA
with a task-agnostic characteristic, enabling it to locate landmarks unseen
during training in a zero-shot manner. Extensive experiments are carried out on
seven benchmarks and the results demonstrate an impressive improvement in face
alignment brought by knowledge discrepancies mitigation.
|
2503.22362 | Yuan He | Yuan He, Bailan He, Zifeng Ding, Alisia Lupidi, Yuqicheng Zhu, Shuo
Chen, Caiqi Zhang, Jiaoyan Chen, Yunpu Ma, Volker Tresp, Ian Horrocks | Supposedly Equivalent Facts That Aren't? Entity Frequency in
Pre-training Induces Asymmetry in LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding and mitigating hallucinations in Large Language Models (LLMs)
is crucial for ensuring reliable content generation. While previous research
has primarily focused on "when" LLMs hallucinate, our work explains "why" and
directly links model behaviour to the pre-training data that forms their prior
knowledge. Specifically, we demonstrate that an asymmetry exists in the
recognition of logically equivalent facts, which can be attributed to frequency
discrepancies of entities appearing as subjects versus objects. Given that most
pre-training datasets are inaccessible, we leverage the fully open-source OLMo
series by indexing its Dolma dataset to estimate entity frequencies. Using
relational facts (represented as triples) from Wikidata5M, we construct probing
datasets to isolate this effect. Our experiments reveal that facts with a
high-frequency subject and a low-frequency object are better recognised than
their inverse, despite their logical equivalence. The pattern reverses in
low-to-high frequency settings, and no statistically significant asymmetry
emerges when both entities are high-frequency. These findings highlight the
influential role of pre-training data in shaping model predictions and provide
insights for inferring the characteristics of pre-training data in closed or
partially closed LLMs.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:12:38 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"He",
"Yuan",
""
],
[
"He",
"Bailan",
""
],
[
"Ding",
"Zifeng",
""
],
[
"Lupidi",
"Alisia",
""
],
[
"Zhu",
"Yuqicheng",
""
],
[
"Chen",
"Shuo",
""
],
[
"Zhang",
"Caiqi",
""
],
[
"Chen",
"Jiaoyan",
""
],
[
"Ma",
"Yunpu",
""
],
[
"Tresp",
"Volker",
""
],
[
"Horrocks",
"Ian",
""
]
] | TITLE: Supposedly Equivalent Facts That Aren't? Entity Frequency in
Pre-training Induces Asymmetry in LLMs
ABSTRACT: Understanding and mitigating hallucinations in Large Language Models (LLMs)
is crucial for ensuring reliable content generation. While previous research
has primarily focused on "when" LLMs hallucinate, our work explains "why" and
directly links model behaviour to the pre-training data that forms their prior
knowledge. Specifically, we demonstrate that an asymmetry exists in the
recognition of logically equivalent facts, which can be attributed to frequency
discrepancies of entities appearing as subjects versus objects. Given that most
pre-training datasets are inaccessible, we leverage the fully open-source OLMo
series by indexing its Dolma dataset to estimate entity frequencies. Using
relational facts (represented as triples) from Wikidata5M, we construct probing
datasets to isolate this effect. Our experiments reveal that facts with a
high-frequency subject and a low-frequency object are better recognised than
their inverse, despite their logical equivalence. The pattern reverses in
low-to-high frequency settings, and no statistically significant asymmetry
emerges when both entities are high-frequency. These findings highlight the
influential role of pre-training data in shaping model predictions and provide
insights for inferring the characteristics of pre-training data in closed or
partially closed LLMs.
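
A minimal sketch of the probing setup described above is given below: entity
frequencies are counted from a corpus index and each (subject, relation,
object) triple is bucketed by whether its subject and object are high- or
low-frequency. The toy corpus, the counting backend and the threshold are
assumptions; the paper indexes the Dolma corpus for this purpose.

# Minimal sketch: bucket triples by subject/object entity frequency.
# `corpus_count` is a toy stand-in for a real index over the pre-training corpus.
from collections import Counter

corpus_tokens = ["Paris", "France", "Paris", "Paris", "Einstein"]   # toy corpus
corpus_count = Counter(corpus_tokens)

def bucket(triple, threshold=2):
    s, _, o = triple
    s_high = corpus_count[s] >= threshold
    o_high = corpus_count[o] >= threshold
    return {(True, False): "high-to-low", (False, True): "low-to-high",
            (True, True): "high-high", (False, False): "low-low"}[(s_high, o_high)]

triples = [("Paris", "capital_of", "France"), ("Einstein", "born_in", "Ulm")]
for t in triples:
    print(t, "->", bucket(t))
# The asymmetry test then compares recognition of a fact and its logical
# inverse across the high-to-low and low-to-high buckets.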
|
2503.22363 | Nandakishor Mukkunnoth | Nandakishor M, Vrinda Govind V, Anuradha Puthalath, Anzy L, Swathi P
S, Aswathi R, Devaprabha A R, Varsha Raj, Midhuna Krishnan K, Akhila
Anilkumar T V, Yamuna P V | ForcePose: A Deep Learning Approach for Force Calculation Based on
Action Recognition Using MediaPipe Pose Estimation Combined with Object
Detection | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Force estimation in human-object interactions is crucial for various fields
like ergonomics, physical therapy, and sports science. Traditional methods
depend on specialized equipment such as force plates and sensors, which makes
accurate assessments both expensive and restricted to laboratory settings. In
this paper, we introduce ForcePose, a novel deep learning framework that
estimates applied forces by combining human pose estimation with object
detection. Our approach leverages MediaPipe for skeletal tracking and SSD
MobileNet for object recognition to create a unified representation of
human-object interaction. We've developed a specialized neural network that
processes both spatial and temporal features to predict force magnitude and
direction without needing any physical sensors. After training on our dataset
of 850 annotated videos with corresponding force measurements, our model
achieves a mean absolute error of 5.83 N in force magnitude and 7.4 degrees in
force direction. When compared to existing computer vision approaches, our
method performs 27.5% better while still offering real-time performance on
standard computing hardware. ForcePose opens up new possibilities for force
analysis in diverse real-world scenarios where traditional measurement tools
are impractical or intrusive. This paper discusses our methodology, the dataset
creation process, evaluation metrics, and potential applications across
rehabilitation, ergonomics assessment, and athletic performance analysis.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:13:56 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"M",
"Nandakishor",
""
],
[
"Govind",
"Vrinda",
"V"
],
[
"Puthalath",
"Anuradha",
""
],
[
"L",
"Anzy",
""
],
[
"S",
"Swathi P",
""
],
[
"R",
"Aswathi",
""
],
[
"R",
"Devaprabha A",
""
],
[
"Raj",
"Varsha",
""
],
[
"K",
"Midhuna Krishnan",
""
],
[
"T",
"Akhila Anilkumar",
"V"
],
[
"P",
"Yamuna",
"V"
]
] | TITLE: ForcePose: A Deep Learning Approach for Force Calculation Based on
Action Recognition Using MediaPipe Pose Estimation Combined with Object
Detection
ABSTRACT: Force estimation in human-object interactions is crucial for various fields
like ergonomics, physical therapy, and sports science. Traditional methods
depend on specialized equipment such as force plates and sensors, which makes
accurate assessments both expensive and restricted to laboratory settings. In
this paper, we introduce ForcePose, a novel deep learning framework that
estimates applied forces by combining human pose estimation with object
detection. Our approach leverages MediaPipe for skeletal tracking and SSD
MobileNet for object recognition to create a unified representation of
human-object interaction. We've developed a specialized neural network that
processes both spatial and temporal features to predict force magnitude and
direction without needing any physical sensors. After training on our dataset
of 850 annotated videos with corresponding force measurements, our model
achieves a mean absolute error of 5.83 N in force magnitude and 7.4 degrees in
force direction. When compared to existing computer vision approaches, our
method performs 27.5% better while still offering real-time performance on
standard computing hardware. ForcePose opens up new possibilities for force
analysis in diverse real-world scenarios where traditional measurement tools
are impractical or intrusive. This paper discusses our methodology, the dataset
creation process, evaluation metrics, and potential applications across
rehabilitation, ergonomics assessment, and athletic performance analysis.
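
A minimal sketch of the kind of per-frame feature vector such a pipeline might
assemble from pose landmarks and a detected object box before regressing force
is shown below; the fusion scheme (concatenating landmarks, the box and
wrist-to-object offsets) and the small regression head are illustrative
assumptions, with only the 33-landmark layout and wrist indices following
MediaPipe's convention.

# Minimal sketch: fuse 33 MediaPipe pose landmarks with one detected object
# box into a feature vector, then regress force with a small MLP head.
import numpy as np
import torch
import torch.nn as nn

def frame_features(landmarks, box):
    # landmarks: (33, 3) normalized pose coordinates; box: (4,) = x1, y1, x2, y2
    centre = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])
    wrists = landmarks[[15, 16], :2]            # MediaPipe left/right wrist indices
    offsets = (wrists - centre).reshape(-1)     # hand-to-object geometry
    return np.concatenate([landmarks.reshape(-1), box, offsets])   # 99 + 4 + 4 = 107

head = nn.Sequential(nn.Linear(107, 128), nn.ReLU(), nn.Linear(128, 3))
# e.g. outputs: force magnitude plus a 2D direction

feat = frame_features(np.random.rand(33, 3), np.array([0.2, 0.3, 0.5, 0.8]))
print(head(torch.tensor(feat, dtype=torch.float32)).shape)         # torch.Size([3])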
|
2503.22374 | Giulio Federico | Giulio Federico, Giuseppe Amato, Fabio Carrara, Claudio Gennaro, Marco
Di Benedetto | ViSketch-GPT: Collaborative Multi-Scale Feature Extraction for Sketch
Recognition and Generation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding the nature of human sketches is challenging because of the wide
variation in how they are created. Recognizing complex structural patterns
improves both the accuracy in recognizing sketches and the fidelity of the
generated sketches. In this work, we introduce ViSketch-GPT, a novel algorithm
designed to address these challenges through a multi-scale context extraction
approach. The model captures intricate details at multiple scales and combines
them using an ensemble-like mechanism, where the extracted features work
collaboratively to enhance the recognition and generation of key details
crucial for classification and generation tasks.
The effectiveness of ViSketch-GPT is validated through extensive experiments
on the QuickDraw dataset. Our model establishes a new benchmark, significantly
outperforming existing methods in both classification and generation tasks,
with substantial improvements in accuracy and the fidelity of generated
sketches.
The proposed algorithm offers a robust framework for understanding complex
structures by extracting features that collaborate to recognize intricate
details, enhancing the understanding of structures like sketches and making it
a versatile tool for various applications in computer vision and machine
learning.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:28:30 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Federico",
"Giulio",
""
],
[
"Amato",
"Giuseppe",
""
],
[
"Carrara",
"Fabio",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Di Benedetto",
"Marco",
""
]
] | TITLE: ViSketch-GPT: Collaborative Multi-Scale Feature Extraction for Sketch
Recognition and Generation
ABSTRACT: Understanding the nature of human sketches is challenging because of the wide
variation in how they are created. Recognizing complex structural patterns
improves both the accuracy in recognizing sketches and the fidelity of the
generated sketches. In this work, we introduce ViSketch-GPT, a novel algorithm
designed to address these challenges through a multi-scale context extraction
approach. The model captures intricate details at multiple scales and combines
them using an ensemble-like mechanism, where the extracted features work
collaboratively to enhance the recognition and generation of key details
crucial for classification and generation tasks.
The effectiveness of ViSketch-GPT is validated through extensive experiments
on the QuickDraw dataset. Our model establishes a new benchmark, significantly
outperforming existing methods in both classification and generation tasks,
with substantial improvements in accuracy and the fidelity of generated
sketches.
The proposed algorithm offers a robust framework for understanding complex
structures by extracting features that collaborate to recognize intricate
details, enhancing the understanding of structures like sketches and making it
a versatile tool for various applications in computer vision and machine
learning.
|
2503.22375 | Christian Steinhauser | Christian Steinhauser, Philipp Reis, Hubert Padusinski, Jacob Langner
and Eric Sax | Data Quality Matters: Quantifying Image Quality Impact on Machine
Learning Performance | Submitted to IEEE IV 2025, Under Review | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precise perception of the environment is essential in highly automated
driving systems, which rely on machine learning tasks such as object detection
and segmentation. Compression of sensor data is commonly used for data
handling, while virtualization is used for hardware-in-the-loop validation.
Both methods can alter sensor data and degrade model performance. This
necessitates a systematic approach to quantifying image validity. This paper
presents a four-step framework to evaluate the impact of image modifications on
machine learning tasks. First, a dataset with modified images is prepared to
ensure one-to-one matching image pairs, enabling measurement of deviations
resulting from compression and virtualization. Second, image deviations are
quantified by comparing the effects of compression and virtualization against
original camera-based sensor data. Third, the performance of state-of-the-art
object detection models is analyzed to determine how altered input data affects
perception tasks, including bounding box accuracy and reliability. Finally, a
correlation analysis is performed to identify relationships between image
quality and model performance. As a result, the LPIPS metric achieves the
highest correlation between image deviation and machine learning performance
across all evaluated machine learning tasks.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:28:44 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Steinhauser",
"Christian",
""
],
[
"Reis",
"Philipp",
""
],
[
"Padusinski",
"Hubert",
""
],
[
"Langner",
"Jacob",
""
],
[
"Sax",
"Eric",
""
]
] | TITLE: Data Quality Matters: Quantifying Image Quality Impact on Machine
Learning Performance
ABSTRACT: Precise perception of the environment is essential in highly automated
driving systems, which rely on machine learning tasks such as object detection
and segmentation. Compression of sensor data is commonly used for data
handling, while virtualization is used for hardware-in-the-loop validation.
Both methods can alter sensor data and degrade model performance. This
necessitates a systematic approach to quantifying image validity. This paper
presents a four-step framework to evaluate the impact of image modifications on
machine learning tasks. First, a dataset with modified images is prepared to
ensure one-to-one matching image pairs, enabling measurement of deviations
resulting from compression and virtualization. Second, image deviations are
quantified by comparing the effects of compression and virtualization against
original camera-based sensor data. Third, the performance of state-of-the-art
object detection models is analyzed to determine how altered input data affects
perception tasks, including bounding box accuracy and reliability. Finally, a
correlation analysis is performed to identify relationships between image
quality and model performance. As a result, the LPIPS metric achieves the
highest correlation between image deviation and machine learning performance
across all evaluated machine learning tasks.
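
A minimal sketch of the final correlation step described above pairs a
perceptual deviation metric (LPIPS) with per-image performance scores; the
random images and the placeholder performance values below are illustrative
stand-ins for real original/modified image pairs and per-image detection
quality.

# Minimal sketch: LPIPS deviation between original and modified images,
# correlated against per-image model performance (placeholder values here).
import numpy as np
import torch
import lpips
from scipy.stats import pearsonr

loss_fn = lpips.LPIPS(net="alex")                  # perceptual distance network

originals = torch.rand(8, 3, 224, 224) * 2 - 1     # images scaled to [-1, 1]
modified = (originals + 0.1 * torch.randn_like(originals)).clamp(-1, 1)

with torch.no_grad():
    deviation = loss_fn(originals, modified).flatten().numpy()

# placeholder per-image performance (in practice: per-image detection quality)
performance = 1.0 - 0.5 * deviation + 0.01 * np.random.randn(len(deviation))
r, p = pearsonr(deviation, performance)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")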
|
2503.22388 | Zhiyu Yang | Zhiyu Yang, Shuo Wang, Yukun Yan and Yang Deng | Why Stop at One Error? Benchmarking LLMs as Data Science Code Debuggers
for Multi-Hop and Multi-Bug Errors | Work in progress | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | LLMs are transforming software development, yet current code generation and
code repair benchmarks mainly assess syntactic and functional correctness in
simple, single-error cases. LLMs' capabilities to autonomously find and fix
runtime logical errors in complex data science code remain largely unexplored.
To address this gap, we introduce DSDBench: the Data Science Debugging
Benchmark, the first benchmark for systematic evaluation of LLMs on multi-hop
error tracing and multi-bug detection in data science code debugging. DSDBench
adapts datasets from existing data science task benchmarks, such as DABench and
MatPlotBench, featuring realistic data science debugging tasks with
automatically synthesized multi-hop, multi-bug code snippets. DSDBench includes
1,117 annotated samples with 741 cause-effect error pairs and runtime error
messages. Evaluations of state-of-the-art LLMs on DSDBench show significant
performance gaps, highlighting challenges in debugging logical runtime errors
in data science code. DSDBench offers a crucial resource to evaluate and
improve LLMs' debugging and reasoning capabilities, enabling more reliable
AI-assisted data science in the future. DSDBench is publicly available at
https://github.com/KevinCL16/DSDBench.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:46:54 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Yang",
"Zhiyu",
""
],
[
"Wang",
"Shuo",
""
],
[
"Yan",
"Yukun",
""
],
[
"Deng",
"Yang",
""
]
] | TITLE: Why Stop at One Error? Benchmarking LLMs as Data Science Code Debuggers
for Multi-Hop and Multi-Bug Errors
ABSTRACT: LLMs are transforming software development, yet current code generation and
code repair benchmarks mainly assess syntactic and functional correctness in
simple, single-error cases. LLMs' capabilities to autonomously find and fix
runtime logical errors in complex data science code remain largely unexplored.
To address this gap, we introduce DSDBench: the Data Science Debugging
Benchmark, the first benchmark for systematic evaluation of LLMs on multi-hop
error tracing and multi-bug detection in data science code debugging. DSDBench
adapts datasets from existing data science task benchmarks, such as DABench and
MatPlotBench, featuring realistic data science debugging tasks with
automatically synthesized multi-hop, multi-bug code snippets. DSDBench includes
1,117 annotated samples with 741 cause-effect error pairs and runtime error
messages. Evaluations of state-of-the-art LLMs on DSDBench show significant
performance gaps, highlighting challenges in debugging logical runtime errors
in data science code. DSDBench offers a crucial resource to evaluate and
improve LLMs' debugging and reasoning capabilities, enabling more reliable
AI-assisted data science in the future. DSDBench is publicly available at
https://github.com/KevinCL16/DSDBench.
|
2503.22389 | Dawid P{\l}udowski | Dawid P{\l}udowski, Francesco Spinnato, Piotr Wilczy\'nski, Krzysztof
Kotowski, Evridiki Vasileia Ntagiou, Riccardo Guidotti, Przemys{\l}aw Biecek | MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time
Series | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Counterfactual explanations provide an intuitive way to understand model
decisions by identifying minimal changes required to alter an outcome. However,
applying counterfactual methods to time series models remains challenging due
to temporal dependencies, high dimensionality, and the lack of an intuitive
human-interpretable representation. We introduce MASCOTS, a method that
leverages the Bag-of-Receptive-Fields representation alongside symbolic
transformations inspired by Symbolic Aggregate Approximation. By operating in a
symbolic feature space, it enhances interpretability while preserving fidelity
to the original data and model. Unlike existing approaches that either depend
on model structure or autoencoder-based sampling, MASCOTS directly generates
meaningful and diverse counterfactual observations in a model-agnostic manner,
operating on both univariate and multivariate data. We evaluate MASCOTS on
univariate and multivariate benchmark datasets, demonstrating comparable
validity, proximity, and plausibility to state-of-the-art methods, while
significantly improving interpretability and sparsity. Its symbolic nature
allows for explanations that can be expressed visually, in natural language, or
through semantic representations, making counterfactual reasoning more
accessible and actionable.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 12:48:12 GMT"
}
] | 2025-03-31T00:00:00 | [
[
"Płudowski",
"Dawid",
""
],
[
"Spinnato",
"Francesco",
""
],
[
"Wilczyński",
"Piotr",
""
],
[
"Kotowski",
"Krzysztof",
""
],
[
"Ntagiou",
"Evridiki Vasileia",
""
],
[
"Guidotti",
"Riccardo",
""
],
[
"Biecek",
"Przemysław",
""
]
] | TITLE: MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time
Series
ABSTRACT: Counterfactual explanations provide an intuitive way to understand model
decisions by identifying minimal changes required to alter an outcome. However,
applying counterfactual methods to time series models remains challenging due
to temporal dependencies, high dimensionality, and the lack of an intuitive
human-interpretable representation. We introduce MASCOTS, a method that
leverages the Bag-of-Receptive-Fields representation alongside symbolic
transformations inspired by Symbolic Aggregate Approximation. By operating in a
symbolic feature space, it enhances interpretability while preserving fidelity
to the original data and model. Unlike existing approaches that either depend
on model structure or autoencoder-based sampling, MASCOTS directly generates
meaningful and diverse counterfactual observations in a model-agnostic manner,
operating on both univariate and multivariate data. We evaluate MASCOTS on
univariate and multivariate benchmark datasets, demonstrating comparable
validity, proximity, and plausibility to state-of-the-art methods, while
significantly improving interpretability and sparsity. Its symbolic nature
allows for explanations that can be expressed visually, in natural language, or
through semantic representations, making counterfactual reasoning more
accessible and actionable.
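
A minimal sketch of the Symbolic Aggregate Approximation (SAX) style transform
that underlies the symbolic feature space described above is given below; the
segment count and alphabet size are illustrative, and the full method builds a
Bag-of-Receptive-Fields representation on top of such symbols before searching
for counterfactuals.

# Minimal sketch: standard SAX -- z-normalise, piecewise-aggregate, then map
# segment means to letters via Gaussian breakpoints.
import numpy as np
from scipy.stats import norm

def sax(series, n_segments=8, alphabet="abcd"):
    # assumes len(series) is divisible by n_segments
    x = (series - series.mean()) / (series.std() + 1e-8)     # z-normalise
    paa = x.reshape(n_segments, -1).mean(axis=1)              # piecewise aggregate
    # breakpoints splitting N(0, 1) into len(alphabet) equiprobable bins
    breakpoints = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))      # 'cddcbaab': rising-then-falling pattern in symbols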
|