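Each row in the preview below is one arXiv paper record with the columns listed in the header: id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, abstract, versions, update_date, authors_parsed, and prompt (the prompt column simply concatenates the title and abstract with TITLE:/ABSTRACT: prefixes). The sketch below shows one way such records could be loaded and inspected; it assumes the dataset has been exported as a JSON Lines file, and the file name arxiv_prompts.jsonl is a placeholder rather than part of this dataset.

```python
# Minimal sketch for loading records that follow the schema shown below.
# Assumption: the dataset is available locally as JSON Lines (one JSON object
# per line); "arxiv_prompts.jsonl" is a placeholder file name.
import json

COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref", "doi",
    "report-no", "categories", "license", "abstract", "versions",
    "update_date", "authors_parsed", "prompt",
]


def load_records(path):
    """Yield one dict per paper, restricted to the documented columns."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if not line.strip():
                continue
            record = json.loads(line)
            yield {key: record.get(key) for key in COLUMNS}


if __name__ == "__main__":
    for record in load_records("arxiv_prompts.jsonl"):  # placeholder path
        # "versions" is a list of {"version", "created"} dicts and
        # "authors_parsed" is a sequence of [last, first, suffix] entries,
        # as shown in the rows below.
        print(record["id"], record["title"].replace("\n", " "))
        print(record["prompt"][:80], "...")
        break
```

The same loop can be reused for simple filtering, for example keeping only records whose categories field contains cs.LG.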
id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.13469 | Ivan Sviridov | Ivan Sviridov, Konstantin Egorov | Conditional Electrocardiogram Generation Using Hierarchical Variational
Autoencoders | 10 pages, 6 figures, 7 tables | null | null | null | eess.SP cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cardiovascular diseases (CVDs) are disorders impacting the heart and
circulatory system. These disorders are the foremost and continuously
escalating cause of mortality worldwide. One of the main tasks when working
with CVDs is analyzing and identifying pathologies on a 12-lead
electrocardiogram (ECG) with a standard 10-second duration. Using machine
learning (ML) in automatic ECG analysis increases CVD diagnostics'
availability, speed, and accuracy. However, the most significant difficulty in
developing ML models is obtaining a sufficient training dataset. Due to the
limitations of medical data usage, such as expensiveness, errors, the ambiguity
of labels, imbalance of classes, and privacy issues, utilizing synthetic
samples depending on specific pathologies bypasses these restrictions and
improves algorithm quality. Existing solutions for the conditional generation
of ECG signals are mainly built on Generative Adversarial Networks (GANs), and
only a few papers consider the architectures based on Variational Autoencoders
(VAEs), showing comparable results in recent works. This paper proposes the
publicly available conditional Nouveau VAE model for ECG signal generation
(cNVAE-ECG), which produces high-resolution ECGs with multiple pathologies. We
provide an extensive comparison of the proposed model on various practical
downstream tasks, including transfer learning scenarios, showing an area under
the receiver operating characteristic (AUROC) increase of up to 2%, surpassing
GAN-like competitors.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 13:30:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Sviridov",
"Ivan",
""
],
[
"Egorov",
"Konstantin",
""
]
] | TITLE: Conditional Electrocardiogram Generation Using Hierarchical Variational
Autoencoders
ABSTRACT: Cardiovascular diseases (CVDs) are disorders impacting the heart and
circulatory system. These disorders are the foremost and continuously
escalating cause of mortality worldwide. One of the main tasks when working
with CVDs is analyzing and identifying pathologies on a 12-lead
electrocardiogram (ECG) with a standard 10-second duration. Using machine
learning (ML) in automatic ECG analysis increases CVD diagnostics'
availability, speed, and accuracy. However, the most significant difficulty in
developing ML models is obtaining a sufficient training dataset. Due to the
limitations of medical data usage, such as expensiveness, errors, the ambiguity
of labels, imbalance of classes, and privacy issues, utilizing synthetic
samples depending on specific pathologies bypasses these restrictions and
improves algorithm quality. Existing solutions for the conditional generation
of ECG signals are mainly built on Generative Adversarial Networks (GANs), and
only a few papers consider the architectures based on Variational Autoencoders
(VAEs), showing comparable results in recent works. This paper proposes the
publicly available conditional Nouveau VAE model for ECG signal generation
(cNVAE-ECG), which produces high-resolution ECGs with multiple pathologies. We
provide an extensive comparison of the proposed model on various practical
downstream tasks, including transfer learning scenarios, showing an area under
the receiver operating characteristic (AUROC) increase of up to 2%, surpassing
GAN-like competitors.
|
2503.13470 | Moahmmod Suvon | Mohammod N. I. Suvon, Shuo Zhou, Prasun C. Tripathi, Wenrui Fan, Samer
Alabed, Bishesh Khanal, Venet Osmani, Andrew J. Swift, Chen (Cherise) Chen
and Haiping Lu | Multimodal Lead-Specific Modeling of ECG for Low-Cost Pulmonary
Hypertension Assessment | null | null | null | null | eess.SP cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pulmonary hypertension (PH) is frequently underdiagnosed in low- and
middle-income countries (LMICs) primarily due to the scarcity of advanced
diagnostic tools. Several studies in PH have applied machine learning to
low-cost diagnostic tools like 12-lead ECG (12L-ECG), but they mainly focus on
areas with limited resources, overlooking areas with no diagnostic tools, such
as rural primary healthcare in LMICs. Recent studies have shown the
effectiveness of 6-lead ECG (6L-ECG), as a cheaper and portable alternative in
detecting various cardiac conditions, but its clinical value for PH detection
is not well proved. Furthermore, existing methods treat 12L-/6L-ECG as a single
modality, capturing only shared features while overlooking lead-specific
features essential for identifying complex cardiac hemodynamic changes. In this
paper, we propose Lead-Specific Electrocardiogram Multimodal Variational
Autoencoder (LS-EMVAE), a model pre-trained on large-population 12L-ECG data
and fine-tuned on task-specific data (12L-ECG or 6L-ECG). LS-EMVAE models each
12L-ECG lead as a separate modality and introduces a hierarchical expert
composition using Mixture and Product of Experts for adaptive latent feature
fusion between lead-specific and shared features. Unlike existing approaches,
LS-EMVAE makes better predictions on both 12L-ECG and 6L-ECG at inference,
making it an equitable solution for areas with limited or no diagnostic tools.
We pre-trained LS-EMVAE on 800,000 publicly available 12L-ECG samples and
fine-tuned it for two tasks: 1) PH detection and 2) phenotyping
pre-/post-capillary PH, on in-house datasets of 892 and 691 subjects across
12L-ECG and 6L-ECG settings. Extensive experiments show that LS-EMVAE
outperforms existing baselines in both ECG settings, while 6L-ECG achieves
performance comparable to 12L-ECG, unlocking its potential for global PH
screening in areas without diagnostic tools.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:16:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Suvon",
"Mohammod N. I.",
"",
"Cherise"
],
[
"Zhou",
"Shuo",
"",
"Cherise"
],
[
"Tripathi",
"Prasun C.",
"",
"Cherise"
],
[
"Fan",
"Wenrui",
"",
"Cherise"
],
[
"Alabed",
"Samer",
"",
"Cherise"
],
[
"Khanal",
"Bishesh",
"",
"Cherise"
],
[
"Osmani",
"Venet",
"",
"Cherise"
],
[
"Swift",
"Andrew J.",
"",
"Cherise"
],
[
"Chen",
"",
"",
"Cherise"
],
[
"Chen",
"",
"",
"Cherise"
],
[
"Lu",
"Haiping",
""
]
] | TITLE: Multimodal Lead-Specific Modeling of ECG for Low-Cost Pulmonary
Hypertension Assessment
ABSTRACT: Pulmonary hypertension (PH) is frequently underdiagnosed in low- and
middle-income countries (LMICs) primarily due to the scarcity of advanced
diagnostic tools. Several studies in PH have applied machine learning to
low-cost diagnostic tools like 12-lead ECG (12L-ECG), but they mainly focus on
areas with limited resources, overlooking areas with no diagnostic tools, such
as rural primary healthcare in LMICs. Recent studies have shown the
effectiveness of 6-lead ECG (6L-ECG), as a cheaper and portable alternative in
detecting various cardiac conditions, but its clinical value for PH detection
is not well proved. Furthermore, existing methods treat 12L-/6L-ECG as a single
modality, capturing only shared features while overlooking lead-specific
features essential for identifying complex cardiac hemodynamic changes. In this
paper, we propose Lead-Specific Electrocardiogram Multimodal Variational
Autoencoder (LS-EMVAE), a model pre-trained on large-population 12L-ECG data
and fine-tuned on task-specific data (12L-ECG or 6L-ECG). LS-EMVAE models each
12L-ECG lead as a separate modality and introduces a hierarchical expert
composition using Mixture and Product of Experts for adaptive latent feature
fusion between lead-specific and shared features. Unlike existing approaches,
LS-EMVAE makes better predictions on both 12L-ECG and 6L-ECG at inference,
making it an equitable solution for areas with limited or no diagnostic tools.
We pre-trained LS-EMVAE on 800,000 publicly available 12L-ECG samples and
fine-tuned it for two tasks: 1) PH detection and 2) phenotyping
pre-/post-capillary PH, on in-house datasets of 892 and 691 subjects across
12L-ECG and 6L-ECG settings. Extensive experiments show that LS-EMVAE
outperforms existing baselines in both ECG settings, while 6L-ECG achieves
performance comparable to 12L-ECG, unlocking its potential for global PH
screening in areas without diagnostic tools.
|
2503.13473 | Jisoo Hong | Jisoo Hong, Youngjin Jung, Jihwan Bae, Seungho Song, Sung-Woo Kang | Robust Detection of Extremely Thin Lines Using 0.2mm Piano Wire | null | null | null | null | eess.SP cs.AI cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | This study developed an algorithm capable of detecting a reference line (a
0.2 mm thick piano wire) to accurately determine the position of an automated
installation robot within an elevator shaft. A total of 3,245 images were
collected from the experimental tower of H Company, the leading elevator
manufacturer in South Korea, and the detection performance was evaluated using
four experimental approaches (GCH, GSCH, GECH, FCH). During the initial image
processing stage, Gaussian blurring, sharpening filter, embossing filter, and
Fourier Transform were applied, followed by Canny Edge Detection and Hough
Transform. Notably, the method was developed to accurately extract the
reference line by averaging the x-coordinates of the lines detected through the
Hough Transform. This approach enabled the detection of the 0.2 mm thick piano
wire with high accuracy, even in the presence of noise and other interfering
factors (e.g., concrete cracks inside the elevator shaft or safety bars for
filming equipment). The experimental results showed that Experiment 4 (FCH),
which utilized Fourier Transform in the preprocessing stage, achieved the
highest detection rate for the LtoL, LtoR, and RtoL datasets. Experiment
2 (GSCH), which applied Gaussian blurring and a sharpening filter, demonstrated
superior detection performance on the RtoR dataset. This study proposes a
reference line detection algorithm that enables precise position calculation
and control of automated robots in elevator shaft installation. Moreover, the
developed method shows potential for applicability even in confined working
spaces. Future work aims to develop a line detection algorithm equipped with
machine learning-based hyperparameter tuning capabilities.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:05:33 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hong",
"Jisoo",
""
],
[
"Jung",
"Youngjin",
""
],
[
"Bae",
"Jihwan",
""
],
[
"Song",
"Seungho",
""
],
[
"Kang",
"Sung-Woo",
""
]
] | TITLE: Robust Detection of Extremely Thin Lines Using 0.2mm Piano Wire
ABSTRACT: This study developed an algorithm capable of detecting a reference line (a
0.2 mm thick piano wire) to accurately determine the position of an automated
installation robot within an elevator shaft. A total of 3,245 images were
collected from the experimental tower of H Company, the leading elevator
manufacturer in South Korea, and the detection performance was evaluated using
four experimental approaches (GCH, GSCH, GECH, FCH). During the initial image
processing stage, Gaussian blurring, sharpening filter, embossing filter, and
Fourier Transform were applied, followed by Canny Edge Detection and Hough
Transform. Notably, the method was developed to accurately extract the
reference line by averaging the x-coordinates of the lines detected through the
Hough Transform. This approach enabled the detection of the 0.2 mm thick piano
wire with high accuracy, even in the presence of noise and other interfering
factors (e.g., concrete cracks inside the elevator shaft or safety bars for
filming equipment). The experimental results showed that Experiment 4 (FCH),
which utilized Fourier Transform in the preprocessing stage, achieved the
highest detection rate for the LtoL, LtoR, and RtoL datasets. Experiment
2 (GSCH), which applied Gaussian blurring and a sharpening filter, demonstrated
superior detection performance on the RtoR dataset. This study proposes a
reference line detection algorithm that enables precise position calculation
and control of automated robots in elevator shaft installation. Moreover, the
developed method shows potential for applicability even in confined working
spaces. Future work aims to develop a line detection algorithm equipped with
machine learning-based hyperparameter tuning capabilities.
|
2503.13475 | Zhongyi Zhang | ZhongYi Zhang and ChenYang Xu and LiXuan Zhao and HuiRang Hou and
QingHao Meng | Cross-Subject Depression Level Classification Using EEG Signals with a
Sample Confidence Method | null | null | null | null | eess.SP cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electroencephalogram (EEG) is a non-invasive tool for real-time neural
monitoring, widely used in depression detection via deep learning. However,
existing models primarily focus on binary classification (depression/normal),
lacking granularity for severity assessment. To address this, we proposed the
DepL-GCN, i.e., Depression Level classification based on a GCN model. This model
tackles two key challenges: (1) subjectivity in depression-level labeling due
to patient self-report biases, and (2) class imbalance across severity
categories. Inspired by the model learning patterns, we introduced two novel
modules: the sample confidence module and the minority sample penalty module.
The former leverages the L2-norm of prediction errors to progressively filter
EEG samples with weak label alignment during training, thereby reducing the
impact of subjectivity; the latter automatically upweights misclassified
minority-class samples to address imbalance issues. After testing on two public
EEG datasets, DepL-GCN achieved accuracies of 81.13% and 81.36% for multi-class
severity recognition, outperforming baseline models. Ablation studies confirmed
both modules' contributions. We further discussed the strengths and limitations
of regression-based models for depression-level recognition.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:16:11 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"ZhongYi",
""
],
[
"Xu",
"ChenYang",
""
],
[
"Zhao",
"LiXuan",
""
],
[
"Hou",
"HuiRang",
""
],
[
"Meng",
"QingHao",
""
]
] | TITLE: Cross-Subject Depression Level Classification Using EEG Signals with a
Sample Confidence Method
ABSTRACT: Electroencephalogram (EEG) is a non-invasive tool for real-time neural
monitoring, widely used in depression detection via deep learning. However,
existing models primarily focus on binary classification (depression/normal),
lacking granularity for severity assessment. To address this, we proposed the
DepL-GCN, i.e., Depression Level classification based on a GCN model. This model
tackles two key challenges: (1) subjectivity in depression-level labeling due
to patient self-report biases, and (2) class imbalance across severity
categories. Inspired by the model learning patterns, we introduced two novel
modules: the sample confidence module and the minority sample penalty module.
The former leverages the L2-norm of prediction errors to progressively filter
EEG samples with weak label alignment during training, thereby reducing the
impact of subjectivity; the latter automatically upweights misclassified
minority-class samples to address imbalance issues. After testing on two public
EEG datasets, DepL-GCN achieved accuracies of 81.13% and 81.36% for multi-class
severity recognition, outperforming baseline models. Ablation studies confirmed
both modules' contributions. We further discussed the strengths and limitations
of regression-based models for depression-level recognition.
|
2503.13477 | Ryan Banks | Ryan Banks, Vishal Thengane, Mar\'ia Eugenia Guerrero, Nelly Maria
Garc\'ia-Madue\~no, Yunpeng Li, Hongying Tang, Akhilanand Chaurasia | Periodontal Bone Loss Analysis via Keypoint Detection With Heuristic
Post-Processing | 31 pages, 7 tables, 5 figures, 3 equations, journal paper submitted
to Computers in Biology and Medicine | null | null | null | q-bio.TO cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Calculating percentage bone loss is a critical test for periodontal disease
staging but is sometimes imprecise and time consuming when manually calculated.
This study evaluates the application of a deep learning keypoint and object
detection model, YOLOv8-pose, for the automatic identification of localised
periodontal bone loss landmarks, conditions and staging. YOLOv8-pose was
fine-tuned on 193 annotated periapical radiographs. We propose a keypoint
detection metric, Percentage of Relative Correct Keypoints (PRCK), which
normalises the metric to the average tooth size of teeth in the image. We
propose a heuristic post-processing module that adjusts certain keypoint
predictions to align with the edge of the related tooth, using a supporting
instance segmentation model trained on an open source auxiliary dataset. The
model can sufficiently detect bone loss keypoints, tooth boxes, and alveolar
ridge resorption, but has insufficient performance at detecting detached
periodontal ligament and furcation involvement. The model with post-processing
demonstrated a PRCK 0.25 of 0.726 and PRCK 0.05 of 0.401 for keypoint
detection, mAP 0.5 of 0.715 for tooth object detection, mesial dice score of
0.593 for periodontal staging, and dice score of 0.280 for furcation
involvement. Our annotation methodology provides a stage agnostic approach to
periodontal disease detection, by ensuring most keypoints are present for each
tooth in the image, allowing small imbalanced datasets. Our PRCK metric allows
accurate evaluation of keypoints in dental domains. Our post-processing module
adjusts predicted keypoints correctly but is dependent on a minimum quality of
prediction by the pose detection and segmentation models. Code:
https://anonymous.4open.science/r/Bone-Loss-Keypoint-Detection-Code. Dataset:
https://bit.ly/4hJ3aE7.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 00:34:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Banks",
"Ryan",
""
],
[
"Thengane",
"Vishal",
""
],
[
"Guerrero",
"María Eugenia",
""
],
[
"García-Madueño",
"Nelly Maria",
""
],
[
"Li",
"Yunpeng",
""
],
[
"Tang",
"Hongying",
""
],
[
"Chaurasia",
"Akhilanand",
""
]
] | TITLE: Periodontal Bone Loss Analysis via Keypoint Detection With Heuristic
Post-Processing
ABSTRACT: Calculating percentage bone loss is a critical test for periodontal disease
staging but is sometimes imprecise and time consuming when manually calculated.
This study evaluates the application of a deep learning keypoint and object
detection model, YOLOv8-pose, for the automatic identification of localised
periodontal bone loss landmarks, conditions and staging. YOLOv8-pose was
fine-tuned on 193 annotated periapical radiographs. We propose a keypoint
detection metric, Percentage of Relative Correct Keypoints (PRCK), which
normalises the metric to the average tooth size of teeth in the image. We
propose a heuristic post-processing module that adjusts certain keypoint
predictions to align with the edge of the related tooth, using a supporting
instance segmentation model trained on an open source auxiliary dataset. The
model can sufficiently detect bone loss keypoints, tooth boxes, and alveolar
ridge resorption, but has insufficient performance at detecting detached
periodontal ligament and furcation involvement. The model with post-processing
demonstrated a PRCK 0.25 of 0.726 and PRCK 0.05 of 0.401 for keypoint
detection, mAP 0.5 of 0.715 for tooth object detection, mesial dice score of
0.593 for periodontal staging, and dice score of 0.280 for furcation
involvement. Our annotation methodology provides a stage agnostic approach to
periodontal disease detection, by ensuring most keypoints are present for each
tooth in the image, allowing small imbalanced datasets. Our PRCK metric allows
accurate evaluation of keypoints in dental domains. Our post-processing module
adjusts predicted keypoints correctly but is dependent on a minimum quality of
prediction by the pose detection and segmentation models. Code:
https://anonymous.4open.science/r/Bone-Loss-Keypoint-Detection-Code. Dataset:
https://bit.ly/4hJ3aE7.
|
2503.13485 | Keiichi Ochiai | Keiichi Ochiai, Yutaka Matsuo | A Causal Inference Approach for Quantifying Research Impact | null | null | null | null | cs.DL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has had a great impact on various fields of computer science by
enabling data-driven representation learning over the past decade. Because science and
technology policy decisions for a nation can be made on the impact of each
technology, quantifying research impact is an important task. The number of
citations and impact factor can be used to measure the impact for individual
research. What would have happened without the research, however, is
fundamentally a counterfactual phenomenon. Thus, we propose an approach based
on causal inference to quantify the research impact of a specific technical
topic. We leverage difference-in-difference to quantify the research impact by
applying to bibliometric data. First, we identify papers of a specific
technical topic using keywords or category tags from Microsoft Academic Graph,
which is one of the largest academic publication dataset. Next, we build a
paper citation network between each technical field. Then, we aggregate the
cross-field citation count for each research field. Finally, the impact of a
specific technical topic for each research field is estimated by applying
difference-in-difference. Evaluation results show that deep learning
significantly affects computer vision and natural language processing. Besides,
deep learning significantly affects cross-field citation especially for speech
recognition to computer vision and natural language processing to computer
vision. Moreover, our method revealed that the impact of deep learning was 3.1
times of the impact of interpretability for ML models.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 10:06:42 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ochiai",
"Keiichi",
""
],
[
"Matsuo",
"Yutaka",
""
]
] | TITLE: A Causal Inference Approach for Quantifying Research Impact
ABSTRACT: Deep learning has had a great impact on various fields of computer science by
enabling data-driven representation learning over the past decade. Because science and
technology policy decisions for a nation can be made on the impact of each
technology, quantifying research impact is an important task. The number of
citations and impact factor can be used to measure the impact for individual
research. What would have happened without the research, however, is
fundamentally a counterfactual phenomenon. Thus, we propose an approach based
on causal inference to quantify the research impact of a specific technical
topic. We leverage difference-in-difference to quantify the research impact by
applying it to bibliometric data. First, we identify papers of a specific
technical topic using keywords or category tags from Microsoft Academic Graph,
which is one of the largest academic publication datasets. Next, we build a
paper citation network between each technical field. Then, we aggregate the
cross-field citation count for each research field. Finally, the impact of a
specific technical topic for each research field is estimated by applying
difference-in-difference. Evaluation results show that deep learning
significantly affects computer vision and natural language processing. Besides,
deep learning significantly affects cross-field citation especially for speech
recognition to computer vision and natural language processing to computer
vision. Moreover, our method revealed that the impact of deep learning was 3.1
times of the impact of interpretability for ML models.
|
2503.13494 | Sijin Huang | Zheyi Chen, Sijin Huang, Geyong Min, Zhaolong Ning, Jie Li and Yan
Zhang | Mobility-aware Seamless Service Migration and Resource Allocation in
Multi-edge IoV Systems | null | null | null | null | cs.NI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile Edge Computing (MEC) offers low-latency and high-bandwidth support for
Internet-of-Vehicles (IoV) applications. However, due to high vehicle mobility
and finite communication coverage of base stations, it is hard to maintain
uninterrupted and high-quality services without proper service migration among
MEC servers. Existing solutions commonly rely on prior knowledge and rarely
consider efficient resource allocation during the service migration process,
making it hard to reach optimal performance in dynamic IoV environments. To
address these important challenges, we propose SR-CL, a novel mobility-aware
seamless Service migration and Resource allocation framework via
Convex-optimization-enabled deep reinforcement Learning in multi-edge IoV
systems. First, we decouple the Mixed Integer Nonlinear Programming (MINLP)
problem of service migration and resource allocation into two sub-problems.
Next, we design a new actor-critic-based asynchronous-update deep reinforcement
learning method to handle service migration, where the delayed-update actor
makes migration decisions and the one-step-update critic evaluates the
decisions to guide the policy update. Notably, we theoretically derive the
optimal resource allocation with convex optimization for each MEC server,
thereby further improving system performance. Using the real-world datasets of
vehicle trajectories and testbed, extensive experiments are conducted to verify
the effectiveness of the proposed SR-CL. Compared to benchmark methods, the
SR-CL achieves superior convergence and delay performance under various
scenarios.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:03:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Zheyi",
""
],
[
"Huang",
"Sijin",
""
],
[
"Min",
"Geyong",
""
],
[
"Ning",
"Zhaolong",
""
],
[
"Li",
"Jie",
""
],
[
"Zhang",
"Yan",
""
]
] | TITLE: Mobility-aware Seamless Service Migration and Resource Allocation in
Multi-edge IoV Systems
ABSTRACT: Mobile Edge Computing (MEC) offers low-latency and high-bandwidth support for
Internet-of-Vehicles (IoV) applications. However, due to high vehicle mobility
and finite communication coverage of base stations, it is hard to maintain
uninterrupted and high-quality services without proper service migration among
MEC servers. Existing solutions commonly rely on prior knowledge and rarely
consider efficient resource allocation during the service migration process,
making it hard to reach optimal performance in dynamic IoV environments. To
address these important challenges, we propose SR-CL, a novel mobility-aware
seamless Service migration and Resource allocation framework via
Convex-optimization-enabled deep reinforcement Learning in multi-edge IoV
systems. First, we decouple the Mixed Integer Nonlinear Programming (MINLP)
problem of service migration and resource allocation into two sub-problems.
Next, we design a new actor-critic-based asynchronous-update deep reinforcement
learning method to handle service migration, where the delayed-update actor
makes migration decisions and the one-step-update critic evaluates the
decisions to guide the policy update. Notably, we theoretically derive the
optimal resource allocation with convex optimization for each MEC server,
thereby further improving system performance. Using the real-world datasets of
vehicle trajectories and testbed, extensive experiments are conducted to verify
the effectiveness of the proposed SR-CL. Compared to benchmark methods, the
SR-CL achieves superior convergence and delay performance under various
scenarios.
|
2503.13495 | Ziyu Wang | Ziyu Wang, Elahe Khatibi, Kianoosh Kazemi, Iman Azimi, Sanaz Mousavi,
Shaista Malik, Amir M. Rahmani | TransECG: Leveraging Transformers for Explainable ECG Re-identification
Risk Analysis | null | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | Electrocardiogram (ECG) signals are widely shared across multiple clinical
applications for diagnosis, health monitoring, and biometric authentication.
While valuable for healthcare, they also carry unique biometric identifiers
that pose privacy risks, especially when ECG data is shared across multiple
entities. These risks are amplified in shared environments, where
re-identification threats can compromise patient privacy. Existing deep
learning re-identification models prioritize accuracy but lack explainability,
making it challenging to understand how the unique biometric characteristics
encoded within ECG signals are recognized and utilized for identification.
Without these insights, despite high accuracy, developing secure and trustable
ECG data-sharing frameworks remains difficult, especially in diverse,
multi-source environments. In this work, we introduce TransECG, a Vision
Transformer (ViT)-based method that uses attention mechanisms to pinpoint
critical ECG segments associated with re-identification tasks like gender, age,
and participant ID. Our approach demonstrates high accuracy (89.9% for gender,
89.9% for age, and 88.6% for ID re-identification) across four real-world
datasets with 87 participants. Importantly, we provide key insights into ECG
components such as the R-wave, QRS complex, and P-Q interval in
re-identification. For example, in the gender classification, the R wave
contributed 58.29% to the model's attention, while in the age classification,
the P-R interval contributed 46.29%. By combining high predictive performance
with enhanced explainability, TransECG provides a robust solution for
privacy-conscious ECG data sharing, supporting the development of secure and
trusted healthcare data environments.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 07:37:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Ziyu",
""
],
[
"Khatibi",
"Elahe",
""
],
[
"Kazemi",
"Kianoosh",
""
],
[
"Azimi",
"Iman",
""
],
[
"Mousavi",
"Sanaz",
""
],
[
"Malik",
"Shaista",
""
],
[
"Rahmani",
"Amir M.",
""
]
] | TITLE: TransECG: Leveraging Transformers for Explainable ECG Re-identification
Risk Analysis
ABSTRACT: Electrocardiogram (ECG) signals are widely shared across multiple clinical
applications for diagnosis, health monitoring, and biometric authentication.
While valuable for healthcare, they also carry unique biometric identifiers
that pose privacy risks, especially when ECG data is shared across multiple
entities. These risks are amplified in shared environments, where
re-identification threats can compromise patient privacy. Existing deep
learning re-identification models prioritize accuracy but lack explainability,
making it challenging to understand how the unique biometric characteristics
encoded within ECG signals are recognized and utilized for identification.
Without these insights, despite high accuracy, developing secure and trustable
ECG data-sharing frameworks remains difficult, especially in diverse,
multi-source environments. In this work, we introduce TransECG, a Vision
Transformer (ViT)-based method that uses attention mechanisms to pinpoint
critical ECG segments associated with re-identification tasks like gender, age,
and participant ID. Our approach demonstrates high accuracy (89.9% for gender,
89.9% for age, and 88.6% for ID re-identification) across four real-world
datasets with 87 participants. Importantly, we provide key insights into ECG
components such as the R-wave, QRS complex, and P-Q interval in
re-identification. For example, in the gender classification, the R wave
contributed 58.29% to the model's attention, while in the age classification,
the P-R interval contributed 46.29%. By combining high predictive performance
with enhanced explainability, TransECG provides a robust solution for
privacy-conscious ECG data sharing, supporting the development of secure and
trusted healthcare data environments.
|
2503.13496 | Pietro Cerveri | Sara Maria Pagotto, Federico Tognoni, Matteo Rossi, Dario Bovio,
Caterina Salito, Luca Mainardi, Pietro Cerveri | Finger-to-Chest Style Transfer-assisted Deep Learning Method For
Photoplethysmogram Waveform Restoration with Timing Preservation | null | null | null | null | eess.SP cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Wearable measurements, such as those obtained by photoplethysmogram (PPG)
sensors, are highly susceptible to motion artifacts and noise, affecting
cardiovascular measures. Chest-acquired PPG signals are especially vulnerable,
with signal degradation primarily resulting from lower perfusion,
breathing-induced motion, and mechanical interference from chest movements.
Traditional restoration methods often degrade the signal, and supervised deep
learning (DL) struggles with random and systematic distortions, requiring very
large datasets for successful training. To efficiently restore chest PPG
waveform, we propose a style transfer-assisted cycle-consistent generative
adversarial network, called starGAN, whose performance is evaluated on a
three-channel PPG signal (red, green, and infrared) acquired by a chest-worn
multi-modal sensor, called Soundi. Two identical devices are adopted, one
sensor to collect the PPG signal on the chest, considered to feature low
quality and undergoing restoration, and another sensor to obtain a high-quality
PPG signal measured on the finger, considered the reference signal. Extensive
validation over some 8,000 5-second chunks collected from 40 subjects showed
about 90% correlation of the restored chest PPG with the reference finger PPG,
with a 30% improvement over raw chest PPG. Likewise, the signal-to-noise ratio
improved on average by about 125% over the three channels. The agreement with
heart rate computed from concurrent ECG was extremely high, exceeding 84% on
average. These results demonstrate effective signal restoration, comparable
with findings in recent literature papers. Significance: PPG signals collected
from wearable devices are highly susceptible to artifacts, making innovative
AI-based techniques fundamental towards holistic health assessments in a single
device.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 09:38:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pagotto",
"Sara Maria",
""
],
[
"Tognoni",
"Federico",
""
],
[
"Rossi",
"Matteo",
""
],
[
"Bovio",
"Dario",
""
],
[
"Salito",
"Caterina",
""
],
[
"Mainardi",
"Luca",
""
],
[
"Cerveri",
"Pietro",
""
]
] | TITLE: Finger-to-Chest Style Transfer-assisted Deep Learning Method For
Photoplethysmogram Waveform Restoration with Timing Preservation
ABSTRACT: Wearable measurements, such as those obtained by photoplethysmogram (PPG)
sensors, are highly susceptible to motion artifacts and noise, affecting
cardiovascular measures. Chest-acquired PPG signals are especially vulnerable,
with signal degradation primarily resulting from lower perfusion,
breathing-induced motion, and mechanical interference from chest movements.
Traditional restoration methods often degrade the signal, and supervised deep
learning (DL) struggles with random and systematic distortions, requiring very
large datasets for successful training. To efficiently restore chest PPG
waveform, we propose a style transfer-assisted cycle-consistent generative
adversarial network, called starGAN, whose performance is evaluated on a
three-channel PPG signal (red, green, and infrared) acquired by a chest-worn
multi-modal sensor, called Soundi. Two identical devices are adopted, one
sensor to collect the PPG signal on the chest, considered to feature low
quality and undergoing restoration, and another sensor to obtain a high-quality
PPG signal measured on the finger, considered the reference signal. Extensive
validation over some 8,000 5-second chunks collected from 40 subjects showed
about 90% correlation of the restored chest PPG with the reference finger PPG,
with a 30% improvement over raw chest PPG. Likewise, the signal-to-noise ratio
improved on average by about 125% over the three channels. The agreement with
heart rate computed from concurrent ECG was extremely high, exceeding 84% on
average. These results demonstrate effective signal restoration, comparable
with findings in recent literature papers. Significance: PPG signals collected
from wearable devices are highly susceptible to artifacts, making innovative
AI-based techniques fundamental towards holistic health assessments in a single
device.
|
2503.13503 | Xi Chen | Chuan Qin, Xin Chen, Chengrui Wang, Pengmin Wu, Xi Chen, Yihang Cheng,
Jingyi Zhao, Meng Xiao, Xiangchao Dong, Qingqing Long, Boya Pan, Han Wu,
Chengzan Li, Yuanchun Zhou, Hui Xiong, Hengshu Zhu | SciHorizon: Benchmarking AI-for-Science Readiness from Scientific Data
to Large Language Models | null | null | null | null | cs.LG cs.CL cs.DL cs.IR | http://creativecommons.org/licenses/by/4.0/ | In recent years, the rapid advancement of Artificial Intelligence (AI)
technologies, particularly Large Language Models (LLMs), has revolutionized the
paradigm of scientific discovery, establishing AI-for-Science (AI4Science) as a
dynamic and evolving field. However, there is still a lack of an effective
framework for the overall assessment of AI4Science, particularly from a
holistic perspective on data quality and model capability. Therefore, in this
study, we propose SciHorizon, a comprehensive assessment framework designed to
benchmark the readiness of AI4Science from both scientific data and LLM
perspectives. First, we introduce a generalizable framework for assessing
AI-ready scientific data, encompassing four key dimensions: Quality, FAIRness,
Explainability, and Compliance, which are subdivided into 15 sub-dimensions.
Drawing on data resource papers published between 2018 and 2023 in
peer-reviewed journals, we present recommendation lists of AI-ready datasets
for both Earth and Life Sciences, making a novel and original contribution to
the field. Concurrently, to assess the capabilities of LLMs across multiple
scientific disciplines, we establish 16 assessment dimensions based on five
core indicators (Knowledge, Understanding, Reasoning, Multimodality, and Values)
spanning Mathematics, Physics, Chemistry, Life Sciences, and Earth and Space
Sciences. Using the developed benchmark datasets, we have conducted a
comprehensive evaluation of over 20 representative open-source and
closed-source LLMs. All the results are publicly available and can be accessed online
at www.scihorizon.cn/en.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 11:34:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Qin",
"Chuan",
""
],
[
"Chen",
"Xin",
""
],
[
"Wang",
"Chengrui",
""
],
[
"Wu",
"Pengmin",
""
],
[
"Chen",
"Xi",
""
],
[
"Cheng",
"Yihang",
""
],
[
"Zhao",
"Jingyi",
""
],
[
"Xiao",
"Meng",
""
],
[
"Dong",
"Xiangchao",
""
],
[
"Long",
"Qingqing",
""
],
[
"Pan",
"Boya",
""
],
[
"Wu",
"Han",
""
],
[
"Li",
"Chengzan",
""
],
[
"Zhou",
"Yuanchun",
""
],
[
"Xiong",
"Hui",
""
],
[
"Zhu",
"Hengshu",
""
]
] | TITLE: SciHorizon: Benchmarking AI-for-Science Readiness from Scientific Data
to Large Language Models
ABSTRACT: In recent years, the rapid advancement of Artificial Intelligence (AI)
technologies, particularly Large Language Models (LLMs), has revolutionized the
paradigm of scientific discovery, establishing AI-for-Science (AI4Science) as a
dynamic and evolving field. However, there is still a lack of an effective
framework for the overall assessment of AI4Science, particularly from a
holistic perspective on data quality and model capability. Therefore, in this
study, we propose SciHorizon, a comprehensive assessment framework designed to
benchmark the readiness of AI4Science from both scientific data and LLM
perspectives. First, we introduce a generalizable framework for assessing
AI-ready scientific data, encompassing four key dimensions: Quality, FAIRness,
Explainability, and Compliance, which are subdivided into 15 sub-dimensions.
Drawing on data resource papers published between 2018 and 2023 in
peer-reviewed journals, we present recommendation lists of AI-ready datasets
for both Earth and Life Sciences, making a novel and original contribution to
the field. Concurrently, to assess the capabilities of LLMs across multiple
scientific disciplines, we establish 16 assessment dimensions based on five
core indicators (Knowledge, Understanding, Reasoning, Multimodality, and Values)
spanning Mathematics, Physics, Chemistry, Life Sciences, and Earth and Space
Sciences. Using the developed benchmark datasets, we have conducted a
comprehensive evaluation of over 20 representative open-source and
closed-source LLMs. All the results are publicly available and can be accessed online
at www.scihorizon.cn/en.
|
2503.13504 | Xiangbo Gao | Rujia Wang, Xiangbo Gao, Hao Xiang, Runsheng Xu, Zhengzhong Tu | CoCMT: Communication-Efficient Cross-Modal Transformer for Collaborative
Perception | null | null | null | null | cs.LG cs.AI cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent collaborative perception enhances each agent's perceptual
capabilities by sharing sensing information to cooperatively perform robot
perception tasks. This approach has proven effective in addressing challenges
such as sensor deficiencies, occlusions, and long-range perception. However,
existing representative collaborative perception systems transmit intermediate
feature maps, such as bird's-eye view (BEV) representations, which contain a
significant amount of non-critical information, leading to high communication
bandwidth requirements. To enhance communication efficiency while preserving
perception capability, we introduce CoCMT, an object-query-based collaboration
framework that optimizes communication bandwidth by selectively extracting and
transmitting essential features. Within CoCMT, we introduce the Efficient Query
Transformer (EQFormer) to effectively fuse multi-agent object queries and
implement a synergistic deep supervision to enhance the positive reinforcement
between stages, leading to improved overall performance. Experiments on OPV2V
and V2V4Real datasets show CoCMT outperforms state-of-the-art methods while
drastically reducing communication needs. On V2V4Real, our model (Top-50 object
queries) requires only 0.416 Mb bandwidth, 83 times less than SOTA methods,
while improving AP70 by 1.1 percent. This efficiency breakthrough enables
practical collaborative perception deployment in bandwidth-constrained
environments without sacrificing detection accuracy.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 06:41:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Rujia",
""
],
[
"Gao",
"Xiangbo",
""
],
[
"Xiang",
"Hao",
""
],
[
"Xu",
"Runsheng",
""
],
[
"Tu",
"Zhengzhong",
""
]
] | TITLE: CoCMT: Communication-Efficient Cross-Modal Transformer for Collaborative
Perception
ABSTRACT: Multi-agent collaborative perception enhances each agent's perceptual
capabilities by sharing sensing information to cooperatively perform robot
perception tasks. This approach has proven effective in addressing challenges
such as sensor deficiencies, occlusions, and long-range perception. However,
existing representative collaborative perception systems transmit intermediate
feature maps, such as bird's-eye view (BEV) representations, which contain a
significant amount of non-critical information, leading to high communication
bandwidth requirements. To enhance communication efficiency while preserving
perception capability, we introduce CoCMT, an object-query-based collaboration
framework that optimizes communication bandwidth by selectively extracting and
transmitting essential features. Within CoCMT, we introduce the Efficient Query
Transformer (EQFormer) to effectively fuse multi-agent object queries and
implement a synergistic deep supervision to enhance the positive reinforcement
between stages, leading to improved overall performance. Experiments on OPV2V
and V2V4Real datasets show CoCMT outperforms state-of-the-art methods while
drastically reducing communication needs. On V2V4Real, our model (Top-50 object
queries) requires only 0.416 Mb bandwidth, 83 times less than SOTA methods,
while improving AP70 by 1.1 percent. This efficiency breakthrough enables
practical collaborative perception deployment in bandwidth-constrained
environments without sacrificing detection accuracy.
|
2503.13506 | Mustafa Cavus | Mustafa Cavus, Katarzyna Wo\'znica, Przemys{\l}aw Biecek | The Role of Hyperparameters in Predictive Multiplicity | 16 pages, 4 figures | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the critical role of hyperparameters in predictive
multiplicity, where different machine learning models trained on the same
dataset yield divergent predictions for identical inputs. These inconsistencies
can seriously impact high-stakes decisions such as credit assessments, hiring,
and medical diagnoses. Focusing on six widely used models for tabular data -
Elastic Net, Decision Tree, k-Nearest Neighbor, Support Vector Machine, Random
Forests, and Extreme Gradient Boosting - we explore how hyperparameter tuning
influences predictive multiplicity, as expressed by the distribution of
prediction discrepancies across benchmark datasets. Key hyperparameters such as
lambda in Elastic Net, gamma in Support Vector Machines, and alpha in Extreme
Gradient Boosting play a crucial role in shaping predictive multiplicity, often
compromising the stability of predictions within specific algorithms. Our
experiments on 21 benchmark datasets reveal that tuning these hyperparameters
leads to notable performance improvements but also increases prediction
discrepancies, with Extreme Gradient Boosting exhibiting the highest
discrepancy and substantial prediction instability. This highlights the
trade-off between performance optimization and prediction consistency, raising
concerns about the risk of arbitrary predictions. These findings provide
insight into how hyperparameter optimization leads to predictive multiplicity.
While predictive multiplicity allows prioritizing domain-specific objectives
such as fairness and reduces reliance on a single model, it also complicates
decision-making, potentially leading to arbitrary or unjustified outcomes.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 19:22:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Cavus",
"Mustafa",
""
],
[
"Woźnica",
"Katarzyna",
""
],
[
"Biecek",
"Przemysław",
""
]
] | TITLE: The Role of Hyperparameters in Predictive Multiplicity
ABSTRACT: This paper investigates the critical role of hyperparameters in predictive
multiplicity, where different machine learning models trained on the same
dataset yield divergent predictions for identical inputs. These inconsistencies
can seriously impact high-stakes decisions such as credit assessments, hiring,
and medical diagnoses. Focusing on six widely used models for tabular data -
Elastic Net, Decision Tree, k-Nearest Neighbor, Support Vector Machine, Random
Forests, and Extreme Gradient Boosting - we explore how hyperparameter tuning
influences predictive multiplicity, as expressed by the distribution of
prediction discrepancies across benchmark datasets. Key hyperparameters such as
lambda in Elastic Net, gamma in Support Vector Machines, and alpha in Extreme
Gradient Boosting play a crucial role in shaping predictive multiplicity, often
compromising the stability of predictions within specific algorithms. Our
experiments on 21 benchmark datasets reveal that tuning these hyperparameters
leads to notable performance improvements but also increases prediction
discrepancies, with Extreme Gradient Boosting exhibiting the highest
discrepancy and substantial prediction instability. This highlights the
trade-off between performance optimization and prediction consistency, raising
concerns about the risk of arbitrary predictions. These findings provide
insight into how hyperparameter optimization leads to predictive multiplicity.
While predictive multiplicity allows prioritizing domain-specific objectives
such as fairness and reduces reliance on a single model, it also complicates
decision-making, potentially leading to arbitrary or unjustified outcomes.
|
2503.13507 | Mark Saroufim | Mark Saroufim, Yotam Perlitz, Leshem Choshen, Luca Antiga, Greg
Bowyer, Christian Puhrsch, Driss Guessous, Supriya Rao, Geeta Chauhan,
Ashvini Kumar, Jindal Pawan Kumar, Rajpoot Ankur Parikh, Joe Isaacson, Weiwei
Yang | NeurIPS 2023 LLM Efficiency Fine-tuning Competition | 11 pages, 10 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Our analysis of the NeurIPS 2023 large language model (LLM) fine-tuning
competition revealed the following trend: top-performing models exhibit
significant overfitting on benchmark datasets, mirroring the broader issue of
benchmark overfitting on popular leaderboards, and that data curation is
essential in order to get a high-performing LLM. The competition, which
consisted of two stages - an open evaluation stage with publicly available
tasks and a closed evaluation stage with unseen tasks - allowed us to assess
the generalizability of fine-tuned LLMs. Our results highlight the limitations
of current benchmark-based evaluation schemes for generative models and
demonstrate the need for more robust evaluation methods. Notably, the winning
submissions utilized standard open-source libraries and focused primarily on
data curation. To facilitate further research and promote reproducibility, we
release all competition entries, Docker files, and evaluation infrastructure,
providing a valuable resource for the community to explore fine-tuning,
overfitting, and reproducibility in LLMs.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 19:35:40 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Saroufim",
"Mark",
""
],
[
"Perlitz",
"Yotam",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Antiga",
"Luca",
""
],
[
"Bowyer",
"Greg",
""
],
[
"Puhrsch",
"Christian",
""
],
[
"Guessous",
"Driss",
""
],
[
"Rao",
"Supriya",
""
],
[
"Chauhan",
"Geeta",
""
],
[
"Kumar",
"Ashvini",
""
],
[
"Kumar",
"Jindal Pawan",
""
],
[
"Parikh",
"Rajpoot Ankur",
""
],
[
"Isaacson",
"Joe",
""
],
[
"Yang",
"Weiwei",
""
]
] | TITLE: NeurIPS 2023 LLM Efficiency Fine-tuning Competition
ABSTRACT: Our analysis of the NeurIPS 2023 large language model (LLM) fine-tuning
competition revealed the following trend: top-performing models exhibit
significant overfitting on benchmark datasets, mirroring the broader issue of
benchmark overfitting on popular leaderboards, and that data curation is
essential in order to get a high-performing LLM. The competition, which
consisted of two stages - an open evaluation stage with publicly available
tasks and a closed evaluation stage with unseen tasks - allowed us to assess
the generalizability of fine-tuned LLMs. Our results highlight the limitations
of current benchmark-based evaluation schemes for generative models and
demonstrate the need for more robust evaluation methods. Notably, the winning
submissions utilized standard open-source libraries and focused primarily on
data curation. To facilitate further research and promote reproducibility, we
release all competition entries, Docker files, and evaluation infrastructure,
providing a valuable resource for the community to explore fine-tuning,
overfitting, and reproducibility in LLMs.
|
2503.13509 | Bojian Hou | Jia Xu, Tianyi Wei, Bojian Hou, Patryk Orzechowski, Shu Yang, Ruochen
Jin, Rachael Paulbeck, Joost Wagenaar, George Demiris, Li Shen | MentalChat16K: A Benchmark Dataset for Conversational Mental Health
Assistance | null | null | null | null | cs.LG cs.AI cs.CL cs.CY cs.HC | http://creativecommons.org/licenses/by-sa/4.0/ | We introduce MentalChat16K, an English benchmark dataset combining a
synthetic mental health counseling dataset and a dataset of anonymized
transcripts from interventions between Behavioral Health Coaches and Caregivers
of patients in palliative or hospice care. Covering a diverse range of
conditions like depression, anxiety, and grief, this curated dataset is
designed to facilitate the development and evaluation of large language models
for conversational mental health assistance. By providing a high-quality
resource tailored to this critical domain, MentalChat16K aims to advance
research on empathetic, personalized AI solutions to improve access to mental
health support services. The dataset prioritizes patient privacy, ethical
considerations, and responsible data usage. MentalChat16K presents a valuable
opportunity for the research community to innovate AI technologies that can
positively impact mental well-being.
| [
{
"version": "v1",
"created": "Thu, 13 Mar 2025 20:25:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Xu",
"Jia",
""
],
[
"Wei",
"Tianyi",
""
],
[
"Hou",
"Bojian",
""
],
[
"Orzechowski",
"Patryk",
""
],
[
"Yang",
"Shu",
""
],
[
"Jin",
"Ruochen",
""
],
[
"Paulbeck",
"Rachael",
""
],
[
"Wagenaar",
"Joost",
""
],
[
"Demiris",
"George",
""
],
[
"Shen",
"Li",
""
]
] | TITLE: MentalChat16K: A Benchmark Dataset for Conversational Mental Health
Assistance
ABSTRACT: We introduce MentalChat16K, an English benchmark dataset combining a
synthetic mental health counseling dataset and a dataset of anonymized
transcripts from interventions between Behavioral Health Coaches and Caregivers
of patients in palliative or hospice care. Covering a diverse range of
conditions like depression, anxiety, and grief, this curated dataset is
designed to facilitate the development and evaluation of large language models
for conversational mental health assistance. By providing a high-quality
resource tailored to this critical domain, MentalChat16K aims to advance
research on empathetic, personalized AI solutions to improve access to mental
health support services. The dataset prioritizes patient privacy, ethical
considerations, and responsible data usage. MentalChat16K presents a valuable
opportunity for the research community to innovate AI technologies that can
positively impact mental well-being.
|
2503.13521 | Ellen Veomett | Ananya Agarwal, Fnu Alusi, Arbie Hsu, Arif Syraj, and Ellen Veomett | States of Disarray: Cleaning Data for Gerrymandering Analysis | 12 pages, 3 figures | null | null | null | cs.DB cs.CY physics.soc-ph stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The mathematics of redistricting is an area of study that has exploded in
recent years. In particular, many different research groups and expert
witnesses in court cases have used outlier analysis to argue that a proposed
map is a gerrymander. This outlier analysis relies on having an ensemble of
potential redistricting maps against which the proposed map is compared.
Arguably the most widely-accepted method of creating such an ensemble is to use
a Markov Chain Monte Carlo (MCMC) process. This process requires that various
pieces of data be gathered, cleaned, and coalesced into a single file that can
be used as the seed of the MCMC process.
In this article, we describe how we have begun this cleaning process for each
state, and made the resulting data available for the public at
https://github.com/eveomett-states . At the time of submission, we have data
for 22 states available for researchers, students, and the general public to
easily access and analyze. We will continue the data cleaning process for each
state, and we hope that the availability of these datasets will both further
research in this area, and increase the public's interest in and understanding
of modern techniques to detect gerrymandering.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 19:33:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Agarwal",
"Ananya",
""
],
[
"Alusi",
"Fnu",
""
],
[
"Hsu",
"Arbie",
""
],
[
"Syraj",
"Arif",
""
],
[
"Veomett",
"Ellen",
""
]
] | TITLE: States of Disarray: Cleaning Data for Gerrymandering Analysis
ABSTRACT: The mathematics of redistricting is an area of study that has exploded in
recent years. In particular, many different research groups and expert
witnesses in court cases have used outlier analysis to argue that a proposed
map is a gerrymander. This outlier analysis relies on having an ensemble of
potential redistricting maps against which the proposed map is compared.
Arguably the most widely-accepted method of creating such an ensemble is to use
a Markov Chain Monte Carlo (MCMC) process. This process requires that various
pieces of data be gathered, cleaned, and coalesced into a single file that can
be used as the seed of the MCMC process.
In this article, we describe how we have begun this cleaning process for each
state, and made the resulting data available for the public at
https://github.com/eveomett-states . At the time of submission, we have data
for 22 states available for researchers, students, and the general public to
easily access and analyze. We will continue the data cleaning process for each
state, and we hope that the availability of these datasets will both further
research in this area, and increase the public's interest in and understanding
of modern techniques to detect gerrymandering.
|
2503.13534 | Wonjun Yi | Wonjun Yi, Yong-Hwa Park | Multi-output Classification for Compound Fault Diagnosis in Motor under
Partially Labeled Target Domain | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study presents a novel multi-output classification (MOC) framework
designed for domain adaptation in fault diagnosis, addressing challenges posed
by a partially labeled (PL) target domain dataset and coexisting faults in
rotating machinery. Unlike conventional multi-class classification (MCC)
approaches, the MOC framework independently classifies the severity of each
fault, enhancing diagnostic accuracy. By integrating multi-kernel maximum mean
discrepancy loss (MKMMD) and entropy minimization loss (EM), the proposed
method improves feature transferability between source and target domains,
while frequency layer normalization (FLN) effectively handles stationary
vibration signals by leveraging mechanical characteristics. Experimental
evaluations across six domain adaptation cases, encompassing partially labeled
(PL) scenarios, demonstrate the superior performance of the MOC approach over
baseline methods in terms of macro F1 score.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 14:15:10 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yi",
"Wonjun",
""
],
[
"Park",
"Yong-Hwa",
""
]
] | TITLE: Multi-output Classification for Compound Fault Diagnosis in Motor under
Partially Labeled Target Domain
ABSTRACT: This study presents a novel multi-output classification (MOC) framework
designed for domain adaptation in fault diagnosis, addressing challenges posed
by a partially labeled (PL) target domain dataset and coexisting faults in
rotating machinery. Unlike conventional multi-class classification (MCC)
approaches, the MOC framework independently classifies the severity of each
fault, enhancing diagnostic accuracy. By integrating multi-kernel maximum mean
discrepancy loss (MKMMD) and entropy minimization loss (EM), the proposed
method improves feature transferability between source and target domains,
while frequency layer normalization (FLN) effectively handles stationary
vibration signals by leveraging mechanical characteristics. Experimental
evaluations across six domain adaptation cases, encompassing partially labeled
(PL) scenarios, demonstrate the superior performance of the MOC approach over
baseline methods in terms of macro F1 score.
|
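The abstract above names two standard loss terms, multi-kernel maximum mean
discrepancy (MKMMD) and entropy minimization (EM). As a hedged illustration only
(not the authors' implementation; feature dimensions, kernel bandwidths, and
variable names below are placeholders), a minimal NumPy sketch of both terms:

import numpy as np

def rbf_kernel(x, y, gamma):
    # Pairwise RBF kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def multi_kernel_mmd2(source, target, gammas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # Biased MMD^2 estimate averaged over a bank of RBF bandwidths.
    mmd2 = 0.0
    for g in gammas:
        k_ss = rbf_kernel(source, source, g).mean()
        k_tt = rbf_kernel(target, target, g).mean()
        k_st = rbf_kernel(source, target, g).mean()
        mmd2 += k_ss + k_tt - 2.0 * k_st
    return mmd2 / len(gammas)

def entropy_minimization(probs, eps=1e-12):
    # Mean Shannon entropy of per-sample class probabilities (to be minimized).
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(64, 16))   # source-domain features
    tgt = rng.normal(0.5, 1.0, size=(64, 16))   # shifted target-domain features
    probs = rng.dirichlet(np.ones(3), size=64)  # softmax outputs on target data
    print("MK-MMD^2:", multi_kernel_mmd2(src, tgt))
    print("entropy :", entropy_minimization(probs))

In a domain-adaptation setup these two scalars would typically be added, with
weighting coefficients, to the supervised classification loss.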
2503.13537 | Binghui Zhang | Binghui Zhang, Luis Mares De La Cruz, Binghui Wang | FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated
Learning | 13 pages | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by-sa/4.0/ | Federated Learning (FL) is an emerging decentralized learning paradigm that
can partly address the privacy concern that cannot be handled by traditional
centralized and distributed learning. Further, to make FL practical, it is also
necessary to consider constraints such as fairness and robustness. However,
existing robust FL methods often produce unfair models, and existing fair FL
methods only consider one-level (client) fairness and are not robust to
persistent outliers (i.e., injected outliers into each training round) that are
common in real-world FL settings. We propose \texttt{FedTilt}, a novel FL framework that
can preserve multi-level fairness and be robust to outliers. In particular, we
consider two common levels of fairness, i.e., \emph{client fairness} --
uniformity of performance across clients, and \emph{client data fairness} --
uniformity of performance across different classes of data within a client.
\texttt{FedTilt} is inspired by the recently proposed tilted empirical risk
minimization, which introduces tilt hyperparameters that can be flexibly tuned.
Theoretically, we show how tuning tilt values can achieve the two-level
fairness and mitigate the persistent outliers, and derive the convergence
condition of \texttt{FedTilt} as well. Empirically, our evaluation results on a
suite of realistic federated datasets in diverse settings show the
effectiveness and flexibility of the \texttt{FedTilt} framework and its
superiority over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 19:57:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Binghui",
""
],
[
"De La Cruz",
"Luis Mares",
""
],
[
"Wang",
"Binghui",
""
]
] | TITLE: FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated
Learning
ABSTRACT: Federated Learning (FL) is an emerging decentralized learning paradigm that
can partly address the privacy concern that cannot be handled by traditional
centralized and distributed learning. Further, to make FL practical, it is also
necessary to consider constraints such as fairness and robustness. However,
existing robust FL methods often produce unfair models, and existing fair FL
methods only consider one-level (client) fairness and are not robust to
persistent outliers (i.e., injected outliers into each training round) that are
common in real-world FL settings. We propose \texttt{FedTilt}, a novel FL framework that
can preserve multi-level fairness and be robust to outliers. In particular, we
consider two common levels of fairness, i.e., \emph{client fairness} --
uniformity of performance across clients, and \emph{client data fairness} --
uniformity of performance across different classes of data within a client.
\texttt{FedTilt} is inspired by the recently proposed tilted empirical risk
minimization, which introduces tilt hyperparameters that can be flexibly tuned.
Theoretically, we show how tuning tilt values can achieve the two-level
fairness and mitigate the persistent outliers, and derive the convergence
condition of \texttt{FedTilt} as well. Empirically, our evaluation results on a
suite of realistic federated datasets in diverse settings show the
effectiveness and flexibility of the \texttt{FedTilt} framework and its
superiority over state-of-the-art methods.
|
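FedTilt builds on tilted empirical risk minimization (TERM). The sketch below is
a generic two-level tilted aggregation, assuming per-client, per-class mean
losses are already available; the tilt values, data layout, and function names
are illustrative and not taken from the paper.

import numpy as np

def tilted_mean(losses, t):
    # Tilted aggregation (1/t) * log(mean(exp(t * l))); t -> 0 recovers the
    # plain mean, t > 0 emphasizes the worst losses, t < 0 the best ones.
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-8:
        return losses.mean()
    m = (t * losses).max()  # log-sum-exp shift for numerical stability
    return (m + np.log(np.exp(t * losses - m).mean())) / t

def two_level_tilted_objective(per_class_losses_per_client, t_client, t_class):
    # per_class_losses_per_client: list of per-client lists of per-class losses.
    client_objectives = [tilted_mean(c, t_class) for c in per_class_losses_per_client]
    return tilted_mean(client_objectives, t_client)

if __name__ == "__main__":
    clients = [[0.2, 0.9, 0.4], [0.3, 0.3, 0.5], [1.5, 0.2, 0.4]]
    print(two_level_tilted_objective(clients, t_client=1.0, t_class=2.0))

In TERM-style objectives, positive tilts up-weight the worst-performing clients
or classes (fairness), while negative tilts suppress extreme losses (outlier
robustness), which is the mechanism the abstract relies on.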
2503.13540 | Yuan Zhu | Weiyang Geng, Yiming Pan, Zhecong Xing, Dongyu Liu, Rui Liu, and Yuan
Zhu | MSCMHMST: A traffic flow prediction model based on Transformer | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study proposes a hybrid model based on Transformers, named MSCMHMST,
aimed at addressing key challenges in traffic flow prediction. Traditional
single-method approaches show limitations in traffic prediction tasks, whereas
hybrid methods, by integrating the strengths of different models, can provide
more accurate and robust predictions. The MSCMHMST model introduces a
multi-head, multi-scale attention mechanism, allowing the model to process
different parts of the data in parallel and learn its intrinsic representations
from multiple perspectives, thereby enhancing the model's ability to handle
complex situations. This mechanism enables the model to capture features at
various scales effectively, understanding both short-term changes and long-term
trends. Verified through experiments on the PeMS04/08 dataset with specific
experimental settings, the MSCMHMST model demonstrated excellent robustness and
accuracy in long, medium, and short-term traffic flow predictions. The results
indicate that this model has significant potential, offering a new and
effective solution for the field of traffic flow prediction.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 03:40:32 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Geng",
"Weiyang",
""
],
[
"Pan",
"Yiming",
""
],
[
"Xing",
"Zhecong",
""
],
[
"Liu",
"Dongyu",
""
],
[
"Liu",
"Rui",
""
],
[
"Zhu",
"Yuan",
""
]
] | TITLE: MSCMHMST: A traffic flow prediction model based on Transformer
ABSTRACT: This study proposes a hybrid model based on Transformers, named MSCMHMST,
aimed at addressing key challenges in traffic flow prediction. Traditional
single-method approaches show limitations in traffic prediction tasks, whereas
hybrid methods, by integrating the strengths of different models, can provide
more accurate and robust predictions. The MSCMHMST model introduces a
multi-head, multi-scale attention mechanism, allowing the model to process
different parts of the data in parallel and learn its intrinsic representations
from multiple perspectives, thereby enhancing the model's ability to handle
complex situations. This mechanism enables the model to capture features at
various scales effectively, understanding both short-term changes and long-term
trends. Verified through experiments on the PeMS04/08 dataset with specific
experimental settings, the MSCMHMST model demonstrated excellent robustness and
accuracy in long, medium, and short-term traffic flow predictions. The results
indicate that this model has significant potential, offering a new and
effective solution for the field of traffic flow prediction.
|
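For readers unfamiliar with the attention machinery the abstract refers to, here
is a plain NumPy sketch of standard multi-head scaled dot-product attention; the
multi-scale aspect (varying temporal receptive fields per head or branch) is not
reproduced here, and all shapes and weight names are assumptions.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, wq, wk, wv, wo, num_heads):
    # Standard scaled dot-product attention, split across heads.
    t, d = x.shape
    dh = d // num_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    out = np.zeros_like(x)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        attn = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh))
        out[:, s] = attn @ v[:, s]
    return out @ wo

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t, d, heads = 12, 16, 4  # 12 time steps of traffic-flow features
    x = rng.normal(size=(t, d))
    wq, wk, wv, wo = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
    print(multi_head_attention(x, wq, wk, wv, wo, heads).shape)  # (12, 16)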
2503.13542 | Lulu Ban | Lulu Ban, Tao Zhu, Xiangqing Lu, Qi Qiu, Wenyong Han, Shuangjian Li,
Liming Chen, Kevin I-Kai Wang, Mingxing Nie, and Yaping Wan | HAR-DoReMi: Optimizing Data Mixture for Self-Supervised Human Activity
Recognition Across Heterogeneous IMU Datasets | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-dataset Human Activity Recognition (HAR) suffers from limited model
generalization, hindering its practical deployment. To address this critical
challenge, inspired by the success of DoReMi in Large Language Models (LLMs),
we introduce a data mixture optimization strategy for pre-training HAR models,
aiming to improve the recognition performance across heterogeneous datasets.
However, directly applying DoReMi to the HAR field encounters new challenges
due to the continuous, multi-channel, and intrinsically heterogeneous
characteristics of IMU sensor data. To overcome these limitations, we propose a
novel framework HAR-DoReMi, which introduces a masked reconstruction task based
on Mean Squared Error (MSE) loss. By replacing the discrete language sequence
prediction task of the original DoReMi framework, which relies on the Negative
Log-Likelihood (NLL) loss, the proposed framework is inherently more
appropriate for handling the continuous and multi-channel characteristics of
IMU data. In addition, HAR-DoReMi integrates the Mahony fusion algorithm into
the self-supervised HAR pre-training, aiming to mitigate the heterogeneity of
varying sensor orientation. This is achieved by estimating the sensor
orientation within each dataset and facilitating alignment with a unified
coordinate system, thereby improving the cross-dataset generalization ability
of the HAR model. Experimental evaluation on multiple cross-dataset HAR
transfer tasks demonstrates that HAR-DoReMi improves the accuracy by an average
of 6.51%, compared to the current state-of-the-art method with only
approximately 30% to 50% of the data usage. These results confirm the
effectiveness of HAR-DoReMi in improving the generalization and data efficiency
of pre-training HAR models, underscoring its significant potential to
facilitate the practical deployment of HAR technology.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 04:31:58 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ban",
"Lulu",
""
],
[
"Zhu",
"Tao",
""
],
[
"Lu",
"Xiangqing",
""
],
[
"Qiu",
"Qi",
""
],
[
"Han",
"Wenyong",
""
],
[
"Li",
"Shuangjian",
""
],
[
"Chen",
"Liming",
""
],
[
"Wang",
"Kevin I-Kai",
""
],
[
"Nie",
"Mingxing",
""
],
[
"Wan",
"Yaping",
""
]
] | TITLE: HAR-DoReMi: Optimizing Data Mixture for Self-Supervised Human Activity
Recognition Across Heterogeneous IMU Datasets
ABSTRACT: Cross-dataset Human Activity Recognition (HAR) suffers from limited model
generalization, hindering its practical deployment. To address this critical
challenge, inspired by the success of DoReMi in Large Language Models (LLMs),
we introduce a data mixture optimization strategy for pre-training HAR models,
aiming to improve the recognition performance across heterogeneous datasets.
However, directly applying DoReMi to the HAR field encounters new challenges
due to the continuous, multi-channel, and intrinsically heterogeneous
characteristics of IMU sensor data. To overcome these limitations, we propose a
novel framework HAR-DoReMi, which introduces a masked reconstruction task based
on Mean Squared Error (MSE) loss. By replacing the discrete language sequence
prediction task of the original DoReMi framework, which relies on the Negative
Log-Likelihood (NLL) loss, the proposed framework is inherently more
appropriate for handling the continuous and multi-channel characteristics of
IMU data. In addition, HAR-DoReMi integrates the Mahony fusion algorithm into
the self-supervised HAR pre-training, aiming to mitigate the heterogeneity of
varying sensor orientation. This is achieved by estimating the sensor
orientation within each dataset and facilitating alignment with a unified
coordinate system, thereby improving the cross-dataset generalization ability
of the HAR model. Experimental evaluation on multiple cross-dataset HAR
transfer tasks demonstrates that HAR-DoReMi improves the accuracy by an average
of 6.51%, compared to the current state-of-the-art method with only
approximately 30% to 50% of the data usage. These results confirm the
effectiveness of HAR-DoReMi in improving the generalization and data efficiency
of pre-training HAR models, underscoring its significant potential to
facilitate the practical deployment of HAR technology.
|
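The masked reconstruction objective with an MSE loss mentioned above can be
sketched generically as follows; the masking ratio, window length, and channel
count are arbitrary placeholders rather than the paper's settings.

import numpy as np

def masked_mse(signal, reconstruction, mask):
    # MSE computed only on the masked (hidden) time steps.
    diff2 = (signal - reconstruction) ** 2
    return float((diff2 * mask[..., None]).sum()
                 / (mask.sum() * signal.shape[-1] + 1e-12))

def random_time_mask(batch, length, ratio, rng):
    # 1 where the time step is hidden from the encoder, 0 where it is visible.
    return (rng.random((batch, length)) < ratio).astype(float)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 128, 6))            # batch of 6-channel IMU windows
    x_hat = x + 0.1 * rng.normal(size=x.shape)  # stand-in for decoder output
    mask = random_time_mask(8, 128, ratio=0.5, rng=rng)
    print("masked reconstruction MSE:", masked_mse(x, x_hat, mask))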
2503.13548 | Wei Zhang | Wei Zhang, Zhaohong Deng, Guanjin Wang, Kup-Sze Choi | Fuzzy Rule-based Differentiable Representation Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Representation learning has emerged as a crucial focus in machine and deep
learning, involving the extraction of meaningful and useful features and
patterns from the input data, thereby enhancing the performance of various
downstream tasks such as classification, clustering, and prediction. Current
mainstream representation learning methods primarily rely on non-linear data
mining techniques such as kernel methods and deep neural networks to extract
abstract knowledge from complex datasets. However, most of these methods are
black-box, lacking transparency and interpretability in the learning process,
which constrains their practical utility. To this end, this paper introduces a
novel representation learning method grounded in an interpretable fuzzy
rule-based model. Specifically, it is built upon the Takagi-Sugeno-Kang fuzzy
system (TSK-FS) to initially map input data to a high-dimensional fuzzy feature
space through the antecedent part of the TSK-FS. Subsequently, a novel
differentiable optimization method is proposed for consequent-part learning,
which can preserve the model's interpretability and transparency while
further exploring the nonlinear relationships within the data. This
optimization method retains the essence of traditional optimization: certain
parts of the process are parameterized, corresponding differentiable modules are
constructed, and a deep optimization process is implemented. Consequently, this
method not only enhances the model's performance but also ensures its
interpretability. Moreover, a second-order geometry preservation method is
introduced to further improve the robustness of the proposed method. Extensive
experiments conducted on various benchmark datasets validate the superiority of
the proposed method, highlighting its potential for advancing representation
learning methodologies.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:00:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Wei",
""
],
[
"Deng",
"Zhaohong",
""
],
[
"Wang",
"Guanjin",
""
],
[
"Choi",
"Kup-Sze",
""
]
] | TITLE: Fuzzy Rule-based Differentiable Representation Learning
ABSTRACT: Representation learning has emerged as a crucial focus in machine and deep
learning, involving the extraction of meaningful and useful features and
patterns from the input data, thereby enhancing the performance of various
downstream tasks such as classification, clustering, and prediction. Current
mainstream representation learning methods primarily rely on non-linear data
mining techniques such as kernel methods and deep neural networks to extract
abstract knowledge from complex datasets. However, most of these methods are
black-box, lacking transparency and interpretability in the learning process,
which constrains their practical utility. To this end, this paper introduces a
novel representation learning method grounded in an interpretable fuzzy
rule-based model. Specifically, it is built upon the Takagi-Sugeno-Kang fuzzy
system (TSK-FS) to initially map input data to a high-dimensional fuzzy feature
space through the antecedent part of the TSK-FS. Subsequently, a novel
differentiable optimization method is proposed for consequent-part learning,
which can preserve the model's interpretability and transparency while
further exploring the nonlinear relationships within the data. This
optimization method retains the essence of traditional optimization: certain
parts of the process are parameterized, corresponding differentiable modules are
constructed, and a deep optimization process is implemented. Consequently, this
method not only enhances the model's performance but also ensures its
interpretability. Moreover, a second-order geometry preservation method is
introduced to further improve the robustness of the proposed method. Extensive
experiments conducted on various benchmark datasets validate the superiority of
the proposed method, highlighting its potential for advancing representation
learning methodologies.
|
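A minimal sketch of the TSK-FS antecedent mapping the abstract describes
(Gaussian memberships, product firing strengths, normalization, and fuzzy
feature construction from the augmented input). Rule centers and widths would
normally come from clustering; the differentiable consequent learning and the
second-order geometry preservation components of the paper are not shown.

import numpy as np

def tsk_antecedent_features(x, centers, sigmas):
    # Map x into the fuzzy feature space: per-rule Gaussian memberships are
    # combined by product into firing strengths, normalized across rules, and
    # used to weight the augmented input [1, x] for each rule.
    n, d = x.shape
    r = centers.shape[0]
    memb = np.exp(-((x[:, None, :] - centers[None, :, :]) ** 2)
                  / (2.0 * sigmas[None, :, :] ** 2))
    firing = memb.prod(axis=2)                               # (n, r)
    firing = firing / (firing.sum(axis=1, keepdims=True) + 1e-12)
    x_aug = np.hstack([np.ones((n, 1)), x])                  # (n, d + 1)
    return (firing[:, :, None] * x_aug[:, None, :]).reshape(n, r * (d + 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))
    centers = rng.normal(size=(4, 3))   # 4 rules in a 3-d input space
    sigmas = np.ones((4, 3))
    print(tsk_antecedent_features(x, centers, sigmas).shape)  # (5, 16)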
2503.13552 | Weihan Li | Weihan Li, Harshvardhan Samsukha, Bruis van Vlijmen, Lisen Yan, Samuel
Greenbank, Simona Onori, Venkat Viswanathan | Fast data augmentation for battery degradation prediction | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Degradation prediction for lithium-ion batteries using data-driven methods
requires high-quality aging data. However, generating such data, whether in the
laboratory or the field, is time- and resource-intensive. Here, we propose a
method for the synthetic generation of capacity fade curves based on limited
battery tests or operation data without the need for invasive battery
characterization, aiming to augment the datasets used by data-driven models for
degradation prediction. We validate our method by evaluating the performance of
both shallow and deep learning models using diverse datasets from laboratory
and field applications. These datasets encompass various chemistries and
realistic conditions, including cell-to-cell variations, measurement noise,
varying charge-discharge conditions, and capacity recovery. Our results show
that it is possible to reduce cell-testing efforts by at least 50% by
substituting synthetic data into an existing dataset. This paper highlights the
effectiveness of our synthetic data augmentation method in supplementing
existing methodologies in battery health prognostics while dramatically
reducing the expenditure of time and resources on battery aging experiments.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 16:50:07 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Weihan",
""
],
[
"Samsukha",
"Harshvardhan",
""
],
[
"van Vlijmen",
"Bruis",
""
],
[
"Yan",
"Lisen",
""
],
[
"Greenbank",
"Samuel",
""
],
[
"Onori",
"Simona",
""
],
[
"Viswanathan",
"Venkat",
""
]
] | TITLE: Fast data augmentation for battery degradation prediction
ABSTRACT: Degradation prediction for lithium-ion batteries using data-driven methods
requires high-quality aging data. However, generating such data, whether in the
laboratory or the field, is time- and resource-intensive. Here, we propose a
method for the synthetic generation of capacity fade curves based on limited
battery tests or operation data without the need for invasive battery
characterization, aiming to augment the datasets used by data-driven models for
degradation prediction. We validate our method by evaluating the performance of
both shallow and deep learning models using diverse datasets from laboratory
and field applications. These datasets encompass various chemistries and
realistic conditions, including cell-to-cell variations, measurement noise,
varying charge-discharge conditions, and capacity recovery. Our results show
that it is possible to reduce cell-testing efforts by at least 50% by
substituting synthetic data into an existing dataset. This paper highlights the
effectiveness of our synthetic data augmentation method in supplementing
existing methodologies in battery health prognostics while dramatically
reducing the expenditure of time and resources on battery aging experiments.
|
2503.13560 | Zhaodong Wu | Zhaodong Wu, Qiaochu Zhao, Ming Hu, Yulong Li, Haochen Xue, Kang Dang,
Zhengyong Jiang, Angelos Stefanidis, Qiufeng Wang, Imran Razzak, Zongyuan Ge,
Junjun He, Yu Qiao, Zhong Zheng, Feilong Tang, Jionglong Su | MSWAL: 3D Multi-class Segmentation of Whole Abdominal Lesions Dataset | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the significantly increasing incidence and prevalence of abdominal
diseases, there is a need to embrace greater use of new innovations and
technology for the diagnosis and treatment of patients. Although deep-learning
methods have notably been developed to assist radiologists in diagnosing
abdominal diseases, existing models have the restricted ability to segment
common lesions in the abdomen due to missing annotations for typical abdominal
pathologies in their training datasets. To address the limitation, we introduce
MSWAL, the first 3D Multi-class Segmentation of the Whole Abdominal Lesions
dataset, which broadens the coverage of various common lesion types, such as
gallstones, kidney stones, liver tumors, kidney tumors, pancreatic cancer,
liver cysts, and kidney cysts. With CT scans collected from 694 patients
(191,417 slices) of different genders across various scanning phases, MSWAL
demonstrates strong robustness and generalizability. The transfer learning
experiment from MSWAL to two public datasets, LiTS and KiTS, effectively
demonstrates consistent improvements, with Dice Similarity Coefficient (DSC)
increase of 3.00% for liver tumors and 0.89% for kidney tumors, demonstrating
that the comprehensive annotations and diverse lesion types in MSWAL facilitate
effective learning across different domains and data distributions.
Furthermore, we propose Inception nnU-Net, a novel segmentation framework that
effectively integrates an Inception module with the nnU-Net architecture to
extract information from different receptive fields, achieving significant
enhancement in both voxel-level DSC and region-level F1 compared to the
cutting-edge public algorithms on MSWAL. Our dataset will be released upon
acceptance of the paper, and the code is publicly released at
https://github.com/tiuxuxsh76075/MSWAL-.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 06:31:25 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wu",
"Zhaodong",
""
],
[
"Zhao",
"Qiaochu",
""
],
[
"Hu",
"Ming",
""
],
[
"Li",
"Yulong",
""
],
[
"Xue",
"Haochen",
""
],
[
"Dang",
"Kang",
""
],
[
"Jiang",
"Zhengyong",
""
],
[
"Stefanidis",
"Angelos",
""
],
[
"Wang",
"Qiufeng",
""
],
[
"Razzak",
"Imran",
""
],
[
"Ge",
"Zongyuan",
""
],
[
"He",
"Junjun",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zheng",
"Zhong",
""
],
[
"Tang",
"Feilong",
""
],
[
"Su",
"Jionglong",
""
]
] | TITLE: MSWAL: 3D Multi-class Segmentation of Whole Abdominal Lesions Dataset
ABSTRACT: With the significantly increasing incidence and prevalence of abdominal
diseases, there is a need to embrace greater use of new innovations and
technology for the diagnosis and treatment of patients. Although deep-learning
methods have notably been developed to assist radiologists in diagnosing
abdominal diseases, existing models have only a limited ability to segment
common lesions in the abdomen due to missing annotations for typical abdominal
pathologies in their training datasets. To address the limitation, we introduce
MSWAL, the first 3D Multi-class Segmentation of the Whole Abdominal Lesions
dataset, which broadens the coverage of various common lesion types, such as
gallstones, kidney stones, liver tumors, kidney tumors, pancreatic cancer,
liver cysts, and kidney cysts. With CT scans collected from 694 patients
(191,417 slices) of different genders across various scanning phases, MSWAL
demonstrates strong robustness and generalizability. The transfer learning
experiment from MSWAL to two public datasets, LiTS and KiTS, effectively
demonstrates consistent improvements, with Dice Similarity Coefficient (DSC)
increase of 3.00% for liver tumors and 0.89% for kidney tumors, demonstrating
that the comprehensive annotations and diverse lesion types in MSWAL facilitate
effective learning across different domains and data distributions.
Furthermore, we propose Inception nnU-Net, a novel segmentation framework that
effectively integrates an Inception module with the nnU-Net architecture to
extract information from different receptive fields, achieving significant
enhancement in both voxel-level DSC and region-level F1 compared to the
cutting-edge public algorithms on MSWAL. Our dataset will be released upon
acceptance of the paper, and the code is publicly released at
https://github.com/tiuxuxsh76075/MSWAL-.
|
2503.13562 | Lin-Han Jia | Lin-Han Jia, Lan-Zhe Guo, Zhi Zhou, Si-Ye Han, Zi-Wen Li, Yu-Feng Li | Micro Text Classification Based on Balanced Positive-Unlabeled Learning | null | null | null | null | stat.ML cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In real-world text classification tasks, negative texts often contain a
minimal proportion of negative content, which is especially problematic in
areas like text quality control, legal risk screening, and sensitive
information interception. This challenge manifests at two levels: at the macro
level, distinguishing negative texts is difficult due to the high similarity
between coarse-grained positive and negative samples; at the micro level, the
issue stems from extreme class imbalance and a lack of fine-grained labels. To
address these challenges, we propose transforming the coarse-grained
positive-negative (PN) classification task into an imbalanced fine-grained
positive-unlabeled (PU) classification problem, supported by theoretical
analysis. We introduce a novel framework, Balanced Fine-Grained
Positive-Unlabeled (BFGPU) learning, which features a unique PU learning loss
function that optimizes macro-level performance amidst severe imbalance at the
micro level. The framework's performance is further boosted by rebalanced
pseudo-labeling and threshold adjustment. Extensive experiments on both public
and real-world datasets demonstrate the effectiveness of BFGPU, which
outperforms other methods, even in extreme scenarios where both macro and micro
levels are highly imbalanced.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 07:42:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jia",
"Lin-Han",
""
],
[
"Guo",
"Lan-Zhe",
""
],
[
"Zhou",
"Zhi",
""
],
[
"Han",
"Si-Ye",
""
],
[
"Li",
"Zi-Wen",
""
],
[
"Li",
"Yu-Feng",
""
]
] | TITLE: Micro Text Classification Based on Balanced Positive-Unlabeled Learning
ABSTRACT: In real-world text classification tasks, negative texts often contain a
minimal proportion of negative content, which is especially problematic in
areas like text quality control, legal risk screening, and sensitive
information interception. This challenge manifests at two levels: at the macro
level, distinguishing negative texts is difficult due to the high similarity
between coarse-grained positive and negative samples; at the micro level, the
issue stems from extreme class imbalance and a lack of fine-grained labels. To
address these challenges, we propose transforming the coarse-grained
positive-negative (PN) classification task into an imbalanced fine-grained
positive-unlabeled (PU) classification problem, supported by theoretical
analysis. We introduce a novel framework, Balanced Fine-Grained
Positive-Unlabeled (BFGPU) learning, which features a unique PU learning loss
function that optimizes macro-level performance amidst severe imbalance at the
micro level. The framework's performance is further boosted by rebalanced
pseudo-labeling and threshold adjustment. Extensive experiments on both public
and real-world datasets demonstrate the effectiveness of BFGPU, which
outperforms other methods, even in extreme scenarios where both macro and micro
levels are highly imbalanced.
|
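The abstract's framework centers on a PU learning loss. As a point of reference
only, below is the standard non-negative PU (nnPU) risk estimator of Kiryo et
al., a common starting point for such losses; it is not the BFGPU loss itself,
and the class prior and score distributions are made up for the example.

import numpy as np

def sigmoid_loss(scores, label):
    # Smooth surrogate loss l(z, y) = sigmoid(-y * z).
    return 1.0 / (1.0 + np.exp(label * scores))

def nnpu_risk(scores_p, scores_u, prior):
    # Non-negative PU risk: R = pi * R_p^+ + max(0, R_u^- - pi * R_p^-).
    r_p_pos = sigmoid_loss(scores_p, +1).mean()
    r_p_neg = sigmoid_loss(scores_p, -1).mean()
    r_u_neg = sigmoid_loss(scores_u, -1).mean()
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores_p = rng.normal(1.0, 1.0, size=200)    # scores on labeled positives
    scores_u = rng.normal(-0.2, 1.0, size=2000)  # scores on unlabeled texts
    print("nnPU risk (pi = 0.05):", nnpu_risk(scores_p, scores_u, prior=0.05))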
2503.13568 | Gal Versano | Gal Versano and Itzik Klein | WMINet: A Wheel-Mounted Inertial Learning Approach For Mobile-Robot
Positioning | null | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous mobile robots are widely used for navigation, transportation, and
inspection tasks indoors and outdoors. In practical situations of limited
satellite signals or poor lighting conditions, navigation depends only on
inertial sensors. In such cases, the navigation solution rapidly drifts due to
inertial measurement errors. In this work, we propose WMINet, a wheel-mounted
inertial deep learning approach to estimate the mobile robot's position based
only on its inertial sensors. To that end, we merge two common practical
methods to reduce inertial drift: a wheel-mounted approach and driving the
mobile robot in periodic trajectories. Additionally, we enforce a wheelbase
constraint to further improve positioning performance. To evaluate our proposed
approach, we used the Rosbot-XL to record a wheel-mounted inertial dataset
totaling 190 minutes, which is made publicly available. Our approach
demonstrated a 66\% improvement over state-of-the-art approaches. As a
consequence, our approach enables navigation in challenging environments and
bridges the pure inertial gap. This enables seamless robot navigation using
only inertial sensors for short periods.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:43:46 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Versano",
"Gal",
""
],
[
"Klein",
"Itzik",
""
]
] | TITLE: WMINet: A Wheel-Mounted Inertial Learning Approach For Mobile-Robot
Positioning
ABSTRACT: Autonomous mobile robots are widely used for navigation, transportation, and
inspection tasks indoors and outdoors. In practical situations of limited
satellite signals or poor lighting conditions, navigation depends only on
inertial sensors. In such cases, the navigation solution rapidly drifts due to
inertial measurement errors. In this work, we propose WMINet, a wheel-mounted
inertial deep learning approach to estimate the mobile robot's position based
only on its inertial sensors. To that end, we merge two common practical
methods to reduce inertial drift: a wheel-mounted approach and driving the
mobile robot in periodic trajectories. Additionally, we enforce a wheelbase
constraint to further improve positioning performance. To evaluate our proposed
approach, we used the Rosbot-XL to record a wheel-mounted inertial dataset
totaling 190 minutes, which is made publicly available. Our approach
demonstrated a 66\% improvement over state-of-the-art approaches. As a
consequence, our approach enables navigation in challenging environments and
bridges the pure inertial gap. This enables seamless robot navigation using
only inertial sensors for short periods.
|
2503.13572 | Minghao Shao | Zeng Wang, Minghao Shao, Jitendra Bhandari, Likhitha Mankali, Ramesh
Karri, Ozgur Sinanoglu, Muhammad Shafique, Johann Knechtel | VeriContaminated: Assessing LLM-Driven Verilog Coding for Data
Contamination | null | null | null | null | cs.AR cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have revolutionized code generation, achieving
exceptional results on various established benchmarking frameworks. However,
concerns about data contamination - where benchmark data inadvertently leaks
into pre-training or fine-tuning datasets - raise questions about the validity
of these evaluations. While this issue is known, limiting the industrial
adoption of LLM-driven software engineering, hardware coding has received
little to no attention regarding these risks. For the first time, we analyze
state-of-the-art (SOTA) evaluation frameworks for Verilog code generation
(VerilogEval and RTLLM), using established methods for contamination detection
(CCD and Min-K% Prob). We cover SOTA commercial and open-source LLMs
(CodeGen2.5, Minitron 4b, Mistral 7b, phi-4 mini, LLaMA-{1,2,3.1},
GPT-{2,3.5,4o}, Deepseek-Coder, and CodeQwen 1.5), in baseline and fine-tuned
models (RTLCoder and Verigen). Our study confirms that data contamination is a
critical concern. We explore mitigations and the resulting trade-offs for code
quality vs fairness (i.e., reducing contamination toward unbiased
benchmarking).
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 12:26:49 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Zeng",
""
],
[
"Shao",
"Minghao",
""
],
[
"Bhandari",
"Jitendra",
""
],
[
"Mankali",
"Likhitha",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Sinanoglu",
"Ozgur",
""
],
[
"Shafique",
"Muhammad",
""
],
[
"Knechtel",
"Johann",
""
]
] | TITLE: VeriContaminated: Assessing LLM-Driven Verilog Coding for Data
Contamination
ABSTRACT: Large Language Models (LLMs) have revolutionized code generation, achieving
exceptional results on various established benchmarking frameworks. However,
concerns about data contamination - where benchmark data inadvertently leaks
into pre-training or fine-tuning datasets - raise questions about the validity
of these evaluations. While this issue is known, limiting the industrial
adoption of LLM-driven software engineering, hardware coding has received
little to no attention regarding these risks. For the first time, we analyze
state-of-the-art (SOTA) evaluation frameworks for Verilog code generation
(VerilogEval and RTLLM), using established methods for contamination detection
(CCD and Min-K% Prob). We cover SOTA commercial and open-source LLMs
(CodeGen2.5, Minitron 4b, Mistral 7b, phi-4 mini, LLaMA-{1,2,3.1},
GPT-{2,3.5,4o}, Deepseek-Coder, and CodeQwen 1.5), in baseline and fine-tuned
models (RTLCoder and Verigen). Our study confirms that data contamination is a
critical concern. We explore mitigations and the resulting trade-offs for code
quality vs fairness (i.e., reducing contamination toward unbiased
benchmarking).
|
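One of the contamination detectors named in the abstract, Min-K% Prob, can be
sketched in a few lines: score a candidate text by the average log-probability
of its least likely k% of tokens under the model, with higher scores suggesting
the text appeared in training data. The per-token log-probabilities below are
hypothetical stand-ins for values an LLM would assign to a Verilog snippet.

import numpy as np

def min_k_percent_prob(token_logprobs, k=20.0):
    # Average log-probability of the k% least likely tokens in the sequence.
    lp = np.sort(np.asarray(token_logprobs, dtype=float))
    n = max(1, int(len(lp) * k / 100.0))
    return float(lp[:n].mean())

if __name__ == "__main__":
    logprobs = [-0.1, -0.3, -2.5, -0.2, -4.0, -0.05, -1.2, -0.4, -3.1, -0.15]
    print("Min-20% Prob:", min_k_percent_prob(logprobs, k=20.0))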
2503.13581 | Beatrice Brown-Mulry | Beatrice Brown-Mulry, Rohan Satya Isaac, Sang Hyup Lee, Ambika Seth,
KyungJee Min, Theo Dapamede, Frank Li, Aawez Mansuri, MinJae Woo, Christian
Allison Fauria-Robinson, Bhavna Paryani, Judy Wawira Gichoya, Hari Trivedi | Subgroup Performance of a Commercial Digital Breast Tomosynthesis Model
for Breast Cancer Detection | 14 pages, 7 figures (plus 7 figures in supplement), 3 tables (plus 1
table in supplement) | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While research has established the potential of AI models for mammography to
improve breast cancer screening outcomes, there have not been any detailed
subgroup evaluations performed to assess the strengths and weaknesses of
commercial models for digital breast tomosynthesis (DBT) imaging. This study
presents a granular evaluation of the Lunit INSIGHT DBT model on a large
retrospective cohort of 163,449 screening mammography exams from the Emory
Breast Imaging Dataset (EMBED). Model performance was evaluated in a binary
context with various negative exam types (162,081 exams) compared against
screen-detected cancers (1,368 exams) as the positive class. The analysis was
stratified across demographic, imaging, and pathologic subgroups to identify
potential disparities. The model achieved an overall AUC of 0.91 (95% CI:
0.90-0.92) with a precision of 0.08 (95% CI: 0.08-0.08), and a recall of 0.73
(95% CI: 0.71-0.76). Performance was found to be robust across demographics,
but cases with non-invasive cancers (AUC: 0.85, 95% CI: 0.83-0.87),
calcifications (AUC: 0.80, 95% CI: 0.78-0.82), and dense breast tissue (AUC:
0.90, 95% CI: 0.88-0.91) were associated with significantly lower performance
compared to other groups. These results highlight the need for detailed
evaluation of model characteristics and vigilance in considering adoption of
new tools for clinical deployment.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 17:17:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Brown-Mulry",
"Beatrice",
""
],
[
"Isaac",
"Rohan Satya",
""
],
[
"Lee",
"Sang Hyup",
""
],
[
"Seth",
"Ambika",
""
],
[
"Min",
"KyungJee",
""
],
[
"Dapamede",
"Theo",
""
],
[
"Li",
"Frank",
""
],
[
"Mansuri",
"Aawez",
""
],
[
"Woo",
"MinJae",
""
],
[
"Fauria-Robinson",
"Christian Allison",
""
],
[
"Paryani",
"Bhavna",
""
],
[
"Gichoya",
"Judy Wawira",
""
],
[
"Trivedi",
"Hari",
""
]
] | TITLE: Subgroup Performance of a Commercial Digital Breast Tomosynthesis Model
for Breast Cancer Detection
ABSTRACT: While research has established the potential of AI models for mammography to
improve breast cancer screening outcomes, there have not been any detailed
subgroup evaluations performed to assess the strengths and weaknesses of
commercial models for digital breast tomosynthesis (DBT) imaging. This study
presents a granular evaluation of the Lunit INSIGHT DBT model on a large
retrospective cohort of 163,449 screening mammography exams from the Emory
Breast Imaging Dataset (EMBED). Model performance was evaluated in a binary
context with various negative exam types (162,081 exams) compared against
screen-detected cancers (1,368 exams) as the positive class. The analysis was
stratified across demographic, imaging, and pathologic subgroups to identify
potential disparities. The model achieved an overall AUC of 0.91 (95% CI:
0.90-0.92) with a precision of 0.08 (95% CI: 0.08-0.08), and a recall of 0.73
(95% CI: 0.71-0.76). Performance was found to be robust across demographics,
but cases with non-invasive cancers (AUC: 0.85, 95% CI: 0.83-0.87),
calcifications (AUC: 0.80, 95% CI: 0.78-0.82), and dense breast tissue (AUC:
0.90, 95% CI: 0.88-0.91) were associated with significantly lower performance
compared to other groups. These results highlight the need for detailed
evaluation of model characteristics and vigilance in considering adoption of
new tools for clinical deployment.
|
2503.13582 | Wenya Luo | Wenya Luo, Hua Li, Zhidong Bai, Zhijun Liu | Spectrally-Corrected and Regularized QDA Classifier for Spiked
Covariance Model | null | null | null | null | cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quadratic discriminant analysis (QDA) is a widely used method for
classification problems, particularly preferable to Linear Discriminant
Analysis (LDA) for heterogeneous data. However, QDA loses its effectiveness in
high-dimensional settings, where the data dimension and sample size tend to
infinity. To address this issue, we propose a novel QDA method utilizing
spectral correction and regularization techniques, termed SR-QDA. The
regularization parameters in our method are selected by maximizing the
Fisher-discriminant ratio. We compare SR-QDA with QDA, regularized quadratic
discriminant analysis (R-QDA), and several other competitors. The results
indicate that SR-QDA performs exceptionally well, especially in moderate and
high-dimensional situations. Empirical experiments across diverse datasets
further support this conclusion.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 17:21:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Luo",
"Wenya",
""
],
[
"Li",
"Hua",
""
],
[
"Bai",
"Zhidong",
""
],
[
"Liu",
"Zhijun",
""
]
] | TITLE: Spectrally-Corrected and Regularized QDA Classifier for Spiked
Covariance Model
ABSTRACT: Quadratic discriminant analysis (QDA) is a widely used method for
classification problems, particularly preferable to Linear Discriminant
Analysis (LDA) for heterogeneous data. However, QDA loses its effectiveness in
high-dimensional settings, where the data dimension and sample size tend to
infinity. To address this issue, we propose a novel QDA method utilizing
spectral correction and regularization techniques, termed SR-QDA. The
regularization parameters in our method are selected by maximizing the
Fisher-discriminant ratio. We compare SR-QDA with QDA, regularized quadratic
discriminant analysis (R-QDA), and several other competitors. The results
indicate that SR-QDA performs exceptionally well, especially in moderate and
high-dimensional situations. Empirical experiments across diverse datasets
further support this conclusion.
|
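For context on the baseline the abstract regularizes, here is a small NumPy
sketch of regularized QDA with simple ridge-style covariance shrinkage. The
spectral correction that distinguishes SR-QDA is not implemented, and the
shrinkage form and parameter are assumptions for illustration.

import numpy as np

def fit_regularized_qda(xs, ys, reg):
    # Per-class means and shrunken covariances: (1 - reg) * Sigma_k + reg * I.
    params = {}
    for c in np.unique(ys):
        xc = xs[ys == c]
        mu = xc.mean(axis=0)
        cov = (1.0 - reg) * np.cov(xc, rowvar=False) + reg * np.eye(xs.shape[1])
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1],
                     np.log(len(xc) / len(xs)))
    return params

def qda_predict(params, x):
    # delta_k(x) = -0.5 log|Sigma_k| - 0.5 (x-mu)^T Sigma_k^{-1} (x-mu) + log pi_k
    best, best_score = None, -np.inf
    for c, (mu, prec, logdet, logpi) in params.items():
        d = x - mu
        score = -0.5 * logdet - 0.5 * d @ prec @ d + logpi
        if score > best_score:
            best, best_score = c, score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.normal(0.0, 1.0, size=(100, 5))
    x1 = rng.normal(1.0, 2.0, size=(100, 5))
    xs = np.vstack([x0, x1]); ys = np.array([0] * 100 + [1] * 100)
    print(qda_predict(fit_regularized_qda(xs, ys, reg=0.1), np.ones(5)))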
2503.13587 | Dingkang Liang | Dingkang Liang, Dingyuan Zhang, Xin Zhou, Sifan Tu, Tianrui Feng,
Xiaofan Li, Yumeng Zhang, Mingyang Du, Xiao Tan, Xiang Bai | Seeing the Future, Perceiving the Future: A Unified Driving World Model
for Future Generation and Perception | The project page is at https://github.com/dk-liang/UniFuture | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present UniFuture, a simple yet effective driving world model that
seamlessly integrates future scene generation and perception within a single
framework. Unlike existing models focusing solely on pixel-level future
prediction or geometric reasoning, our approach jointly models future
appearance (i.e., RGB image) and geometry (i.e., depth), ensuring coherent
predictions. Specifically, during the training, we first introduce a
Dual-Latent Sharing scheme, which transfers image and depth sequences into a
shared latent space, allowing both modalities to benefit from shared feature
learning. Additionally, we propose a Multi-scale Latent Interaction mechanism,
which facilitates bidirectional refinement between image and depth features at
multiple spatial scales, effectively enhancing geometry consistency and
perceptual alignment. During testing, our UniFuture can easily predict
high-consistency future image-depth pairs by only using the current image as
input. Extensive experiments on the nuScenes dataset demonstrate that UniFuture
outperforms specialized models on future generation and perception tasks,
highlighting the advantages of a unified, structurally-aware world model. The
project page is at https://github.com/dk-liang/UniFuture.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 17:59:50 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liang",
"Dingkang",
""
],
[
"Zhang",
"Dingyuan",
""
],
[
"Zhou",
"Xin",
""
],
[
"Tu",
"Sifan",
""
],
[
"Feng",
"Tianrui",
""
],
[
"Li",
"Xiaofan",
""
],
[
"Zhang",
"Yumeng",
""
],
[
"Du",
"Mingyang",
""
],
[
"Tan",
"Xiao",
""
],
[
"Bai",
"Xiang",
""
]
] | TITLE: Seeing the Future, Perceiving the Future: A Unified Driving World Model
for Future Generation and Perception
ABSTRACT: We present UniFuture, a simple yet effective driving world model that
seamlessly integrates future scene generation and perception within a single
framework. Unlike existing models focusing solely on pixel-level future
prediction or geometric reasoning, our approach jointly models future
appearance (i.e., RGB image) and geometry (i.e., depth), ensuring coherent
predictions. Specifically, during the training, we first introduce a
Dual-Latent Sharing scheme, which transfers image and depth sequences into a
shared latent space, allowing both modalities to benefit from shared feature
learning. Additionally, we propose a Multi-scale Latent Interaction mechanism,
which facilitates bidirectional refinement between image and depth features at
multiple spatial scales, effectively enhancing geometry consistency and
perceptual alignment. During testing, our UniFuture can easily predict
high-consistency future image-depth pairs by only using the current image as
input. Extensive experiments on the nuScenes dataset demonstrate that UniFuture
outperforms specialized models on future generation and perception tasks,
highlighting the advantages of a unified, structurally-aware world model. The
project page is at https://github.com/dk-liang/UniFuture.
|
2503.13588 | Shiran Yuan | Shiran Yuan and Hao Zhao | Next-Scale Autoregressive Models are Zero-Shot Single-Image Object View
Synthesizers | Full codebase, training set, and eval benchmark at
https://github.com/Shiran-Yuan/ArchonView | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Methods based on diffusion backbones have recently revolutionized novel view
synthesis (NVS). However, those models require pretrained 2D diffusion
checkpoints (e.g., Stable Diffusion) as the basis for geometrical priors. Since
such checkpoints require exorbitant amounts of data and compute to train, this
greatly limits the scalability of diffusion-based NVS models. We present
Next-Scale Autoregression Conditioned by View (ArchonView), a method that
significantly exceeds state-of-the-art methods despite being trained from
scratch with 3D rendering data only and no 2D pretraining. We achieve this by
incorporating both global (pose-augmented semantics) and local (multi-scale
hierarchical encodings) conditioning into a backbone based on the next-scale
autoregression paradigm. Our model also exhibits robust performance even for
difficult camera poses where previous methods fail, and is several times faster
in inference speed compared to diffusion. We experimentally verify that
performance scales with model and dataset size, and conduct extensive
demonstration of our method's synthesis quality across several tasks. Our code
is open-sourced at https://github.com/Shiran-Yuan/ArchonView.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 17:59:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yuan",
"Shiran",
""
],
[
"Zhao",
"Hao",
""
]
] | TITLE: Next-Scale Autoregressive Models are Zero-Shot Single-Image Object View
Synthesizers
ABSTRACT: Methods based on diffusion backbones have recently revolutionized novel view
synthesis (NVS). However, those models require pretrained 2D diffusion
checkpoints (e.g., Stable Diffusion) as the basis for geometrical priors. Since
such checkpoints require exorbitant amounts of data and compute to train, this
greatly limits the scalability of diffusion-based NVS models. We present
Next-Scale Autoregression Conditioned by View (ArchonView), a method that
significantly exceeds state-of-the-art methods despite being trained from
scratch with 3D rendering data only and no 2D pretraining. We achieve this by
incorporating both global (pose-augmented semantics) and local (multi-scale
hierarchical encodings) conditioning into a backbone based on the next-scale
autoregression paradigm. Our model also exhibits robust performance even for
difficult camera poses where previous methods fail, and is several times faster
in inference speed compared to diffusion. We experimentally verify that
performance scales with model and dataset size, and conduct extensive
demonstration of our method's synthesis quality across several tasks. Our code
is open-sourced at https://github.com/Shiran-Yuan/ArchonView.
|
2503.13620 | Micheline Moumoula | Micheline B\'en\'edicte Moumoula and Abdoul Kader Kabore and Jacques
Klein and Tegawend\'e F. Bissyande | Evaluating Programming Language Confusion | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models for code (Code LLMs) have gained significant traction
in software engineering, achieving state-of-the-art performance on various
programming tasks including code completion, generation, repair, and
translation. These models have demonstrated remarkable capabilities in
understanding programming concepts, implementing algorithms, and even bridging
different programming languages, fundamentally transforming how developers
interact with coding environments. Despite these advances, Code LLMs often
struggle with programming language confusion--producing code in unintended
languages despite explicit instructions or obvious context. We systematically
evaluate this phenomenon across diverse programming contexts. Our study
assesses seven popular general and Code LLMs across multiple natural and
programming languages, analyzing their behavior using four datasets (HumanEval,
HumanEval-xl, MBPP, TP3) for code generation and one dataset (CodeNet) for code
translation. The study results reveal that language confusion occurs across all
evaluated models, with StarCoder and CodeLlama exhibiting the highest confusion
rates. Even high-performing models fail to maintain language consistency
throughout generated solutions, particularly when handling complex algorithmic
problems. We identify key factors contributing to this confusion, including
syntactic similarities between programming languages and inconsistent prompt
formatting. Interestingly, we find evidence suggesting that LLMs consistently
exhibit strategic language migration behaviors, prioritizing languages where
they can produce more syntactically correct code even when explicitly
instructed otherwise. This phenomenon is particularly pronounced in code
generation tasks, where models show strong migration patterns toward Python and
between syntactically similar language pairs.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 18:14:15 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Moumoula",
"Micheline Bénédicte",
""
],
[
"Kabore",
"Abdoul Kader",
""
],
[
"Klein",
"Jacques",
""
],
[
"Bissyande",
"Tegawendé F.",
""
]
] | TITLE: Evaluating Programming Language Confusion
ABSTRACT: Large Language Models for code (Code LLMs) have gained significant traction
in software engineering, achieving state-of-the-art performance on various
programming tasks including code completion, generation, repair, and
translation. These models have demonstrated remarkable capabilities in
understanding programming concepts, implementing algorithms, and even bridging
different programming languages, fundamentally transforming how developers
interact with coding environments. Despite these advances, Code LLMs often
struggle with programming language confusion--producing code in unintended
languages despite explicit instructions or obvious context. We systematically
evaluate this phenomenon across diverse programming contexts. Our study
assesses seven popular general and Code LLMs across multiple natural and
programming languages, analyzing their behavior using four datasets (HumanEval,
HumanEval-xl, MBPP, TP3) for code generation and one dataset (CodeNet) for code
translation. The study results reveal that language confusion occurs across all
evaluated models, with StarCoder and CodeLlama exhibiting the highest confusion
rates. Even high-performing models fail to maintain language consistency
throughout generated solutions, particularly when handling complex algorithmic
problems. We identify key factors contributing to this confusion, including
syntactic similarities between programming languages and inconsistent prompt
formatting. Interestingly, we find evidence suggesting that LLMs consistently
exhibit strategic language migration behaviors, prioritizing languages where
they can produce more syntactically correct code even when explicitly
instructed otherwise. This phenomenon is particularly pronounced in code
generation tasks, where models show strong migration patterns toward Python and
between syntactically similar language pairs.
|
2503.13637 | Andr\'e Augusto | Andr\'e Augusto, Andr\'e Vasconcelos, Miguel Correia, Luyao Zhang | XChainDataGen: A Cross-Chain Dataset Generation Framework | 13 pages, 10 figures | null | null | null | cs.CR cs.DC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The number of blockchain interoperability protocols for transferring data and
assets between blockchains has grown significantly. However, no open dataset of
cross-chain transactions exists to study interoperability protocols in
operation. There is also no tool to generate such datasets and make them
available to the community. This paper proposes XChainDataGen, a tool to
extract cross-chain data from blockchains and generate datasets of cross-chain
transactions (cctxs). Using XChainDataGen, we extracted over 35 GB of data from
five cross-chain protocols deployed on 11 blockchains in the last seven months
of 2024, identifying 11,285,753 cctxs that moved over 28 billion USD in
cross-chain token transfers. Using the data collected, we compare protocols and
provide insights into their security, cost, and performance trade-offs. As
examples, we highlight differences between protocols that require full finality
on the source blockchain and those that only demand soft finality
(\textit{security}). We compare user costs, fee models, and the impact of
variables such as the Ethereum gas price on protocol fees (\textit{cost}).
Finally, we produce the first analysis of the implications of EIP-7683 for
cross-chain intents, which are increasingly popular and greatly improve the
speed with which cctxs are processed (\textit{performance}), thereby enhancing
the user experience. The availability of XChainDataGen and this dataset allows
various analyses, including trends in cross-chain activity, security
assessments of interoperability protocols, and financial research on
decentralized finance (DeFi) protocols.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 18:39:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Augusto",
"André",
""
],
[
"Vasconcelos",
"André",
""
],
[
"Correia",
"Miguel",
""
],
[
"Zhang",
"Luyao",
""
]
] | TITLE: XChainDataGen: A Cross-Chain Dataset Generation Framework
ABSTRACT: The number of blockchain interoperability protocols for transferring data and
assets between blockchains has grown significantly. However, no open dataset of
cross-chain transactions exists to study interoperability protocols in
operation. There is also no tool to generate such datasets and make them
available to the community. This paper proposes XChainDataGen, a tool to
extract cross-chain data from blockchains and generate datasets of cross-chain
transactions (cctxs). Using XChainDataGen, we extracted over 35 GB of data from
five cross-chain protocols deployed on 11 blockchains in the last seven months
of 2024, identifying 11,285,753 cctxs that moved over 28 billion USD in
cross-chain token transfers. Using the data collected, we compare protocols and
provide insights into their security, cost, and performance trade-offs. As
examples, we highlight differences between protocols that require full finality
on the source blockchain and those that only demand soft finality
(\textit{security}). We compare user costs, fee models, and the impact of
variables such as the Ethereum gas price on protocol fees (\textit{cost}).
Finally, we produce the first analysis of the implications of EIP-7683 for
cross-chain intents, which are increasingly popular and greatly improve the
speed with which cctxs are processed (\textit{performance}), thereby enhancing
the user experience. The availability of XChainDataGen and this dataset allows
various analyses, including trends in cross-chain activity, security
assessments of interoperability protocols, and financial research on
decentralized finance (DeFi) protocols.
|
2503.13646 | Chiara Plizzari | Chiara Plizzari, Alessio Tonioni, Yongqin Xian, Achin Kulshrestha,
Federico Tombari | Omnia de EgoTempo: Benchmarking Temporal Understanding of Multi-Modal
LLMs in Egocentric Videos | Accepted to CVPR 2025. Dataset and code are available at
https://github.com/google-research-datasets/egotempo.git | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding fine-grained temporal dynamics is crucial in egocentric videos,
where continuous streams capture frequent, close-up interactions with objects.
In this work, we bring to light that current egocentric video
question-answering datasets often include questions that can be answered using
only few frames or commonsense reasoning, without being necessarily grounded in
the actual video. Our analysis shows that state-of-the-art Multi-Modal Large
Language Models (MLLMs) on these benchmarks achieve remarkably high performance
using just text or a single frame as input. To address these limitations, we
introduce EgoTempo, a dataset specifically designed to evaluate temporal
understanding in the egocentric domain. EgoTempo emphasizes tasks that require
integrating information across the entire video, ensuring that models would
need to rely on temporal patterns rather than static cues or pre-existing
knowledge. Extensive experiments on EgoTempo show that current MLLMs still fall
short in temporal reasoning on egocentric videos, and thus we hope EgoTempo
will catalyze new research in the field and inspire models that better capture
the complexity of temporal dynamics. Dataset and code are available at
https://github.com/google-research-datasets/egotempo.git.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 18:50:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Plizzari",
"Chiara",
""
],
[
"Tonioni",
"Alessio",
""
],
[
"Xian",
"Yongqin",
""
],
[
"Kulshrestha",
"Achin",
""
],
[
"Tombari",
"Federico",
""
]
] | TITLE: Omnia de EgoTempo: Benchmarking Temporal Understanding of Multi-Modal
LLMs in Egocentric Videos
ABSTRACT: Understanding fine-grained temporal dynamics is crucial in egocentric videos,
where continuous streams capture frequent, close-up interactions with objects.
In this work, we bring to light that current egocentric video
question-answering datasets often include questions that can be answered using
only a few frames or commonsense reasoning, without necessarily being grounded in
the actual video. Our analysis shows that state-of-the-art Multi-Modal Large
Language Models (MLLMs) on these benchmarks achieve remarkably high performance
using just text or a single frame as input. To address these limitations, we
introduce EgoTempo, a dataset specifically designed to evaluate temporal
understanding in the egocentric domain. EgoTempo emphasizes tasks that require
integrating information across the entire video, ensuring that models would
need to rely on temporal patterns rather than static cues or pre-existing
knowledge. Extensive experiments on EgoTempo show that current MLLMs still fall
short in temporal reasoning on egocentric videos, and thus we hope EgoTempo
will catalyze new research in the field and inspire models that better capture
the complexity of temporal dynamics. Dataset and code are available at
https://github.com/google-research-datasets/egotempo.git.
|
2503.13652 | Maan Qraitem | Maan Qraitem, Piotr Teterwak, Kate Saenko, Bryan A. Plummer | Web Artifact Attacks Disrupt Vision Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-language models (VLMs) (e.g., CLIP, LLaVA) are trained on large-scale,
lightly curated web datasets, leading them to learn unintended correlations
between semantic concepts and unrelated visual signals. These associations
degrade model accuracy by causing predictions to rely on incidental patterns
rather than genuine visual understanding. Prior work has weaponized these
correlations as an attack vector to manipulate model predictions, such as
inserting a deceiving class text onto the image in a typographic attack. These
attacks succeed due to VLMs' text-heavy bias, a result of captions that echo
visible words rather than describing content. However, this attack has focused
solely on text that matches the target class exactly, overlooking a broader
range of correlations, including non-matching text and graphical symbols, which
arise from the abundance of branding content in web-scale data. To address this
gap, we introduce artifact-based attacks: a novel class of manipulations that
mislead models using both non-matching text and graphical elements. Unlike
typographic attacks, these artifacts are not predefined, making them harder to
defend against but also more challenging to find. We address this by framing
artifact attacks as a search problem and demonstrate their effectiveness across
five datasets, with some artifacts reinforcing each other to reach 100% attack
success rates. These attacks transfer across models with up to 90%
effectiveness, making it possible to attack unseen models. To defend against
these attacks, we extend prior work's artifact aware prompting to the graphical
setting. We see a moderate reduction of success rates of up to 15% relative to
standard prompts, suggesting a promising direction for enhancing model
robustness.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 18:59:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Qraitem",
"Maan",
""
],
[
"Teterwak",
"Piotr",
""
],
[
"Saenko",
"Kate",
""
],
[
"Plummer",
"Bryan A.",
""
]
] | TITLE: Web Artifact Attacks Disrupt Vision Language Models
ABSTRACT: Vision-language models (VLMs) (e.g., CLIP, LLaVA) are trained on large-scale,
lightly curated web datasets, leading them to learn unintended correlations
between semantic concepts and unrelated visual signals. These associations
degrade model accuracy by causing predictions to rely on incidental patterns
rather than genuine visual understanding. Prior work has weaponized these
correlations as an attack vector to manipulate model predictions, such as
inserting a deceiving class text onto the image in a typographic attack. These
attacks succeed due to VLMs' text-heavy bias, a result of captions that echo
visible words rather than describing content. However, this attack has focused
solely on text that matches the target class exactly, overlooking a broader
range of correlations, including non-matching text and graphical symbols, which
arise from the abundance of branding content in web-scale data. To address this
gap, we introduce artifact-based attacks: a novel class of manipulations that
mislead models using both non-matching text and graphical elements. Unlike
typographic attacks, these artifacts are not predefined, making them harder to
defend against but also more challenging to find. We address this by framing
artifact attacks as a search problem and demonstrate their effectiveness across
five datasets, with some artifacts reinforcing each other to reach 100% attack
success rates. These attacks transfer across models with up to 90%
effectiveness, making it possible to attack unseen models. To defend against
these attacks, we extend prior work's artifact aware prompting to the graphical
setting. We see a moderate reduction of success rates of up to 15% relative to
standard prompts, suggesting a promising direction for enhancing model
robustness.
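As a rough illustration of what an artifact-based manipulation looks like in practice, the sketch below overlays a small piece of non-matching text onto an image with Pillow. The paper searches for effective artifacts rather than hand-picking them, and the `classify` helper mentioned in the comments is a hypothetical placeholder, not part of any real API.

```python
from PIL import Image, ImageDraw


def add_text_artifact(image_path: str, text: str, out_path: str) -> None:
    """Overlay a small text artifact (e.g., branding-style text that does not
    name the target class) onto an image. Generic sketch of the kind of
    manipulation studied, not the paper's search procedure."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), text, fill=(255, 255, 255))  # default PIL font
    img.save(out_path)


# Hypothetical usage: compare a VLM's zero-shot prediction before and after the
# overlay with your own `classify(path)` helper (not provided here).
# add_text_artifact("dog.jpg", "limited offer", "dog_artifact.jpg")
```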
|
2503.13654 | Manisha Mukherjee | Manisha Mukherjee and Vincent J. Hellendoorn | SOSecure: Safer Code Generation with RAG and StackOverflow Discussions | null | null | null | null | cs.SE cs.CR | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) are widely used for automated code generation.
Their reliance on infrequently updated pretraining data leaves them unaware of
newly discovered vulnerabilities and evolving security standards, making them
prone to producing insecure code. In contrast, developer communities on Stack
Overflow (SO) provide an ever-evolving repository of knowledge, where security
vulnerabilities are actively discussed and addressed through collective
expertise. These community-driven insights remain largely untapped by LLMs.
This paper introduces SOSecure, a Retrieval-Augmented Generation (RAG) system
that leverages the collective security expertise found in SO discussions to
improve the security of LLM-generated code. We build a security-focused
knowledge base by extracting SO answers and comments that explicitly identify
vulnerabilities. Unlike common uses of RAG, SOSecure triggers after code has
been generated to find discussions that identify flaws in similar code. These
are used in a prompt to an LLM to consider revising the code. Evaluation across
three datasets (SALLM, LLMSecEval, and LMSys) shows that SOSecure achieves
strong fix rates of 71.7%, 91.3%, and 96.7% respectively, compared to prompting
GPT-4 without relevant discussions (49.1%, 56.5%, and 37.5%), and outperforms
multiple other baselines. SOSecure operates as a language-agnostic complement
to existing LLMs, without requiring retraining or fine-tuning, making it easy
to deploy. Our results underscore the importance of maintaining active
developer forums, which have dropped substantially in usage with the adoption of LLMs.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:03:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Mukherjee",
"Manisha",
""
],
[
"Hellendoorn",
"Vincent J.",
""
]
] | TITLE: SOSecure: Safer Code Generation with RAG and StackOverflow Discussions
ABSTRACT: Large Language Models (LLMs) are widely used for automated code generation.
Their reliance on infrequently updated pretraining data leaves them unaware of
newly discovered vulnerabilities and evolving security standards, making them
prone to producing insecure code. In contrast, developer communities on Stack
Overflow (SO) provide an ever-evolving repository of knowledge, where security
vulnerabilities are actively discussed and addressed through collective
expertise. These community-driven insights remain largely untapped by LLMs.
This paper introduces SOSecure, a Retrieval-Augmented Generation (RAG) system
that leverages the collective security expertise found in SO discussions to
improve the security of LLM-generated code. We build a security-focused
knowledge base by extracting SO answers and comments that explicitly identify
vulnerabilities. Unlike common uses of RAG, SOSecure triggers after code has
been generated to find discussions that identify flaws in similar code. These
are used in a prompt to an LLM to consider revising the code. Evaluation across
three datasets (SALLM, LLMSecEval, and LMSys) shows that SOSecure achieves
strong fix rates of 71.7%, 91.3%, and 96.7% respectively, compared to prompting
GPT-4 without relevant discussions (49.1%, 56.5%, and 37.5%), and outperforms
multiple other baselines. SOSecure operates as a language-agnostic complement
to existing LLMs, without requiring retraining or fine-tuning, making it easy
to deploy. Our results underscore the importance of maintaining active
developer forums, which have dropped substantially in usage with the adoption of LLMs.
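A minimal sketch of the retrieve-after-generation idea described above, using TF-IDF similarity as a stand-in retriever; the paper's actual retrieval and prompting details may differ, and `llm_revise` is a hypothetical placeholder for whatever LLM client is used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_discussions(generated_code: str, so_snippets: list[str], k: int = 3) -> list[str]:
    # Index SO answers/comments and the generated code in one TF-IDF space,
    # then return the k most similar discussions.
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(so_snippets + [generated_code])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = sims.argsort()[::-1][:k]
    return [so_snippets[i] for i in top]


def build_revision_prompt(generated_code: str, discussions: list[str]) -> str:
    context = "\n\n".join(discussions)
    return (
        "The following Stack Overflow discussions describe security flaws in "
        f"similar code:\n{context}\n\nRevise this code to avoid those flaws:\n{generated_code}"
    )


# `llm_revise(build_revision_prompt(code, docs))` would then ask the LLM to
# reconsider the already-generated code; the client itself is not shown here.
```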
|
2503.13657 | Mert Cemri | Mert Cemri, Melissa Z. Pan, Shuyi Yang, Lakshya A. Agrawal, Bhavya
Chopra, Rishabh Tiwari, Kurt Keutzer, Aditya Parameswaran, Dan Klein, Kannan
Ramchandran, Matei Zaharia, Joseph E. Gonzalez, Ion Stoica | Why Do Multi-Agent LLM Systems Fail? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite growing enthusiasm for Multi-Agent Systems (MAS), where multiple LLM
agents collaborate to accomplish tasks, their performance gains across popular
benchmarks remain minimal compared to single-agent frameworks. This gap
highlights the need to analyze the challenges hindering MAS effectiveness.
In this paper, we present the first comprehensive study of MAS challenges. We
analyze five popular MAS frameworks across over 150 tasks, involving six expert
human annotators. We identify 14 unique failure modes and propose a
comprehensive taxonomy applicable to various MAS frameworks. This taxonomy
emerges iteratively from agreements among three expert annotators per study,
achieving a Cohen's Kappa score of 0.88. These fine-grained failure modes are
organized into 3 categories: (i) specification and system design failures, (ii)
inter-agent misalignment, and (iii) task verification and termination. To
support scalable evaluation, we integrate MASFT with LLM-as-a-Judge. We also
explore if identified failures could be easily prevented by proposing two
interventions: improved specification of agent roles and enhanced orchestration
strategies. Our findings reveal that identified failures require more complex
solutions, highlighting a clear roadmap for future research. We open-source our
dataset and LLM annotator.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:04:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Cemri",
"Mert",
""
],
[
"Pan",
"Melissa Z.",
""
],
[
"Yang",
"Shuyi",
""
],
[
"Agrawal",
"Lakshya A.",
""
],
[
"Chopra",
"Bhavya",
""
],
[
"Tiwari",
"Rishabh",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Parameswaran",
"Aditya",
""
],
[
"Klein",
"Dan",
""
],
[
"Ramchandran",
"Kannan",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Stoica",
"Ion",
""
]
] | TITLE: Why Do Multi-Agent LLM Systems Fail?
ABSTRACT: Despite growing enthusiasm for Multi-Agent Systems (MAS), where multiple LLM
agents collaborate to accomplish tasks, their performance gains across popular
benchmarks remain minimal compared to single-agent frameworks. This gap
highlights the need to analyze the challenges hindering MAS effectiveness.
In this paper, we present the first comprehensive study of MAS challenges. We
analyze five popular MAS frameworks across over 150 tasks, involving six expert
human annotators. We identify 14 unique failure modes and propose a
comprehensive taxonomy applicable to various MAS frameworks. This taxonomy
emerges iteratively from agreements among three expert annotators per study,
achieving a Cohen's Kappa score of 0.88. These fine-grained failure modes are
organized into 3 categories: (i) specification and system design failures, (ii)
inter-agent misalignment, and (iii) task verification and termination. To
support scalable evaluation, we integrate MASFT with LLM-as-a-Judge. We also
explore if identified failures could be easily prevented by proposing two
interventions: improved specification of agent roles and enhanced orchestration
strategies. Our findings reveal that identified failures require more complex
solutions, highlighting a clear roadmap for future research. We open-source our
dataset and LLM annotator.
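For readers unfamiliar with the agreement statistic mentioned above, the snippet below computes pairwise Cohen's kappa with scikit-learn on toy labels. With three annotators, agreement is often summarized via averaged pairwise kappa or Fleiss' kappa; the toy data here is not from the paper.

```python
from sklearn.metrics import cohen_kappa_score

# Two annotators' failure-mode labels for the same set of traces (toy data).
annotator_a = ["spec_failure", "misalignment", "verification", "misalignment"]
annotator_b = ["spec_failure", "misalignment", "misalignment", "misalignment"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 is perfect agreement, 0 is chance level
```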
|
2503.13661 | Huy Hoang Ha | Huy Hoang Ha | Pensez: Less Data, Better Reasoning -- Rethinking French LLM | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have demonstrated remarkable capabilities in
various natural language processing tasks. However, achieving strong
performance in specialized domains like mathematical reasoning and non-English
languages often requires extensive training on massive datasets. This paper
investigates a contrasting approach: strategic fine-tuning on a small,
high-quality, bilingual (English-French) dataset to enhance both the reasoning
capabilities and French language proficiency of a large language model. Rather
than relying on scale, we explore the hypothesis that targeted data curation
and optimized training can achieve competitive, or even superior, performance.
We demonstrate, through targeted supervised fine-tuning (SFT) on only 2,000
carefully selected samples, significant improvements in mathematical reasoning.
Specifically, Pensez 7B exhibits an increase in accuracy over the base model of up
to 20% on AIME25 and a 12% increase on a French MATH level 5 benchmark.
These results challenge the prevailing assumption that massive datasets are a
prerequisite for strong reasoning performance in LLMs, highlighting the
potential of strategic data curation and optimized fine-tuning for enhancing
both specialized skills and multilingual capabilities. Our findings have
implications for the efficient development of high-performing, multilingual
LLMs, especially in resource-constrained scenarios.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:09:11 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ha",
"Huy Hoang",
""
]
] | TITLE: Pensez: Less Data, Better Reasoning -- Rethinking French LLM
ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities in
various natural language processing tasks. However, achieving strong
performance in specialized domains like mathematical reasoning and non-English
languages often requires extensive training on massive datasets. This paper
investigates a contrasting approach: strategic fine-tuning on a small,
high-quality, bilingual (English-French) dataset to enhance both the reasoning
capabilities and French language proficiency of a large language model. Rather
than relying on scale, we explore the hypothesis that targeted data curation
and optimized training can achieve competitive, or even superior, performance.
We demonstrate, through targeted supervised fine-tuning (SFT) on only 2,000
carefully selected samples, significant improvements in mathematical reasoning.
Specifically, Pensez 7B exhibits an increase in accuracy over the base model of up
to 20% on AIME25 and a 12% increase on a French MATH level 5 benchmark.
These results challenge the prevailing assumption that massive datasets are a
prerequisite for strong reasoning performance in LLMs, highlighting the
potential of strategic data curation and optimized fine-tuning for enhancing
both specialized skills and multilingual capabilities. Our findings have
implications for the efficient development of high-performing, multilingual
LLMs, especially in resource-constrained scenarios.
|
2503.13676 | Minoru Kusaba | Minoru Kusaba, Megumi Iwayama, and Ryo Yoshida | Bayesian Kernel Regression for Functional Data | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | In supervised learning, the output variable to be predicted is often
represented as a function, such as a spectrum or probability distribution.
Despite its importance, functional output regression remains relatively
unexplored. In this study, we propose a novel functional output regression
model based on kernel methods. Unlike conventional approaches that
independently train regressors with scalar outputs for each measurement point
of the output function, our method leverages the covariance structure within
the function values, akin to multitask learning, leading to enhanced learning
efficiency and improved prediction accuracy. Compared with existing nonlinear
function-on-scalar models in statistical functional data analysis, our model
effectively handles high-dimensional nonlinearity while maintaining a simple
model structure. Furthermore, the fully kernel-based formulation allows the
model to be expressed within the framework of reproducing kernel Hilbert space
(RKHS), providing an analytic form for parameter estimation and a solid
foundation for further theoretical analysis. The proposed model delivers a
functional output predictive distribution derived analytically from a Bayesian
perspective, enabling the quantification of uncertainty in the predicted
function. We demonstrate the model's enhanced prediction performance through
experiments on artificial datasets and density of states prediction tasks in
materials science.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 19:28:27 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kusaba",
"Minoru",
""
],
[
"Iwayama",
"Megumi",
""
],
[
"Yoshida",
"Ryo",
""
]
] | TITLE: Bayesian Kernel Regression for Functional Data
ABSTRACT: In supervised learning, the output variable to be predicted is often
represented as a function, such as a spectrum or probability distribution.
Despite its importance, functional output regression remains relatively
unexplored. In this study, we propose a novel functional output regression
model based on kernel methods. Unlike conventional approaches that
independently train regressors with scalar outputs for each measurement point
of the output function, our method leverages the covariance structure within
the function values, akin to multitask learning, leading to enhanced learning
efficiency and improved prediction accuracy. Compared with existing nonlinear
function-on-scalar models in statistical functional data analysis, our model
effectively handles high-dimensional nonlinearity while maintaining a simple
model structure. Furthermore, the fully kernel-based formulation allows the
model to be expressed within the framework of reproducing kernel Hilbert space
(RKHS), providing an analytic form for parameter estimation and a solid
foundation for further theoretical analysis. The proposed model delivers a
functional output predictive distribution derived analytically from a Bayesian
perspective, enabling the quantification of uncertainty in the predicted
function. We demonstrate the model's enhanced prediction performance through
experiments on artificial datasets and density of states prediction tasks in
materials science.
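As a point of reference for the functional-output setting, here is a minimal multi-output kernel ridge regression sketch in NumPy, where each column of Y is one point on the output function's grid. It treats output points independently and is therefore only a baseline; the paper's model additionally exploits the covariance structure across the grid and provides a Bayesian predictive distribution.

```python
import numpy as np


def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the RBF kernel between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def fit_functional_krr(X, Y, lam=1e-2, gamma=1.0):
    """Multi-output kernel ridge regression: each column of Y is a grid point of
    the output function. Output points are treated independently here."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)   # (n, m) dual coefficients


def predict_functional_krr(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha      # (n_new, m) predicted functions


# Toy usage: 50 scalar inputs, output functions sampled on a 30-point grid.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
grid = np.linspace(0, 1, 30)
Y = np.sin(3 * X) * np.cos(2 * np.pi * grid)              # (50, 30) via broadcasting
alpha = fit_functional_krr(X, Y)
Y_hat = predict_functional_krr(X, alpha, X[:5])
```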
|
2503.13707 | Saket Gurukar | Saket Gurukar and Asim Kadav | Long-VMNet: Accelerating Long-Form Video Understanding via Fixed Memory | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-form video understanding is essential for various applications such as
video retrieval, summarizing, and question answering. Yet, traditional
approaches demand substantial computing power and are often bottlenecked by GPU
memory. To tackle this issue, we present Long-Video Memory Network, Long-VMNet,
a novel video understanding method that employs a fixed-size memory
representation to store discriminative patches sampled from the input video.
Long-VMNet achieves improved efficiency by leveraging a neural sampler that
identifies discriminative tokens. Additionally, Long-VMNet only needs one scan
through the video, greatly boosting efficiency. Our results on the Rest-ADL
dataset demonstrate an 18x -- 75x improvement in inference times for long-form
video retrieval and question answering, with competitive predictive
performance.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 20:25:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Gurukar",
"Saket",
""
],
[
"Kadav",
"Asim",
""
]
] | TITLE: Long-VMNet: Accelerating Long-Form Video Understanding via Fixed Memory
ABSTRACT: Long-form video understanding is essential for various applications such as
video retrieval, summarizing, and question answering. Yet, traditional
approaches demand substantial computing power and are often bottlenecked by GPU
memory. To tackle this issue, we present Long-Video Memory Network, Long-VMNet,
a novel video understanding method that employs a fixed-size memory
representation to store discriminative patches sampled from the input video.
Long-VMNet achieves improved efficiency by leveraging a neural sampler that
identifies discriminative tokens. Additionally, Long-VMNet only needs one scan
through the video, greatly boosting efficiency. Our results on the Rest-ADL
dataset demonstrate an 18x -- 75x improvement in inference times for long-form
video retrieval and question answering, with competitive predictive
performance.
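A generic sketch of the fixed-size memory idea: patch tokens stream in once, a sampler score ranks them, and only the top-scoring tokens are retained. The class, the random scores, and the shapes are illustrative stand-ins, not Long-VMNet's actual modules.

```python
import numpy as np


class FixedMemory:
    """Keep at most `capacity` patch tokens, ranked by a sampler score."""

    def __init__(self, capacity: int, dim: int):
        self.capacity = capacity
        self.tokens = np.empty((0, dim))
        self.scores = np.empty((0,))

    def update(self, tokens: np.ndarray, scores: np.ndarray) -> None:
        # Merge new tokens with the current memory and keep the top-scoring ones.
        self.tokens = np.concatenate([self.tokens, tokens], axis=0)
        self.scores = np.concatenate([self.scores, scores], axis=0)
        keep = np.argsort(self.scores)[::-1][: self.capacity]
        self.tokens, self.scores = self.tokens[keep], self.scores[keep]


# One scan over the video: for each frame, a (here random) score stands in for a
# neural sampler, and only the memory is kept for downstream queries.
mem = FixedMemory(capacity=256, dim=768)
for _ in range(10):                              # 10 dummy "frames"
    frame_tokens = np.random.randn(196, 768)     # e.g. 14x14 patches
    frame_scores = np.random.rand(196)           # stand-in for sampler scores
    mem.update(frame_tokens, frame_scores)
```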
|
2503.13709 | Kanghui Ning | Yushan Jiang, Kanghui Ning, Zijie Pan, Xuyang Shen, Jingchao Ni,
Wenchao Yu, Anderson Schneider, Haifeng Chen, Yuriy Nevmyvaka, Dongjin Song | Multi-modal Time Series Analysis: A Tutorial and Survey | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-modal time series analysis has recently emerged as a prominent research
area in data mining, driven by the increasing availability of diverse data
modalities, such as text, images, and structured tabular data from real-world
sources. However, effective analysis of multi-modal time series is hindered by
data heterogeneity, modality gap, misalignment, and inherent noise. Recent
advancements in multi-modal time series methods have exploited the multi-modal
context via cross-modal interactions based on deep learning methods,
significantly enhancing various downstream tasks. In this tutorial and survey,
we present a systematic and up-to-date overview of multi-modal time series
datasets and methods. We first state the existing challenges of multi-modal
time series analysis and our motivations, with a brief introduction of
preliminaries. Then, we summarize the general pipeline and categorize existing
methods through a unified cross-modal interaction framework encompassing
fusion, alignment, and transference at different levels (\textit{i.e.}, input,
intermediate, output), where key concepts and ideas are highlighted. We also
discuss the real-world applications of multi-modal analysis for both standard
and spatial time series, tailored to general and specific domains. Finally, we
discuss future research directions to help practitioners explore and exploit
multi-modal time series. The up-to-date resources are provided in the GitHub
repository: https://github.com/UConn-DSIS/Multi-modal-Time-Series-Analysis
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 20:30:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jiang",
"Yushan",
""
],
[
"Ning",
"Kanghui",
""
],
[
"Pan",
"Zijie",
""
],
[
"Shen",
"Xuyang",
""
],
[
"Ni",
"Jingchao",
""
],
[
"Yu",
"Wenchao",
""
],
[
"Schneider",
"Anderson",
""
],
[
"Chen",
"Haifeng",
""
],
[
"Nevmyvaka",
"Yuriy",
""
],
[
"Song",
"Dongjin",
""
]
] | TITLE: Multi-modal Time Series Analysis: A Tutorial and Survey
ABSTRACT: Multi-modal time series analysis has recently emerged as a prominent research
area in data mining, driven by the increasing availability of diverse data
modalities, such as text, images, and structured tabular data from real-world
sources. However, effective analysis of multi-modal time series is hindered by
data heterogeneity, modality gap, misalignment, and inherent noise. Recent
advancements in multi-modal time series methods have exploited the multi-modal
context via cross-modal interactions based on deep learning methods,
significantly enhancing various downstream tasks. In this tutorial and survey,
we present a systematic and up-to-date overview of multi-modal time series
datasets and methods. We first state the existing challenges of multi-modal
time series analysis and our motivations, with a brief introduction of
preliminaries. Then, we summarize the general pipeline and categorize existing
methods through a unified cross-modal interaction framework encompassing
fusion, alignment, and transference at different levels (\textit{i.e.}, input,
intermediate, output), where key concepts and ideas are highlighted. We also
discuss the real-world applications of multi-modal analysis for both standard
and spatial time series, tailored to general and specific domains. Finally, we
discuss future research directions to help practitioners explore and exploit
multi-modal time series. The up-to-date resources are provided in the GitHub
repository: https://github.com/UConn-DSIS/Multi-modal-Time-Series-Analysis
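A toy illustration of what cross-modal interaction at different levels can mean, assuming a time series plus a text embedding; the three functions below are illustrative of input-, intermediate-, and output-level fusion and are not taken from the survey.

```python
import numpy as np

# Toy inputs: a time series (T steps, C channels) and a text embedding.
series = np.random.randn(96, 4)
text_emb = np.random.randn(32)


def input_level_fusion(series, text_emb):
    # Early fusion: tile the text embedding and concatenate it to every time step.
    return np.concatenate([series, np.tile(text_emb, (len(series), 1))], axis=1)


def intermediate_fusion(series_feat, text_feat):
    # Fuse modality-specific representations (here just pooled vectors).
    return np.concatenate([series_feat.mean(axis=0), text_feat])


def output_level_fusion(pred_series, pred_text, w=0.5):
    # Late fusion: combine per-modality predictions.
    return w * pred_series + (1 - w) * pred_text


fused = input_level_fusion(series, text_emb)     # (96, 36) early-fused input
```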
|
2503.13721 | Zhenlong Yuan | Zhenlong Yuan, Zhidong Yang, Yujun Cai, Kuangxin Wu, Mufan Liu, Dapeng
Zhang, Hao Jiang, Zhaoxin Li, and Zhaoqi Wang | SED-MVS: Segmentation-Driven and Edge-Aligned Deformation Multi-View
Stereo with Depth Restoration and Occlusion Constraint | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, patch-deformation methods have exhibited significant effectiveness
in multi-view stereo owing to the deformable and expandable patches in
reconstructing textureless areas. However, such methods primarily emphasize
broadening the receptive field in textureless areas, while neglecting
deformation instability caused by easily overlooked edge-skipping, potentially
leading to matching distortions. To address this, we propose SED-MVS, which
adopts panoptic segmentation and multi-trajectory diffusion strategy for
segmentation-driven and edge-aligned patch deformation. Specifically, to
prevent unanticipated edge-skipping, we first employ SAM2 for panoptic
segmentation as depth-edge guidance to guide patch deformation, followed by
multi-trajectory diffusion strategy to ensure patches are comprehensively
aligned with depth edges. Moreover, to avoid potential inaccuracy of random
initialization, we combine both sparse points from LoFTR and the monocular
depth map from DepthAnything V2 to restore a reliable and realistic depth map
for initialization and supervised guidance. Finally, we integrate the
segmentation image with the monocular depth map to exploit inter-instance
occlusion relationships, then further regard them as an occlusion map to
implement two distinct edge constraints, thereby facilitating occlusion-aware patch
deformation. Extensive results on ETH3D, Tanks & Temples, BlendedMVS and
Strecha datasets validate the state-of-the-art performance and robust
generalization capability of our proposed method.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 21:07:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yuan",
"Zhenlong",
""
],
[
"Yang",
"Zhidong",
""
],
[
"Cai",
"Yujun",
""
],
[
"Wu",
"Kuangxin",
""
],
[
"Liu",
"Mufan",
""
],
[
"Zhang",
"Dapeng",
""
],
[
"Jiang",
"Hao",
""
],
[
"Li",
"Zhaoxin",
""
],
[
"Wang",
"Zhaoqi",
""
]
] | TITLE: SED-MVS: Segmentation-Driven and Edge-Aligned Deformation Multi-View
Stereo with Depth Restoration and Occlusion Constraint
ABSTRACT: Recently, patch-deformation methods have exhibited significant effectiveness
in multi-view stereo owing to the deformable and expandable patches in
reconstructing textureless areas. However, such methods primarily emphasize
broadening the receptive field in textureless areas, while neglecting
deformation instability caused by easily overlooked edge-skipping, potentially
leading to matching distortions. To address this, we propose SED-MVS, which
adopts panoptic segmentation and multi-trajectory diffusion strategy for
segmentation-driven and edge-aligned patch deformation. Specifically, to
prevent unanticipated edge-skipping, we first employ SAM2 for panoptic
segmentation as depth-edge guidance to guide patch deformation, followed by
multi-trajectory diffusion strategy to ensure patches are comprehensively
aligned with depth edges. Moreover, to avoid potential inaccuracy of random
initialization, we combine both sparse points from LoFTR and the monocular
depth map from DepthAnything V2 to restore a reliable and realistic depth map
for initialization and supervised guidance. Finally, we integrate the
segmentation image with the monocular depth map to exploit inter-instance
occlusion relationships, then further regard them as an occlusion map to
implement two distinct edge constraints, thereby facilitating occlusion-aware patch
deformation. Extensive results on ETH3D, Tanks & Temples, BlendedMVS and
Strecha datasets validate the state-of-the-art performance and robust
generalization capability of our proposed method.
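One simple, commonly used way to combine sparse metric points with a monocular depth map is a global scale-and-shift least-squares fit, sketched below; the paper's depth restoration step may be considerably richer, so treat this only as an illustration of the general idea.

```python
import numpy as np


def align_monocular_depth(mono_depth: np.ndarray, sparse_uv: np.ndarray,
                          sparse_depth: np.ndarray) -> np.ndarray:
    """Fit a global scale and shift so the monocular depth map agrees with
    sparse metric depths at matched pixels (least squares)."""
    u, v = sparse_uv[:, 0].astype(int), sparse_uv[:, 1].astype(int)
    d_mono = mono_depth[v, u]                         # monocular depth at sparse pixels
    A = np.stack([d_mono, np.ones_like(d_mono)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
    return scale * mono_depth + shift
```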
|
2503.13724 | Shristi Das Biswas | Shristi Das Biswas, Efstathia Soufleri, Arani Roy, Kaushik Roy | Towards Scalable Modeling of Compressed Videos for Efficient Action
Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Training robust deep video representations has proven to be computationally
challenging due to substantial decoding overheads, the enormous size of raw
video streams, and their inherent high temporal redundancy. Different from
existing schemes, operating exclusively in the compressed video domain and
exploiting all freely available modalities, i.e., I-frames and P-frames
(motion vectors and residuals) offers a compute-efficient alternative. Existing
methods approach this task as a naive multi-modality problem, ignoring the
temporal correlation and implicit sparsity across P-frames for modeling
stronger shared representations for videos of the same action, making training
and generalization easier. By revisiting the high-level design of dominant
video understanding backbones, we increase inference speed by a factor of $56$
while retaining similar performance. For this, we propose a hybrid end-to-end
framework that factorizes learning across three key concepts to reduce
inference cost by $330\times$ versus prior art: First, a specially designed
dual-encoder scheme with efficient Spiking Temporal Modulators to minimize
latency while retaining cross-domain feature aggregation. Second, a unified
transformer model to capture inter-modal dependencies using global
self-attention to enhance I-frame -- P-frame contextual interactions. Third, a
Multi-Modal Mixer Block to model rich representations from the joint
spatiotemporal token embeddings. Experiments show that our method results in a
lightweight architecture achieving state-of-the-art video recognition
performance on UCF-101, HMDB-51, K-400, K-600 and SS-v2 datasets with favorable
costs ($0.73$J/V) and fast inference ($16$V/s). Our observations bring new
insights into practical design choices for efficient next-generation
spatiotemporal learners. Code is available.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 21:13:48 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Biswas",
"Shristi Das",
""
],
[
"Soufleri",
"Efstathia",
""
],
[
"Roy",
"Arani",
""
],
[
"Roy",
"Kaushik",
""
]
] | TITLE: Towards Scalable Modeling of Compressed Videos for Efficient Action
Recognition
ABSTRACT: Training robust deep video representations has proven to be computationally
challenging due to substantial decoding overheads, the enormous size of raw
video streams, and their inherent high temporal redundancy. Different from
existing schemes, operating exclusively in the compressed video domain and
exploiting all freely available modalities, i.e., I-frames and P-frames
(motion vectors and residuals) offers a compute-efficient alternative. Existing
methods approach this task as a naive multi-modality problem, ignoring the
temporal correlation and implicit sparsity across P-frames for modeling
stronger shared representations for videos of the same action, making training
and generalization easier. By revisiting the high-level design of dominant
video understanding backbones, we increase inference speed by a factor of $56$
while retaining similar performance. For this, we propose a hybrid end-to-end
framework that factorizes learning across three key concepts to reduce
inference cost by $330\times$ versus prior art: First, a specially designed
dual-encoder scheme with efficient Spiking Temporal Modulators to minimize
latency while retaining cross-domain feature aggregation. Second, a unified
transformer model to capture inter-modal dependencies using global
self-attention to enhance I-frame -- P-frame contextual interactions. Third, a
Multi-Modal Mixer Block to model rich representations from the joint
spatiotemporal token embeddings. Experiments show that our method results in a
lightweight architecture achieving state-of-the-art video recognition
performance on UCF-101, HMDB-51, K-400, K-600 and SS-v2 datasets with favorable
costs ($0.73$J/V) and fast inference ($16$V/s). Our observations bring new
insights into practical design choices for efficient next-generation
spatiotemporal learners. Code is available.
|
2503.13730 | Forouzan Fallah | Forouzan Fallah, Maitreya Patel, Agneet Chatterjee, Vlad I. Morariu,
Chitta Baral, Yezhou Yang | TextInVision: Text and Prompt Complexity Driven Visual Text Generation
Benchmark | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Generating images with embedded text is crucial for the automatic production
of visual and multimodal documents, such as educational materials and
advertisements. However, existing diffusion-based text-to-image models often
struggle to accurately embed text within images, facing challenges in spelling
accuracy, contextual relevance, and visual coherence. Evaluating the ability of
such models to embed text within a generated image is complicated due to the
lack of comprehensive benchmarks. In this work, we introduce TextInVision, a
large-scale, text and prompt complexity driven benchmark designed to evaluate
the ability of diffusion models to effectively integrate visual text into
images. We crafted a diverse set of prompts and texts that consider various
attributes and text characteristics. Additionally, we prepared an image dataset
to test Variational Autoencoder (VAE) models across different character
representations, highlighting that VAE architectures can also pose challenges
in text generation within diffusion frameworks. Through extensive analysis of
multiple models, we identify common errors and highlight issues such as
spelling inaccuracies and contextual mismatches. By pinpointing the failure
points across different prompts and texts, our research lays the foundation for
future advancements in AI-generated multimodal content.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 21:36:31 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Fallah",
"Forouzan",
""
],
[
"Patel",
"Maitreya",
""
],
[
"Chatterjee",
"Agneet",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Baral",
"Chitta",
""
],
[
"Yang",
"Yezhou",
""
]
] | TITLE: TextInVision: Text and Prompt Complexity Driven Visual Text Generation
Benchmark
ABSTRACT: Generating images with embedded text is crucial for the automatic production
of visual and multimodal documents, such as educational materials and
advertisements. However, existing diffusion-based text-to-image models often
struggle to accurately embed text within images, facing challenges in spelling
accuracy, contextual relevance, and visual coherence. Evaluating the ability of
such models to embed text within a generated image is complicated due to the
lack of comprehensive benchmarks. In this work, we introduce TextInVision, a
large-scale, text and prompt complexity driven benchmark designed to evaluate
the ability of diffusion models to effectively integrate visual text into
images. We crafted a diverse set of prompts and texts that consider various
attributes and text characteristics. Additionally, we prepared an image dataset
to test Variational Autoencoder (VAE) models across different character
representations, highlighting that VAE architectures can also pose challenges
in text generation within diffusion frameworks. Through extensive analysis of
multiple models, we identify common errors and highlight issues such as
spelling inaccuracies and contextual mismatches. By pinpointing the failure
points across different prompts and texts, our research lays the foundation for
future advancements in AI-generated multimodal content.
|
2503.13733 | Dilshod Azizov | Daniil Orel, Dilshod Azizov, Preslav Nakov | CoDet-M4: Detecting Machine-Generated Code in Multi-Lingual,
Multi-Generator and Multi-Domain Settings | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have revolutionized code generation, automating
programming with remarkable efficiency. However, these advancements challenge
programming skills, ethics, and assessment integrity, making the detection of
LLM-generated code essential for maintaining accountability and standards.
While there has been some research on this problem, it generally lacks domain
coverage and robustness, and only covers a small number of programming
languages. To this end, we propose a framework capable of distinguishing
between human- and LLM-written code across multiple programming languages, code
generators, and domains. We use a large-scale dataset from renowned platforms
and LLM-based code generators, alongside applying rigorous data quality checks,
feature engineering, and comparative analysis using evaluation of traditional
machine learning models, pre-trained language models (PLMs), and LLMs for code
detection. We perform an evaluation on out-of-domain scenarios, such as
detecting the authorship and hybrid authorship of generated code and
generalizing to unseen models, domains, and programming languages. Moreover,
our extensive experiments show that our framework effectively distinguishes
human- from LLM-written code and sets a new benchmark for this task.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 21:41:37 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Orel",
"Daniil",
""
],
[
"Azizov",
"Dilshod",
""
],
[
"Nakov",
"Preslav",
""
]
] | TITLE: CoDet-M4: Detecting Machine-Generated Code in Multi-Lingual,
Multi-Generator and Multi-Domain Settings
ABSTRACT: Large language models (LLMs) have revolutionized code generation, automating
programming with remarkable efficiency. However, these advancements challenge
programming skills, ethics, and assessment integrity, making the detection of
LLM-generated code essential for maintaining accountability and standards.
While there has been some research on this problem, it generally lacks domain
coverage and robustness, and only covers a small number of programming
languages. To this end, we propose a framework capable of distinguishing
between human- and LLM-written code across multiple programming languages, code
generators, and domains. We use a large-scale dataset from renowned platforms
and LLM-based code generators, alongside applying rigorous data quality checks,
feature engineering, and comparative analysis using evaluation of traditional
machine learning models, pre-trained language models (PLMs), and LLMs for code
detection. We perform an evaluation on out-of-domain scenarios, such as
detecting the authorship and hybrid authorship of generated code and
generalizing to unseen models, domains, and programming languages. Moreover,
our extensive experiments show that our framework effectively distinguishes
human- from LLM-written code and sets a new benchmark for this task.
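For context, a traditional-ML baseline of the kind such frameworks are compared against can be as simple as character n-gram TF-IDF features plus logistic regression; the snippet below is a toy sketch with made-up labels, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: code snippets labeled 1 if machine-generated, 0 if human-written.
snippets = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    \"\"\"Return the sum of a and b.\"\"\"\n    return a + b",
    "for i in range(10): print(i)",
    "result = [i * i for i in range(10)]",
]
labels = [0, 1, 0, 1]

# Character n-grams are a common authorship signal for code.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(snippets, labels)
print(clf.predict(["def mul(a, b):\n    return a * b"]))
```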
|
2503.13739 | Keqi Chen | Keqi Chen, Vinkle Srivastav, Didier Mutter, Nicolas Padoy | Learning from Synchronization: Self-Supervised Uncalibrated Multi-View
Person Association in Challenging Scenes | Accepted for CVPR 2025. Code:
https://github.com/CAMMA-public/Self-MVA | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-view person association is a fundamental step towards multi-view
analysis of human activities. Although the person re-identification features
have been proven effective, they become unreliable in challenging scenes where
persons share similar appearances. Therefore, cross-view geometric constraints
are required for a more robust association. However, most existing approaches
are either fully-supervised using ground-truth identity labels or require
calibrated camera parameters that are hard to obtain. In this work, we
investigate the potential of learning from synchronization, and propose a
self-supervised uncalibrated multi-view person association approach, Self-MVA,
without using any annotations. Specifically, we propose a self-supervised
learning framework, consisting of an encoder-decoder model and a
self-supervised pretext task, cross-view image synchronization, which aims to
distinguish whether two images from different views are captured at the same
time. The model encodes each person's unified geometric and appearance
features, and we train it by utilizing synchronization labels for supervision
after applying Hungarian matching to bridge the gap between instance-wise and
image-wise distances. To further reduce the solution space, we propose two
types of self-supervised linear constraints: multi-view re-projection and
pairwise edge association. Extensive experiments on three challenging public
benchmark datasets (WILDTRACK, MVOR, and SOLDIERS) show that our approach
achieves state-of-the-art results, surpassing existing unsupervised and
fully-supervised approaches. Code is available at
https://github.com/CAMMA-public/Self-MVA.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 21:48:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Keqi",
""
],
[
"Srivastav",
"Vinkle",
""
],
[
"Mutter",
"Didier",
""
],
[
"Padoy",
"Nicolas",
""
]
] | TITLE: Learning from Synchronization: Self-Supervised Uncalibrated Multi-View
Person Association in Challenging Scenes
ABSTRACT: Multi-view person association is a fundamental step towards multi-view
analysis of human activities. Although the person re-identification features
have been proven effective, they become unreliable in challenging scenes where
persons share similar appearances. Therefore, cross-view geometric constraints
are required for a more robust association. However, most existing approaches
are either fully-supervised using ground-truth identity labels or require
calibrated camera parameters that are hard to obtain. In this work, we
investigate the potential of learning from synchronization, and propose a
self-supervised uncalibrated multi-view person association approach, Self-MVA,
without using any annotations. Specifically, we propose a self-supervised
learning framework, consisting of an encoder-decoder model and a
self-supervised pretext task, cross-view image synchronization, which aims to
distinguish whether two images from different views are captured at the same
time. The model encodes each person's unified geometric and appearance
features, and we train it by utilizing synchronization labels for supervision
after applying Hungarian matching to bridge the gap between instance-wise and
image-wise distances. To further reduce the solution space, we propose two
types of self-supervised linear constraints: multi-view re-projection and
pairwise edge association. Extensive experiments on three challenging public
benchmark datasets (WILDTRACK, MVOR, and SOLDIERS) show that our approach
achieves state-of-the-art results, surpassing existing unsupervised and
fully-supervised approaches. Code is available at
https://github.com/CAMMA-public/Self-MVA.
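The Hungarian-matching step that bridges instance-wise and image-wise distances can be sketched as follows with SciPy; the function is a generic illustration of that step, not Self-MVA's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def image_level_distance(feats_view1: np.ndarray, feats_view2: np.ndarray) -> float:
    """Match persons across two views with the Hungarian algorithm, then
    average the matched pair costs to get an image-wise distance."""
    cost = cdist(feats_view1, feats_view2)        # pairwise instance distances
    rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
    return float(cost[rows, cols].mean())


# A synchronization classifier can then be trained to give synchronized frame
# pairs a smaller image-level distance than unsynchronized ones.
```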
|
2503.13751 | Andrew Ilyas | Logan Engstrom, Andrew Ilyas, Benjamin Chen, Axel Feldmann, William
Moses, Aleksander Madry | Optimizing ML Training with Metagradient Descent | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in training large-scale machine learning models is
configuring the training process to maximize model performance, i.e., finding
the best training setup from a vast design space. In this work, we unlock a
gradient-based approach to this problem. We first introduce an algorithm for
efficiently calculating metagradients -- gradients through model training -- at
scale. We then introduce a "smooth model training" framework that enables
effective optimization using metagradients. With metagradient descent (MGD), we
greatly improve on existing dataset selection methods, outperform
accuracy-degrading data poisoning attacks by an order of magnitude, and
automatically find competitive learning rate schedules.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 22:18:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Engstrom",
"Logan",
""
],
[
"Ilyas",
"Andrew",
""
],
[
"Chen",
"Benjamin",
""
],
[
"Feldmann",
"Axel",
""
],
[
"Moses",
"William",
""
],
[
"Madry",
"Aleksander",
""
]
] | TITLE: Optimizing ML Training with Metagradient Descent
ABSTRACT: A major challenge in training large-scale machine learning models is
configuring the training process to maximize model performance, i.e., finding
the best training setup from a vast design space. In this work, we unlock a
gradient-based approach to this problem. We first introduce an algorithm for
efficiently calculating metagradients -- gradients through model training -- at
scale. We then introduce a "smooth model training" framework that enables
effective optimization using metagradients. With metagradient descent (MGD), we
greatly improve on existing dataset selection methods, outperform
accuracy-degrading data poisoning attacks by an order of magnitude, and
automatically find competitive learning rate schedules.
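As a hedged sketch of what a gradient through model training is (notation mine, not the paper's): unrolled SGD makes the final weights a function of a design choice $\phi$ (e.g., per-example data weights or a learning-rate schedule), and the metagradient differentiates an evaluation loss through that dependence.

```latex
% Metagradient through T unrolled SGD steps (illustrative notation).
\theta_{t+1}(\phi) = \theta_t(\phi) - \eta \, \nabla_{\theta} L_{\mathrm{train}}\big(\theta_t(\phi); \phi\big),
\qquad t = 0, \dots, T-1,
\qquad
\underbrace{\nabla_{\phi}\, L_{\mathrm{val}}\big(\theta_T(\phi)\big)}_{\text{metagradient}}
= \left(\frac{\partial \theta_T}{\partial \phi}\right)^{\!\top} \nabla_{\theta} L_{\mathrm{val}}\big(\theta_T(\phi)\big).
```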
|
2503.13763 | Atharva Agashe | Atharva Agashe, Davelle Carreiro, Alexandra Van Dine and Joshua
Peeples | Neural Edge Histogram Descriptors for Underwater Acoustic Target
Recognition | 6 pages, 5 figures. This work has been accepted to IEEE OCEANS 2025 | null | null | null | cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Numerous maritime applications rely on the ability to recognize acoustic
targets using passive sonar. While there is a growing reliance on pre-trained
models for classification tasks, these models often require extensive
computational resources and may not perform optimally when transferred to new
domains due to dataset variations. To address these challenges, this work
adapts the neural edge histogram descriptors (NEHD) method originally developed
for image classification, to classify passive sonar signals. We conduct a
comprehensive evaluation of statistical and structural texture features,
demonstrating that their combination achieves competitive performance with
large pre-trained models. The proposed NEHD-based approach offers a lightweight
and efficient solution for underwater target recognition, significantly
reducing computational costs while maintaining accuracy.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 22:57:05 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Agashe",
"Atharva",
""
],
[
"Carreiro",
"Davelle",
""
],
[
"Van Dine",
"Alexandra",
""
],
[
"Peeples",
"Joshua",
""
]
] | TITLE: Neural Edge Histogram Descriptors for Underwater Acoustic Target
Recognition
ABSTRACT: Numerous maritime applications rely on the ability to recognize acoustic
targets using passive sonar. While there is a growing reliance on pre-trained
models for classification tasks, these models often require extensive
computational resources and may not perform optimally when transferred to new
domains due to dataset variations. To address these challenges, this work
adapts the neural edge histogram descriptors (NEHD) method originally developed
for image classification, to classify passive sonar signals. We conduct a
comprehensive evaluation of statistical and structural texture features,
demonstrating that their combination achieves competitive performance with
large pre-trained models. The proposed NEHD-based approach offers a lightweight
and efficient solution for underwater target recognition, significantly
reducing computational costs while maintaining accuracy.
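A plain edge-orientation histogram, in the spirit of the classical EHD feature, can be computed from a (time x frequency) spectrogram as below; this is a generic structural-texture sketch, not the paper's learnable NEHD layer.

```python
import numpy as np


def edge_histogram(spectrogram: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Edge-orientation histogram of a 2-D spectrogram, weighted by gradient
    magnitude and normalized to sum to one."""
    gy, gx = np.gradient(spectrogram.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)              # in (-pi, pi]
    hist, _ = np.histogram(orientation, bins=n_bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-8)


# Combining this with simple statistical features (e.g., mean/std of the
# spectrogram) mirrors the statistical + structural combination evaluated.
```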
|
2503.13777 | Xuyang Fang | Xuyang Fang, Sion Hannuna, Neill Campbell | 8-Calves Image dataset | 11 pages, 5 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce the 8-Calves dataset, a benchmark for evaluating object
detection and identity classification in occlusion-rich, temporally consistent
environments. The dataset comprises a 1-hour video (67,760 frames) of eight
Holstein Friesian calves in a barn, with ground truth bounding boxes and
identities, alongside 900 static frames for detection tasks. Each calf exhibits
a unique coat pattern, enabling precise identity distinction.
For cow detection, we fine-tuned 28 models (25 YOLO variants, 3 transformers)
on 600 frames, testing on the full video. Results reveal smaller YOLO models
(e.g., YOLOV9c) outperform larger counterparts despite potential bias from a
YOLOv8m-based labeling pipeline. For identity classification, embeddings from
23 pretrained vision models (ResNet, ConvNextV2, ViTs) were evaluated via
linear classifiers and KNN. Modern architectures like ConvNextV2 excelled,
while larger models frequently overfit, highlighting inefficiencies in scaling.
Key findings include: (1) Minimal, targeted augmentations (e.g., rotation)
outperform complex strategies on simpler datasets; (2) Pretraining strategies
(e.g., BEiT, DinoV2) significantly boost identity recognition; (3) Temporal
continuity and natural motion patterns offer unique challenges absent in
synthetic or domain-specific benchmarks. The dataset's controlled design and
extended sequences (1 hour vs. prior 10-minute benchmarks) make it a pragmatic
tool for stress-testing occlusion handling, temporal consistency, and
efficiency.
The link to the dataset is https://github.com/tonyFang04/8-calves.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 23:47:52 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Fang",
"Xuyang",
""
],
[
"Hannuna",
"Sion",
""
],
[
"Campbell",
"Neill",
""
]
] | TITLE: 8-Calves Image dataset
ABSTRACT: We introduce the 8-Calves dataset, a benchmark for evaluating object
detection and identity classification in occlusion-rich, temporally consistent
environments. The dataset comprises a 1-hour video (67,760 frames) of eight
Holstein Friesian calves in a barn, with ground truth bounding boxes and
identities, alongside 900 static frames for detection tasks. Each calf exhibits
a unique coat pattern, enabling precise identity distinction.
For cow detection, we fine-tuned 28 models (25 YOLO variants, 3 transformers)
on 600 frames, testing on the full video. Results reveal smaller YOLO models
(e.g., YOLOV9c) outperform larger counterparts despite potential bias from a
YOLOv8m-based labeling pipeline. For identity classification, embeddings from
23 pretrained vision models (ResNet, ConvNextV2, ViTs) were evaluated via
linear classifiers and KNN. Modern architectures like ConvNextV2 excelled,
while larger models frequently overfit, highlighting inefficiencies in scaling.
Key findings include: (1) Minimal, targeted augmentations (e.g., rotation)
outperform complex strategies on simpler datasets; (2) Pretraining strategies
(e.g., BEiT, DinoV2) significantly boost identity recognition; (3) Temporal
continuity and natural motion patterns offer unique challenges absent in
synthetic or domain-specific benchmarks. The dataset's controlled design and
extended sequences (1 hour vs. prior 10-minute benchmarks) make it a pragmatic
tool for stress-testing occlusion handling, temporal consistency, and
efficiency.
The link to the dataset is https://github.com/tonyFang04/8-calves.
|
2503.13798 | Amirhossein Khakpour | Amirhossein Khakpour, Lucia Florescu, Richard Tilley, Haibo Jiang, K.
Swaminathan Iyer, Gustavo Carneiro | AI-Powered Prediction of Nanoparticle Pharmacokinetics: A Multi-View
Learning Approach | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The clinical translation of nanoparticle-based treatments remains limited due
to the unpredictability of nanoparticle (NP)
pharmacokinetics$\unicode{x2014}$how they distribute, accumulate, and clear
from the body. Predicting these behaviours is challenging due to complex
biological interactions and the difficulty of obtaining high-quality
experimental datasets. Existing AI-driven approaches rely heavily on
data-driven learning but fail to integrate crucial knowledge about NP
properties and biodistribution mechanisms. We introduce a multi-view deep
learning framework that enhances pharmacokinetic predictions by incorporating
prior knowledge of key NP properties such as size and charge into a
cross-attention mechanism, enabling context-aware feature selection and
improving generalization despite small datasets. To further enhance prediction
robustness, we employ an ensemble learning approach, combining deep learning
with XGBoost (XGB) and Random Forest (RF), which significantly outperforms
existing AI models. Our interpretability analysis reveals key physicochemical
properties driving NP biodistribution, providing biologically meaningful
insights into possible mechanisms governing NP behaviour in vivo rather than a
black-box model. Furthermore, by bridging machine learning with physiologically
based pharmacokinetic (PBPK) modelling, this work lays the foundation for
data-efficient AI-driven drug discovery and precision nanomedicine.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:09:32 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Khakpour",
"Amirhossein",
""
],
[
"Florescu",
"Lucia",
""
],
[
"Tilley",
"Richard",
""
],
[
"Jiang",
"Haibo",
""
],
[
"Iyer",
"K. Swaminathan",
""
],
[
"Carneiro",
"Gustavo",
""
]
] | TITLE: AI-Powered Prediction of Nanoparticle Pharmacokinetics: A Multi-View
Learning Approach
ABSTRACT: The clinical translation of nanoparticle-based treatments remains limited due
to the unpredictability of nanoparticle (NP)
pharmacokinetics$\unicode{x2014}$how they distribute, accumulate, and clear
from the body. Predicting these behaviours is challenging due to complex
biological interactions and the difficulty of obtaining high-quality
experimental datasets. Existing AI-driven approaches rely heavily on
data-driven learning but fail to integrate crucial knowledge about NP
properties and biodistribution mechanisms. We introduce a multi-view deep
learning framework that enhances pharmacokinetic predictions by incorporating
prior knowledge of key NP properties such as size and charge into a
cross-attention mechanism, enabling context-aware feature selection and
improving generalization despite small datasets. To further enhance prediction
robustness, we employ an ensemble learning approach, combining deep learning
with XGBoost (XGB) and Random Forest (RF), which significantly outperforms
existing AI models. Our interpretability analysis reveals key physicochemical
properties driving NP biodistribution, providing biologically meaningful
insights into possible mechanisms governing NP behaviour in vivo rather than a
black-box model. Furthermore, by bridging machine learning with physiologically
based pharmacokinetic (PBPK) modelling, this work lays the foundation for
data-efficient AI-driven drug discovery and precision nanomedicine.
|
2503.13799 | Liangrui Pan | Liangrui Pan, Xiaoyu Li, Yutao Dou, Qiya Song, Jiadi Luo, Qingchun
Liang, Shaoliang Peng | SMILE: a Scale-aware Multiple Instance Learning Method for Multicenter
STAS Lung Cancer Histopathology Diagnosis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Spread through air spaces (STAS) represents a newly identified aggressive
pattern in lung cancer, which is known to be associated with adverse prognostic
factors and complex pathological features. Pathologists currently rely on
time-consuming manual assessments, which are highly subjective and prone to
variation. This highlights the urgent need for automated and precise diagnostic
solutions. We collected 2,970 lung cancer tissue slides from multiple
centers, re-diagnosed them, and constructed and publicly released three lung
cancer STAS datasets: STAS CSU (hospital), STAS TCGA, and STAS CPTAC. All STAS
datasets provide corresponding pathological feature diagnoses and related
clinical data. To address the bias, sparse and heterogeneous nature of STAS, we
propose a scale-aware multiple instance learning (SMILE) method for STAS
diagnosis of lung cancer. By introducing a scale-adaptive attention mechanism,
SMILE can adaptively adjust high-attention instances, reducing
over-reliance on local regions and promoting consistent detection of STAS
lesions. Extensive experiments show that SMILE achieved competitive diagnostic
results on STAS CSU, diagnosing 251 and 319 STAS samples in CPTAC
and TCGA, respectively, surpassing the clinical average AUC. The 11 open baseline
results are the first to be established for STAS research, laying the
foundation for the future expansion, interpretability, and clinical integration
of computational pathology technologies. The datasets and code are available at
https://anonymous.4open.science/r/IJCAI25-1DA1.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:09:52 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Pan",
"Liangrui",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Dou",
"Yutao",
""
],
[
"Song",
"Qiya",
""
],
[
"Luo",
"Jiadi",
""
],
[
"Liang",
"Qingchun",
""
],
[
"Peng",
"Shaoliang",
""
]
] | TITLE: SMILE: a Scale-aware Multiple Instance Learning Method for Multicenter
STAS Lung Cancer Histopathology Diagnosis
ABSTRACT: Spread through air spaces (STAS) represents a newly identified aggressive
pattern in lung cancer, which is known to be associated with adverse prognostic
factors and complex pathological features. Pathologists currently rely on
time-consuming manual assessments, which are highly subjective and prone to
variation. This highlights the urgent need for automated and precise diagnostic
solutions. We collected 2,970 lung cancer tissue slides from multiple
centers, re-diagnosed them, and constructed and publicly released three lung
cancer STAS datasets: STAS CSU (hospital), STAS TCGA, and STAS CPTAC. All STAS
datasets provide corresponding pathological feature diagnoses and related
clinical data. To address the bias, sparse and heterogeneous nature of STAS, we
propose a scale-aware multiple instance learning (SMILE) method for STAS
diagnosis of lung cancer. By introducing a scale-adaptive attention mechanism,
SMILE can adaptively adjust high-attention instances, reducing
over-reliance on local regions and promoting consistent detection of STAS
lesions. Extensive experiments show that SMILE achieved competitive diagnostic
results on STAS CSU, diagnosing 251 and 319 STAS samples in CPTAC
and TCGA, respectively, surpassing the clinical average AUC. The 11 open baseline
results are the first to be established for STAS research, laying the
foundation for the future expansion, interpretability, and clinical integration
of computational pathology technologies. The datasets and code are available at
https://anonymous.4open.science/r/IJCAI25-1DA1.
|
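The scale-adaptive attention idea above can be sketched as attention-based multiple instance learning pooling with an adjustable temperature; the implementation below is a minimal illustration under assumed shapes, not the released SMILE code.

```python
# Illustrative sketch only: attention-based MIL pooling over patch instances,
# with a temperature that adapts the attention scale so no single
# high-attention instance dominates the slide-level representation.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def mil_pool(instances, w, temperature=2.0):
    """instances: (n, d) patch features; w: (d,) attention vector (hypothetical)."""
    scores = instances @ w                   # per-instance attention logits
    attn = softmax(scores / temperature)     # larger temperature -> flatter attention
    return attn @ instances, attn            # bag embedding and attention weights

rng = np.random.default_rng(1)
patches = rng.normal(size=(32, 16))
w = rng.normal(size=16)
bag_sharp, a_sharp = mil_pool(patches, w, temperature=0.5)
bag_soft, a_soft = mil_pool(patches, w, temperature=4.0)
print(a_sharp.max(), a_soft.max())           # softer attention spreads mass across patches
```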
2503.13805 | Xin Zhong | Muhammad Ahtesham, Xin Zhong | Text-Guided Image Invariant Feature Learning for Robust Image
Watermarking | null | null | null | null | cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring robustness in image watermarking is crucial for maintaining
content integrity under diverse transformations. Recent self-supervised
learning (SSL) approaches, such as DINO, have been leveraged for watermarking
but primarily focus on general feature representation rather than explicitly
learning invariant features. In this work, we propose a novel text-guided
invariant feature learning framework for robust image watermarking. Our
approach leverages CLIP's multimodal capabilities, using text embeddings as
stable semantic anchors to enforce feature invariance under distortions. We
evaluate the proposed method across multiple datasets, demonstrating superior
robustness against various image transformations. Compared to state-of-the-art
SSL methods, our model achieves higher cosine similarity in feature consistency
tests and outperforms existing watermarking schemes in extraction accuracy
under severe distortions. These results highlight the efficacy of our method in
learning invariant representations tailored for robust deep learning-based
watermarking.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:32:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ahtesham",
"Muhammad",
""
],
[
"Zhong",
"Xin",
""
]
] | TITLE: Text-Guided Image Invariant Feature Learning for Robust Image
Watermarking
ABSTRACT: Ensuring robustness in image watermarking is crucial for maintaining
content integrity under diverse transformations. Recent self-supervised
learning (SSL) approaches, such as DINO, have been leveraged for watermarking
but primarily focus on general feature representation rather than explicitly
learning invariant features. In this work, we propose a novel text-guided
invariant feature learning framework for robust image watermarking. Our
approach leverages CLIP's multimodal capabilities, using text embeddings as
stable semantic anchors to enforce feature invariance under distortions. We
evaluate the proposed method across multiple datasets, demonstrating superior
robustness against various image transformations. Compared to state-of-the-art
SSL methods, our model achieves higher cosine similarity in feature consistency
tests and outperforms existing watermarking schemes in extraction accuracy
under severe distortions. These results highlight the efficacy of our method in
learning invariant representations tailored for robust deep learning-based
watermarking.
|
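One way to picture the "text embeddings as stable semantic anchors" idea is a cosine-alignment penalty between distorted-view image features and a fixed text embedding; the sketch below is an assumption-laden illustration, not the authors' training objective.

```python
# Hedged sketch of the core idea: treat a text embedding as a fixed semantic
# anchor and penalize image features of distorted views that drift away from
# it, encouraging distortion-invariant features.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def invariance_loss(text_anchor, image_feats):
    """image_feats: list of feature vectors from distorted views of one image."""
    return sum(1.0 - cosine(text_anchor, f) for f in image_feats) / len(image_feats)

rng = np.random.default_rng(2)
anchor = rng.normal(size=128)                 # e.g., a CLIP text embedding (assumed)
views = [anchor + 0.1 * rng.normal(size=128) for _ in range(4)]  # mildly distorted views
print(round(invariance_loss(anchor, views), 4))
```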
2503.13806 | Jiancheng Ye | Wenjie Zhang, Ziyang Zhang, Mengnan He, Jiancheng Ye | Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt
Engineering | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate segmentation is essential for effective treatment planning and
disease monitoring. Existing medical image segmentation methods predominantly
rely on uni-modal visual inputs, such as images or videos, requiring
labor-intensive manual annotations. Additionally, medical imaging techniques
capture multiple intertwined organs within a single scan, further complicating
segmentation accuracy. To address these challenges, MedSAM, a large-scale
medical segmentation model based on the Segment Anything Model (SAM), was
developed to enhance segmentation accuracy by integrating image features with
user-provided prompts. While MedSAM has demonstrated strong performance across
various medical segmentation tasks, it primarily relies on geometric prompts
(e.g., points and bounding boxes) and lacks support for text-based prompts,
which could help specify subtle or ambiguous anatomical structures. To overcome
these limitations, we propose the Organ-aware Multi-scale Text-guided Medical
Image Segmentation Model (OMT-SAM) for multi-organ segmentation. Our approach
introduces CLIP encoders as a novel image-text prompt encoder, operating with
the geometric prompt encoder to provide informative contextual guidance. We
pair descriptive textual prompts with corresponding images, processing them
through pre-trained CLIP encoders and a cross-attention mechanism to generate
fused image-text embeddings. Additionally, we extract multi-scale visual
features from MedSAM, capturing fine-grained anatomical details at different
levels of granularity. We evaluate OMT-SAM on the FLARE 2021 dataset,
benchmarking its performance against existing segmentation methods. Empirical
results demonstrate that OMT-SAM achieves a mean Dice Similarity Coefficient of
0.937, outperforming MedSAM (0.893) and other segmentation models, highlighting
its superior capability in handling complex medical image segmentation tasks.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:35:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Wenjie",
""
],
[
"Zhang",
"Ziyang",
""
],
[
"He",
"Mengnan",
""
],
[
"Ye",
"Jiancheng",
""
]
] | TITLE: Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt
Engineering
ABSTRACT: Accurate segmentation is essential for effective treatment planning and
disease monitoring. Existing medical image segmentation methods predominantly
rely on uni-modal visual inputs, such as images or videos, requiring
labor-intensive manual annotations. Additionally, medical imaging techniques
capture multiple intertwined organs within a single scan, further complicating
segmentation accuracy. To address these challenges, MedSAM, a large-scale
medical segmentation model based on the Segment Anything Model (SAM), was
developed to enhance segmentation accuracy by integrating image features with
user-provided prompts. While MedSAM has demonstrated strong performance across
various medical segmentation tasks, it primarily relies on geometric prompts
(e.g., points and bounding boxes) and lacks support for text-based prompts,
which could help specify subtle or ambiguous anatomical structures. To overcome
these limitations, we propose the Organ-aware Multi-scale Text-guided Medical
Image Segmentation Model (OMT-SAM) for multi-organ segmentation. Our approach
introduces CLIP encoders as a novel image-text prompt encoder, operating with
the geometric prompt encoder to provide informative contextual guidance. We
pair descriptive textual prompts with corresponding images, processing them
through pre-trained CLIP encoders and a cross-attention mechanism to generate
fused image-text embeddings. Additionally, we extract multi-scale visual
features from MedSAM, capturing fine-grained anatomical details at different
levels of granularity. We evaluate OMT-SAM on the FLARE 2021 dataset,
benchmarking its performance against existing segmentation methods. Empirical
results demonstrate that OMT-SAM achieves a mean Dice Similarity Coefficient of
0.937, outperforming MedSAM (0.893) and other segmentation models, highlighting
its superior capability in handling complex medical image segmentation tasks.
|
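The reported metric is the Dice Similarity Coefficient; a standard binary-mask implementation is sketched below for reference (it is generic, not code from OMT-SAM).

```python
# Standard binary Dice Similarity Coefficient between two segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-8):
    """pred, target: binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 foreground pixels
print(round(dice(a, b), 3))   # 0.8 = 2*4 / (4 + 6)
```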
2503.13814 | Jinping Wang | Jinping Wang, Weiwei Song, Hao Chen, Jinchang Ren and Huimin Zhao | FusDreamer: Label-efficient Remote Sensing World Model for Multimodal
Data Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | World models significantly enhance hierarchical understanding, improving data
integration and learning efficiency. To explore the potential of the world
model in the remote sensing (RS) field, this paper proposes a label-efficient
remote sensing world model for multimodal data fusion (FusDreamer). The
FusDreamer uses the world model as a unified representation container to
abstract common and high-level knowledge, promoting interactions across
different types of data, \emph{i.e.}, hyperspectral (HSI), light detection and
ranging (LiDAR), and text data. Initially, a new latent diffusion fusion and
multimodal generation paradigm (LaMG) is utilized for its exceptional
information integration and detail retention capabilities. Subsequently, an
open-world knowledge-guided consistency projection (OK-CP) module incorporates
prompt representations for visually described objects and aligns
language-visual features through contrastive learning. In this way, the domain
gap can be bridged by fine-tuning the pre-trained world models with limited
samples. Finally, an end-to-end multitask combinatorial optimization (MuCO)
strategy can capture slight feature bias and constrain the diffusion process in
a collaboratively learnable direction. Experiments conducted on four typical
datasets indicate the effectiveness and advantages of the proposed FusDreamer.
The corresponding code will be released at
https://github.com/Cimy-wang/FusDreamer.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 01:45:51 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Jinping",
""
],
[
"Song",
"Weiwei",
""
],
[
"Chen",
"Hao",
""
],
[
"Ren",
"Jinchang",
""
],
[
"Zhao",
"Huimin",
""
]
] | TITLE: FusDreamer: Label-efficient Remote Sensing World Model for Multimodal
Data Classification
ABSTRACT: World models significantly enhance hierarchical understanding, improving data
integration and learning efficiency. To explore the potential of the world
model in the remote sensing (RS) field, this paper proposes a label-efficient
remote sensing world model for multimodal data fusion (FusDreamer). The
FusDreamer uses the world model as a unified representation container to
abstract common and high-level knowledge, promoting interactions across
different types of data, \emph{i.e.}, hyperspectral (HSI), light detection and
ranging (LiDAR), and text data. Initially, a new latent diffusion fusion and
multimodal generation paradigm (LaMG) is utilized for its exceptional
information integration and detail retention capabilities. Subsequently, an
open-world knowledge-guided consistency projection (OK-CP) module incorporates
prompt representations for visually described objects and aligns
language-visual features through contrastive learning. In this way, the domain
gap can be bridged by fine-tuning the pre-trained world models with limited
samples. Finally, an end-to-end multitask combinatorial optimization (MuCO)
strategy can capture slight feature bias and constrain the diffusion process in
a collaboratively learnable direction. Experiments conducted on four typical
datasets indicate the effectiveness and advantages of the proposed FusDreamer.
The corresponding code will be released at
https://github.com/Cimy-wang/FusDreamer.
|
2503.13828 | Lichao Mou | Chunlei Li, Yilei Shi, Jingliang Hu, Xiao Xiang Zhu, Lichao Mou | Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical
Anomaly Detection | ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised anomaly detection using deep learning has garnered significant
research attention due to its broad applicability, particularly in medical
imaging where labeled anomalous data are scarce. While earlier approaches
leverage generative models like autoencoders and generative adversarial
networks (GANs), they often fall short due to overgeneralization. Recent
methods explore various strategies, including memory banks, normalizing flows,
self-supervised learning, and knowledge distillation, to enhance
discrimination. Among these, knowledge distillation, particularly reverse
distillation, has shown promise. Following this paradigm, we propose a novel
scale-aware contrastive reverse distillation model that addresses two key
limitations of existing reverse distillation methods: insufficient feature
discriminability and inability to handle anomaly scale variations.
Specifically, we introduce a contrastive student-teacher learning approach to
derive more discriminative representations by generating and exploring
out-of-normal distributions. Further, we design a scale adaptation mechanism to
softly weight contrastive distillation losses at different scales to account
for the scale variation issue. Extensive experiments on benchmark datasets
demonstrate state-of-the-art performance, validating the efficacy of the
proposed method. Code is available at https://github.com/MedAITech/SCRD4AD.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 02:10:20 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Chunlei",
""
],
[
"Shi",
"Yilei",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: Scale-Aware Contrastive Reverse Distillation for Unsupervised Medical
Anomaly Detection
ABSTRACT: Unsupervised anomaly detection using deep learning has garnered significant
research attention due to its broad applicability, particularly in medical
imaging where labeled anomalous data are scarce. While earlier approaches
leverage generative models like autoencoders and generative adversarial
networks (GANs), they often fall short due to overgeneralization. Recent
methods explore various strategies, including memory banks, normalizing flows,
self-supervised learning, and knowledge distillation, to enhance
discrimination. Among these, knowledge distillation, particularly reverse
distillation, has shown promise. Following this paradigm, we propose a novel
scale-aware contrastive reverse distillation model that addresses two key
limitations of existing reverse distillation methods: insufficient feature
discriminability and inability to handle anomaly scale variations.
Specifically, we introduce a contrastive student-teacher learning approach to
derive more discriminative representations by generating and exploring
out-of-normal distributions. Further, we design a scale adaptation mechanism to
softly weight contrastive distillation losses at different scales to account
for the scale variation issue. Extensive experiments on benchmark datasets
demonstrate state-of-the-art performance, validating the efficacy of the
proposed method. Code is available at https://github.com/MedAITech/SCRD4AD.
|
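The scale adaptation mechanism can be pictured as a softmax-weighted sum of per-scale distillation losses; the sketch below uses hypothetical logits and losses and is not the released implementation.

```python
# Minimal sketch of scale-adaptive loss weighting: softly weight per-scale
# distillation losses with a softmax over learnable logits so the scales where
# anomalies appear can be emphasized.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def scale_weighted_loss(per_scale_losses, scale_logits):
    w = softmax(scale_logits)                # soft weights over scales
    return float(np.dot(w, per_scale_losses)), w

losses = np.array([0.8, 0.3, 0.1])           # hypothetical losses at 3 feature scales
logits = np.array([0.2, 0.9, -0.5])          # hypothetical learnable scale logits
total, weights = scale_weighted_loss(losses, logits)
print(round(total, 3), weights.round(3))
```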
2503.13834 | JuneHyoung Kwon | JuneHyoung Kwon, MiHyeon Kim, Eunju Lee, Juhwan Choi, YoungBin Kim | See-Saw Modality Balance: See Gradient, and Sew Impaired Vision-Language
Balance to Mitigate Dominant Modality Bias | Accepted to NAACL 2025 Main | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-language (VL) models have demonstrated strong performance across
various tasks. However, these models often rely on a specific modality for
predictions, leading to "dominant modality bias." This bias significantly
hurts performance, especially when one modality is impaired. In this study, we
analyze model behavior under dominant modality bias and theoretically show that
unaligned gradients or differences in gradient magnitudes prevent balanced
convergence of the loss. Based on these findings, we propose a novel framework,
BalGrad, to mitigate dominant modality bias. Our approach includes
inter-modality gradient reweighting, adjusting the gradient of KL divergence
based on each modality's contribution, and inter-task gradient projection to
align task directions in a non-conflicting manner. Experiments on UPMC
Food-101, Hateful Memes, and MM-IMDb datasets confirm that BalGrad effectively
alleviates over-reliance on specific modalities when making predictions.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 02:17:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Kwon",
"JuneHyoung",
""
],
[
"Kim",
"MiHyeon",
""
],
[
"Lee",
"Eunju",
""
],
[
"Choi",
"Juhwan",
""
],
[
"Kim",
"YoungBin",
""
]
] | TITLE: See-Saw Modality Balance: See Gradient, and Sew Impaired Vision-Language
Balance to Mitigate Dominant Modality Bias
ABSTRACT: Vision-language (VL) models have demonstrated strong performance across
various tasks. However, these models often rely on a specific modality for
predictions, leading to "dominant modality bias." This bias significantly
hurts performance, especially when one modality is impaired. In this study, we
analyze model behavior under dominant modality bias and theoretically show that
unaligned gradients or differences in gradient magnitudes prevent balanced
convergence of the loss. Based on these findings, we propose a novel framework,
BalGrad, to mitigate dominant modality bias. Our approach includes
inter-modality gradient reweighting, adjusting the gradient of KL divergence
based on each modality's contribution, and inter-task gradient projection to
align task directions in a non-conflicting manner. Experiments on UPMC
Food-101, Hateful Memes, and MM-IMDb datasets confirm that BalGrad effectively
alleviates over-reliance on specific modalities when making predictions.
|
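The inter-task gradient projection step can be illustrated with a PCGrad-style projection that removes the conflicting component of one task gradient; the sketch below is a generic illustration of that operation, not BalGrad itself.

```python
# Sketch of inter-task gradient projection: if two task gradients conflict
# (negative dot product), project one onto the normal plane of the other so
# the update no longer points against the second task.
import numpy as np

def project_conflict(g_task, g_ref):
    dot = float(g_task @ g_ref)
    if dot < 0:                                   # conflicting directions
        g_task = g_task - dot / float(g_ref @ g_ref) * g_ref
    return g_task

g1 = np.array([1.0, -2.0, 0.5])                   # hypothetical task-1 gradient
g2 = np.array([-0.5, 1.0, 1.0])                   # hypothetical task-2 gradient
print(project_conflict(g1, g2))                   # g1 with the conflicting component removed
```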
2503.13844 | Elyas Meguellati | Elyas Meguellati, Stefano Civelli, Pietro Bernardelle, Shazia Sadiq,
Gianluca Demartini | Spotting Persuasion: A Low-cost Model for Persuasion Detection in
Political Ads on Social Media | null | null | null | null | cs.CL cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the realm of political advertising, persuasion operates as a pivotal
element within the broader framework of propaganda, exerting profound
influences on public opinion and electoral outcomes. In this paper, we (1)
introduce a lightweight model for persuasive text detection that achieves
state-of-the-art performance in Subtask 3 of SemEval 2023 Task 3, while
significantly reducing the computational resource requirements; and (2)
leverage the proposed model to gain insights into political campaigning
strategies on social media platforms by applying it to a real-world dataset we
curated, consisting of Facebook political ads from the 2022 Australian Federal
election campaign. Our study shows how subtleties can be found in persuasive
political advertisements and presents a pragmatic approach to detect and
analyze such strategies with limited resources, enhancing transparency in
social media political campaigns.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 02:33:38 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Meguellati",
"Elyas",
""
],
[
"Civelli",
"Stefano",
""
],
[
"Bernardelle",
"Pietro",
""
],
[
"Sadiq",
"Shazia",
""
],
[
"Demartini",
"Gianluca",
""
]
] | TITLE: Spotting Persuasion: A Low-cost Model for Persuasion Detection in
Political Ads on Social Media
ABSTRACT: In the realm of political advertising, persuasion operates as a pivotal
element within the broader framework of propaganda, exerting profound
influences on public opinion and electoral outcomes. In this paper, we (1)
introduce a lightweight model for persuasive text detection that achieves
state-of-the-art performance in Subtask 3 of SemEval 2023 Task 3, while
significantly reducing the computational resource requirements; and (2)
leverage the proposed model to gain insights into political campaigning
strategies on social media platforms by applying it to a real-world dataset we
curated, consisting of Facebook political ads from the 2022 Australian Federal
election campaign. Our study shows how subtleties can be found in persuasive
political advertisements and presents a pragmatic approach to detect and
analyze such strategies with limited resources, enhancing transparency in
social media political campaigns.
|
2503.13847 | Monika Shah | Monika Shah, Somdeb Sarkhel, Deepak Venugopal | Disentangling Fine-Tuning from Pre-Training in Visual Captioning with
Hybrid Markov Logic | 2024 IEEE International Conference on Big Data (BigData), 10 pages | null | 10.1109/BigData62323.2024.10825003 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multimodal systems have highly complex processing pipelines and are
pretrained over large datasets before being fine-tuned for specific tasks such
as visual captioning. However, it becomes hard to disentangle what the model
learns during the fine-tuning process from what it already knows due to its
pretraining. In this work, we learn a probabilistic model using Hybrid Markov
Logic Networks (HMLNs) over the training examples by relating symbolic
knowledge (extracted from the caption) with visual features (extracted from the
image). For a generated caption, we quantify the influence of training examples
based on the HMLN distribution using probabilistic inference. We evaluate two
types of inference procedures on the MSCOCO dataset for different types of
captioning models. Our results show that for BLIP2 (a model that uses a LLM),
the fine-tuning may have smaller influence on the knowledge the model has
acquired since it may have more general knowledge to perform visual captioning
as compared to models that do not use an LLM.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 02:39:26 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shah",
"Monika",
""
],
[
"Sarkhel",
"Somdeb",
""
],
[
"Venugopal",
"Deepak",
""
]
] | TITLE: Disentangling Fine-Tuning from Pre-Training in Visual Captioning with
Hybrid Markov Logic
ABSTRACT: Multimodal systems have highly complex processing pipelines and are
pretrained over large datasets before being fine-tuned for specific tasks such
as visual captioning. However, it becomes hard to disentangle what the model
learns during the fine-tuning process from what it already knows due to its
pretraining. In this work, we learn a probabilistic model using Hybrid Markov
Logic Networks (HMLNs) over the training examples by relating symbolic
knowledge (extracted from the caption) with visual features (extracted from the
image). For a generated caption, we quantify the influence of training examples
based on the HMLN distribution using probabilistic inference. We evaluate two
types of inference procedures on the MSCOCO dataset for different types of
captioning models. Our results show that for BLIP2 (a model that uses a LLM),
the fine-tuning may have smaller influence on the knowledge the model has
acquired since it may have more general knowledge to perform visual captioning
as compared to models that do not use an LLM.
|
2503.13856 | Kai Chen Nj | Kai Chen, Xinfeng Li, Tianpei Yang, Hewei Wang, Wei Dong, Yang Gao | MDTeamGPT: A Self-Evolving LLM-based Multi-Agent Framework for
Multi-Disciplinary Team Medical Consultation | 24 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have made significant progress in various
fields. However, challenges remain in Multi-Disciplinary Team (MDT) medical
consultations. Current research enhances reasoning through role assignment,
task decomposition, and accumulation of medical experience. Multi-role
collaboration in MDT consultations often results in excessively long dialogue
histories. This increases the model's cognitive burden and degrades both
efficiency and accuracy. Some methods only store treatment histories. They do
not extract effective experience or reflect on errors. This limits knowledge
generalization and system evolution. We propose a multi-agent MDT medical
consultation framework based on LLMs to address these issues. Our framework
uses consensus aggregation and a residual discussion structure for multi-round
consultations. It also employs a Correct Answer Knowledge Base (CorrectKB) and
a Chain-of-Thought Knowledge Base (ChainKB) to accumulate consultation
experience. These mechanisms enable the framework to evolve and continually
improve diagnosis rationality and accuracy. Experimental results on the MedQA
and PubMedQA datasets demonstrate that our framework achieves accuracies of
90.1% and 83.9%, respectively, and that the constructed knowledge bases
generalize effectively across test sets from both datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 03:07:34 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Kai",
""
],
[
"Li",
"Xinfeng",
""
],
[
"Yang",
"Tianpei",
""
],
[
"Wang",
"Hewei",
""
],
[
"Dong",
"Wei",
""
],
[
"Gao",
"Yang",
""
]
] | TITLE: MDTeamGPT: A Self-Evolving LLM-based Multi-Agent Framework for
Multi-Disciplinary Team Medical Consultation
ABSTRACT: Large Language Models (LLMs) have made significant progress in various
fields. However, challenges remain in Multi-Disciplinary Team (MDT) medical
consultations. Current research enhances reasoning through role assignment,
task decomposition, and accumulation of medical experience. Multi-role
collaboration in MDT consultations often results in excessively long dialogue
histories. This increases the model's cognitive burden and degrades both
efficiency and accuracy. Some methods only store treatment histories. They do
not extract effective experience or reflect on errors. This limits knowledge
generalization and system evolution. We propose a multi-agent MDT medical
consultation framework based on LLMs to address these issues. Our framework
uses consensus aggregation and a residual discussion structure for multi-round
consultations. It also employs a Correct Answer Knowledge Base (CorrectKB) and
a Chain-of-Thought Knowledge Base (ChainKB) to accumulate consultation
experience. These mechanisms enable the framework to evolve and continually
improve diagnosis rationality and accuracy. Experimental results on the MedQA
and PubMedQA datasets demonstrate that our framework achieves accuracies of
90.1% and 83.9%, respectively, and that the constructed knowledge bases
generalize effectively across test sets from both datasets.
|
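The "consensus aggregation" step can be pictured as a majority vote over the agents' proposed answers, with ties deferred to another discussion round; the sketch below is a simplified stand-in for the actual MDTeamGPT protocol.

```python
# Simplified consensus aggregation: keep the majority answer across agents,
# and signal a tie (None) so the framework can run another discussion round.
from collections import Counter

def aggregate_consensus(agent_answers):
    """agent_answers: list of answer strings from the specialist agents."""
    counts = Counter(agent_answers)
    best, n = counts.most_common(1)[0]
    tie = sum(1 for c in counts.values() if c == n) > 1
    return (None, counts) if tie else (best, counts)

answers = ["B", "B", "A", "B", "C"]               # hypothetical agent outputs
print(aggregate_consensus(answers))               # ('B', Counter({'B': 3, 'A': 1, 'C': 1}))
```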
2503.13859 | Jinseok Bae | Jinseok Bae, Inwoo Hwang, Young Yoon Lee, Ziyu Guo, Joseph Liu, Yizhak
Ben-Shabat, Young Min Kim, Mubbasir Kapadia | Less is More: Improving Motion Diffusion Models with Sparse Keyframes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in motion diffusion models have led to remarkable progress in
diverse motion generation tasks, including text-to-motion synthesis. However,
existing approaches represent motions as dense frame sequences, requiring the
model to process redundant or less informative frames. The processing of dense
animation frames imposes significant training complexity, especially when
learning intricate distributions of large motion datasets even with modern
neural architectures. This severely limits the performance of generative motion
models for downstream tasks. Inspired by professional animators who mainly
focus on sparse keyframes, we propose a novel diffusion framework explicitly
designed around sparse and geometrically meaningful keyframes. Our method
reduces computation by masking non-keyframes and efficiently interpolating
missing frames. We dynamically refine the keyframe mask during inference to
prioritize informative frames in later diffusion steps. Extensive experiments
show that our approach consistently outperforms state-of-the-art methods in
text alignment and motion realism, while also effectively maintaining high
performance at significantly fewer diffusion steps. We further validate the
robustness of our framework by using it as a generative prior and adapting it
to different downstream tasks. Source code and pre-trained models will be
released upon acceptance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 03:20:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bae",
"Jinseok",
""
],
[
"Hwang",
"Inwoo",
""
],
[
"Lee",
"Young Yoon",
""
],
[
"Guo",
"Ziyu",
""
],
[
"Liu",
"Joseph",
""
],
[
"Ben-Shabat",
"Yizhak",
""
],
[
"Kim",
"Young Min",
""
],
[
"Kapadia",
"Mubbasir",
""
]
] | TITLE: Less is More: Improving Motion Diffusion Models with Sparse Keyframes
ABSTRACT: Recent advances in motion diffusion models have led to remarkable progress in
diverse motion generation tasks, including text-to-motion synthesis. However,
existing approaches represent motions as dense frame sequences, requiring the
model to process redundant or less informative frames. The processing of dense
animation frames imposes significant training complexity, especially when
learning intricate distributions of large motion datasets even with modern
neural architectures. This severely limits the performance of generative motion
models for downstream tasks. Inspired by professional animators who mainly
focus on sparse keyframes, we propose a novel diffusion framework explicitly
designed around sparse and geometrically meaningful keyframes. Our method
reduces computation by masking non-keyframes and efficiently interpolating
missing frames. We dynamically refine the keyframe mask during inference to
prioritize informative frames in later diffusion steps. Extensive experiments
show that our approach consistently outperforms state-of-the-art methods in
text alignment and motion realism, while also effectively maintaining high
performance at significantly fewer diffusion steps. We further validate the
robustness of our framework by using it as a generative prior and adapting it
to different downstream tasks. Source code and pre-trained models will be
released upon acceptance.
|
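Reconstructing dense motion from sparse keyframes can be pictured with simple per-dimension linear interpolation; the sketch below illustrates that densification step under assumed shapes and is not the paper's model.

```python
# Minimal sketch: keep only sparse keyframes of a motion sequence and fill in
# the masked (non-key) frames by linear interpolation per feature dimension.
import numpy as np

def interpolate_keyframes(motion, key_idx):
    """motion: (T, d) pose features; key_idx: sorted indices of kept keyframes."""
    T, d = motion.shape
    out = np.empty_like(motion)
    for j in range(d):
        out[:, j] = np.interp(np.arange(T), key_idx, motion[key_idx, j])
    return out

rng = np.random.default_rng(3)
motion = np.cumsum(rng.normal(size=(60, 4)), axis=0)   # smooth-ish synthetic trajectory
keys = np.array([0, 15, 30, 45, 59])
recon = interpolate_keyframes(motion, keys)
print(round(float(np.abs(recon - motion).mean()), 3))  # reconstruction error from 5 keyframes
```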
2503.13861 | Yujin Wang Mr | Yujin Wang, Quanfeng Liu, Zhengxin Jiang, Tianyi Wang, Junfeng Jiao,
Hongqing Chu, Bingzhao Gao, Hong Chen | RAD: Retrieval-Augmented Decision-Making of Meta-Actions with
Vision-Language Models in Autonomous Driving | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately understanding and deciding high-level meta-actions is essential
for ensuring reliable and safe autonomous driving systems. While
vision-language models (VLMs) have shown significant potential in various
autonomous driving tasks, they often suffer from limitations such as inadequate
spatial perception and hallucination, reducing their effectiveness in complex
autonomous driving scenarios. To address these challenges, we propose a
retrieval-augmented decision-making (RAD) framework, a novel architecture
designed to enhance VLMs' capabilities to reliably generate meta-actions in
autonomous driving scenes. RAD leverages a retrieval-augmented generation (RAG)
pipeline to dynamically improve decision accuracy through a three-stage process
consisting of the embedding flow, retrieving flow, and generating flow.
Additionally, we fine-tune VLMs on a specifically curated dataset derived from
the NuScenes dataset to enhance their spatial perception and bird's-eye view
image comprehension capabilities. Extensive experimental evaluations on the
curated NuScenes-based dataset demonstrate that RAD outperforms baseline
methods across key evaluation metrics, including match accuracy, F1 score,
and a self-defined overall score, highlighting its effectiveness in improving
meta-action decision-making for autonomous driving tasks.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 03:25:57 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Yujin",
""
],
[
"Liu",
"Quanfeng",
""
],
[
"Jiang",
"Zhengxin",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Jiao",
"Junfeng",
""
],
[
"Chu",
"Hongqing",
""
],
[
"Gao",
"Bingzhao",
""
],
[
"Chen",
"Hong",
""
]
] | TITLE: RAD: Retrieval-Augmented Decision-Making of Meta-Actions with
Vision-Language Models in Autonomous Driving
ABSTRACT: Accurately understanding and deciding high-level meta-actions is essential
for ensuring reliable and safe autonomous driving systems. While
vision-language models (VLMs) have shown significant potential in various
autonomous driving tasks, they often suffer from limitations such as inadequate
spatial perception and hallucination, reducing their effectiveness in complex
autonomous driving scenarios. To address these challenges, we propose a
retrieval-augmented decision-making (RAD) framework, a novel architecture
designed to enhance VLMs' capabilities to reliably generate meta-actions in
autonomous driving scenes. RAD leverages a retrieval-augmented generation (RAG)
pipeline to dynamically improve decision accuracy through a three-stage process
consisting of the embedding flow, retrieving flow, and generating flow.
Additionally, we fine-tune VLMs on a specifically curated dataset derived from
the NuScenes dataset to enhance their spatial perception and bird's-eye view
image comprehension capabilities. Extensive experimental evaluations on the
curated NuScenes-based dataset demonstrate that RAD outperforms baseline
methods across key evaluation metrics, including match accuracy, F1 score,
and a self-defined overall score, highlighting its effectiveness in improving
meta-action decision-making for autonomous driving tasks.
|
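The "retrieving flow" of a retrieval-augmented pipeline can be pictured as cosine-similarity nearest-neighbour lookup over stored scene embeddings; the sketch below is a generic illustration, not the RAD implementation.

```python
# Generic retrieval step: embed a driving-scene query, retrieve the k most
# similar stored scenes by cosine similarity, and pass their annotations to
# the VLM as extra context.
import numpy as np

def top_k_retrieve(query, memory, k=3):
    """query: (d,); memory: (n, d) stored scene embeddings (assumed)."""
    q = query / (np.linalg.norm(query) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sims = m @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

rng = np.random.default_rng(4)
memory = rng.normal(size=(100, 32))
query = memory[7] + 0.05 * rng.normal(size=32)    # a scene close to stored item 7
print(top_k_retrieve(query, memory)[0])           # item 7 should rank first
```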
2503.13862 | Jiaqi Yang | Jiaqi Yang, Wenting Chen, Xiaohan Xing, Sean He, Xiaoling Luo, Xinheng
Lyu, Linlin Shen, Guoping Qiu | HySurvPred: Multimodal Hyperbolic Embedding with Angle-Aware
Hierarchical Contrastive Learning and Uncertainty Constraints for Survival
Prediction | submitted to IJCAI2025 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal learning that integrates histopathology images and genomic data
holds great promise for cancer survival prediction. However, existing methods
face key limitations: 1) They rely on multimodal mapping and metrics in
Euclidean space, which cannot fully capture the hierarchical structures in
histopathology (among patches from different resolutions) and genomics data
(from genes to pathways). 2) They discretize survival time into independent
risk intervals, which ignores its continuous and ordinal nature and fails to
achieve effective optimization. 3) They treat censorship as a binary indicator,
excluding censored samples from model optimization and not making full use of
them. To address these challenges, we propose HySurvPred, a novel framework for
survival prediction that integrates three key modules: Multimodal Hyperbolic
Mapping (MHM), Angle-aware Ranking-based Contrastive Loss (ARCL) and
Censor-Conditioned Uncertainty Constraint (CUC). Instead of relying on
Euclidean space, we design the MHM module to explore the inherent hierarchical
structures within each modality in hyperbolic space. To better integrate
multimodal features in hyperbolic space, we introduce the ARCL module, which
uses ranking-based contrastive learning to preserve the ordinal nature of
survival time, along with the CUC module to fully explore the censored data.
Extensive experiments demonstrate that our method outperforms state-of-the-art
methods on five benchmark datasets. The source code is to be released.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 03:26:22 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Jiaqi",
""
],
[
"Chen",
"Wenting",
""
],
[
"Xing",
"Xiaohan",
""
],
[
"He",
"Sean",
""
],
[
"Luo",
"Xiaoling",
""
],
[
"Lyu",
"Xinheng",
""
],
[
"Shen",
"Linlin",
""
],
[
"Qiu",
"Guoping",
""
]
] | TITLE: HySurvPred: Multimodal Hyperbolic Embedding with Angle-Aware
Hierarchical Contrastive Learning and Uncertainty Constraints for Survival
Prediction
ABSTRACT: Multimodal learning that integrates histopathology images and genomic data
holds great promise for cancer survival prediction. However, existing methods
face key limitations: 1) They rely on multimodal mapping and metrics in
Euclidean space, which cannot fully capture the hierarchical structures in
histopathology (among patches from different resolutions) and genomics data
(from genes to pathways). 2) They discretize survival time into independent
risk intervals, which ignores its continuous and ordinal nature and fails to
achieve effective optimization. 3) They treat censorship as a binary indicator,
excluding censored samples from model optimization and not making full use of
them. To address these challenges, we propose HySurvPred, a novel framework for
survival prediction that integrates three key modules: Multimodal Hyperbolic
Mapping (MHM), Angle-aware Ranking-based Contrastive Loss (ARCL) and
Censor-Conditioned Uncertainty Constraint (CUC). Instead of relying on
Euclidean space, we design the MHM module to explore the inherent hierarchical
structures within each modality in hyperbolic space. To better integrate
multimodal features in hyperbolic space, we introduce the ARCL module, which
uses ranking-based contrastive learning to preserve the ordinal nature of
survival time, along with the CUC module to fully explore the censored data.
Extensive experiments demonstrate that our method outperforms state-of-the-art
methods on five benchmark datasets. The source code is to be released.
|
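Mapping Euclidean features into hyperbolic space is commonly done with the Poincare-ball exponential map at the origin; the sketch below shows that standard construction (the specifics of the MHM module are not reproduced).

```python
# Standard exponential map at the origin of the Poincare ball with curvature -c:
# exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||).
import numpy as np

def expmap0(v, c=1.0, eps=1e-8):
    """Map a Euclidean vector v into the Poincare ball of curvature -c."""
    norm = np.linalg.norm(v) + eps
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

v = np.array([1.5, -2.0, 0.5])
x = expmap0(v)
print(np.linalg.norm(x) < 1.0)   # True: mapped points stay inside the unit ball
```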
2503.13874 | Cong Guo | Cong Guo and Changqin Huang and Wenhua Zhou and Xiaodi Huang | Multi-label feature selection based on binary hashing learning and
dynamic graph constraints | 21 pages,19 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label learning poses significant challenges in extracting reliable
supervisory signals from the label space. Existing approaches often employ
continuous pseudo-labels to replace binary labels, improving supervisory
information representation. However, these methods can introduce noise from
irrelevant labels and lead to unreliable graph structures. To overcome these
limitations, this study introduces a novel multi-label feature selection method
called Binary Hashing and Dynamic Graph Constraint (BHDG), the first method to
integrate binary hashing into multi-label learning. BHDG utilizes
low-dimensional binary hashing codes as pseudo-labels to reduce noise and
improve representation robustness. A dynamically constrained sample projection
space is constructed based on the graph structure of these binary
pseudo-labels, enhancing the reliability of the dynamic graph. To further
enhance pseudo-label quality, BHDG incorporates label graph constraints and
inner product minimization within the sample space. Additionally, an
$l_{2,1}$-norm regularization term is added to the objective function to
facilitate the feature selection process. The augmented Lagrangian multiplier
(ALM) method is employed to optimize binary variables effectively.
Comprehensive experiments on 10 benchmark datasets demonstrate that BHDG
outperforms ten state-of-the-art methods across six evaluation metrics. BHDG
achieves the highest overall performance ranking, surpassing the next-best
method by an average of at least 2.7 ranks per metric, underscoring its
effectiveness and robustness in multi-label feature selection.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 03:58:31 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Guo",
"Cong",
""
],
[
"Huang",
"Changqin",
""
],
[
"Zhou",
"Wenhua",
""
],
[
"Huang",
"Xiaodi",
""
]
] | TITLE: Multi-label feature selection based on binary hashing learning and
dynamic graph constraints
ABSTRACT: Multi-label learning poses significant challenges in extracting reliable
supervisory signals from the label space. Existing approaches often employ
continuous pseudo-labels to replace binary labels, improving supervisory
information representation. However, these methods can introduce noise from
irrelevant labels and lead to unreliable graph structures. To overcome these
limitations, this study introduces a novel multi-label feature selection method
called Binary Hashing and Dynamic Graph Constraint (BHDG), the first method to
integrate binary hashing into multi-label learning. BHDG utilizes
low-dimensional binary hashing codes as pseudo-labels to reduce noise and
improve representation robustness. A dynamically constrained sample projection
space is constructed based on the graph structure of these binary
pseudo-labels, enhancing the reliability of the dynamic graph. To further
enhance pseudo-label quality, BHDG incorporates label graph constraints and
inner product minimization within the sample space. Additionally, an
$l_{2,1}$-norm regularization term is added to the objective function to
facilitate the feature selection process. The augmented Lagrangian multiplier
(ALM) method is employed to optimize binary variables effectively.
Comprehensive experiments on 10 benchmark datasets demonstrate that BHDG
outperforms ten state-of-the-art methods across six evaluation metrics. BHDG
achieves the highest overall performance ranking, surpassing the next-best
method by an average of at least 2.7 ranks per metric, underscoring its
effectiveness and robustness in multi-label feature selection.
|
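The $l_{2,1}$-norm regularizer mentioned above is simply the sum of the row-wise Euclidean norms of the projection matrix, which drives whole feature rows to zero; a reference computation is sketched below.

```python
# l_{2,1} norm of a feature-weight matrix: sum of Euclidean norms of its rows.
import numpy as np

def l21_norm(W):
    """W: (n_features, n_labels) projection matrix."""
    return float(np.linalg.norm(W, axis=1).sum())

W = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [1.0, 0.0]])
print(l21_norm(W))   # 0 + 5 + 1 = 6.0
```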
2503.13881 | Donggon Jang | Donggon Jang, Yucheol Cho, Suin Lee, Taehyeon Kim, Dae-Shik Kim | MMR: A Large-scale Benchmark Dataset for Multi-target and
Multi-granularity Reasoning Segmentation | ICLR 2025, Code and dataset are available at
\url{https://github.com/jdg900/MMR} | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The fusion of Large Language Models with vision models is pioneering new
possibilities in user-interactive vision-language tasks. A notable application
is reasoning segmentation, where models generate pixel-level segmentation masks
by comprehending implicit meanings in human instructions. However, seamless
human-AI interaction demands more than just object-level recognition; it
requires understanding both objects and the functions of their detailed parts,
particularly in multi-target scenarios. For example, when instructing a robot
to \textit{"turn on the TV"}, there could be various ways to accomplish this
command. Recognizing multiple objects capable of turning on the TV, such as the
TV itself or a remote control (multi-target), provides more flexible options
and aids in finding the optimized scenario. Furthermore, understanding specific
parts of these objects, like the TV's button or the remote's button
(part-level), is important for completing the action. Unfortunately, current
reasoning segmentation datasets predominantly focus on a single target
object-level reasoning, which limits the detailed recognition of an object's
parts in multi-target contexts. To address this gap, we construct a large-scale
dataset called Multi-target and Multi-granularity Reasoning (MMR). MMR
comprises 194K complex and implicit instructions that consider multi-target,
object-level, and part-level aspects, based on pre-existing image-mask sets.
This dataset supports diverse and context-aware interactions by hierarchically
providing object and part information. Moreover, we propose a straightforward
yet effective framework for multi-target, object-level, and part-level
reasoning segmentation. Experimental results on MMR show that the proposed
method can reason effectively in multi-target and multi-granularity scenarios,
while the existing reasoning segmentation model still has room for improvement.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:23:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jang",
"Donggon",
""
],
[
"Cho",
"Yucheol",
""
],
[
"Lee",
"Suin",
""
],
[
"Kim",
"Taehyeon",
""
],
[
"Kim",
"Dae-Shik",
""
]
] | TITLE: MMR: A Large-scale Benchmark Dataset for Multi-target and
Multi-granularity Reasoning Segmentation
ABSTRACT: The fusion of Large Language Models with vision models is pioneering new
possibilities in user-interactive vision-language tasks. A notable application
is reasoning segmentation, where models generate pixel-level segmentation masks
by comprehending implicit meanings in human instructions. However, seamless
human-AI interaction demands more than just object-level recognition; it
requires understanding both objects and the functions of their detailed parts,
particularly in multi-target scenarios. For example, when instructing a robot
to \textit{"turn on the TV"}, there could be various ways to accomplish this
command. Recognizing multiple objects capable of turning on the TV, such as the
TV itself or a remote control (multi-target), provides more flexible options
and aids in finding the optimized scenario. Furthermore, understanding specific
parts of these objects, like the TV's button or the remote's button
(part-level), is important for completing the action. Unfortunately, current
reasoning segmentation datasets predominantly focus on a single target
object-level reasoning, which limits the detailed recognition of an object's
parts in multi-target contexts. To address this gap, we construct a large-scale
dataset called Multi-target and Multi-granularity Reasoning (MMR). MMR
comprises 194K complex and implicit instructions that consider multi-target,
object-level, and part-level aspects, based on pre-existing image-mask sets.
This dataset supports diverse and context-aware interactions by hierarchically
providing object and part information. Moreover, we propose a straightforward
yet effective framework for multi-target, object-level, and part-level
reasoning segmentation. Experimental results on MMR show that the proposed
method can reason effectively in multi-target and multi-granularity scenarios,
while the existing reasoning segmentation model still has room for improvement.
|
2503.13895 | Xinliang Zhang | Xinliang Zhang, Lei Zhu, Shuang Zeng, Hangzhou He, Ourui Fu, Zhengjian
Yao, Zhaoheng Xie, Yanye Lu | Exploiting Inherent Class Label: Towards Robust Scribble Supervised
Semantic Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Scribble-based weakly supervised semantic segmentation leverages only a few
annotated pixels as labels to train a segmentation model, presenting
significant potential for reducing the human labor involved in the annotation
process. This approach faces two primary challenges: first, the sparsity of
scribble annotations can lead to inconsistent predictions due to limited
supervision; second, the variability in scribble annotations, reflecting
differing human annotator preferences, can prevent the model from consistently
capturing the discriminative regions of objects, potentially leading to
unstable predictions. To address these issues, we propose a holistic framework,
the class-driven scribble promotion network, for robust scribble-supervised
semantic segmentation. This framework not only utilizes the provided scribble
annotations but also leverages their associated class labels to generate
reliable pseudo-labels. Within the network, we introduce a localization
rectification module to mitigate noisy labels and a distance perception module
to identify reliable regions surrounding scribble annotations and
pseudo-labels. In addition, we introduce new large-scale benchmarks,
ScribbleCOCO and ScribbleCityscapes, accompanied by a scribble simulation
algorithm that enables evaluation across varying scribble styles. Our method
demonstrates competitive performance in both accuracy and robustness,
underscoring its superiority over existing approaches. The datasets and the
codes will be made publicly available.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:43:07 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Xinliang",
""
],
[
"Zhu",
"Lei",
""
],
[
"Zeng",
"Shuang",
""
],
[
"He",
"Hangzhou",
""
],
[
"Fu",
"Ourui",
""
],
[
"Yao",
"Zhengjian",
""
],
[
"Xie",
"Zhaoheng",
""
],
[
"Lu",
"Yanye",
""
]
] | TITLE: Exploiting Inherent Class Label: Towards Robust Scribble Supervised
Semantic Segmentation
ABSTRACT: Scribble-based weakly supervised semantic segmentation leverages only a few
annotated pixels as labels to train a segmentation model, presenting
significant potential for reducing the human labor involved in the annotation
process. This approach faces two primary challenges: first, the sparsity of
scribble annotations can lead to inconsistent predictions due to limited
supervision; second, the variability in scribble annotations, reflecting
differing human annotator preferences, can prevent the model from consistently
capturing the discriminative regions of objects, potentially leading to
unstable predictions. To address these issues, we propose a holistic framework,
the class-driven scribble promotion network, for robust scribble-supervised
semantic segmentation. This framework not only utilizes the provided scribble
annotations but also leverages their associated class labels to generate
reliable pseudo-labels. Within the network, we introduce a localization
rectification module to mitigate noisy labels and a distance perception module
to identify reliable regions surrounding scribble annotations and
pseudo-labels. In addition, we introduce new large-scale benchmarks,
ScribbleCOCO and ScribbleCityscapes, accompanied by a scribble simulation
algorithm that enables evaluation across varying scribble styles. Our method
demonstrates competitive performance in both accuracy and robustness,
underscoring its superiority over existing approaches. The datasets and the
codes will be made publicly available.
|
2503.13896 | Yi Yang | Yi Yang, Xuran Zhao, H. Charles Zhao, Shumin Yuan, Samuel M. Bateman,
Tiffany A. Huang, Chris Beall and Will Maddern | Evaluating Global Geo-alignment for Precision Learned Autonomous Vehicle
Localization using Aerial Data | 8 pages, 7 figures, accepted by International Conference on Robotics
and Automation (ICRA) 2025 | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recently there has been growing interest in the use of aerial and satellite
map data for autonomous vehicles, primarily due to its potential for
significant cost reduction and enhanced scalability. Despite the advantages,
aerial data also comes with challenges such as a sensor-modality gap and a
viewpoint difference gap. Learned localization methods have shown promise for
overcoming these challenges to provide precise metric localization for
autonomous vehicles. Most learned localization methods rely on coarsely aligned
ground truth, or implicit consistency-based methods to learn the localization
task -- however, in this paper we find that improving the alignment between
aerial data and autonomous vehicle sensor data at training time is critical to
the performance of a learning-based localization system. We compare two data
alignment methods using a factor graph framework and, using these methods, we
then evaluate the effects of closely aligned ground truth on learned
localization accuracy through ablation studies. Finally, we evaluate a learned
localization system using the data alignment methods on a comprehensive
(1600km) autonomous vehicle dataset and demonstrate localization error below
0.3m and 0.5$^{\circ}$, sufficient for autonomous vehicle applications.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:44:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Yi",
""
],
[
"Zhao",
"Xuran",
""
],
[
"Zhao",
"H. Charles",
""
],
[
"Yuan",
"Shumin",
""
],
[
"Bateman",
"Samuel M.",
""
],
[
"Huang",
"Tiffany A.",
""
],
[
"Beall",
"Chris",
""
],
[
"Maddern",
"Will",
""
]
] | TITLE: Evaluating Global Geo-alignment for Precision Learned Autonomous Vehicle
Localization using Aerial Data
ABSTRACT: Recently there has been growing interest in the use of aerial and satellite
map data for autonomous vehicles, primarily due to its potential for
significant cost reduction and enhanced scalability. Despite the advantages,
aerial data also comes with challenges such as a sensor-modality gap and a
viewpoint difference gap. Learned localization methods have shown promise for
overcoming these challenges to provide precise metric localization for
autonomous vehicles. Most learned localization methods rely on coarsely aligned
ground truth, or implicit consistency-based methods to learn the localization
task -- however, in this paper we find that improving the alignment between
aerial data and autonomous vehicle sensor data at training time is critical to
the performance of a learning-based localization system. We compare two data
alignment methods using a factor graph framework and, using these methods, we
then evaluate the effects of closely aligned ground truth on learned
localization accuracy through ablation studies. Finally, we evaluate a learned
localization system using the data alignment methods on a comprehensive
(1600km) autonomous vehicle dataset and demonstrate localization error below
0.3m and 0.5$^{\circ}$, sufficient for autonomous vehicle applications.
|
2503.13899 | Sarah Liaw | Sarah Liaw, Rebecca Morrison, Youssef Marzouk, Ricardo Baptista | Learning local neighborhoods of non-Gaussian graphical models: A measure
transport approach | Accepted in AAAI 2025: 23 pages, 9 figures | null | null | null | cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying the Markov properties or conditional independencies of a
collection of random variables is a fundamental task in statistics for modeling
and inference. Existing approaches often learn the structure of a probabilistic
graphical model, which encodes these dependencies, by assuming that the
variables follow a distribution with a simple parametric form. Moreover, the
computational cost of many algorithms scales poorly for high-dimensional
distributions, as they need to estimate all the edges in the graph
simultaneously. In this work, we propose a scalable algorithm to infer the
conditional independence relationships of each variable by exploiting the local
Markov property. The proposed method, named Localized Sparsity Identification
for Non-Gaussian Distributions (L-SING), estimates the graph by using flexible
classes of transport maps to represent the conditional distribution for each
variable. We show that L-SING includes existing approaches, such as
neighborhood selection with Lasso, as a special case. We demonstrate the
effectiveness of our algorithm in both Gaussian and non-Gaussian settings by
comparing it to existing methods. Lastly, we show the scalability of the
proposed approach by applying it to high-dimensional non-Gaussian examples,
including a biological dataset with more than 150 variables.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 04:53:22 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liaw",
"Sarah",
""
],
[
"Morrison",
"Rebecca",
""
],
[
"Marzouk",
"Youssef",
""
],
[
"Baptista",
"Ricardo",
""
]
] | TITLE: Learning local neighborhoods of non-Gaussian graphical models: A measure
transport approach
ABSTRACT: Identifying the Markov properties or conditional independencies of a
collection of random variables is a fundamental task in statistics for modeling
and inference. Existing approaches often learn the structure of a probabilistic
graphical model, which encodes these dependencies, by assuming that the
variables follow a distribution with a simple parametric form. Moreover, the
computational cost of many algorithms scales poorly for high-dimensional
distributions, as they need to estimate all the edges in the graph
simultaneously. In this work, we propose a scalable algorithm to infer the
conditional independence relationships of each variable by exploiting the local
Markov property. The proposed method, named Localized Sparsity Identification
for Non-Gaussian Distributions (L-SING), estimates the graph by using flexible
classes of transport maps to represent the conditional distribution for each
variable. We show that L-SING includes existing approaches, such as
neighborhood selection with Lasso, as a special case. We demonstrate the
effectiveness of our algorithm in both Gaussian and non-Gaussian settings by
comparing it to existing methods. Lastly, we show the scalability of the
proposed approach by applying it to high-dimensional non-Gaussian examples,
including a biological dataset with more than 150 variables.
|
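The L-SING record above states that neighborhood selection with Lasso is included as a special case of its transport-map approach. The sketch below is only that classical Gaussian baseline, not L-SING itself: each variable is regressed on the others with a Lasso, and the nonzero coefficients are read as estimated neighbors. The toy data, regularization strength, and threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_neighborhoods(X, alpha=0.05, tol=1e-6):
    """Meinshausen-Buhlmann style neighborhood selection.

    X: (n_samples, p) data matrix. For each variable j, fit a Lasso of X[:, j]
    on the remaining columns and keep predictors with non-negligible weight.
    Returns a dict {j: set of estimated neighbors of j}.
    """
    n, p = X.shape
    neighbors = {}
    for j in range(p):
        others = [k for k in range(p) if k != j]
        model = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        neighbors[j] = {others[i] for i, w in enumerate(model.coef_) if abs(w) > tol}
    return neighbors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy chain graph: x0 -- x1 -- x2, plus an independent x3.
    n = 2000
    x0 = rng.normal(size=n)
    x1 = 0.8 * x0 + rng.normal(scale=0.5, size=n)
    x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
    x3 = rng.normal(size=n)
    X = np.column_stack([x0, x1, x2, x3])
    print(lasso_neighborhoods(X))
```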
2503.13903 | Qiang Qi | Qiang Qi, Xiao Wang | TGBFormer: Transformer-GraphFormer Blender Network for Video Object
Detection | Accepted by AAAI2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video object detection has made significant progress in recent years thanks
to convolutional neural networks (CNNs) and vision transformers (ViTs).
Typically, CNNs excel at capturing local features but struggle to model global
representations. Conversely, ViTs are adept at capturing long-range global
features but face challenges in representing local feature details.
Off-the-shelf video object detection methods solely rely on CNNs or ViTs to
conduct feature aggregation, which hampers their capability to simultaneously
leverage global and local information, thereby resulting in limited detection
performance. In this paper, we propose a Transformer-GraphFormer Blender
Network (TGBFormer) for video object detection, with three key technical
improvements to fully exploit the advantages of transformers and graph
convolutional networks while compensating for their limitations. First, we
develop a spatial-temporal transformer module to aggregate global contextual
information, constituting global representations with long-range feature
dependencies. Second, we introduce a spatial-temporal GraphFormer module that
utilizes local spatial and temporal relationships to aggregate features,
generating new local representations that are complementary to the transformer
outputs. Third, we design a global-local feature blender module to adaptively
couple transformer-based global representations and GraphFormer-based local
representations. Extensive experiments demonstrate that our TGBFormer
establishes new state-of-the-art results on the ImageNet VID dataset.
Particularly, our TGBFormer achieves 86.5% mAP while running at around 41.0 FPS
on a single Tesla A100 GPU.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:03:05 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Qi",
"Qiang",
""
],
[
"Wang",
"Xiao",
""
]
] | TITLE: TGBFormer: Transformer-GraphFormer Blender Network for Video Object
Detection
ABSTRACT: Video object detection has made significant progress in recent years thanks
to convolutional neural networks (CNNs) and vision transformers (ViTs).
Typically, CNNs excel at capturing local features but struggle to model global
representations. Conversely, ViTs are adept at capturing long-range global
features but face challenges in representing local feature details.
Off-the-shelf video object detection methods solely rely on CNNs or ViTs to
conduct feature aggregation, which hampers their capability to simultaneously
leverage global and local information, thereby resulting in limited detection
performance. In this paper, we propose a Transformer-GraphFormer Blender
Network (TGBFormer) for video object detection, with three key technical
improvements to fully exploit the advantages of transformers and graph
convolutional networks while compensating for their limitations. First, we
develop a spatial-temporal transformer module to aggregate global contextual
information, constituting global representations with long-range feature
dependencies. Second, we introduce a spatial-temporal GraphFormer module that
utilizes local spatial and temporal relationships to aggregate features,
generating new local representations that are complementary to the transformer
outputs. Third, we design a global-local feature blender module to adaptively
couple transformer-based global representations and GraphFormer-based local
representations. Extensive experiments demonstrate that our TGBFormer
establishes new state-of-the-art results on the ImageNet VID dataset.
Particularly, our TGBFormer achieves 86.5% mAP while running at around 41.0 FPS
on a single Tesla A100 GPU.
|
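TGBFormer's abstract describes a global-local feature blender that "adaptively couples" transformer-based global features with GraphFormer-based local features, without specifying the design. The module below is a hypothetical gated blend, included only to make the idea of adaptive coupling concrete; it should not be read as the paper's actual layer.

```python
import torch
import torch.nn as nn

class GatedFeatureBlender(nn.Module):
    """Hypothetical adaptive blend of global and local feature maps.

    A per-position gate g in (0, 1) is predicted from the concatenated
    features, and the output is g * global + (1 - g) * local.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([global_feat, local_feat], dim=1))
        return g * global_feat + (1.0 - g) * local_feat

if __name__ == "__main__":
    blender = GatedFeatureBlender(channels=256)
    f_global = torch.randn(2, 256, 32, 32)
    f_local = torch.randn(2, 256, 32, 32)
    print(blender(f_global, f_local).shape)  # torch.Size([2, 256, 32, 32])
```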
2503.13906 | Yuhao Qiu | Yuhao Qiu, Shuyan Bai, Tingfa Xu, Peifu Liu, Haolin Qin, Jianan Li | HSOD-BIT-V2: A New Challenging Benchmark for Hyperspectral Salient Object
Detection | AAAI 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Salient Object Detection (SOD) is crucial in computer vision, yet RGB-based
methods face limitations in challenging scenes, such as small objects and
similar color features. Hyperspectral images provide a promising solution for
more accurate Hyperspectral Salient Object Detection (HSOD) by abundant
spectral information, while HSOD methods are hindered by the lack of extensive
and available datasets. In this context, we introduce HSOD-BIT-V2, the largest
and most challenging HSOD benchmark dataset to date. Five distinct challenges
focusing on small objects and foreground-background similarity are designed to
emphasize spectral advantages and real-world complexity. To tackle these
challenges, we propose Hyper-HRNet, a high-resolution HSOD network. Hyper-HRNet
effectively extracts, integrates, and preserves effective spectral information
while reducing dimensionality by capturing the self-similar spectral features.
Additionally, it conveys fine details and precisely locates object contours by
incorporating comprehensive global information and detailed object saliency
representations. Experimental analysis demonstrates that Hyper-HRNet
outperforms existing models, especially in challenging scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:09:42 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Qiu",
"Yuhao",
""
],
[
"Bai",
"Shuyan",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Liu",
"Peifu",
""
],
[
"Qin",
"Haolin",
""
],
[
"Li",
"Jianan",
""
]
] | TITLE: HSOD-BIT-V2: A New Challenging Benchmark for Hyperspectral Salient Object
Detection
ABSTRACT: Salient Object Detection (SOD) is crucial in computer vision, yet RGB-based
methods face limitations in challenging scenes, such as small objects and
similar color features. Hyperspectral images provide a promising solution for
more accurate Hyperspectral Salient Object Detection (HSOD) by abundant
spectral information, while HSOD methods are hindered by the lack of extensive
and available datasets. In this context, we introduce HSOD-BIT-V2, the largest
and most challenging HSOD benchmark dataset to date. Five distinct challenges
focusing on small objects and foreground-background similarity are designed to
emphasize spectral advantages and real-world complexity. To tackle these
challenges, we propose Hyper-HRNet, a high-resolution HSOD network. Hyper-HRNet
effectively extracts, integrates, and preserves effective spectral information
while reducing dimensionality by capturing the self-similar spectral features.
Additionally, it conveys fine details and precisely locates object contours by
incorporating comprehensive global information and detailed object saliency
representations. Experimental analysis demonstrates that Hyper-HRNet
outperforms existing models, especially in challenging scenarios.
|
2503.13909 | Pavia Bera | Pavia Bera and Sanjukta Bhanja | Quantification of Uncertainties in Probabilistic Deep Neural Network by
Implementing Boosting of Variational Inference | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Modern neural network architectures have achieved remarkable accuracies but
remain highly dependent on their training data, often lacking interpretability
in their learned mappings. While effective on large datasets, they tend to
overfit on smaller ones. Probabilistic neural networks, such as those utilizing
variational inference, address this limitation by incorporating uncertainty
estimation through weight distributions rather than point estimates. However,
standard variational inference often relies on a single-density approximation,
which can lead to poor posterior estimates and hinder model performance. We
propose Boosted Bayesian Neural Networks (BBNN), a novel approach that enhances
neural network weight distribution approximations using Boosting Variational
Inference (BVI). By iteratively constructing a mixture of densities, BVI
expands the approximating family, enabling a more expressive posterior that
leads to improved generalization and uncertainty estimation. While this
approach increases computational complexity, it significantly enhances accuracy,
an essential tradeoff, particularly in high-stakes applications such as medical
diagnostics, where false negatives can have severe consequences. Our
experimental results demonstrate that BBNN achieves ~5% higher accuracy
compared to conventional neural networks while providing superior uncertainty
quantification. This improvement highlights the effectiveness of leveraging a
mixture-based variational family to better approximate the posterior
distribution, ultimately advancing probabilistic deep learning.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:11:21 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bera",
"Pavia",
""
],
[
"Bhanja",
"Sanjukta",
""
]
] | TITLE: Quantification of Uncertainties in Probabilistic Deep Neural Network by
Implementing Boosting of Variational Inference
ABSTRACT: Modern neural network architectures have achieved remarkable accuracies but
remain highly dependent on their training data, often lacking interpretability
in their learned mappings. While effective on large datasets, they tend to
overfit on smaller ones. Probabilistic neural networks, such as those utilizing
variational inference, address this limitation by incorporating uncertainty
estimation through weight distributions rather than point estimates. However,
standard variational inference often relies on a single-density approximation,
which can lead to poor posterior estimates and hinder model performance. We
propose Boosted Bayesian Neural Networks (BBNN), a novel approach that enhances
neural network weight distribution approximations using Boosting Variational
Inference (BVI). By iteratively constructing a mixture of densities, BVI
expands the approximating family, enabling a more expressive posterior that
leads to improved generalization and uncertainty estimation. While this
approach increases computational complexity, it significantly enhances accuracy,
an essential tradeoff, particularly in high-stakes applications such as medical
diagnostics, where false negatives can have severe consequences. Our
experimental results demonstrate that BBNN achieves ~5% higher accuracy
compared to conventional neural networks while providing superior uncertainty
quantification. This improvement highlights the effectiveness of leveraging a
mixture-based variational family to better approximate the posterior
distribution, ultimately advancing probabilistic deep learning.
|
2503.13912 | Ravi Kolla | Eshan Mehendale, Abhinav Thorat, Ravi Kolla, Niranjan Pedanekar | KANITE: Kolmogorov-Arnold Networks for ITE estimation | 16 pages, 4 figures | null | null | null | cs.LG cs.AI stat.ME | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce KANITE, a framework leveraging Kolmogorov-Arnold Networks (KANs)
for Individual Treatment Effect (ITE) estimation under a multiple-treatments
setting in causal inference. By utilizing KANs' unique ability to learn
univariate activation functions, as opposed to the linear weights learned by
Multi-Layer Perceptrons (MLPs), we improve the estimates of ITEs. The KANITE
framework comprises two key architectures: 1. Integral Probability Metric (IPM)
architecture: This employs an IPM loss in a specialized manner to effectively
align towards ITE estimation across multiple treatments. 2. Entropy Balancing
(EB) architecture: This uses weights for samples that are learned by optimizing
entropy subject to balancing the covariates across treatment groups. Extensive
evaluations on benchmark datasets demonstrate that KANITE outperforms
state-of-the-art algorithms in both $\epsilon_{\text{PEHE}}$ and
$\epsilon_{\text{ATE}}$ metrics. Our experiments highlight the advantages of
KANITE in achieving improved causal estimates, emphasizing the potential of
KANs to advance causal inference methodologies across diverse application
areas.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:16:36 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Mehendale",
"Eshan",
""
],
[
"Thorat",
"Abhinav",
""
],
[
"Kolla",
"Ravi",
""
],
[
"Pedanekar",
"Niranjan",
""
]
] | TITLE: KANITE: Kolmogorov-Arnold Networks for ITE estimation
ABSTRACT: We introduce KANITE, a framework leveraging Kolmogorov-Arnold Networks (KANs)
for Individual Treatment Effect (ITE) estimation under a multiple-treatments
setting in causal inference. By utilizing KANs' unique ability to learn
univariate activation functions, as opposed to the linear weights learned by
Multi-Layer Perceptrons (MLPs), we improve the estimates of ITEs. The KANITE
framework comprises two key architectures: 1. Integral Probability Metric (IPM)
architecture: This employs an IPM loss in a specialized manner to effectively
align towards ITE estimation across multiple treatments. 2. Entropy Balancing
(EB) architecture: This uses weights for samples that are learned by optimizing
entropy subject to balancing the covariates across treatment groups. Extensive
evaluations on benchmark datasets demonstrate that KANITE outperforms
state-of-the-art algorithms in both $\epsilon_{\text{PEHE}}$ and
$\epsilon_{\text{ATE}}$ metrics. Our experiments highlight the advantages of
KANITE in achieving improved causal estimates, emphasizing the potential of
KANs to advance causal inference methodologies across diverse application
areas.
|
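KANITE's IPM architecture applies an Integral Probability Metric loss to align representations across treatment groups, but the abstract does not say which IPM is used. The sketch below uses a (biased) RBF-kernel Maximum Mean Discrepancy, one common member of the IPM family in representation-balancing work, purely as an assumed illustration of such a balance penalty.

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)
    under an RBF kernel. MMD is one example of an Integral Probability Metric."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Hypothetical representations of units under treatment 0 and treatment 1.
    phi_t0 = torch.randn(128, 16)
    phi_t1 = torch.randn(96, 16) + 0.5    # shifted distribution -> positive MMD
    balance_penalty = rbf_mmd2(phi_t0, phi_t1)
    print(float(balance_penalty))          # could be added to a factual-outcome loss
```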
2503.13914 | Barza Nisar | Barza Nisar and Steven L. Waslander | PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point
Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Self-supervised learning (SSL) on 3D point clouds has the potential to learn
feature representations that can transfer to diverse sensors and multiple
downstream perception tasks. However, recent SSL approaches fail to define
pretext tasks that retain geometric information such as object pose and scale,
which can be detrimental to the performance of downstream localization and
geometry-sensitive 3D scene understanding tasks, such as 3D semantic
segmentation and 3D object detection. We propose PSA-SSL, a novel extension to
point cloud SSL that learns object pose and size-aware (PSA) features. Our
approach defines a self-supervised bounding box regression pretext task, which
retains object pose and size information. Furthermore, we incorporate LiDAR
beam pattern augmentation on input point clouds, which encourages learning
sensor-agnostic features. Our experiments demonstrate that with a single
pretrained model, our light-weight yet effective extensions achieve significant
improvements on 3D semantic segmentation with limited labels across popular
autonomous driving datasets (Waymo, nuScenes, SemanticKITTI). Moreover, our
approach outperforms other state-of-the-art SSL methods on 3D semantic
segmentation (using up to 10 times fewer labels), as well as on 3D object
detection. Our code will be released on https://github.com/TRAILab/PSA-SSL.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:17:06 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Nisar",
"Barza",
""
],
[
"Waslander",
"Steven L.",
""
]
] | TITLE: PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point
Clouds
ABSTRACT: Self-supervised learning (SSL) on 3D point clouds has the potential to learn
feature representations that can transfer to diverse sensors and multiple
downstream perception tasks. However, recent SSL approaches fail to define
pretext tasks that retain geometric information such as object pose and scale,
which can be detrimental to the performance of downstream localization and
geometry-sensitive 3D scene understanding tasks, such as 3D semantic
segmentation and 3D object detection. We propose PSA-SSL, a novel extension to
point cloud SSL that learns object pose and size-aware (PSA) features. Our
approach defines a self-supervised bounding box regression pretext task, which
retains object pose and size information. Furthermore, we incorporate LiDAR
beam pattern augmentation on input point clouds, which encourages learning
sensor-agnostic features. Our experiments demonstrate that with a single
pretrained model, our light-weight yet effective extensions achieve significant
improvements on 3D semantic segmentation with limited labels across popular
autonomous driving datasets (Waymo, nuScenes, SemanticKITTI). Moreover, our
approach outperforms other state-of-the-art SSL methods on 3D semantic
segmentation (using up to 10 times fewer labels), as well as on 3D object
detection. Our code will be released on https://github.com/TRAILab/PSA-SSL.
|
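PSA-SSL augments input point clouds with LiDAR beam pattern augmentation to encourage sensor-agnostic features. The exact augmentation is not given in the abstract; the function below is one assumed variant that keeps only a subset of beams (ring indices) to mimic a lower-resolution sensor.

```python
import numpy as np

def subsample_beams(points: np.ndarray, ring: np.ndarray, keep_every: int = 2,
                    offset: int = 0) -> np.ndarray:
    """Keep points from every `keep_every`-th LiDAR beam (ring index).

    points: (N, 3+) point cloud, ring: (N,) integer beam index per point.
    Mimics, e.g., turning a 64-beam scan into a 32-beam one.
    """
    mask = (ring % keep_every) == offset
    return points[mask]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(100_000, 4))                # x, y, z, intensity
    rings = rng.integers(0, 64, size=100_000)          # assumed 64-beam sensor
    aug = subsample_beams(pts, rings, keep_every=2, offset=1)
    print(pts.shape, "->", aug.shape)
```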
2503.13917 | Yujia Tong | Yujia Tong, Yuze Wang, Jingling Yuan, Chuang Hu | Robust Machine Unlearning for Quantized Neural Networks via Adaptive
Gradient Reweighting with Similar Labels | 15 pages, 4 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model quantization enables efficient deployment of deep neural networks on
edge devices through low-bit parameter representation, yet raises critical
challenges for implementing machine unlearning (MU) under data privacy
regulations. Existing MU methods designed for full-precision models fail to
address two fundamental limitations in quantized networks: 1) Noise
amplification from label mismatch during data processing, and 2) Gradient
imbalance between forgotten and retained data during training. These issues are
exacerbated by quantized models' constrained parameter space and discrete
optimization. We propose Q-MUL, the first dedicated unlearning framework for
quantized models. Our method introduces two key innovations: 1) Similar Labels
assignment replaces random labels with semantically consistent alternatives to
minimize noise injection, and 2) Adaptive Gradient Reweighting dynamically
aligns parameter update contributions from forgotten and retained data. Through
systematic analysis of quantized model vulnerabilities, we establish
theoretical foundations for these mechanisms. Extensive evaluations on
benchmark datasets demonstrate Q-MUL's superiority over existing approaches.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:22:13 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Tong",
"Yujia",
""
],
[
"Wang",
"Yuze",
""
],
[
"Yuan",
"Jingling",
""
],
[
"Hu",
"Chuang",
""
]
] | TITLE: Robust Machine Unlearning for Quantized Neural Networks via Adaptive
Gradient Reweighting with Similar Labels
ABSTRACT: Model quantization enables efficient deployment of deep neural networks on
edge devices through low-bit parameter representation, yet raises critical
challenges for implementing machine unlearning (MU) under data privacy
regulations. Existing MU methods designed for full-precision models fail to
address two fundamental limitations in quantized networks: 1) Noise
amplification from label mismatch during data processing, and 2) Gradient
imbalance between forgotten and retained data during training. These issues are
exacerbated by quantized models' constrained parameter space and discrete
optimization. We propose Q-MUL, the first dedicated unlearning framework for
quantized models. Our method introduces two key innovations: 1) Similar Labels
assignment replaces random labels with semantically consistent alternatives to
minimize noise injection, and 2) Adaptive Gradient Reweighting dynamically
aligns parameter update contributions from forgotten and retained data. Through
systematic analysis of quantized model vulnerabilities, we establish
theoretical foundations for these mechanisms. Extensive evaluations on
benchmark datasets demonstrate Q-MUL's superiority over existing approaches.
|
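Q-MUL's Adaptive Gradient Reweighting "dynamically aligns parameter update contributions from forgotten and retained data", but the precise rule is not in the abstract. The training-step sketch below is a hypothetical variant that rescales the two loss terms by the inverse of their current gradient norms so neither set dominates the update; the model, data, and weighting rule are all assumptions for illustration.

```python
import torch

def reweighted_unlearning_step(model, opt, loss_fn, batch_forget, batch_retain, eps=1e-8):
    """One hypothetical unlearning step balancing forget/retain gradients.

    Computes each loss's gradient norm, then reweights the combined loss so
    both terms contribute with comparable magnitude before the optimizer step.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    def grad_norm(loss):
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    create_graph=False, allow_unused=True)
        return torch.sqrt(sum((g ** 2).sum() for g in grads if g is not None) + eps)

    xf, yf = batch_forget                 # data to forget (e.g., with reassigned labels)
    xr, yr = batch_retain                 # data whose behavior must be preserved
    loss_f = loss_fn(model(xf), yf)
    loss_r = loss_fn(model(xr), yr)
    w_f = 1.0 / grad_norm(loss_f)         # constants w.r.t. the backward pass
    w_r = 1.0 / grad_norm(loss_r)
    total = (w_f * loss_f + w_r * loss_r) / (w_f + w_r)

    opt.zero_grad()
    total.backward()
    opt.step()
    return float(total)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Linear(10, 5)        # stand-in for a quantized network
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    ce = torch.nn.CrossEntropyLoss()
    bf = (torch.randn(8, 10), torch.randint(0, 5, (8,)))
    br = (torch.randn(32, 10), torch.randint(0, 5, (32,)))
    print(reweighted_unlearning_step(model, opt, ce, bf, br))
```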
2503.13921 | Cheng Zhen | Cheng Zhen, Nischal Aryal, Arash Termehchy, Prayoga, Garrett Biwer,
Sankalp Patil | Learning Accurate Models on Incomplete Data with Minimal Imputation | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Missing data often exists in real-world datasets, requiring significant time
and effort for imputation to learn accurate machine learning (ML) models. In
this paper, we demonstrate that imputing all missing values is not always
necessary to achieve an accurate ML model. We introduce the concept of minimal
data imputation, which ensures accurate ML models trained over the imputed
dataset. Implementing minimal imputation guarantees both minimal imputation
effort and optimal ML models. We propose algorithms to find exact and
approximate minimal imputation for various ML models. Our extensive experiments
indicate that our proposed algorithms significantly reduce the time and effort
required for data imputation.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:36:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhen",
"Cheng",
""
],
[
"Aryal",
"Nischal",
""
],
[
"Termehchy",
"Arash",
""
],
[
"Prayoga",
"",
""
],
[
"Biwer",
"Garrett",
""
],
[
"Patil",
"Sankalp",
""
]
] | TITLE: Learning Accurate Models on Incomplete Data with Minimal Imputation
ABSTRACT: Missing data often exists in real-world datasets, requiring significant time
and effort for imputation to learn accurate machine learning (ML) models. In
this paper, we demonstrate that imputing all missing values is not always
necessary to achieve an accurate ML model. We introduce the concept of minimal
data imputation, which ensures accurate ML models trained over the imputed
dataset. Implementing minimal imputation guarantees both minimal imputation
effort and optimal ML models. We propose algorithms to find exact and
approximate minimal imputation for various ML models. Our extensive experiments
indicate that our proposed algorithms significantly reduce the time and effort
required for data imputation.
|
2503.13928 | Santanu Roy Dr | Santanu Roy, Ashvath Suresh, Archit Gupta, Shubhi Tiwari, Palak Sahu,
Prashant Adhikari, Yuvraj S. Shekhawat | Fibonacci-Net: A Lightweight CNN model for Automatic Brain Tumor
Classification | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This research proposes a very lightweight model "Fibonacci-Net" along with a
novel pooling technique, for automatic brain tumor classification from
imbalanced Magnetic Resonance Imaging (MRI) datasets. Automatic brain tumor
detection from MRI dataset has garnered significant attention in the research
community, since the inception of Convolutional Neural Network (CNN) models.
However, the performance of conventional CNN models is hindered due to class
imbalance problems. The novelties of this work are as follows: (I) A
lightweight CNN model is proposed in which the number of filters in different
convolutional layers is chosen according to the numbers of the Fibonacci series.
(II) In the last two blocks of the proposed model, depth-wise separable
convolution (DWSC) layers are employed to considerably reduce the computational
complexity of the model. (III) Two parallel concatenations (or, skip
connections) are deployed from 2nd to 4th, and 3rd to 5th convolutional block
in the proposed Fibonacci-Net. This skip connection encompasses a novel
Average-2Max pooling layer that produces two stacks of convolved output
with slightly different statistics. Therefore, this parallel concatenation block
works as an efficient feature augmenter inside the model, thus automatically
alleviating the class imbalance problem to a certain extent. For validation
purposes, we implemented the proposed framework on three MRI datasets which
are highly class-imbalanced. (a) The first dataset has four classes, i.e.,
glioma tumor, meningioma tumor, pituitary tumor, and no-tumor. (b) Second and
third MRI datasets have 15 and 44 classes respectively. Experimental results
reveal that, after employing the proposed Fibonacci-Net we have achieved 96.2%
accuracy, 97.17% precision, 95.9% recall, 96.5% F1 score, and 99.9% specificity
on the most challenging ``44-classes MRI dataset''.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 05:47:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Roy",
"Santanu",
""
],
[
"Suresh",
"Ashvath",
""
],
[
"Gupta",
"Archit",
""
],
[
"Tiwari",
"Shubhi",
""
],
[
"Sahu",
"Palak",
""
],
[
"Adhikari",
"Prashant",
""
],
[
"Shekhawat",
"Yuvraj S.",
""
]
] | TITLE: Fibonacci-Net: A Lightweight CNN model for Automatic Brain Tumor
Classification
ABSTRACT: This research proposes a very lightweight model "Fibonacci-Net" along with a
novel pooling technique, for automatic brain tumor classification from
imbalanced Magnetic Resonance Imaging (MRI) datasets. Automatic brain tumor
detection from MRI dataset has garnered significant attention in the research
community, since the inception of Convolutional Neural Network (CNN) models.
However, the performance of conventional CNN models is hindered due to class
imbalance problems. The novelties of this work are as follows: (I) A
lightweight CNN model is proposed in which the number of filters in different
convolutional layers is chosen according to the numbers of the Fibonacci series.
(II) In the last two blocks of the proposed model, depth-wise separable
convolution (DWSC) layers are employed to considerably reduce the computational
complexity of the model. (III) Two parallel concatenations (or, skip
connections) are deployed from 2nd to 4th, and 3rd to 5th convolutional block
in the proposed Fibonacci-Net. This skip connection encompasses a novel
Average-2Max pooling layer that produces two stacks of convolved output
with slightly different statistics. Therefore, this parallel concatenation block
works as an efficient feature augmenter inside the model, thus automatically
alleviating the class imbalance problem to a certain extent. For validation
purposes, we implemented the proposed framework on three MRI datasets which
are highly class-imbalanced. (a) The first dataset has four classes, i.e.,
glioma tumor, meningioma tumor, pituitary tumor, and no-tumor. (b) Second and
third MRI datasets have 15 and 44 classes respectively. Experimental results
reveal that, after employing the proposed Fibonacci-Net we have achieved 96.2%
accuracy, 97.17% precision, 95.9% recall, 96.5% F1 score, and 99.9% specificity
on the most challenging ``44-classes MRI dataset''.
|
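The Fibonacci-Net record specifies filter counts that follow the Fibonacci series and an "Average-2Max" pooling that yields two differently pooled stacks, without giving layer definitions. The sketch below is an assumed reading: a toy classifier whose block widths are Fibonacci numbers, plus a parallel average/max pooling pair whose outputs are concatenated. It is illustrative only, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Average2MaxPool(nn.Module):
    """Assumed reading of the 'Average-2Max' idea: pool the same features with
    average and max pooling and concatenate the two stacks along channels."""
    def __init__(self, kernel_size: int = 2):
        super().__init__()
        self.avg = nn.AvgPool2d(kernel_size)
        self.max = nn.MaxPool2d(kernel_size)

    def forward(self, x):
        return torch.cat([self.avg(x), self.max(x)], dim=1)

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
                         nn.MaxPool2d(2))

class TinyFibonacciNet(nn.Module):
    """Toy classifier whose per-block filter counts follow the Fibonacci series."""
    def __init__(self, num_classes: int = 4, in_ch: int = 1):
        super().__init__()
        fib = [8, 13, 21, 34, 55]                      # assumed Fibonacci widths
        chans = [in_ch] + fib
        self.blocks = nn.Sequential(*[conv_block(chans[i], chans[i + 1])
                                      for i in range(len(fib))])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(fib[-1], num_classes))

    def forward(self, x):
        return self.head(self.blocks(x))

if __name__ == "__main__":
    model = TinyFibonacciNet(num_classes=4)
    print(model(torch.randn(2, 1, 128, 128)).shape)    # torch.Size([2, 4])
    # The parallel pooling doubles the channel count while halving resolution.
    pool = Average2MaxPool()
    print(pool(torch.randn(1, 8, 32, 32)).shape)       # torch.Size([1, 16, 16, 16])
```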
2503.13935 | Bowen Yuan | Bowen Yuan, Yuxia Fu, Zijian Wang, Yadan Luo, Zi Huang | SCORE: Soft Label Compression-Centric Dataset Condensation via Coding
Rate Optimization | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dataset Condensation (DC) aims to obtain a condensed dataset that allows
models trained on the condensed dataset to achieve performance comparable to
those trained on the full dataset. Recent DC approaches increasingly focus on
encoding knowledge into realistic images with soft labeling, for their
scalability to ImageNet-scale datasets and strong capability of cross-domain
generalization. However, this strong performance comes at a substantial storage
cost which could significantly exceed the storage cost of the original dataset.
We argue that the three key properties to alleviate this performance-storage
dilemma are informativeness, discriminativeness, and compressibility of the
condensed data. Towards this end, this paper proposes a \textbf{S}oft label
compression-centric dataset condensation framework using \textbf{CO}ding
\textbf{R}at\textbf{E} (SCORE). SCORE formulates dataset condensation as a
min-max optimization problem, which aims to balance the three key properties
from an information-theoretic perspective. In particular, we theoretically
demonstrate that our coding rate-inspired objective function is submodular, and
its optimization naturally enforces low-rank structure in the soft label set
corresponding to each condensed data. Extensive experiments on large-scale
datasets, including ImageNet-1K and Tiny-ImageNet, demonstrate that SCORE
outperforms existing methods in most cases. Even with 30$\times$ compression of
soft labels, performance decreases by only 5.5\% and 2.7\% for ImageNet-1K with
IPC 10 and 50, respectively. Code will be released upon paper acceptance.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:04:44 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yuan",
"Bowen",
""
],
[
"Fu",
"Yuxia",
""
],
[
"Wang",
"Zijian",
""
],
[
"Luo",
"Yadan",
""
],
[
"Huang",
"Zi",
""
]
] | TITLE: SCORE: Soft Label Compression-Centric Dataset Condensation via Coding
Rate Optimization
ABSTRACT: Dataset Condensation (DC) aims to obtain a condensed dataset that allows
models trained on the condensed dataset to achieve performance comparable to
those trained on the full dataset. Recent DC approaches increasingly focus on
encoding knowledge into realistic images with soft labeling, for their
scalability to ImageNet-scale datasets and strong capability of cross-domain
generalization. However, this strong performance comes at a substantial storage
cost which could significantly exceed the storage cost of the original dataset.
We argue that the three key properties to alleviate this performance-storage
dilemma are informativeness, discriminativeness, and compressibility of the
condensed data. Towards this end, this paper proposes a \textbf{S}oft label
compression-centric dataset condensation framework using \textbf{CO}ding
\textbf{R}at\textbf{E} (SCORE). SCORE formulates dataset condensation as a
min-max optimization problem, which aims to balance the three key properties
from an information-theoretic perspective. In particular, we theoretically
demonstrate that our coding rate-inspired objective function is submodular, and
its optimization naturally enforces low-rank structure in the soft label set
corresponding to each condensed data. Extensive experiments on large-scale
datasets, including ImageNet-1K and Tiny-ImageNet, demonstrate that SCORE
outperforms existing methods in most cases. Even with 30$\times$ compression of
soft labels, performance decreases by only 5.5\% and 2.7\% for ImageNet-1K with
IPC 10 and 50, respectively. Code will be released upon paper acceptance.
|
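SCORE's objective is described as coding rate-inspired. A standard coding-rate quantity from that line of work is R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T), whose value drops as the vectors become lower-rank and hence more compressible. The function below computes this reference quantity only; SCORE's exact min-max objective may differ.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """Coding rate R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z @ Z.T).

    Z: (d, n) matrix whose columns are vectors (e.g., soft labels or features).
    This is the rate-distortion style quantity used in coding-rate objectives.
    """
    d, n = Z.shape
    scale = d / (n * eps ** 2)
    _, logdet = np.linalg.slogdet(np.eye(d) + scale * Z @ Z.T)
    return 0.5 * logdet

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    full_rank = rng.normal(size=(32, 500))
    low_rank = rng.normal(size=(32, 3)) @ rng.normal(size=(3, 500))  # rank 3
    # Low-rank (more compressible) label sets have a smaller coding rate.
    print(coding_rate(full_rank), coding_rate(low_rank))
```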
2503.13940 | Hang Zhao | Hang Zhao, Hongru Li, Dongfang Xu, Shenghui Song, and Khaled B.
Letaief | Multi-Modal Self-Supervised Semantic Communication | null | null | null | null | cs.CV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic communication is emerging as a promising paradigm that focuses on
the extraction and transmission of semantic meanings using deep learning
techniques. While current research primarily addresses the reduction of
semantic communication overhead, it often overlooks the training phase, which
can incur significant communication costs in dynamic wireless environments. To
address this challenge, we propose a multi-modal semantic communication system
that leverages multi-modal self-supervised learning to enhance task-agnostic
feature extraction. The proposed approach employs self-supervised learning
during the pre-training phase to extract task-agnostic semantic features,
followed by supervised fine-tuning for downstream tasks. This dual-phase
strategy effectively captures both modality-invariant and modality-specific
features while minimizing training-related communication overhead. Experimental
results on the NYU Depth V2 dataset demonstrate that the proposed method
significantly reduces training-related communication overhead while maintaining
or exceeding the performance of existing supervised learning approaches. The
findings underscore the advantages of multi-modal self-supervised learning in
semantic communication, paving the way for more efficient and scalable edge
inference systems.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:13:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhao",
"Hang",
""
],
[
"Li",
"Hongru",
""
],
[
"Xu",
"Dongfang",
""
],
[
"Song",
"Shenghui",
""
],
[
"Letaief",
"Khaled B.",
""
]
] | TITLE: Multi-Modal Self-Supervised Semantic Communication
ABSTRACT: Semantic communication is emerging as a promising paradigm that focuses on
the extraction and transmission of semantic meanings using deep learning
techniques. While current research primarily addresses the reduction of
semantic communication overhead, it often overlooks the training phase, which
can incur significant communication costs in dynamic wireless environments. To
address this challenge, we propose a multi-modal semantic communication system
that leverages multi-modal self-supervised learning to enhance task-agnostic
feature extraction. The proposed approach employs self-supervised learning
during the pre-training phase to extract task-agnostic semantic features,
followed by supervised fine-tuning for downstream tasks. This dual-phase
strategy effectively captures both modality-invariant and modality-specific
features while minimizing training-related communication overhead. Experimental
results on the NYU Depth V2 dataset demonstrate that the proposed method
significantly reduces training-related communication overhead while maintaining
or exceeding the performance of existing supervised learning approaches. The
findings underscore the advantages of multi-modal self-supervised learning in
semantic communication, paving the way for more efficient and scalable edge
inference systems.
|
2503.13945 | Long Tang | Long Tang, Dengpan Ye, Sirun Chen, Xiuwen Shi, Yunna Lv, Ziyi Liu | Make the Most of Everything: Further Considerations on Disrupting
Diffusion-based Customization | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The fine-tuning technique for text-to-image diffusion models facilitates
image customization but risks privacy breaches and opinion manipulation.
Current research focuses on prompt- or image-level adversarial attacks for
anti-customization, yet it overlooks the correlation between these two levels
and the relationship between internal modules and inputs. This hinders
anti-customization performance in practical threat scenarios. We propose Dual
Anti-Diffusion (DADiff), a two-stage adversarial attack targeting diffusion
customization, which, for the first time, integrates the adversarial
prompt-level attack into the generation process of image-level adversarial
examples. In stage 1, we generate prompt-level adversarial vectors to guide the
subsequent image-level attack. In stage 2, besides conducting the end-to-end
attack on the UNet model, we disrupt its self- and cross-attention modules,
aiming to break the correlations between image pixels and align the
cross-attention results computed using instance prompts and adversarial prompt
vectors within the images. Furthermore, we introduce a local random timestep
gradient ensemble strategy, which updates adversarial perturbations by
integrating random gradients from multiple segmented timesets. Experimental
results on various mainstream facial datasets demonstrate 10%-30% improvements
in cross-prompt, keyword mismatch, cross-model, and cross-mechanism
anti-customization with DADiff compared to existing methods.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:22:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Tang",
"Long",
""
],
[
"Ye",
"Dengpan",
""
],
[
"Chen",
"Sirun",
""
],
[
"Shi",
"Xiuwen",
""
],
[
"Lv",
"Yunna",
""
],
[
"Liu",
"Ziyi",
""
]
] | TITLE: Make the Most of Everything: Further Considerations on Disrupting
Diffusion-based Customization
ABSTRACT: The fine-tuning technique for text-to-image diffusion models facilitates
image customization but risks privacy breaches and opinion manipulation.
Current research focuses on prompt- or image-level adversarial attacks for
anti-customization, yet it overlooks the correlation between these two levels
and the relationship between internal modules and inputs. This hinders
anti-customization performance in practical threat scenarios. We propose Dual
Anti-Diffusion (DADiff), a two-stage adversarial attack targeting diffusion
customization, which, for the first time, integrates the adversarial
prompt-level attack into the generation process of image-level adversarial
examples. In stage 1, we generate prompt-level adversarial vectors to guide the
subsequent image-level attack. In stage 2, besides conducting the end-to-end
attack on the UNet model, we disrupt its self- and cross-attention modules,
aiming to break the correlations between image pixels and align the
cross-attention results computed using instance prompts and adversarial prompt
vectors within the images. Furthermore, we introduce a local random timestep
gradient ensemble strategy, which updates adversarial perturbations by
integrating random gradients from multiple segmented timesets. Experimental
results on various mainstream facial datasets demonstrate 10%-30% improvements
in cross-prompt, keyword mismatch, cross-model, and cross-mechanism
anti-customization with DADiff compared to existing methods.
|
2503.13946 | Kang Yang | Kang Yang, Tianci Bu, Lantao Li, Chunxu Li, Yongcai Wang and Deying Li | Is Discretization Fusion All You Need for Collaborative Perception? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception in multi-agent system enhances overall perceptual
capabilities by facilitating the exchange of complementary information among
agents. Current mainstream collaborative perception methods rely on discretized
feature maps to conduct fusion, which, however, lacks flexibility in extracting
and transmitting informative features and can hardly focus on the informative
features during fusion. To address these problems, this paper
proposes a novel Anchor-Centric paradigm for Collaborative Object detection
(ACCO). It avoids grid precision issues and allows more flexible and efficient
anchor-centric communication and fusion. ACCO is composed of three main
components: (1) Anchor featuring block (AFB) that generates anchor
proposals and projects prepared anchor queries to image features. (2) Anchor
confidence generator (ACG) is designed to minimize communication by selecting
only the features in the confident anchors to transmit. (3) A local-global
fusion module, in which local fusion is anchor alignment-based fusion (LAAF)
and global fusion is conducted by spatial-aware cross-attention (SACA). LAAF
and SACA run in multi-layers, so agents conduct anchor-centric fusion
iteratively to adjust the anchor proposals. Comprehensive experiments are
conducted to evaluate ACCO on OPV2V and Dair-V2X datasets, which demonstrate
ACCO's superiority in reducing the communication volume, and in improving the
perception range and detection performances. Code can be found at:
\href{https://github.com/sidiangongyuan/ACCO}{https://github.com/sidiangongyuan/ACCO}.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:25:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Kang",
""
],
[
"Bu",
"Tianci",
""
],
[
"Li",
"Lantao",
""
],
[
"Li",
"Chunxu",
""
],
[
"Wang",
"Yongcai",
""
],
[
"Li",
"Deying",
""
]
] | TITLE: Is Discretization Fusion All You Need for Collaborative Perception?
ABSTRACT: Collaborative perception in multi-agent system enhances overall perceptual
capabilities by facilitating the exchange of complementary information among
agents. Current mainstream collaborative perception methods rely on discretized
feature maps to conduct fusion, which, however, lacks flexibility in extracting
and transmitting informative features and can hardly focus on the informative
features during fusion. To address these problems, this paper
proposes a novel Anchor-Centric paradigm for Collaborative Object detection
(ACCO). It avoids grid precision issues and allows more flexible and efficient
anchor-centric communication and fusion. ACCO is composed of three main
components: (1) Anchor featuring block (AFB) that generates anchor
proposals and projects prepared anchor queries to image features. (2) Anchor
confidence generator (ACG) is designed to minimize communication by selecting
only the features in the confident anchors to transmit. (3) A local-global
fusion module, in which local fusion is anchor alignment-based fusion (LAAF)
and global fusion is conducted by spatial-aware cross-attention (SACA). LAAF
and SACA run in multi-layers, so agents conduct anchor-centric fusion
iteratively to adjust the anchor proposals. Comprehensive experiments are
conducted to evaluate ACCO on OPV2V and Dair-V2X datasets, which demonstrate
ACCO's superiority in reducing the communication volume, and in improving the
perception range and detection performances. Code can be found at:
\href{https://github.com/sidiangongyuan/ACCO}{https://github.com/sidiangongyuan/ACCO}.
|
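ACCO's Anchor Confidence Generator transmits only the features of confident anchors to reduce communication. The selection rule below is a hypothetical threshold-plus-top-k filter over per-anchor confidences, meant only to make the communication-saving step concrete; the shapes and names are assumptions.

```python
import torch

def select_confident_anchors(anchor_feats: torch.Tensor, confidence: torch.Tensor,
                             threshold: float = 0.5, max_anchors: int = 128):
    """Pick anchors worth transmitting: confidence above a threshold, capped at
    max_anchors by keeping the highest-confidence ones.

    anchor_feats: (A, C) per-anchor features, confidence: (A,) in [0, 1].
    Returns (selected_feats, selected_indices).
    """
    keep = torch.nonzero(confidence >= threshold, as_tuple=False).squeeze(1)
    if keep.numel() > max_anchors:
        top = torch.topk(confidence[keep], k=max_anchors).indices
        keep = keep[top]
    return anchor_feats[keep], keep

if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(900, 256)
    conf = torch.rand(900)
    sel, idx = select_confident_anchors(feats, conf, threshold=0.7, max_anchors=64)
    print(sel.shape, idx.shape)   # transmitted payload vs. the original 900 x 256
```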
2503.13951 | Mengshuai Chang | Lili Yang, Mengshuai Chang, Xiao Guo, Yuxin Feng, Yiwen Mei, Caicong
Wu | FrustumFusionNets: A Three-Dimensional Object Detection Network Based on
Tractor Road Scene | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the issues of the existing frustum-based methods' underutilization
of image information in road three-dimensional object detection as well as the
lack of research on agricultural scenes, we constructed an object detection
dataset using an 80-line Light Detection And Ranging (LiDAR) and a camera in a
complex tractor road scene and proposed a new network called FrustumFusionNets
(FFNets). Initially, we utilize the results of image-based two-dimensional
object detection to narrow down the search region in the three-dimensional
space of the point cloud. Next, we introduce a Gaussian mask to enhance the
point cloud information. Then, we extract the features from the frustum point
cloud and the crop image using the point cloud feature extraction pipeline and
the image feature extraction pipeline, respectively. Finally, we concatenate
and fuse the data features from both modalities to achieve three-dimensional
object detection. Experiments demonstrate that on the constructed test set of
tractor road data, the FrustumFusionNetv2 achieves 82.28% and 95.68% accuracy
in the three-dimensional object detection of the two main road objects, cars
and people, respectively. This performance is 1.83% and 2.33% better than the
original model. It offers a hybrid fusion-based multi-object, high-precision,
real-time three-dimensional object detection technique for unmanned
agricultural machines in tractor road scenarios. On the Karlsruhe Institute of
Technology and Toyota Technological Institute (KITTI) Benchmark Suite
validation set, the FrustumFusionNetv2 also demonstrates significant
superiority in detecting road pedestrian objects compared with other
frustum-based three-dimensional object detection methods.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:40:39 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yang",
"Lili",
""
],
[
"Chang",
"Mengshuai",
""
],
[
"Guo",
"Xiao",
""
],
[
"Feng",
"Yuxin",
""
],
[
"Mei",
"Yiwen",
""
],
[
"Wu",
"Caicong",
""
]
] | TITLE: FrustumFusionNets: A Three-Dimensional Object Detection Network Based on
Tractor Road Scene
ABSTRACT: To address the issues of the existing frustum-based methods' underutilization
of image information in road three-dimensional object detection as well as the
lack of research on agricultural scenes, we constructed an object detection
dataset using an 80-line Light Detection And Ranging (LiDAR) and a camera in a
complex tractor road scene and proposed a new network called FrustumFusionNets
(FFNets). Initially, we utilize the results of image-based two-dimensional
object detection to narrow down the search region in the three-dimensional
space of the point cloud. Next, we introduce a Gaussian mask to enhance the
point cloud information. Then, we extract the features from the frustum point
cloud and the crop image using the point cloud feature extraction pipeline and
the image feature extraction pipeline, respectively. Finally, we concatenate
and fuse the data features from both modalities to achieve three-dimensional
object detection. Experiments demonstrate that on the constructed test set of
tractor road data, the FrustumFusionNetv2 achieves 82.28% and 95.68% accuracy
in the three-dimensional object detection of the two main road objects, cars
and people, respectively. This performance is 1.83% and 2.33% better than the
original model. It offers a hybrid fusion-based multi-object, high-precision,
real-time three-dimensional object detection technique for unmanned
agricultural machines in tractor road scenarios. On the Karlsruhe Institute of
Technology and Toyota Technological Institute (KITTI) Benchmark Suite
validation set, the FrustumFusionNetv2 also demonstrates significant
superiority in detecting road pedestrian objects compared with other
frustum-based three-dimensional object detection methods.
|
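The FrustumFusionNets record introduces a Gaussian mask to enhance the frustum point cloud but does not define it. The function below shows one plausible reading, weighting each point by a Gaussian of its lateral distance from the frustum's center axis; the paper's actual mask may differ.

```python
import numpy as np

def gaussian_frustum_weights(points: np.ndarray, center_dir: np.ndarray,
                             sigma: float = 1.0) -> np.ndarray:
    """Weight frustum points by a Gaussian of their distance to the frustum axis.

    points: (N, 3) points in the sensor frame; center_dir: (3,) vector from the
    sensor through the 2D box center. Returns (N,) weights in (0, 1].
    One plausible reading of the 'Gaussian mask'; the paper's definition may differ.
    """
    d = center_dir / np.linalg.norm(center_dir)
    along = points @ d                        # projection length onto the axis
    lateral = points - np.outer(along, d)     # component perpendicular to the axis
    dist = np.linalg.norm(lateral, axis=1)
    return np.exp(-0.5 * (dist / sigma) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(scale=2.0, size=(1000, 3)) + np.array([10.0, 0.0, 0.0])
    w = gaussian_frustum_weights(pts, center_dir=np.array([1.0, 0.0, 0.0]), sigma=1.5)
    print(w.min(), w.max())
```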
2503.13952 | Xinqing Li | Xinqing Li, Ruiqi Song, Qingyu Xie, Ye Wu, Nanxin Zeng, Yunfeng Ai | SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation
via World Model | 8 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of autonomous driving technology, a lack of data
has become a major obstacle to enhancing perception model accuracy. Researchers
are now exploring controllable data generation using world models to diversify
datasets. However, previous work has been limited to studying image generation
quality on specific public datasets. There is still relatively little research
on how to build data generation engines for real-world application scenes to
achieve large-scale data generation for challenging scenes. In this paper, a
simulator-conditioned scene generation engine based on world model is proposed.
By constructing a simulation system consistent with real-world scenes,
simulation data and labels for any scene, which serve as the conditions for
data generation in the world model, can be collected. This forms a novel data
generation pipeline that combines the powerful scene simulation capabilities of
the simulation engine with the robust data generation capabilities of the world
model. In addition, a benchmark with proportionally constructed virtual and
real data is provided for exploring the capabilities of world models in
real-world scenes. Quantitative results show that these generated images
significantly improve the performance of downstream perception models. Finally, we
explored the generative performance of the world model in urban autonomous
driving scenarios. All the data and code will be available at
https://github.com/Li-Zn-H/SimWorld.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:41:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Li",
"Xinqing",
""
],
[
"Song",
"Ruiqi",
""
],
[
"Xie",
"Qingyu",
""
],
[
"Wu",
"Ye",
""
],
[
"Zeng",
"Nanxin",
""
],
[
"Ai",
"Yunfeng",
""
]
] | TITLE: SimWorld: A Unified Benchmark for Simulator-Conditioned Scene Generation
via World Model
ABSTRACT: With the rapid advancement of autonomous driving technology, a lack of data
has become a major obstacle to enhancing perception model accuracy. Researchers
are now exploring controllable data generation using world models to diversify
datasets. However, previous work has been limited to studying image generation
quality on specific public datasets. There is still relatively little research
on how to build data generation engines for real-world application scenes to
achieve large-scale data generation for challenging scenes. In this paper, a
simulator-conditioned scene generation engine based on world model is proposed.
By constructing a simulation system consistent with real-world scenes,
simulation data and labels for any scene, which serve as the conditions for
data generation in the world model, can be collected. This forms a novel data
generation pipeline that combines the powerful scene simulation capabilities of
the simulation engine with the robust data generation capabilities of the world
model. In addition, a benchmark with proportionally constructed virtual and
real data is provided for exploring the capabilities of world models in
real-world scenes. Quantitative results show that these generated images
significantly improve the performance of downstream perception models. Finally, we
explored the generative performance of the world model in urban autonomous
driving scenarios. All the data and code will be available at
https://github.com/Li-Zn-H/SimWorld.
|
2503.13962 | Chengze Jiang | Chengze Jiang, Zhuangzhuang Wang, Minjing Dong, Jie Gui | Survey of Adversarial Robustness in Multimodal Large Language Models | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have demonstrated exceptional
performance in artificial intelligence by facilitating integrated understanding
across diverse modalities, including text, images, video, audio, and speech.
However, their deployment in real-world applications raises significant
concerns about adversarial vulnerabilities that could compromise their safety
and reliability. Unlike unimodal models, MLLMs face unique challenges due to
the interdependencies among modalities, making them susceptible to
modality-specific threats and cross-modal adversarial manipulations. This paper
reviews the adversarial robustness of MLLMs, covering different modalities. We
begin with an overview of MLLMs and a taxonomy of adversarial attacks tailored
to each modality. Next, we review key datasets and evaluation metrics used to
assess the robustness of MLLMs. After that, we provide an in-depth review of
attacks targeting MLLMs across different modalities. Our survey also identifies
critical challenges and suggests promising future research directions.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:54:59 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jiang",
"Chengze",
""
],
[
"Wang",
"Zhuangzhuang",
""
],
[
"Dong",
"Minjing",
""
],
[
"Gui",
"Jie",
""
]
] | TITLE: Survey of Adversarial Robustness in Multimodal Large Language Models
ABSTRACT: Multimodal Large Language Models (MLLMs) have demonstrated exceptional
performance in artificial intelligence by facilitating integrated understanding
across diverse modalities, including text, images, video, audio, and speech.
However, their deployment in real-world applications raises significant
concerns about adversarial vulnerabilities that could compromise their safety
and reliability. Unlike unimodal models, MLLMs face unique challenges due to
the interdependencies among modalities, making them susceptible to
modality-specific threats and cross-modal adversarial manipulations. This paper
reviews the adversarial robustness of MLLMs, covering different modalities. We
begin with an overview of MLLMs and a taxonomy of adversarial attacks tailored
to each modality. Next, we review key datasets and evaluation metrics used to
assess the robustness of MLLMs. After that, we provide an in-depth review of
attacks targeting MLLMs across different modalities. Our survey also identifies
critical challenges and suggests promising future research directions.
|
2503.13966 | Siqi Zhang | Siqi Zhang, Yanyuan Qiao, Qunbo Wang, Longteng Guo, Zhihua Wei, Jing
Liu | FlexVLN: Flexible Adaptation for Diverse Vision-and-Language Navigation
Tasks | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The aspiration of the Vision-and-Language Navigation (VLN) task has long been
to develop an embodied agent with robust adaptability, capable of seamlessly
transferring its navigation capabilities across various tasks. Despite
remarkable advancements in recent years, most methods necessitate
dataset-specific training, thereby lacking the capability to generalize across
diverse datasets encompassing distinct types of instructions. Large language
models (LLMs) have demonstrated exceptional reasoning and generalization
abilities, exhibiting immense potential in robot action planning. In this
paper, we propose FlexVLN, an innovative hierarchical approach to VLN that
integrates the fundamental navigation ability of a supervised-learning-based
Instruction Follower with the robust generalization ability of the LLM Planner,
enabling effective generalization across diverse VLN datasets. Moreover, a
verification mechanism and a multi-model integration mechanism are proposed to
mitigate potential hallucinations by the LLM Planner and enhance execution
accuracy of the Instruction Follower. We take REVERIE, SOON, and CVDN-target as
out-of-domain datasets for assessing generalization ability. The generalization
performance of FlexVLN surpasses that of all the previous methods to a large
extent.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 06:58:41 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhang",
"Siqi",
""
],
[
"Qiao",
"Yanyuan",
""
],
[
"Wang",
"Qunbo",
""
],
[
"Guo",
"Longteng",
""
],
[
"Wei",
"Zhihua",
""
],
[
"Liu",
"Jing",
""
]
] | TITLE: FlexVLN: Flexible Adaptation for Diverse Vision-and-Language Navigation
Tasks
ABSTRACT: The aspiration of the Vision-and-Language Navigation (VLN) task has long been
to develop an embodied agent with robust adaptability, capable of seamlessly
transferring its navigation capabilities across various tasks. Despite
remarkable advancements in recent years, most methods necessitate
dataset-specific training, thereby lacking the capability to generalize across
diverse datasets encompassing distinct types of instructions. Large language
models (LLMs) have demonstrated exceptional reasoning and generalization
abilities, exhibiting immense potential in robot action planning. In this
paper, we propose FlexVLN, an innovative hierarchical approach to VLN that
integrates the fundamental navigation ability of a supervised-learning-based
Instruction Follower with the robust generalization ability of the LLM Planner,
enabling effective generalization across diverse VLN datasets. Moreover, a
verification mechanism and a multi-model integration mechanism are proposed to
mitigate potential hallucinations by the LLM Planner and enhance execution
accuracy of the Instruction Follower. We take REVERIE, SOON, and CVDN-target as
out-of-domain datasets for assessing generalization ability. The generalization
performance of FlexVLN surpasses that of all the previous methods to a large
extent.
|
2503.13969 | Haobin Qin | HaoBin Qin, Jiale Fang, Keisuke Fujii | SoccerSynth Field: enhancing field detection with synthetic data from
virtual soccer simulator | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Field detection in team sports is an essential task in sports video analysis.
However, collecting large-scale and diverse real-world datasets for training
detection models is often costly and time-consuming. Synthetic datasets, which
allow controlled variability in lighting, textures, and camera angles, are
a promising alternative for addressing these problems. This study addresses the
challenges of high costs and difficulties in collecting real-world datasets by
investigating the effectiveness of pretraining models using synthetic datasets.
In this paper, we demonstrate the effectiveness of using a synthetic dataset
(SoccerSynth-Field) for soccer field detection. A synthetic soccer field
dataset was created to pretrain models, and the performance of these models was
compared with models trained on real-world datasets. The results demonstrate
that models pretrained on the synthetic dataset exhibit superior performance in
detecting soccer fields. This highlights the effectiveness of synthetic data in
enhancing model robustness and accuracy, offering a cost-effective and scalable
solution for advancing sports field detection.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:05:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Qin",
"HaoBin",
""
],
[
"Fang",
"Jiale",
""
],
[
"Fujii",
"Keisuke",
""
]
] | TITLE: SoccerSynth Field: enhancing field detection with synthetic data from
virtual soccer simulator
ABSTRACT: Field detection in team sports is an essential task in sports video analysis.
However, collecting large-scale and diverse real-world datasets for training
detection models is often costly and time-consuming. Synthetic datasets, which
allow controlled variability in lighting, textures, and camera angles, offer
a promising alternative for addressing these problems. This study addresses the
challenges of high costs and difficulties in collecting real-world datasets by
investigating the effectiveness of pretraining models using synthetic datasets.
In this paper, we demonstrate the effectiveness of using a synthetic dataset
(SoccerSynth-Field) for soccer field detection. A synthetic soccer field
dataset was created to pretrain models, and the performance of these models was
compared with models trained on real-world datasets. The results demonstrate
that models pretrained on the synthetic dataset exhibit superior performance in
detecting soccer fields. This highlights the effectiveness of synthetic data in
enhancing model robustness and accuracy, offering a cost-effective and scalable
solution for advancing sports field detection.
|
2503.13975 | Omar Shaikh | Omar Shaikh, Hussein Mozannar, Gagan Bansal, Adam Fourney, Eric
Horvitz | Navigating Rifts in Human-LLM Grounding: Study and Benchmark | 16 pages, 5 figures | null | null | null | cs.CL cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models excel at following instructions but often struggle with the
collaborative aspects of conversation that humans naturally employ. This
limitation in grounding -- the process by which conversation participants
establish mutual understanding -- can lead to outcomes ranging from frustrated
users to serious consequences in high-stakes scenarios. To systematically study
grounding challenges in human-LLM interactions, we analyze logs from three
human-assistant datasets: WildChat, MultiWOZ, and Bing Chat. We develop a
taxonomy of grounding acts and build models to annotate and forecast grounding
behavior. Our findings reveal significant differences in human-human and
human-LLM grounding: LLMs were three times less likely to initiate
clarification and sixteen times less likely to provide follow-up requests than
humans. Additionally, early grounding failures predicted later interaction
breakdowns. Building on these insights, we introduce RIFTS: a benchmark derived
from publicly available LLM interaction data containing situations where LLMs
fail to initiate grounding. We note that current frontier models perform poorly
on RIFTS, highlighting the need to reconsider how we train and prompt LLMs for
human interaction. To this end, we develop a preliminary intervention that
mitigates grounding failures.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:24:05 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Shaikh",
"Omar",
""
],
[
"Mozannar",
"Hussein",
""
],
[
"Bansal",
"Gagan",
""
],
[
"Fourney",
"Adam",
""
],
[
"Horvitz",
"Eric",
""
]
] | TITLE: Navigating Rifts in Human-LLM Grounding: Study and Benchmark
ABSTRACT: Language models excel at following instructions but often struggle with the
collaborative aspects of conversation that humans naturally employ. This
limitation in grounding -- the process by which conversation participants
establish mutual understanding -- can lead to outcomes ranging from frustrated
users to serious consequences in high-stakes scenarios. To systematically study
grounding challenges in human-LLM interactions, we analyze logs from three
human-assistant datasets: WildChat, MultiWOZ, and Bing Chat. We develop a
taxonomy of grounding acts and build models to annotate and forecast grounding
behavior. Our findings reveal significant differences in human-human and
human-LLM grounding: LLMs were three times less likely to initiate
clarification and sixteen times less likely to provide follow-up requests than
humans. Additionally, early grounding failures predicted later interaction
breakdowns. Building on these insights, we introduce RIFTS: a benchmark derived
from publicly available LLM interaction data containing situations where LLMs
fail to initiate grounding. We note that current frontier models perform poorly
on RIFTS, highlighting the need to reconsider how we train and prompt LLMs for
human interaction. To this end, we develop a preliminary intervention that
mitigates grounding failures.
|
2503.13980 | Haolin Wang | Haolin Wang, Xueyan Li, Yazhe Niu, Shuai Hu, Hongsheng Li | Empowering LLMs in Decision Games through Algorithmic Data Synthesis | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have exhibited impressive capabilities across
numerous domains, yet they often struggle with complex reasoning and
decision-making tasks. Decision-making games, which inherently require
multifaceted reasoning logic, serve as ideal sandboxes for evaluating and
enhancing the reasoning abilities of LLMs. In this work, we first explore
whether LLMs can master complex decision-making games through targeted
post-training. To this end, we design data synthesis strategies and curate
extensive offline datasets from two classic games, Doudizhu and Go. We further
develop a suite of techniques to effectively incorporate this data into LLM
training, resulting in two novel agents: Mastermind-Dou and Mastermind-Go. Our
experimental results demonstrate that these Mastermind LLMs achieve competitive
performance in their respective games. Additionally, we explore whether
integrating decision-making data can enhance the general reasoning abilities of
LLMs. Our findings suggest that such post-training improves certain aspects of
reasoning, providing valuable insights for optimizing LLM data collection and
synthesis strategies.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:30:29 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Haolin",
""
],
[
"Li",
"Xueyan",
""
],
[
"Niu",
"Yazhe",
""
],
[
"Hu",
"Shuai",
""
],
[
"Li",
"Hongsheng",
""
]
] | TITLE: Empowering LLMs in Decision Games through Algorithmic Data Synthesis
ABSTRACT: Large Language Models (LLMs) have exhibited impressive capabilities across
numerous domains, yet they often struggle with complex reasoning and
decision-making tasks. Decision-making games, which inherently require
multifaceted reasoning logic, serve as ideal sandboxes for evaluating and
enhancing the reasoning abilities of LLMs. In this work, we first explore
whether LLMs can master complex decision-making games through targeted
post-training. To this end, we design data synthesis strategies and curate
extensive offline datasets from two classic games, Doudizhu and Go. We further
develop a suite of techniques to effectively incorporate this data into LLM
training, resulting in two novel agents: Mastermind-Dou and Mastermind-Go. Our
experimental results demonstrate that these Mastermind LLMs achieve competitive
performance in their respective games. Additionally, we explore whether
integrating decision-making data can enhance the general reasoning abilities of
LLMs. Our findings suggest that such post-training improves certain aspects of
reasoning, providing valuable insights for optimizing LLM data collection and
synthesis strategies.
|
2503.13987 | Lichao Mou | Yaxiong Chen, Yujie Wang, Zixuan Zheng, Jingliang Hu, Yilei Shi,
Shengwu Xiong, Xiao Xiang Zhu, Lichao Mou | Striving for Simplicity: Simple Yet Effective Prior-Aware
Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation | MICCAI 2024 | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical ultrasound imaging is ubiquitous, but manual analysis struggles to
keep pace. Automated segmentation can help but requires large labeled datasets,
which are scarce. Semi-supervised learning leveraging both unlabeled and
limited labeled data is a promising approach. State-of-the-art methods use
consistency regularization or pseudo-labeling but grow increasingly complex.
Without sufficient labels, these models often latch onto artifacts or allow
anatomically implausible segmentations. In this paper, we present a simple yet
effective pseudo-labeling method with an adversarially learned shape prior to
regularize segmentations. Specifically, we devise an encoder-twin-decoder
network where the shape prior acts as an implicit shape model, penalizing
anatomically implausible but not ground-truth-deviating predictions. Without
bells and whistles, our simple approach achieves state-of-the-art performance
on two benchmarks under different partition protocols. We provide a strong
baseline for future semi-supervised medical image segmentation. Code is
available at https://github.com/WUTCM-Lab/Shape-Prior-Semi-Seg.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:44:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Yaxiong",
""
],
[
"Wang",
"Yujie",
""
],
[
"Zheng",
"Zixuan",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Shi",
"Yilei",
""
],
[
"Xiong",
"Shengwu",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: Striving for Simplicity: Simple Yet Effective Prior-Aware
Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation
ABSTRACT: Medical ultrasound imaging is ubiquitous, but manual analysis struggles to
keep pace. Automated segmentation can help but requires large labeled datasets,
which are scarce. Semi-supervised learning leveraging both unlabeled and
limited labeled data is a promising approach. State-of-the-art methods use
consistency regularization or pseudo-labeling but grow increasingly complex.
Without sufficient labels, these models often latch onto artifacts or allow
anatomically implausible segmentations. In this paper, we present a simple yet
effective pseudo-labeling method with an adversarially learned shape prior to
regularize segmentations. Specifically, we devise an encoder-twin-decoder
network where the shape prior acts as an implicit shape model, penalizing
anatomically implausible but not ground-truth-deviating predictions. Without
bells and whistles, our simple approach achieves state-of-the-art performance
on two benchmarks under different partition protocols. We provide a strong
baseline for future semi-supervised medical image segmentation. Code is
available at https://github.com/WUTCM-Lab/Shape-Prior-Semi-Seg.
|
2503.13989 | Lichao Mou | Zixuan Zheng, Yilei Shi, Chunlei Li, Jingliang Hu, Xiao Xiang Zhu,
Lichao Mou | Rethinking Cell Counting Methods: Decoupling Counting and Localization | MICCAI 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell counting in microscopy images is vital in medicine and biology but
extremely tedious and time-consuming to perform manually. While automated
methods have advanced in recent years, state-of-the-art approaches tend toward
increasingly complex model designs. In this paper, we propose a conceptually
simple yet effective decoupled learning scheme for automated cell counting,
consisting of separate counter and localizer networks. In contrast to jointly
learning counting and density map estimation, we show that decoupling these
objectives surprisingly improves results. The counter operates on intermediate
feature maps rather than pixel space to leverage global context and produce
count estimates, while also generating coarse density maps. The localizer then
reconstructs high-resolution density maps that precisely localize individual
cells, conditional on the original images and coarse density maps from the
counter. Besides, to boost counting accuracy, we further introduce a global
message passing module to integrate cross-region patterns. Extensive
experiments on four datasets demonstrate that our approach, despite its
simplicity, challenges common practice and achieves state-of-the-art
performance by significant margins. Our key insight is that decoupled learning
alleviates the need to learn counting on high-resolution density maps directly,
allowing the model to focus on global features critical for accurate estimates.
Code is available at https://github.com/MedAITech/DCL.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:50:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zheng",
"Zixuan",
""
],
[
"Shi",
"Yilei",
""
],
[
"Li",
"Chunlei",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: Rethinking Cell Counting Methods: Decoupling Counting and Localization
ABSTRACT: Cell counting in microscopy images is vital in medicine and biology but
extremely tedious and time-consuming to perform manually. While automated
methods have advanced in recent years, state-of-the-art approaches tend toward
increasingly complex model designs. In this paper, we propose a conceptually
simple yet effective decoupled learning scheme for automated cell counting,
consisting of separate counter and localizer networks. In contrast to jointly
learning counting and density map estimation, we show that decoupling these
objectives surprisingly improves results. The counter operates on intermediate
feature maps rather than pixel space to leverage global context and produce
count estimates, while also generating coarse density maps. The localizer then
reconstructs high-resolution density maps that precisely localize individual
cells, conditional on the original images and coarse density maps from the
counter. Besides, to boost counting accuracy, we further introduce a global
message passing module to integrate cross-region patterns. Extensive
experiments on four datasets demonstrate that our approach, despite its
simplicity, challenges common practice and achieves state-of-the-art
performance by significant margins. Our key insight is that decoupled learning
alleviates the need to learn counting on high-resolution density maps directly,
allowing the model to focus on global features critical for accurate estimates.
Code is available at https://github.com/MedAITech/DCL.
|
2503.13991 | Chenhao Zhang | Bo Peng, Jintao Chen, Mufeng Yao, Chenhao Zhang, Jianghui Zhang,
Mingmin Chi, Jiang Tao | GraphTEN: Graph Enhanced Texture Encoding Network | 6 pages, 7 figures, conference paper | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Texture recognition is a fundamental problem in computer vision and pattern
recognition. Recent progress leverages feature aggregation into discriminative
descriptions based on convolutional neural networks (CNNs). However, modeling
non-local context relations through visual primitives remains challenging due
to the variability and randomness of texture primitives in spatial
distributions. In this paper, we propose a graph-enhanced texture encoding
network (GraphTEN) designed to capture both local and global features of
texture primitives. GraphTEN models global associations through fully connected
graphs and captures cross-scale dependencies of texture primitives via
bipartite graphs. Additionally, we introduce a patch encoding module that
utilizes a codebook to achieve an orderless representation of texture by
encoding multi-scale patch features into a unified feature space. The proposed
GraphTEN achieves superior performance compared to state-of-the-art methods
across five publicly available datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:51:13 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Peng",
"Bo",
""
],
[
"Chen",
"Jintao",
""
],
[
"Yao",
"Mufeng",
""
],
[
"Zhang",
"Chenhao",
""
],
[
"Zhang",
"Jianghui",
""
],
[
"Chi",
"Mingmin",
""
],
[
"Tao",
"Jiang",
""
]
] | TITLE: GraphTEN: Graph Enhanced Texture Encoding Network
ABSTRACT: Texture recognition is a fundamental problem in computer vision and pattern
recognition. Recent progress leverages feature aggregation into discriminative
descriptions based on convolutional neural networks (CNNs). However, modeling
non-local context relations through visual primitives remains challenging due
to the variability and randomness of texture primitives in spatial
distributions. In this paper, we propose a graph-enhanced texture encoding
network (GraphTEN) designed to capture both local and global features of
texture primitives. GraphTEN models global associations through fully connected
graphs and captures cross-scale dependencies of texture primitives via
bipartite graphs. Additionally, we introduce a patch encoding module that
utilizes a codebook to achieve an orderless representation of texture by
encoding multi-scale patch features into a unified feature space. The proposed
GraphTEN achieves superior performance compared to state-of-the-art methods
across five publicly available datasets.
|
2503.14002 | Damian Boborzi | Damian Boborzi and Phillip Mueller and Jonas Emrich and Dominik Schmid
and Sebastian Mueller and Lars Mikelsons | MeshFleet: Filtered and Annotated 3D Vehicle Dataset for Domain Specific
Generative Modeling | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generative models have recently made remarkable progress in the field of 3D
objects. However, their practical application in fields like engineering
remains limited since they fail to deliver the accuracy, quality, and
controllability needed for domain-specific tasks. Fine-tuning large generative
models is a promising perspective for making these models available in these
fields. Creating high-quality, domain-specific 3D datasets is crucial for
fine-tuning large generative models, yet the data filtering and annotation
process remains a significant bottleneck. We present MeshFleet, a filtered and
annotated 3D vehicle dataset extracted from Objaverse-XL, the most extensive
publicly available collection of 3D objects. Our approach proposes a pipeline
for automated data filtering based on a quality classifier. This classifier is
trained on a manually labeled subset of Objaverse, incorporating DINOv2 and
SigLIP embeddings, refined through caption-based analysis and uncertainty
estimation. We demonstrate the efficacy of our filtering method through a
comparative analysis against caption and image aesthetic score-based techniques
and fine-tuning experiments with SV3D, highlighting the importance of targeted
data selection for domain-specific 3D generative modeling.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:09:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Boborzi",
"Damian",
""
],
[
"Mueller",
"Phillip",
""
],
[
"Emrich",
"Jonas",
""
],
[
"Schmid",
"Dominik",
""
],
[
"Mueller",
"Sebastian",
""
],
[
"Mikelsons",
"Lars",
""
]
] | TITLE: MeshFleet: Filtered and Annotated 3D Vehicle Dataset for Domain Specific
Generative Modeling
ABSTRACT: Generative models have recently made remarkable progress in the field of 3D
objects. However, their practical application in fields like engineering
remains limited since they fail to deliver the accuracy, quality, and
controllability needed for domain-specific tasks. Fine-tuning large generative
models is a promising perspective for making these models available in these
fields. Creating high-quality, domain-specific 3D datasets is crucial for
fine-tuning large generative models, yet the data filtering and annotation
process remains a significant bottleneck. We present MeshFleet, a filtered and
annotated 3D vehicle dataset extracted from Objaverse-XL, the most extensive
publicly available collection of 3D objects. Our approach proposes a pipeline
for automated data filtering based on a quality classifier. This classifier is
trained on a manually labeled subset of Objaverse, incorporating DINOv2 and
SigLIP embeddings, refined through caption-based analysis and uncertainty
estimation. We demonstrate the efficacy of our filtering method through a
comparative analysis against caption and image aesthetic score-based techniques
and fine-tuning experiments with SV3D, highlighting the importance of targeted
data selection for domain-specific 3D generative modeling.
|
2503.14004 | Eyal Marantz | Eyal Marantz and Ori Plonsky | Predicting Human Choice Between Textually Described Lotteries | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting human decision-making under risk and uncertainty is a
long-standing challenge in cognitive science, economics, and AI. While prior
research has focused on numerically described lotteries, real-world decisions
often rely on textual descriptions. This study conducts the first large-scale
exploration of human decision-making in such tasks using a large dataset of
one-shot binary choices between textually described lotteries. We evaluate
multiple computational approaches, including fine-tuning Large Language Models
(LLMs), leveraging embeddings, and integrating behavioral theories of choice
under risk. Our results show that fine-tuned LLMs, specifically RoBERTa and
GPT-4o, outperform hybrid models that incorporate behavioral theory, challenging
established methods in numerical settings. These findings highlight fundamental
differences in how textual and numerical information influence decision-making
and underscore the need for new modeling strategies to bridge this gap.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:10:33 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Marantz",
"Eyal",
""
],
[
"Plonsky",
"Ori",
""
]
] | TITLE: Predicting Human Choice Between Textually Described Lotteries
ABSTRACT: Predicting human decision-making under risk and uncertainty is a
long-standing challenge in cognitive science, economics, and AI. While prior
research has focused on numerically described lotteries, real-world decisions
often rely on textual descriptions. This study conducts the first large-scale
exploration of human decision-making in such tasks using a large dataset of
one-shot binary choices between textually described lotteries. We evaluate
multiple computational approaches, including fine-tuning Large Language Models
(LLMs), leveraging embeddings, and integrating behavioral theories of choice
under risk. Our results show that fine-tuned LLMs, specifically RoBERTa and
GPT-4o, outperform hybrid models that incorporate behavioral theory, challenging
established methods in numerical settings. These findings highlight fundamental
differences in how textual and numerical information influence decision-making
and underscore the need for new modeling strategies to bridge this gap.
|
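As a rough, hypothetical illustration of the fine-tuning approach summarized in
the abstract of 2503.14004 above, the Python sketch below fine-tunes
roberta-base as a sentence-pair classifier over two textually described
lotteries. The example descriptions, label convention, and training settings
are assumptions for illustration only, not the authors' actual pipeline.

# Hedged sketch: RoBERTa fine-tuning for binary choice between two textually
# described lotteries. Data and hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base",
                                                           num_labels=2)

# Hypothetical items: each pairs the two lottery descriptions; label 1 means
# most participants chose option B, label 0 means they chose option A.
pairs = [("Win $50 with probability 0.5, otherwise nothing.",
          "Win $20 for sure."),
         ("Lose $10 with probability 0.9, otherwise win $100.",
          "Lose $5 for sure.")]
labels = [1, 0]

enc = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                truncation=True, padding=True, return_tensors="pt")

class ChoiceDataset(torch.utils.data.Dataset):
    def __init__(self, enc, labels):
        self.enc, self.labels = enc, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                         per_device_train_batch_size=2),
                  train_dataset=ChoiceDataset(enc, labels))
trainer.train()

Encoding the two lottery descriptions as a sentence pair lets the model attend
jointly to both options when predicting the majority choice.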
2503.14012 | Wei Lu | Wei Lu, Si-Bao Chen, Hui-Dong Li, Qing-Ling Shu, Chris H. Q. Ding, Jin
Tang, and Bin Luo | LEGNet: Lightweight Edge-Gaussian Driven Network for Low-Quality Remote
Sensing Image Object Detection | 12 pages, 5 figures. Remote Sensing Image Object Detection | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing object detection (RSOD) faces formidable challenges in complex
visual environments. Aerial and satellite images inherently suffer from
limitations such as low spatial resolution, sensor noise, blurred objects,
low-light degradation, and partial occlusions. These degradation factors
collectively compromise the feature discriminability in detection models,
resulting in three key issues: (1) reduced contrast that hampers
foreground-background separation, (2) structural discontinuities in edge
representations, and (3) ambiguous feature responses caused by variations in
illumination. These collectively weaken model robustness and deployment
feasibility. To address these challenges, we propose LEGNet, a lightweight
network that incorporates a novel edge-Gaussian aggregation (EGA) module
specifically designed for low-quality remote sensing images. Our key innovation
lies in the synergistic integration of Scharr operator-based edge priors with
uncertainty-aware Gaussian modeling: (a) The orientation-aware Scharr filters
preserve high-frequency edge details with rotational invariance; (b) The
uncertainty-aware Gaussian layers probabilistically refine low-confidence
features through variance estimation. This design enables precision enhancement
while maintaining architectural simplicity. Comprehensive evaluations across
four RSOD benchmarks (DOTA-v1.0, v1.5, DIOR-R, FAIR1M-v1.0) and a UAV-view
dataset (VisDrone2019) demonstrate significant improvements. LEGNet achieves
state-of-the-art performance across five benchmark datasets while ensuring
computational efficiency, making it well-suited for deployment on
resource-constrained edge devices in real-world remote sensing applications.
The code is available at https://github.com/lwCVer/LEGNet.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:20:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Lu",
"Wei",
""
],
[
"Chen",
"Si-Bao",
""
],
[
"Li",
"Hui-Dong",
""
],
[
"Shu",
"Qing-Ling",
""
],
[
"Ding",
"Chris H. Q.",
""
],
[
"Tang",
"Jin",
""
],
[
"Luo",
"Bin",
""
]
] | TITLE: LEGNet: Lightweight Edge-Gaussian Driven Network for Low-Quality Remote
Sensing Image Object Detection
ABSTRACT: Remote sensing object detection (RSOD) faces formidable challenges in complex
visual environments. Aerial and satellite images inherently suffer from
limitations such as low spatial resolution, sensor noise, blurred objects,
low-light degradation, and partial occlusions. These degradation factors
collectively compromise the feature discriminability in detection models,
resulting in three key issues: (1) reduced contrast that hampers
foreground-background separation, (2) structural discontinuities in edge
representations, and (3) ambiguous feature responses caused by variations in
illumination. These collectively weaken model robustness and deployment
feasibility. To address these challenges, we propose LEGNet, a lightweight
network that incorporates a novel edge-Gaussian aggregation (EGA) module
specifically designed for low-quality remote sensing images. Our key innovation
lies in the synergistic integration of Scharr operator-based edge priors with
uncertainty-aware Gaussian modeling: (a) The orientation-aware Scharr filters
preserve high-frequency edge details with rotational invariance; (b) The
uncertainty-aware Gaussian layers probabilistically refine low-confidence
features through variance estimation. This design enables precision enhancement
while maintaining architectural simplicity. Comprehensive evaluations across
four RSOD benchmarks (DOTA-v1.0, v1.5, DIOR-R, FAIR1M-v1.0) and a UAV-view
dataset (VisDrone2019) demonstrate significant improvements. LEGNet achieves
state-of-the-art performance across five benchmark datasets while ensuring
computational efficiency, making it well-suited for deployment on
resource-constrained edge devices in real-world remote sensing applications.
The code is available at https://github.com/lwCVer/LEGNet.
|
2503.14013 | Pengcheng Zhou | Pengcheng Zhou, Lantian Zhang, Wei Li | Boosting Semi-Supervised Medical Image Segmentation via Masked Image
Consistency and Discrepancy Learning | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised learning is of great significance in medical image
segmentation by exploiting unlabeled data. Among its strategies, the
co-training framework is prominent. However, previous co-training studies
predominantly concentrate on network initialization variances and pseudo-label
generation, while overlooking the equilibrium between information interchange
and model diversity preservation. In this paper, we propose the Masked Image
Consistency and Discrepancy Learning (MICD) framework with three key modules.
The Masked Cross Pseudo Consistency (MCPC) module enriches context perception
and small sample learning via pseudo-labeling across masked-input branches. The
Cross Feature Consistency (CFC) module fortifies information exchange and model
robustness by ensuring decoder feature consistency. The Cross Model Discrepancy
(CMD) module utilizes EMA teacher networks to oversee outputs and preserve
branch diversity. Together, these modules address existing limitations by
focusing on fine-grained local information and maintaining diversity in a
heterogeneous framework. Experiments on two public medical image datasets, AMOS
and Synapse, demonstrate that our approach outperforms state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:20:35 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhou",
"Pengcheng",
""
],
[
"Zhang",
"Lantian",
""
],
[
"Li",
"Wei",
""
]
] | TITLE: Boosting Semi-Supervised Medical Image Segmentation via Masked Image
Consistency and Discrepancy Learning
ABSTRACT: Semi-supervised learning is of great significance in medical image
segmentation by exploiting unlabeled data. Among its strategies, the
co-training framework is prominent. However, previous co-training studies
predominantly concentrate on network initialization variances and pseudo-label
generation, while overlooking the equilibrium between information interchange
and model diversity preservation. In this paper, we propose the Masked Image
Consistency and Discrepancy Learning (MICD) framework with three key modules.
The Masked Cross Pseudo Consistency (MCPC) module enriches context perception
and small sample learning via pseudo-labeling across masked-input branches. The
Cross Feature Consistency (CFC) module fortifies information exchange and model
robustness by ensuring decoder feature consistency. The Cross Model Discrepancy
(CMD) module utilizes EMA teacher networks to oversee outputs and preserve
branch diversity. Together, these modules address existing limitations by
focusing on fine-grained local information and maintaining diversity in a
heterogeneous framework. Experiments on two public medical image datasets, AMOS
and Synapse, demonstrate that our approach outperforms state-of-the-art
methods.
|
2503.14023 | Mihai Nadas | Mihai Nadas, Laura Diosan, and Andreea Tomescu | Synthetic Data Generation Using Large Language Models: Advances in Text
and Code | 21 pages, 3 tables, 64 references, preprint | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have unlocked new possibilities for generating
synthetic training data in both natural language and code. By producing
artificial but task-relevant examples, these models can significantly augment
or even replace real-world datasets, especially when labeled data is scarce or
sensitive. This paper surveys recent advances in using LLMs to create synthetic
text and code, emphasizing prompt-based generation, retrieval-augmented
pipelines, and iterative self-refinement. We show how these methods enrich
low-resource tasks such as classification and question answering, as well as
code-centric applications such as instruction tuning, code translation, and bug
repair, by enabling automated verification of functional correctness. Alongside
potential benefits like cost-effectiveness, broad coverage, and controllable
diversity, we address challenges such as factual inaccuracies in generated
text, lack of stylistic realism, and the risk of bias amplification. Proposed
mitigations include filtering and weighting outputs and reinforcement learning
with execution feedback for code. We conclude with open research directions
like automated prompt engineering, cross-modal data synthesis, and robust
evaluation frameworks, highlighting the importance of LLM-generated synthetic
data in advancing AI while emphasizing ethical and quality safeguards.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:34:03 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Nadas",
"Mihai",
""
],
[
"Diosan",
"Laura",
""
],
[
"Tomescu",
"Andreea",
""
]
] | TITLE: Synthetic Data Generation Using Large Language Models: Advances in Text
and Code
ABSTRACT: Large language models (LLMs) have unlocked new possibilities for generating
synthetic training data in both natural language and code. By producing
artificial but task-relevant examples, these models can significantly augment
or even replace real-world datasets, especially when labeled data is scarce or
sensitive. This paper surveys recent advances in using LLMs to create synthetic
text and code, emphasizing prompt-based generation, retrieval-augmented
pipelines, and iterative self-refinement. We show how these methods enrich
low-resource tasks such as classification and question answering, as well as
code-centric applications such as instruction tuning, code translation, and bug
repair, by enabling automated verification of functional correctness. Alongside
potential benefits like cost-effectiveness, broad coverage, and controllable
diversity, we address challenges such as factual inaccuracies in generated
text, lack of stylistic realism, and the risk of bias amplification. Proposed
mitigations include filtering and weighting outputs and reinforcement learning
with execution feedback for code. We conclude with open research directions
like automated prompt engineering, cross-modal data synthesis, and robust
evaluation frameworks, highlighting the importance of LLM-generated synthetic
data in advancing AI while emphasizing ethical and quality safeguards.
|
2503.14024 | Wanfu Gao | Pingting Hao, Kunpeng Liu, Wanfu Gao | Uncertainty-Aware Global-View Reconstruction for Multi-View Multi-Label
Feature Selection | 9 pages,5 figures, accept in AAAI 25 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, multi-view multi-label learning (MVML) has gained popularity
due to its close resemblance to real-world scenarios. However, the challenge of
selecting informative features to ensure both performance and efficiency
remains a significant question in MVML. Existing methods often extract
information separately from the consistency part and the complementary part,
which may result in noise due to unclear segmentation. In this paper, we
propose a unified model constructed from the perspective of global-view
reconstruction. Additionally, while feature selection methods can discern the
importance of features, they typically overlook the uncertainty of samples,
which is prevalent in realistic scenarios. To address this, we incorporate the
perception of sample uncertainty during the reconstruction process to enhance
trustworthiness. Thus, the global-view is reconstructed through the graph
structure between samples, sample confidence, and the view relationship. The
accurate mapping is established between the reconstructed view and the label
matrix. Experimental results demonstrate the superior performance of our method
on multi-view datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:35:39 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hao",
"Pingting",
""
],
[
"Liu",
"Kunpeng",
""
],
[
"Gao",
"Wanfu",
""
]
] | TITLE: Uncertainty-Aware Global-View Reconstruction for Multi-View Multi-Label
Feature Selection
ABSTRACT: In recent years, multi-view multi-label learning (MVML) has gained popularity
due to its close resemblance to real-world scenarios. However, the challenge of
selecting informative features to ensure both performance and efficiency
remains a significant question in MVML. Existing methods often extract
information separately from the consistency part and the complementary part,
which may result in noise due to unclear segmentation. In this paper, we
propose a unified model constructed from the perspective of global-view
reconstruction. Additionally, while feature selection methods can discern the
importance of features, they typically overlook the uncertainty of samples,
which is prevalent in realistic scenarios. To address this, we incorporate the
perception of sample uncertainty during the reconstruction process to enhance
trustworthiness. Thus, the global-view is reconstructed through the graph
structure between samples, sample confidence, and the view relationship. The
accurate mapping is established between the reconstructed view and the label
matrix. Experimental results demonstrate the superior performance of our method
on multi-view datasets.
|
2503.14029 | Runsong Zhu | Runsong Zhu, Shi Qiu, Zhengzhe Liu, Ka-Hei Hui, Qianyi Wu, Pheng-Ann
Heng, Chi-Wing Fu | Rethinking End-to-End 2D to 3D Scene Segmentation in Gaussian Splatting | CVPR 2025. The code is publicly available at this https URL
(https://github.com/Runsong123/Unified-Lift) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lifting multi-view 2D instance segmentation to a radiance field has proven to
be effective in enhancing 3D understanding. Existing methods rely on direct
matching for end-to-end lifting, yielding inferior results; or employ a
two-stage solution constrained by complex pre- or post-processing. In this
work, we design a new end-to-end object-aware lifting approach, named
Unified-Lift that provides accurate 3D segmentation based on the 3D Gaussian
representation. To start, we augment each Gaussian point with an additional
Gaussian-level feature learned using a contrastive loss to encode instance
information. Importantly, we introduce a learnable object-level codebook to
account for individual objects in the scene for an explicit object-level
understanding and associate the encoded object-level features with the
Gaussian-level point features for segmentation predictions. While promising,
achieving effective codebook learning is non-trivial and a naive solution leads
to degraded performance. Therefore, we formulate the association learning
module and the noisy label filtering module for effective and robust codebook
learning. We conduct experiments on three benchmarks: LERF-Masked, Replica, and
Messy Rooms datasets. Both qualitative and quantitative results manifest that
our Unified-Lift clearly outperforms existing methods in terms of segmentation
quality and time efficiency. The code is publicly available at
\href{https://github.com/Runsong123/Unified-Lift}{https://github.com/Runsong123/Unified-Lift}.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:42:23 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Zhu",
"Runsong",
""
],
[
"Qiu",
"Shi",
""
],
[
"Liu",
"Zhengzhe",
""
],
[
"Hui",
"Ka-Hei",
""
],
[
"Wu",
"Qianyi",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Fu",
"Chi-Wing",
""
]
] | TITLE: Rethinking End-to-End 2D to 3D Scene Segmentation in Gaussian Splatting
ABSTRACT: Lifting multi-view 2D instance segmentation to a radiance field has proven to
be effective in enhancing 3D understanding. Existing methods rely on direct
matching for end-to-end lifting, yielding inferior results; or employ a
two-stage solution constrained by complex pre- or post-processing. In this
work, we design a new end-to-end object-aware lifting approach, named
Unified-Lift that provides accurate 3D segmentation based on the 3D Gaussian
representation. To start, we augment each Gaussian point with an additional
Gaussian-level feature learned using a contrastive loss to encode instance
information. Importantly, we introduce a learnable object-level codebook to
account for individual objects in the scene for an explicit object-level
understanding and associate the encoded object-level features with the
Gaussian-level point features for segmentation predictions. While promising,
achieving effective codebook learning is non-trivial and a naive solution leads
to degraded performance. Therefore, we formulate the association learning
module and the noisy label filtering module for effective and robust codebook
learning. We conduct experiments on three benchmarks: LERF-Masked, Replica, and
Messy Rooms datasets. Both qualitative and quantitative results manifest that
our Unified-Lift clearly outperforms existing methods in terms of segmentation
quality and time efficiency. The code is publicly available at
\href{https://github.com/Runsong123/Unified-Lift}{https://github.com/Runsong123/Unified-Lift}.
|
2503.14036 | Ina Kodrasi | Mingchi Hou and Ina Kodrasi | Variational Autoencoder for Personalized Pathological Speech Enhancement | Submitted to EUSIPCO 2025 | null | null | null | eess.AS cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalizability of speech enhancement (SE) models across speaker
conditions remains largely unexplored, despite its critical importance for
broader applicability. This paper investigates the performance of the hybrid
variational autoencoder (VAE)-non-negative matrix factorization (NMF) model for
SE, focusing primarily on its generalizability to pathological speakers with
Parkinson's disease. We show that VAE models trained on large neurotypical
datasets perform poorly on pathological speech. While fine-tuning these
pre-trained models with pathological speech improves performance, a performance
gap remains between neurotypical and pathological speakers. To address this
gap, we propose using personalized SE models derived from fine-tuning
pre-trained models with only a few seconds of clean data from each speaker. Our
results demonstrate that personalized models considerably enhance performance
for all speakers, achieving comparable results for both neurotypical and
pathological speakers.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 08:54:00 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Hou",
"Mingchi",
""
],
[
"Kodrasi",
"Ina",
""
]
] | TITLE: Variational Autoencoder for Personalized Pathological Speech Enhancement
ABSTRACT: The generalizability of speech enhancement (SE) models across speaker
conditions remains largely unexplored, despite its critical importance for
broader applicability. This paper investigates the performance of the hybrid
variational autoencoder (VAE)-non-negative matrix factorization (NMF) model for
SE, focusing primarily on its generalizability to pathological speakers with
Parkinson's disease. We show that VAE models trained on large neurotypical
datasets perform poorly on pathological speech. While fine-tuning these
pre-trained models with pathological speech improves performance, a performance
gap remains between neurotypical and pathological speakers. To address this
gap, we propose using personalized SE models derived from fine-tuning
pre-trained models with only a few seconds of clean data from each speaker. Our
results demonstrate that personalized models considerably enhance performance
for all speakers, achieving comparable results for both neurotypical and
pathological speakers.
|
2503.14040 | Songen Gu | Binjie Liu, Lina Liu, Sanyi Zhang, Songen Gu, Yihao Zhi, Tianyi Zhu,
Lei Yang, Long Ye | MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation
without Vector Quantization | null | null | null | null | cs.GR cs.CV cs.SD | http://creativecommons.org/licenses/by/4.0/ | This work focuses on full-body co-speech gesture generation. Existing methods
typically employ an autoregressive model accompanied by vector-quantized tokens
for gesture generation, which results in information loss and compromises the
realism of the generated gestures. To address this, inspired by the natural
continuity of real-world human motion, we propose MAG, a novel multi-modal
aligned framework for high-quality and diverse co-speech gesture synthesis
without relying on discrete tokenization. Specifically, (1) we introduce a
motion-text-audio-aligned variational autoencoder (MTA-VAE), which leverages
pre-trained WavCaps' text and audio embeddings to enhance both semantic and
rhythmic alignment with motion, ultimately producing more realistic gestures.
(2) Building on this, we propose a multimodal masked autoregressive model
(MMAG) that enables autoregressive modeling in continuous motion embeddings
through diffusion without vector quantization. To further ensure multi-modal
consistency, MMAG incorporates a hybrid granularity audio-text fusion block,
which serves as conditioning for the diffusion process. Extensive experiments on
two benchmark datasets demonstrate that MAG achieves state-of-the-art
performance both quantitatively and qualitatively, producing highly realistic
and diverse co-speech gestures. The code will be released to facilitate future
research.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:02:02 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liu",
"Binjie",
""
],
[
"Liu",
"Lina",
""
],
[
"Zhang",
"Sanyi",
""
],
[
"Gu",
"Songen",
""
],
[
"Zhi",
"Yihao",
""
],
[
"Zhu",
"Tianyi",
""
],
[
"Yang",
"Lei",
""
],
[
"Ye",
"Long",
""
]
] | TITLE: MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation
without Vector Quantization
ABSTRACT: This work focuses on full-body co-speech gesture generation. Existing methods
typically employ an autoregressive model accompanied by vector-quantized tokens
for gesture generation, which results in information loss and compromises the
realism of the generated gestures. To address this, inspired by the natural
continuity of real-world human motion, we propose MAG, a novel multi-modal
aligned framework for high-quality and diverse co-speech gesture synthesis
without relying on discrete tokenization. Specifically, (1) we introduce a
motion-text-audio-aligned variational autoencoder (MTA-VAE), which leverages
pre-trained WavCaps' text and audio embeddings to enhance both semantic and
rhythmic alignment with motion, ultimately producing more realistic gestures.
(2) Building on this, we propose a multimodal masked autoregressive model
(MMAG) that enables autoregressive modeling in continuous motion embeddings
through diffusion without vector quantization. To further ensure multi-modal
consistency, MMAG incorporates a hybrid granularity audio-text fusion block,
which serves as conditioning for the diffusion process. Extensive experiments on
two benchmark datasets demonstrate that MAG achieves state-of-the-art
performance both quantitatively and qualitatively, producing highly realistic
and diverse co-speech gestures. The code will be released to facilitate future
research.
|
2503.14043 | Guy Bar-Shalom | Guy Bar-Shalom, Fabrizio Frasca, Derek Lim, Yoav Gelberg, Yftah Ziser,
Ran El-Yaniv, Gal Chechik, Haggai Maron | Learning on LLM Output Signatures for gray-box LLM Behavior Analysis | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have achieved widespread adoption, yet our
understanding of their behavior remains limited, particularly in detecting data
contamination and hallucinations. While recently proposed probing techniques
provide insights through activation analysis, they require "white-box" access
to model internals, often unavailable. Current "gray-box" approaches typically
analyze only the probability of the actual tokens in the sequence with simple
task-specific heuristics. Importantly, these methods overlook the rich
information contained in the full token distribution at each processing step.
To address these limitations, we propose that gray-box analysis should leverage
the complete observable output of LLMs, consisting of both the previously used
token probabilities as well as the complete token distribution sequences - a
unified data type we term LOS (LLM Output Signature). To this end, we develop a
transformer-based approach to process LOS that theoretically guarantees
approximation of existing techniques while enabling more nuanced analysis. Our
approach achieves superior performance on hallucination and data contamination
detection in gray-box settings, significantly outperforming existing baselines.
Furthermore, it demonstrates strong transfer capabilities across datasets and
LLMs, suggesting that LOS captures fundamental patterns in LLM behavior. Our
code is available at: https://github.com/BarSGuy/LLM-Output-Signatures-Network.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:04:37 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Bar-Shalom",
"Guy",
""
],
[
"Frasca",
"Fabrizio",
""
],
[
"Lim",
"Derek",
""
],
[
"Gelberg",
"Yoav",
""
],
[
"Ziser",
"Yftah",
""
],
[
"El-Yaniv",
"Ran",
""
],
[
"Chechik",
"Gal",
""
],
[
"Maron",
"Haggai",
""
]
] | TITLE: Learning on LLM Output Signatures for gray-box LLM Behavior Analysis
ABSTRACT: Large Language Models (LLMs) have achieved widespread adoption, yet our
understanding of their behavior remains limited, particularly in detecting data
contamination and hallucinations. While recently proposed probing techniques
provide insights through activation analysis, they require "white-box" access
to model internals, often unavailable. Current "gray-box" approaches typically
analyze only the probability of the actual tokens in the sequence with simple
task-specific heuristics. Importantly, these methods overlook the rich
information contained in the full token distribution at each processing step.
To address these limitations, we propose that gray-box analysis should leverage
the complete observable output of LLMs, consisting of both the previously used
token probabilities as well as the complete token distribution sequences - a
unified data type we term LOS (LLM Output Signature). To this end, we develop a
transformer-based approach to process LOS that theoretically guarantees
approximation of existing techniques while enabling more nuanced analysis. Our
approach achieves superior performance on hallucination and data contamination
detection in gray-box settings, significantly outperforming existing baselines.
Furthermore, it demonstrates strong transfer capabilities across datasets and
LLMs, suggesting that LOS captures fundamental patterns in LLM behavior. Our
code is available at: https://github.com/BarSGuy/LLM-Output-Signatures-Network.
|
2503.14053 | Jake Rap | Jake Rap, Amritam Das | ON-Traffic: An Operator Learning Framework for Online Traffic Flow
Estimation and Uncertainty Quantification from Lagrangian Sensors | null | null | null | null | cs.LG cs.AI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate traffic flow estimation and prediction are critical for the
efficient management of transportation systems, particularly under increasing
urbanization. Traditional methods relying on static sensors often suffer from
limited spatial coverage, while probe vehicles provide richer, albeit sparse
and irregular data. This work introduces ON-Traffic, a novel deep operator
network and a receding horizon learning-based framework tailored for online
estimation of spatio-temporal traffic state along with quantified uncertainty
by using measurements from moving probe vehicles and downstream boundary
inputs. Our framework is evaluated in both numerical and simulation datasets,
showcasing its ability to handle irregular, sparse input data, adapt to
time-shifted scenarios, and provide well-calibrated uncertainty estimates. The
results demonstrate that the model captures complex traffic phenomena,
including shockwaves and congestion propagation, while maintaining robustness
to noise and sensor dropout. These advancements present a significant step
toward online, adaptive traffic management systems.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:13:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Rap",
"Jake",
""
],
[
"Das",
"Amritam",
""
]
] | TITLE: ON-Traffic: An Operator Learning Framework for Online Traffic Flow
Estimation and Uncertainty Quantification from Lagrangian Sensors
ABSTRACT: Accurate traffic flow estimation and prediction are critical for the
efficient management of transportation systems, particularly under increasing
urbanization. Traditional methods relying on static sensors often suffer from
limited spatial coverage, while probe vehicles provide richer, albeit sparse
and irregular data. This work introduces ON-Traffic, a novel deep operator
network and a receding horizon learning-based framework tailored for online
estimation of spatio-temporal traffic state along with quantified uncertainty
by using measurements from moving probe vehicles and downstream boundary
inputs. Our framework is evaluated in both numerical and simulation datasets,
showcasing its ability to handle irregular, sparse input data, adapt to
time-shifted scenarios, and provide well-calibrated uncertainty estimates. The
results demonstrate that the model captures complex traffic phenomena,
including shockwaves and congestion propagation, while maintaining robustness
to noise and sensor dropout. These advancements present a significant step
toward online, adaptive traffic management systems.
|
2503.14057 | Arnaud Legout | Mohamed El Khatib (DIANA), Arnaud Legout (DIANA) | Bitcoin Burn Addresses: Unveiling the Permanent Losses and Their
Underlying Causes | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bitcoin burn addresses are addresses where bitcoins can be sent but never
retrieved, resulting in the permanent loss of those coins. Given Bitcoin's
fixed supply of 21 million coins, understanding the usage and the amount of
bitcoins lost in burn addresses is crucial for evaluating their economic
impact. However, identifying burn addresses is challenging due to the lack of
standardized format or convention. In this paper, we propose a novel
methodology for the automatic detection of burn addresses using a multi-layer
perceptron model trained on a manually classified dataset of 196,088 regular
addresses and 2,082 burn addresses. Our model identified 7,905 true burn
addresses from a pool of 1,283,997,050 addresses with only 1,767 false
positives. We determined that 3,197.61 bitcoins have been permanently lost,
representing only 0.016% of the total supply, yet worth 295 million USD as of November
2024. More than 99% of the lost bitcoins are concentrated in just three
addresses. This skewness highlights diverse uses of burn addresses, including
token creation via proof-of-burn, storage of plain text messages, or storage of
images using the OLGA Stamps protocol.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:21:15 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Khatib",
"Mohamed El",
"",
"DIANA"
],
[
"Legout",
"Arnaud",
"",
"DIANA"
]
] | TITLE: Bitcoin Burn Addresses: Unveiling the Permanent Losses and Their
Underlying Causes
ABSTRACT: Bitcoin burn addresses are addresses where bitcoins can be sent but never
retrieved, resulting in the permanent loss of those coins. Given Bitcoin's
fixed supply of 21 million coins, understanding the usage and the amount of
bitcoins lost in burn addresses is crucial for evaluating their economic
impact. However, identifying burn addresses is challenging due to the lack of
standardized format or convention. In this paper, we propose a novel
methodology for the automatic detection of burn addresses using a multi-layer
perceptron model trained on a manually classified dataset of 196,088 regular
addresses and 2,082 burn addresses. Our model identified 7,905 true burn
addresses from a pool of 1,283,997,050 addresses with only 1,767 false
positives. We determined that 3,197.61 bitcoins have been permanently lost,
representing only 0.016% of the total supply, yet worth 295 million USD as of November
2024. More than 99% of the lost bitcoins are concentrated in just three
addresses. This skewness highlights diverse uses of burn addresses, including
token creation via proof-of-burn, storage of plain text messages, or storage of
images using the OLGA Stamps protocol.
|
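The following is a minimal, hypothetical sketch of the kind of multi-layer
perceptron classifier described in the abstract of 2503.14057 above. The
hand-crafted address features (repeated-character ratio, character entropy,
length) and the tiny labeled set are assumptions for illustration, not the
paper's actual feature engineering or training data.

# Hedged sketch: MLP-based burn-address detection with illustrative features.
import math
from collections import Counter
from sklearn.neural_network import MLPClassifier

def address_features(addr: str):
    counts = Counter(addr)
    repeat_ratio = counts.most_common(1)[0][1] / len(addr)  # dominant-char share
    entropy = -sum((c / len(addr)) * math.log2(c / len(addr))
                   for c in counts.values())                # character entropy
    return [repeat_ratio, entropy, len(addr)]

# Illustrative labels: 1 = burn address, 0 = regular address.
train_addrs = ["1CounterpartyXXXXXXXXXXXXXXXUWLpVr",
               "1BitcoinEaterAddressDontSendf59kuE",
               "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa",
               "3J98t1WpEZ73CNmQviecrnyiWrnqRhWNLy"]
train_labels = [1, 1, 0, 0]

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit([address_features(a) for a in train_addrs], train_labels)
print(clf.predict([address_features("1BitcoinEaterAddressDontSendf59kuE")]))

Burn addresses tend to contain long runs of repeated or patterned characters,
which is why simple string statistics like these can already separate many of
them from regular addresses.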
2503.14062 | Hillol Biswas | Hillol Biswas | Data Encoding for VQC in Qiskit, A Comparison With Novel Hybrid Encoding | 13 pdf pages in current format | null | null | null | quant-ph cs.ET | http://creativecommons.org/licenses/by/4.0/ | If quantum machine learning emulates the ways of classical machine learning,
data encoding in a quantum neural network is imperative for many reasons. One
of the key reasons is the complexity attributed to the data size, which depends
upon the features and their types and is the essence of machine learning. While
various standard encoding techniques exist for quantum computing, hybrid
encoding is not among the common ones, though it tends to offer some distinct
advantages, viz. efficient qubit utilization and increased entanglement, which
fit well with the variational quantum classifier (VQC) algorithm by
manipulating the essential criteria of ZZFeatureMaps and RealAmplitudes.
Amplitude encoding can turn normalized features into quantum amplitudes, angle
encoding uses Ry gates to encode feature values as rotation angles, and phase
encoding uses Rz gates to encode extra feature information as phases, so it is
plausible to combine all three. By combining these three methods, this paper
demonstrates that efficient qubit usage is ensured: amplitude encoding reduces
the required qubits, angle encoding provides expressive state freedom, and
phase encoding adds a further layer of distinction. Finally, using classical
optimizers, the hybrid encoding technique is fitted through a VQC in training
and testing on a synthetic dataset, and the results are compared to the
standard VQC encoding in the Qiskit machine learning ecosystem.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 09:36:09 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Biswas",
"Hillol",
""
]
] | TITLE: Data Encoding for VQC in Qiskit, A Comparison With Novel Hybrid Encoding
ABSTRACT: If quantum machine learning emulates the ways of classical machine learning,
data encoding in a quantum neural network is imperative for many reasons. One
of the key reasons is the complexity attributed to the data size, which depends
upon the features and their types and is the essence of machine learning. While
various standard encoding techniques exist for quantum computing, hybrid
encoding is not among the common ones, though it tends to offer some distinct
advantages, viz. efficient qubit utilization and increased entanglement, which
fit well with the variational quantum classifier (VQC) algorithm by
manipulating the essential criteria of ZZFeatureMaps and RealAmplitudes.
Amplitude encoding can turn normalized features into quantum amplitudes, angle
encoding uses Ry gates to encode feature values as rotation angles, and phase
encoding uses Rz gates to encode extra feature information as phases, so it is
plausible to combine all three. By combining these three methods, this paper
demonstrates that efficient qubit usage is ensured: amplitude encoding reduces
the required qubits, angle encoding provides expressive state freedom, and
phase encoding adds a further layer of distinction. Finally, using classical
optimizers, the hybrid encoding technique is fitted through a VQC in training
and testing on a synthetic dataset, and the results are compared to the
standard VQC encoding in the Qiskit machine learning ecosystem.
|
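As a hedged illustration of the hybrid encoding idea summarized in the
abstract of 2503.14062 above, the sketch below builds a Qiskit feature map that
writes each feature into both an Ry rotation angle and an Rz phase and composes
it with a RealAmplitudes ansatz; amplitude encoding and the full VQC training
loop are omitted, and the circuit structure is an assumption for illustration
rather than the paper's exact construction.

# Hedged sketch: angle-plus-phase feature map composed with RealAmplitudes,
# next to the standard ZZFeatureMap baseline mentioned in the abstract.
from qiskit.circuit import QuantumCircuit, ParameterVector
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes

def hybrid_feature_map(num_features: int) -> QuantumCircuit:
    x = ParameterVector("x", num_features)
    qc = QuantumCircuit(num_features)
    for i in range(num_features):
        qc.h(i)           # start each qubit in superposition
        qc.ry(x[i], i)    # angle encoding of feature i
        qc.rz(x[i], i)    # phase encoding of the same feature
    for i in range(num_features - 1):
        qc.cx(i, i + 1)   # light entanglement between neighbouring qubits
    return qc

num_features = 4
hybrid_circuit = hybrid_feature_map(num_features).compose(
    RealAmplitudes(num_features, reps=2))
baseline_circuit = ZZFeatureMap(num_features).compose(
    RealAmplitudes(num_features, reps=2))
print(hybrid_circuit.num_parameters, baseline_circuit.num_parameters)

Either circuit could be handed to a VQC-style classifier with a classical
optimizer; the hybrid map reuses each feature parameter twice per qubit, which
is one way to read the abstract's claim of more expressive encoding per qubit.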