Search is not available for this dataset
id
string | submitter
string | authors
string | title
string | comments
string | journal-ref
string | doi
string | report-no
string | categories
string | license
string | abstract
string | versions
list | update_date
timestamp[s] | authors_parsed
sequence | prompt
string |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.16742 | Alexander Fix | Esther Y. H. Lin, Yimin Ding, Jogendra Kundu, Yatong An, Mohamed T.
El-Haddad, Alexander Fix | Digitally Prototype Your Eye Tracker: Simulating Hardware Performance
using 3D Synthetic Data | 14 pages, 12 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eye tracking (ET) is a key enabler for Augmented and Virtual Reality (AR/VR).
Prototyping new ET hardware requires assessing the impact of hardware choices
on eye tracking performance. This task is compounded by the high cost of
obtaining data from sufficiently many variations of real hardware, especially
for machine learning, which requires large training datasets. We propose a
method for end-to-end evaluation of how hardware changes impact machine
learning-based ET performance using only synthetic data. We utilize a dataset
of real 3D eyes, reconstructed from light dome data using neural radiance
fields (NeRF), to synthesize captured eyes from novel viewpoints and camera
parameters. Using this framework, we demonstrate that we can predict the
relative performance across various hardware configurations, accounting for
variations in sensor noise, illumination brightness, and optical blur. We also
compare our simulator with the publicly available eye tracking dataset from the
Project Aria glasses, demonstrating a strong correlation with real-world
performance. Finally, we present a first-of-its-kind analysis in which we vary
ET camera positions, evaluating ET performance ranging from on-axis direct
views of the eye to peripheral views on the frame. Such an analysis would have
previously required manufacturing physical devices to capture evaluation data.
In short, our method enables faster prototyping of ET hardware.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 23:09:15 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lin",
"Esther Y. H.",
""
],
[
"Ding",
"Yimin",
""
],
[
"Kundu",
"Jogendra",
""
],
[
"An",
"Yatong",
""
],
[
"El-Haddad",
"Mohamed T.",
""
],
[
"Fix",
"Alexander",
""
]
] | TITLE: Digitally Prototype Your Eye Tracker: Simulating Hardware Performance
using 3D Synthetic Data
ABSTRACT: Eye tracking (ET) is a key enabler for Augmented and Virtual Reality (AR/VR).
Prototyping new ET hardware requires assessing the impact of hardware choices
on eye tracking performance. This task is compounded by the high cost of
obtaining data from sufficiently many variations of real hardware, especially
for machine learning, which requires large training datasets. We propose a
method for end-to-end evaluation of how hardware changes impact machine
learning-based ET performance using only synthetic data. We utilize a dataset
of real 3D eyes, reconstructed from light dome data using neural radiance
fields (NeRF), to synthesize captured eyes from novel viewpoints and camera
parameters. Using this framework, we demonstrate that we can predict the
relative performance across various hardware configurations, accounting for
variations in sensor noise, illumination brightness, and optical blur. We also
compare our simulator with the publicly available eye tracking dataset from the
Project Aria glasses, demonstrating a strong correlation with real-world
performance. Finally, we present a first-of-its-kind analysis in which we vary
ET camera positions, evaluating ET performance ranging from on-axis direct
views of the eye to peripheral views on the frame. Such an analysis would have
previously required manufacturing physical devices to capture evaluation data.
In short, our method enables faster prototyping of ET hardware.
|
2503.16745 | Shiva Upadhye | Shiva Upadhye, Jiaxuan Li, and Richard Futrell | SPACER: A Parallel Dataset of Speech Production And Comprehension of
Error Repairs | 11 pages, 11 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Speech errors are a natural part of communication, yet they rarely lead to
complete communicative failure because both speakers and comprehenders can
detect and correct errors. Although prior research has examined error
monitoring and correction in production and comprehension separately,
integrated investigation of both systems has been impeded by the scarcity of
parallel data. In this study, we present SPACER, a parallel dataset that
captures how naturalistic speech errors are corrected by both speakers and
comprehenders. We focus on single-word substitution errors extracted from the
Switchboard corpus, accompanied by speaker's self-repairs and comprehenders'
responses from an offline text-editing experiment. Our exploratory analysis
suggests asymmetries in error correction strategies: speakers are more likely
to repair errors that introduce greater semantic and phonemic deviations,
whereas comprehenders tend to correct errors that are phonemically similar to
more plausible alternatives or do not fit into prior contexts. Our dataset
enables future research on integrated approaches toward studying language
production and comprehension.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 23:12:00 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Upadhye",
"Shiva",
""
],
[
"Li",
"Jiaxuan",
""
],
[
"Futrell",
"Richard",
""
]
] | TITLE: SPACER: A Parallel Dataset of Speech Production And Comprehension of
Error Repairs
ABSTRACT: Speech errors are a natural part of communication, yet they rarely lead to
complete communicative failure because both speakers and comprehenders can
detect and correct errors. Although prior research has examined error
monitoring and correction in production and comprehension separately,
integrated investigation of both systems has been impeded by the scarcity of
parallel data. In this study, we present SPACER, a parallel dataset that
captures how naturalistic speech errors are corrected by both speakers and
comprehenders. We focus on single-word substitution errors extracted from the
Switchboard corpus, accompanied by speaker's self-repairs and comprehenders'
responses from an offline text-editing experiment. Our exploratory analysis
suggests asymmetries in error correction strategies: speakers are more likely
to repair errors that introduce greater semantic and phonemic deviations,
whereas comprehenders tend to correct errors that are phonemically similar to
more plausible alternatives or do not fit into prior contexts. Our dataset
enables future research on integrated approaches toward studying language
production and comprehension.
|
2503.16759 | Yancheng Cai | Yancheng Cai, Ali Bozorgian, Maliha Ashraf, Robert Wanat, and Rafa{\l}
K. Mantiuk | elaTCSF: A Temporal Contrast Sensitivity Function for Flicker Detection
and Modeling Variable Refresh Rate Flicker | Published at SIGGRAPH Asia 2024 | null | 10.1145/3680528.3687586 | null | cs.GR cs.CV | http://creativecommons.org/licenses/by/4.0/ | The perception of flicker has been a prominent concern in illumination and
electronic display fields for over a century. Traditional approaches often rely
on Critical Flicker Frequency (CFF), primarily suited for high-contrast
(full-on, full-off) flicker. To tackle varying contrast flicker, the
International Committee for Display Metrology (ICDM) introduced a Temporal
Contrast Sensitivity Function TCSF$_{IDMS}$ within the Information Display
Measurements Standard (IDMS). Nevertheless, this standard overlooks crucial
parameters: luminance, eccentricity, and area. Existing models incorporating
these parameters are inadequate for flicker detection, especially at low
spatial frequencies. To address these limitations, we extend the TCSF$_{IDMS}$
and combine it with a new spatial probability summation model to incorporate
the effects of luminance, eccentricity, and area (elaTCSF). We train the
elaTCSF on various flicker detection datasets and establish the first variable
refresh rate flicker detection dataset for further verification. Additionally,
we contribute to resolving a longstanding debate on whether the flicker is more
visible in peripheral vision. We demonstrate how elaTCSF can be used to predict
flicker due to low-persistence in VR headsets, identify flicker-free VRR
operational ranges, and determine flicker sensitivity in lighting design.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 00:23:10 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Cai",
"Yancheng",
""
],
[
"Bozorgian",
"Ali",
""
],
[
"Ashraf",
"Maliha",
""
],
[
"Wanat",
"Robert",
""
],
[
"Mantiuk",
"Rafał K.",
""
]
] | TITLE: elaTCSF: A Temporal Contrast Sensitivity Function for Flicker Detection
and Modeling Variable Refresh Rate Flicker
ABSTRACT: The perception of flicker has been a prominent concern in illumination and
electronic display fields for over a century. Traditional approaches often rely
on Critical Flicker Frequency (CFF), primarily suited for high-contrast
(full-on, full-off) flicker. To tackle varying contrast flicker, the
International Committee for Display Metrology (ICDM) introduced a Temporal
Contrast Sensitivity Function TCSF$_{IDMS}$ within the Information Display
Measurements Standard (IDMS). Nevertheless, this standard overlooks crucial
parameters: luminance, eccentricity, and area. Existing models incorporating
these parameters are inadequate for flicker detection, especially at low
spatial frequencies. To address these limitations, we extend the TCSF$_{IDMS}$
and combine it with a new spatial probability summation model to incorporate
the effects of luminance, eccentricity, and area (elaTCSF). We train the
elaTCSF on various flicker detection datasets and establish the first variable
refresh rate flicker detection dataset for further verification. Additionally,
we contribute to resolving a longstanding debate on whether the flicker is more
visible in peripheral vision. We demonstrate how elaTCSF can be used to predict
flicker due to low-persistence in VR headsets, identify flicker-free VRR
operational ranges, and determine flicker sensitivity in lighting design.
|
2503.16779 | Mengsong Wu | Mengsong Wu, Tong Zhu, Han Han, Xiang Zhang, Wenbiao Shao, Wenliang
Chen | Chain-of-Tools: Utilizing Massive Unseen Tools in the CoT Reasoning of
Frozen Language Models | 11 pages, 10 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tool learning can further broaden the usage scenarios of large language
models (LLMs). However most of the existing methods either need to finetune
that the model can only use tools seen in the training data, or add tool
demonstrations into the prompt with lower efficiency. In this paper, we present
a new Tool Learning method Chain-of-Tools. It makes full use of the powerful
semantic representation capability of frozen LLMs to finish tool calling in CoT
reasoning with a huge and flexible tool pool which may contain unseen tools.
Especially, to validate the effectiveness of our approach in the massive unseen
tool scenario, we construct a new dataset SimpleToolQuestions. We conduct
experiments on two numerical reasoning benchmarks (GSM8K-XL and FuncQA) and two
knowledge-based question answering benchmarks (KAMEL and SimpleToolQuestions).
Experimental results show that our approach performs better than the baseline.
We also identify dimensions of the model output that are critical in tool
selection, enhancing the model interpretability. Our code and data are
available at: https://github.com/fairyshine/Chain-of-Tools .
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:26:12 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wu",
"Mengsong",
""
],
[
"Zhu",
"Tong",
""
],
[
"Han",
"Han",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Shao",
"Wenbiao",
""
],
[
"Chen",
"Wenliang",
""
]
] | TITLE: Chain-of-Tools: Utilizing Massive Unseen Tools in the CoT Reasoning of
Frozen Language Models
ABSTRACT: Tool learning can further broaden the usage scenarios of large language
models (LLMs). However most of the existing methods either need to finetune
that the model can only use tools seen in the training data, or add tool
demonstrations into the prompt with lower efficiency. In this paper, we present
a new Tool Learning method Chain-of-Tools. It makes full use of the powerful
semantic representation capability of frozen LLMs to finish tool calling in CoT
reasoning with a huge and flexible tool pool which may contain unseen tools.
Especially, to validate the effectiveness of our approach in the massive unseen
tool scenario, we construct a new dataset SimpleToolQuestions. We conduct
experiments on two numerical reasoning benchmarks (GSM8K-XL and FuncQA) and two
knowledge-based question answering benchmarks (KAMEL and SimpleToolQuestions).
Experimental results show that our approach performs better than the baseline.
We also identify dimensions of the model output that are critical in tool
selection, enhancing the model interpretability. Our code and data are
available at: https://github.com/fairyshine/Chain-of-Tools .
|
2503.16780 | Ui Hyun Cho | Uihyun Cho, Namhun Kim | A-IDE : Agent-Integrated Denoising Experts | 10 pages, 11 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in deep-learning based denoising methods have improved
Low-Dose CT image quality. However, due to distinct HU distributions and
diverse anatomical characteristics, a single model often struggles to
generalize across multiple anatomies. To address this limitation, we introduce
\textbf{Agent-Integrated Denoising Experts (A-IDE)} framework, which integrates
three anatomical region-specialized RED-CNN models under the management of
decision-making LLM agent. The agent analyzes semantic cues from BiomedCLIP to
dynamically route incoming LDCT scans to the most appropriate expert model. We
highlight three major advantages of our approach. A-IDE excels in
heterogeneous, data-scarce environments. The framework automatically prevents
overfitting by distributing tasks among multiple experts. Finally, our
LLM-driven agentic pipeline eliminates the need for manual interventions.
Experimental evaluations on the Mayo-2016 dataset confirm that A-IDE achieves
superior performance in RMSE, PSNR, and SSIM compared to a single unified
denoiser.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:26:54 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Cho",
"Uihyun",
""
],
[
"Kim",
"Namhun",
""
]
] | TITLE: A-IDE : Agent-Integrated Denoising Experts
ABSTRACT: Recent advances in deep-learning based denoising methods have improved
Low-Dose CT image quality. However, due to distinct HU distributions and
diverse anatomical characteristics, a single model often struggles to
generalize across multiple anatomies. To address this limitation, we introduce
\textbf{Agent-Integrated Denoising Experts (A-IDE)} framework, which integrates
three anatomical region-specialized RED-CNN models under the management of
decision-making LLM agent. The agent analyzes semantic cues from BiomedCLIP to
dynamically route incoming LDCT scans to the most appropriate expert model. We
highlight three major advantages of our approach. A-IDE excels in
heterogeneous, data-scarce environments. The framework automatically prevents
overfitting by distributing tasks among multiple experts. Finally, our
LLM-driven agentic pipeline eliminates the need for manual interventions.
Experimental evaluations on the Mayo-2016 dataset confirm that A-IDE achieves
superior performance in RMSE, PSNR, and SSIM compared to a single unified
denoiser.
|
2503.16782 | Xialei Liu | Enguang Wang, Zhimao Peng, Zhengyuan Xie, Haori Lu, Fei Yang, Xialei
Liu | Learning Part Knowledge to Facilitate Category Understanding for
Fine-Grained Generalized Category Discovery | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generalized Category Discovery (GCD) aims to classify unlabeled data
containing both seen and novel categories. Although existing methods perform
well on generic datasets, they struggle in fine-grained scenarios. We attribute
this difficulty to their reliance on contrastive learning over global image
features to automatically capture discriminative cues, which fails to capture
the subtle local differences essential for distinguishing fine-grained
categories. Therefore, in this paper, we propose incorporating part knowledge
to address fine-grained GCD, which introduces two key challenges: the absence
of annotations for novel classes complicates the extraction of the part
features, and global contrastive learning prioritizes holistic feature
invariance, inadvertently suppressing discriminative local part patterns. To
address these challenges, we propose PartGCD, including 1) Adaptive Part
Decomposition, which automatically extracts class-specific semantic parts via
Gaussian Mixture Models, and 2) Part Discrepancy Regularization, enforcing
explicit separation between part features to amplify fine-grained local part
distinctions.
Experiments demonstrate state-of-the-art performance across multiple
fine-grained benchmarks while maintaining competitiveness on generic datasets,
validating the effectiveness and robustness of our approach.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:37:51 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Enguang",
""
],
[
"Peng",
"Zhimao",
""
],
[
"Xie",
"Zhengyuan",
""
],
[
"Lu",
"Haori",
""
],
[
"Yang",
"Fei",
""
],
[
"Liu",
"Xialei",
""
]
] | TITLE: Learning Part Knowledge to Facilitate Category Understanding for
Fine-Grained Generalized Category Discovery
ABSTRACT: Generalized Category Discovery (GCD) aims to classify unlabeled data
containing both seen and novel categories. Although existing methods perform
well on generic datasets, they struggle in fine-grained scenarios. We attribute
this difficulty to their reliance on contrastive learning over global image
features to automatically capture discriminative cues, which fails to capture
the subtle local differences essential for distinguishing fine-grained
categories. Therefore, in this paper, we propose incorporating part knowledge
to address fine-grained GCD, which introduces two key challenges: the absence
of annotations for novel classes complicates the extraction of the part
features, and global contrastive learning prioritizes holistic feature
invariance, inadvertently suppressing discriminative local part patterns. To
address these challenges, we propose PartGCD, including 1) Adaptive Part
Decomposition, which automatically extracts class-specific semantic parts via
Gaussian Mixture Models, and 2) Part Discrepancy Regularization, enforcing
explicit separation between part features to amplify fine-grained local part
distinctions.
Experiments demonstrate state-of-the-art performance across multiple
fine-grained benchmarks while maintaining competitiveness on generic datasets,
validating the effectiveness and robustness of our approach.
|
2503.16784 | Kedar Hippalgaonkar | Shuya Yamazaki, Wei Nong, Ruiming Zhu, Kostya S. Novoselov, Andrey
Ustyuzhanin, Kedar Hippalgaonkar | Multi-property directed generative design of inorganic materials through
Wyckoff-augmented transfer learning | null | null | null | null | cond-mat.mtrl-sci physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Accelerated materials discovery is an urgent demand to drive advancements in
fields such as energy conversion, storage, and catalysis. Property-directed
generative design has emerged as a transformative approach for rapidly
discovering new functional inorganic materials with multiple desired properties
within vast and complex search spaces. However, this approach faces two primary
challenges: data scarcity for functional properties and the multi-objective
optimization required to balance competing tasks. Here, we present a
multi-property-directed generative framework designed to overcome these
limitations and enhance site symmetry-compliant crystal generation beyond P1
(translational) symmetry. By incorporating Wyckoff-position-based data
augmentation and transfer learning, our framework effectively handles sparse
and small functional datasets, enabling the generation of new stable materials
simultaneously conditioned on targeted space group, band gap, and formation
energy. Using this approach, we identified previously unknown thermodynamically
and lattice-dynamically stable semiconductors in tetragonal, trigonal, and
cubic systems, with bandgaps ranging from 0.13 to 2.20 eV, as validated by
density functional theory (DFT) calculations. Additionally, we assessed their
thermoelectric descriptors using DFT, indicating their potential suitability
for thermoelectric applications. We believe our integrated framework represents
a significant step forward in generative design of inorganic materials.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 01:41:25 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yamazaki",
"Shuya",
""
],
[
"Nong",
"Wei",
""
],
[
"Zhu",
"Ruiming",
""
],
[
"Novoselov",
"Kostya S.",
""
],
[
"Ustyuzhanin",
"Andrey",
""
],
[
"Hippalgaonkar",
"Kedar",
""
]
] | TITLE: Multi-property directed generative design of inorganic materials through
Wyckoff-augmented transfer learning
ABSTRACT: Accelerated materials discovery is an urgent demand to drive advancements in
fields such as energy conversion, storage, and catalysis. Property-directed
generative design has emerged as a transformative approach for rapidly
discovering new functional inorganic materials with multiple desired properties
within vast and complex search spaces. However, this approach faces two primary
challenges: data scarcity for functional properties and the multi-objective
optimization required to balance competing tasks. Here, we present a
multi-property-directed generative framework designed to overcome these
limitations and enhance site symmetry-compliant crystal generation beyond P1
(translational) symmetry. By incorporating Wyckoff-position-based data
augmentation and transfer learning, our framework effectively handles sparse
and small functional datasets, enabling the generation of new stable materials
simultaneously conditioned on targeted space group, band gap, and formation
energy. Using this approach, we identified previously unknown thermodynamically
and lattice-dynamically stable semiconductors in tetragonal, trigonal, and
cubic systems, with bandgaps ranging from 0.13 to 2.20 eV, as validated by
density functional theory (DFT) calculations. Additionally, we assessed their
thermoelectric descriptors using DFT, indicating their potential suitability
for thermoelectric applications. We believe our integrated framework represents
a significant step forward in generative design of inorganic materials.
|
2503.16789 | Rupak Sarkar | Rupak Sarkar, Bahareh Sarrafzadeh, Nirupama Chandrasekaran, Nagu
Rangan, Philip Resnik, Longqi Yang, Sujay Kumar Jauhar | Conversational User-AI Intervention: A Study on Prompt Rewriting for
Improved LLM Response Generation | 8 pages, ACL style | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Human-LLM conversations are increasingly becoming more pervasive in peoples'
professional and personal lives, yet many users still struggle to elicit
helpful responses from LLM Chatbots. One of the reasons for this issue is
users' lack of understanding in crafting effective prompts that accurately
convey their information needs. Meanwhile, the existence of real-world
conversational datasets on the one hand, and the text understanding faculties
of LLMs on the other, present a unique opportunity to study this problem, and
its potential solutions at scale. Thus, in this paper we present the first
LLM-centric study of real human-AI chatbot conversations, focused on
investigating aspects in which user queries fall short of expressing
information needs, and the potential of using LLMs to rewrite suboptimal user
prompts. Our findings demonstrate that rephrasing ineffective prompts can
elicit better responses from a conversational system, while preserving the
user's original intent. Notably, the performance of rewrites improves in longer
conversations, where contextual inferences about user needs can be made more
accurately. Additionally, we observe that LLMs often need to -- and inherently
do -- make \emph{plausible} assumptions about a user's intentions and goals
when interpreting prompts. Our findings largely hold true across conversational
domains, user intents, and LLMs of varying sizes and families, indicating the
promise of using prompt rewriting as a solution for better human-AI
interactions.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 02:01:02 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Sarkar",
"Rupak",
""
],
[
"Sarrafzadeh",
"Bahareh",
""
],
[
"Chandrasekaran",
"Nirupama",
""
],
[
"Rangan",
"Nagu",
""
],
[
"Resnik",
"Philip",
""
],
[
"Yang",
"Longqi",
""
],
[
"Jauhar",
"Sujay Kumar",
""
]
] | TITLE: Conversational User-AI Intervention: A Study on Prompt Rewriting for
Improved LLM Response Generation
ABSTRACT: Human-LLM conversations are increasingly becoming more pervasive in peoples'
professional and personal lives, yet many users still struggle to elicit
helpful responses from LLM Chatbots. One of the reasons for this issue is
users' lack of understanding in crafting effective prompts that accurately
convey their information needs. Meanwhile, the existence of real-world
conversational datasets on the one hand, and the text understanding faculties
of LLMs on the other, present a unique opportunity to study this problem, and
its potential solutions at scale. Thus, in this paper we present the first
LLM-centric study of real human-AI chatbot conversations, focused on
investigating aspects in which user queries fall short of expressing
information needs, and the potential of using LLMs to rewrite suboptimal user
prompts. Our findings demonstrate that rephrasing ineffective prompts can
elicit better responses from a conversational system, while preserving the
user's original intent. Notably, the performance of rewrites improves in longer
conversations, where contextual inferences about user needs can be made more
accurately. Additionally, we observe that LLMs often need to -- and inherently
do -- make \emph{plausible} assumptions about a user's intentions and goals
when interpreting prompts. Our findings largely hold true across conversational
domains, user intents, and LLMs of varying sizes and families, indicating the
promise of using prompt rewriting as a solution for better human-AI
interactions.
|
2503.16793 | Xialei Liu | Haori Lu, Xusheng Cao, Linlan Huang, Enguang Wang, Fei Yang, Xialei
Liu | Restoring Forgotten Knowledge in Non-Exemplar Class Incremental Learning
through Test-Time Semantic Evolution | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Continual learning aims to accumulate knowledge over a data stream while
mitigating catastrophic forgetting. In Non-exemplar Class Incremental Learning
(NECIL), forgetting arises during incremental optimization because old classes
are inaccessible, hindering the retention of prior knowledge. To solve this,
previous methods struggle in achieving the stability-plasticity balance in the
training stages. However, we note that the testing stage is rarely considered
among them, but is promising to be a solution to forgetting. Therefore, we
propose RoSE, which is a simple yet effective method that
\textbf{R}est\textbf{o}res forgotten knowledge through test-time
\textbf{S}emantic \textbf{E}volution. Specifically designed for minimizing
forgetting, RoSE is a test-time semantic drift compensation framework that
enables more accurate drift estimation in a self-supervised manner. Moreover,
to avoid incomplete optimization during online testing, we derive an analytical
solution as an alternative to gradient descent. We evaluate RoSE on CIFAR-100,
TinyImageNet, and ImageNet100 datasets, under both cold-start and warm-start
settings. Our method consistently outperforms most state-of-the-art (SOTA)
methods across various scenarios, validating the potential and feasibility of
test-time evolution in NECIL.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 02:02:35 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lu",
"Haori",
""
],
[
"Cao",
"Xusheng",
""
],
[
"Huang",
"Linlan",
""
],
[
"Wang",
"Enguang",
""
],
[
"Yang",
"Fei",
""
],
[
"Liu",
"Xialei",
""
]
] | TITLE: Restoring Forgotten Knowledge in Non-Exemplar Class Incremental Learning
through Test-Time Semantic Evolution
ABSTRACT: Continual learning aims to accumulate knowledge over a data stream while
mitigating catastrophic forgetting. In Non-exemplar Class Incremental Learning
(NECIL), forgetting arises during incremental optimization because old classes
are inaccessible, hindering the retention of prior knowledge. To solve this,
previous methods struggle in achieving the stability-plasticity balance in the
training stages. However, we note that the testing stage is rarely considered
among them, but is promising to be a solution to forgetting. Therefore, we
propose RoSE, which is a simple yet effective method that
\textbf{R}est\textbf{o}res forgotten knowledge through test-time
\textbf{S}emantic \textbf{E}volution. Specifically designed for minimizing
forgetting, RoSE is a test-time semantic drift compensation framework that
enables more accurate drift estimation in a self-supervised manner. Moreover,
to avoid incomplete optimization during online testing, we derive an analytical
solution as an alternative to gradient descent. We evaluate RoSE on CIFAR-100,
TinyImageNet, and ImageNet100 datasets, under both cold-start and warm-start
settings. Our method consistently outperforms most state-of-the-art (SOTA)
methods across various scenarios, validating the potential and feasibility of
test-time evolution in NECIL.
|
2503.16801 | Zichen Geng Mr | Zichen Geng, Zeeshan Hayder, Wei Liu, and Ajmal Saeed Mian | Auto-Regressive Diffusion for Generating 3D Human-Object Interactions | null | null | null | null | cs.GR cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-driven Human-Object Interaction (Text-to-HOI) generation is an emerging
field with applications in animation, video games, virtual reality, and
robotics. A key challenge in HOI generation is maintaining interaction
consistency in long sequences. Existing Text-to-Motion-based approaches, such
as discrete motion tokenization, cannot be directly applied to HOI generation
due to limited data in this domain and the complexity of the modality. To
address the problem of interaction consistency in long sequences, we propose an
autoregressive diffusion model (ARDHOI) that predicts the next continuous
token. Specifically, we introduce a Contrastive Variational Autoencoder (cVAE)
to learn a physically plausible space of continuous HOI tokens, thereby
ensuring that generated human-object motions are realistic and natural. For
generating sequences autoregressively, we develop a Mamba-based context encoder
to capture and maintain consistent sequential actions. Additionally, we
implement an MLP-based denoiser to generate the subsequent token conditioned on
the encoded context. Our model has been evaluated on the OMOMO and BEHAVE
datasets, where it outperforms existing state-of-the-art methods in terms of
both performance and inference speed. This makes ARDHOI a robust and efficient
solution for text-driven HOI tasks
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 02:25:59 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Geng",
"Zichen",
""
],
[
"Hayder",
"Zeeshan",
""
],
[
"Liu",
"Wei",
""
],
[
"Mian",
"Ajmal Saeed",
""
]
] | TITLE: Auto-Regressive Diffusion for Generating 3D Human-Object Interactions
ABSTRACT: Text-driven Human-Object Interaction (Text-to-HOI) generation is an emerging
field with applications in animation, video games, virtual reality, and
robotics. A key challenge in HOI generation is maintaining interaction
consistency in long sequences. Existing Text-to-Motion-based approaches, such
as discrete motion tokenization, cannot be directly applied to HOI generation
due to limited data in this domain and the complexity of the modality. To
address the problem of interaction consistency in long sequences, we propose an
autoregressive diffusion model (ARDHOI) that predicts the next continuous
token. Specifically, we introduce a Contrastive Variational Autoencoder (cVAE)
to learn a physically plausible space of continuous HOI tokens, thereby
ensuring that generated human-object motions are realistic and natural. For
generating sequences autoregressively, we develop a Mamba-based context encoder
to capture and maintain consistent sequential actions. Additionally, we
implement an MLP-based denoiser to generate the subsequent token conditioned on
the encoded context. Our model has been evaluated on the OMOMO and BEHAVE
datasets, where it outperforms existing state-of-the-art methods in terms of
both performance and inference speed. This makes ARDHOI a robust and efficient
solution for text-driven HOI tasks
|
2503.16811 | Qiming Xia | Maoji Zheng, Ziyu Xu, Qiming Xia, Hai Wu, Chenglu Wen, Cheng Wang | Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision | 8 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR-based 3D object detection and semantic segmentation are critical tasks
in 3D scene understanding. Traditional detection and segmentation methods
supervise their models through bounding box labels and semantic mask labels.
However, these two independent labels inherently contain significant
redundancy. This paper aims to eliminate the redundancy by supervising 3D
object detection using only semantic labels. However, the challenge arises due
to the incomplete geometry structure and boundary ambiguity of point-cloud
instances, leading to inaccurate pseudo labels and poor detection results. To
address these challenges, we propose a novel method, named Seg2Box. We first
introduce a Multi-Frame Multi-Scale Clustering (MFMS-C) module, which leverages
the spatio-temporal consistency of point clouds to generate accurate box-level
pseudo-labels. Additionally, the Semantic?Guiding Iterative-Mining
Self-Training (SGIM-ST) module is proposed to enhance the performance by
progressively refining the pseudo-labels and mining the instances without
generating pseudo-labels. Experiments on the Waymo Open Dataset and nuScenes
Dataset show that our method significantly outperforms other competitive
methods by 23.7\% and 10.3\% in mAP, respectively. The results demonstrate the
great label-efficient potential and advancement of our method.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 02:39:32 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zheng",
"Maoji",
""
],
[
"Xu",
"Ziyu",
""
],
[
"Xia",
"Qiming",
""
],
[
"Wu",
"Hai",
""
],
[
"Wen",
"Chenglu",
""
],
[
"Wang",
"Cheng",
""
]
] | TITLE: Seg2Box: 3D Object Detection by Point-Wise Semantics Supervision
ABSTRACT: LiDAR-based 3D object detection and semantic segmentation are critical tasks
in 3D scene understanding. Traditional detection and segmentation methods
supervise their models through bounding box labels and semantic mask labels.
However, these two independent labels inherently contain significant
redundancy. This paper aims to eliminate the redundancy by supervising 3D
object detection using only semantic labels. However, the challenge arises due
to the incomplete geometry structure and boundary ambiguity of point-cloud
instances, leading to inaccurate pseudo labels and poor detection results. To
address these challenges, we propose a novel method, named Seg2Box. We first
introduce a Multi-Frame Multi-Scale Clustering (MFMS-C) module, which leverages
the spatio-temporal consistency of point clouds to generate accurate box-level
pseudo-labels. Additionally, the Semantic?Guiding Iterative-Mining
Self-Training (SGIM-ST) module is proposed to enhance the performance by
progressively refining the pseudo-labels and mining the instances without
generating pseudo-labels. Experiments on the Waymo Open Dataset and nuScenes
Dataset show that our method significantly outperforms other competitive
methods by 23.7\% and 10.3\% in mAP, respectively. The results demonstrate the
great label-efficient potential and advancement of our method.
|
2503.16816 | Yi Niu | Yi Niu, Jiashuai Liu, Yingkang Zhan, Jiangbo Shi, Di Zhang, Ines
Machado, Mireia Crispin-Ortuzar, Chen Li, Zeyu Gao | ST-Prompt Guided Histological Hypergraph Learning for Spatial Gene
Expression Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial Transcriptomics (ST) reveals the spatial distribution of gene
expression in tissues, offering critical insights into biological processes and
disease mechanisms. However, predicting ST from H\&E-stained histology images
is challenging due to the heterogeneous relationship between histomorphology
and gene expression, which arises from substantial variability across different
patients and tissue sections. A more practical and valuable approach is to
utilize ST data from a few local regions to predict the spatial transcriptomic
landscape across the remaining regions in H&E slides. In response, we propose
PHG2ST, an ST-prompt guided histological hypergraph learning framework, which
leverages sparse ST signals as prompts to guide histological hypergraph
learning for global spatial gene expression prediction. Our framework fuses
histological hypergraph representations at multiple scales through a masked
ST-prompt encoding mechanism, improving robustness and generalizability.
Benchmark evaluations on two public ST datasets demonstrate that PHG2ST
outperforms the existing state-of-the-art methods and closely aligns with the
ground truth. These results underscore the potential of leveraging sparse local
ST data for scalable and cost-effective spatial gene expression mapping in
real-world biomedical applications.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 03:10:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Niu",
"Yi",
""
],
[
"Liu",
"Jiashuai",
""
],
[
"Zhan",
"Yingkang",
""
],
[
"Shi",
"Jiangbo",
""
],
[
"Zhang",
"Di",
""
],
[
"Machado",
"Ines",
""
],
[
"Crispin-Ortuzar",
"Mireia",
""
],
[
"Li",
"Chen",
""
],
[
"Gao",
"Zeyu",
""
]
] | TITLE: ST-Prompt Guided Histological Hypergraph Learning for Spatial Gene
Expression Prediction
ABSTRACT: Spatial Transcriptomics (ST) reveals the spatial distribution of gene
expression in tissues, offering critical insights into biological processes and
disease mechanisms. However, predicting ST from H\&E-stained histology images
is challenging due to the heterogeneous relationship between histomorphology
and gene expression, which arises from substantial variability across different
patients and tissue sections. A more practical and valuable approach is to
utilize ST data from a few local regions to predict the spatial transcriptomic
landscape across the remaining regions in H&E slides. In response, we propose
PHG2ST, an ST-prompt guided histological hypergraph learning framework, which
leverages sparse ST signals as prompts to guide histological hypergraph
learning for global spatial gene expression prediction. Our framework fuses
histological hypergraph representations at multiple scales through a masked
ST-prompt encoding mechanism, improving robustness and generalizability.
Benchmark evaluations on two public ST datasets demonstrate that PHG2ST
outperforms the existing state-of-the-art methods and closely aligns with the
ground truth. These results underscore the potential of leveraging sparse local
ST data for scalable and cost-effective spatial gene expression mapping in
real-world biomedical applications.
|
2503.16826 | Eunsu Kim | Jun Seong Kim, Kyaw Ye Thu, Javad Ismayilzada, Junyeong Park, Eunsu
Kim, Huzama Ahmad, Na Min An, James Thorne, Alice Oh | When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large
Language Models in Cultural Mixture Contexts | 12 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In a highly globalized world, it is important for multi-modal large language
models (MLLMs) to recognize and respond correctly to mixed-cultural inputs. For
example, a model should correctly identify kimchi (Korean food) in an image
both when an Asian woman is eating it, as well as an African man is eating it.
However, current MLLMs show an over-reliance on the visual features of the
person, leading to misclassification of the entities. To examine the robustness
of MLLMs to different ethnicity, we introduce MixCuBe, a cross-cultural bias
benchmark, and study elements from five countries and four ethnicities. Our
findings reveal that MLLMs achieve both higher accuracy and lower sensitivity
to such perturbation for high-resource cultures, but not for low-resource
cultures. GPT-4o, the best-performing model overall, shows up to 58% difference
in accuracy between the original and perturbed cultural settings in
low-resource cultures. Our dataset is publicly available at:
https://huggingface.co/datasets/kyawyethu/MixCuBe.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 03:50:05 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kim",
"Jun Seong",
""
],
[
"Thu",
"Kyaw Ye",
""
],
[
"Ismayilzada",
"Javad",
""
],
[
"Park",
"Junyeong",
""
],
[
"Kim",
"Eunsu",
""
],
[
"Ahmad",
"Huzama",
""
],
[
"An",
"Na Min",
""
],
[
"Thorne",
"James",
""
],
[
"Oh",
"Alice",
""
]
] | TITLE: When Tom Eats Kimchi: Evaluating Cultural Bias of Multimodal Large
Language Models in Cultural Mixture Contexts
ABSTRACT: In a highly globalized world, it is important for multi-modal large language
models (MLLMs) to recognize and respond correctly to mixed-cultural inputs. For
example, a model should correctly identify kimchi (Korean food) in an image
both when an Asian woman is eating it, as well as an African man is eating it.
However, current MLLMs show an over-reliance on the visual features of the
person, leading to misclassification of the entities. To examine the robustness
of MLLMs to different ethnicity, we introduce MixCuBe, a cross-cultural bias
benchmark, and study elements from five countries and four ethnicities. Our
findings reveal that MLLMs achieve both higher accuracy and lower sensitivity
to such perturbation for high-resource cultures, but not for low-resource
cultures. GPT-4o, the best-performing model overall, shows up to 58% difference
in accuracy between the original and perturbed cultural settings in
low-resource cultures. Our dataset is publicly available at:
https://huggingface.co/datasets/kyawyethu/MixCuBe.
|
2503.16832 | Quoc-Huy Tran | Ali Shah Ali, Syed Ahmed Mahmood, Mubin Saeed, Andrey Konin, M.
Zeeshan Zia, Quoc-Huy Tran | Joint Self-Supervised Video Alignment and Action Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel approach for simultaneous self-supervised video
alignment and action segmentation based on a unified optimal transport
framework. In particular, we first tackle self-supervised video alignment by
developing a fused Gromov-Wasserstein optimal transport formulation with a
structural prior, which trains efficiently on GPUs and needs only a few
iterations for solving the optimal transport problem. Our single-task method
achieves the state-of-the-art performance on multiple video alignment
benchmarks and outperforms VAVA, which relies on a traditional Kantorovich
optimal transport formulation with an optimality prior. Furthermore, we extend
our approach by proposing a unified optimal transport framework for joint
self-supervised video alignment and action segmentation, which requires
training and storing a single model and saves both time and memory consumption
as compared to two different single-task models. Extensive evaluations on
several video alignment and action segmentation datasets demonstrate that our
multi-task method achieves comparable video alignment yet superior action
segmentation results over previous methods in video alignment and action
segmentation respectively. Finally, to the best of our knowledge, this is the
first work to unify video alignment and action segmentation into a single
model.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 04:02:00 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ali",
"Ali Shah",
""
],
[
"Mahmood",
"Syed Ahmed",
""
],
[
"Saeed",
"Mubin",
""
],
[
"Konin",
"Andrey",
""
],
[
"Zia",
"M. Zeeshan",
""
],
[
"Tran",
"Quoc-Huy",
""
]
] | TITLE: Joint Self-Supervised Video Alignment and Action Segmentation
ABSTRACT: We introduce a novel approach for simultaneous self-supervised video
alignment and action segmentation based on a unified optimal transport
framework. In particular, we first tackle self-supervised video alignment by
developing a fused Gromov-Wasserstein optimal transport formulation with a
structural prior, which trains efficiently on GPUs and needs only a few
iterations for solving the optimal transport problem. Our single-task method
achieves the state-of-the-art performance on multiple video alignment
benchmarks and outperforms VAVA, which relies on a traditional Kantorovich
optimal transport formulation with an optimality prior. Furthermore, we extend
our approach by proposing a unified optimal transport framework for joint
self-supervised video alignment and action segmentation, which requires
training and storing a single model and saves both time and memory consumption
as compared to two different single-task models. Extensive evaluations on
several video alignment and action segmentation datasets demonstrate that our
multi-task method achieves comparable video alignment yet superior action
segmentation results over previous methods in video alignment and action
segmentation respectively. Finally, to the best of our knowledge, this is the
first work to unify video alignment and action segmentation into a single
model.
|
2503.16836 | Elham Dolatabadi | Wen Xu and Elham Dolatabadi | A Flexible Fairness Framework with Surrogate Loss Reweighting for
Addressing Sociodemographic Disparities | Under review | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a new algorithmic fairness framework called
$\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ Fair Machine Learning
($\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ FML), designed to optimize fairness
levels across sociodemographic attributes. Our framework employs a new family
of surrogate loss functions, paired with loss reweighting techniques, allowing
precise control over fairness-accuracy trade-offs through tunable
hyperparameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. To efficiently
solve the learning objective, we propose Parallel Stochastic Gradient Descent
with Surrogate Loss (P-SGD-S) and establish convergence guarantees for both
convex and nonconvex loss functions. Experimental results demonstrate that our
framework improves overall accuracy while reducing fairness violations,
offering a smooth trade-off between standard empirical risk minimization and
strict minimax fairness. Results across multiple datasets confirm its
adaptability, ensuring fairness improvements without excessive performance
degradation.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 04:10:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Xu",
"Wen",
""
],
[
"Dolatabadi",
"Elham",
""
]
] | TITLE: A Flexible Fairness Framework with Surrogate Loss Reweighting for
Addressing Sociodemographic Disparities
ABSTRACT: This paper presents a new algorithmic fairness framework called
$\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ Fair Machine Learning
($\boldsymbol{\alpha}$-$\boldsymbol{\beta}$ FML), designed to optimize fairness
levels across sociodemographic attributes. Our framework employs a new family
of surrogate loss functions, paired with loss reweighting techniques, allowing
precise control over fairness-accuracy trade-offs through tunable
hyperparameters $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$. To efficiently
solve the learning objective, we propose Parallel Stochastic Gradient Descent
with Surrogate Loss (P-SGD-S) and establish convergence guarantees for both
convex and nonconvex loss functions. Experimental results demonstrate that our
framework improves overall accuracy while reducing fairness violations,
offering a smooth trade-off between standard empirical risk minimization and
strict minimax fairness. Results across multiple datasets confirm its
adaptability, ensuring fairness improvements without excessive performance
degradation.
|
2503.16846 | Qingsong Wang | Qingsong Wang | An Accelerated Bregman Algorithm for ReLU-based Symmetric Matrix
Decomposition | 5 pages, 2 figures | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetric matrix decomposition is an active research area in machine
learning. This paper focuses on exploiting the low-rank structure of
non-negative and sparse symmetric matrices via the rectified linear unit (ReLU)
activation function. We propose the ReLU-based nonlinear symmetric matrix
decomposition (ReLU-NSMD) model, introduce an accelerated alternating partial
Bregman (AAPB) method for its solution, and present the algorithm's convergence
results. Our algorithm leverages the Bregman proximal gradient framework to
overcome the challenge of estimating the global $L$-smooth constant in the
classic proximal gradient algorithm. Numerical experiments on synthetic and
real datasets validate the effectiveness of our model and algorithm.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 04:32:53 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Qingsong",
""
]
] | TITLE: An Accelerated Bregman Algorithm for ReLU-based Symmetric Matrix
Decomposition
ABSTRACT: Symmetric matrix decomposition is an active research area in machine
learning. This paper focuses on exploiting the low-rank structure of
non-negative and sparse symmetric matrices via the rectified linear unit (ReLU)
activation function. We propose the ReLU-based nonlinear symmetric matrix
decomposition (ReLU-NSMD) model, introduce an accelerated alternating partial
Bregman (AAPB) method for its solution, and present the algorithm's convergence
results. Our algorithm leverages the Bregman proximal gradient framework to
overcome the challenge of estimating the global $L$-smooth constant in the
classic proximal gradient algorithm. Numerical experiments on synthetic and
real datasets validate the effectiveness of our model and algorithm.
|
2503.16855 | Koki Hirooka | Koki Hirooka, Abu Saleh Musa Miah, Tatsuya Murakami, Yuto Akiba, Yong
Seok Hwang, Jungpil Shin | Stack Transformer Based Spatial-Temporal Attention Model for Dynamic
Multi-Culture Sign Language Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Hand gesture-based Sign Language Recognition (SLR) serves as a crucial
communication bridge between deaf and non-deaf individuals. Existing SLR
systems perform well for their cultural SL but may struggle with multi-cultural
sign languages (McSL). To address these challenges, this paper proposes a Stack
Spatial-Temporal Transformer Network that leverages multi-head attention
mechanisms to capture both spatial and temporal dependencies with hierarchical
features using the Stack Transfer concept. In the proceed, firstly, we applied
a fully connected layer to make a embedding vector which has high expressive
power from the original dataset, then fed them a stack newly proposed
transformer to achieve hierarchical features with short-range and long-range
dependency. The network architecture is composed of several stages that process
spatial and temporal relationships sequentially, ensuring effective feature
extraction. After making the fully connected layer, the embedding vector is
processed by the Spatial Multi-Head Attention Transformer, which captures
spatial dependencies between joints. In the next stage, the Temporal Multi-Head
Attention Transformer captures long-range temporal dependencies, and again, the
features are concatenated with the output using another skip connection. The
processed features are then passed to the Feed-Forward Network (FFN), which
refines the feature representations further. After the FFN, additional skip
connections are applied to combine the output with earlier layers, followed by
a final normalization layer to produce the final output feature tensor. This
process is repeated for 10 transformer blocks. The extensive experiment shows
that the JSL, KSL and ASL datasets achieved good performance accuracy. Our
approach demonstrates improved performance in McSL, and it will be consider as
a novel work in this domain.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 04:57:18 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hirooka",
"Koki",
""
],
[
"Miah",
"Abu Saleh Musa",
""
],
[
"Murakami",
"Tatsuya",
""
],
[
"Akiba",
"Yuto",
""
],
[
"Hwang",
"Yong Seok",
""
],
[
"Shin",
"Jungpil",
""
]
] | TITLE: Stack Transformer Based Spatial-Temporal Attention Model for Dynamic
Multi-Culture Sign Language Recognition
ABSTRACT: Hand gesture-based Sign Language Recognition (SLR) serves as a crucial
communication bridge between deaf and non-deaf individuals. Existing SLR
systems perform well for their cultural SL but may struggle with multi-cultural
sign languages (McSL). To address these challenges, this paper proposes a Stack
Spatial-Temporal Transformer Network that leverages multi-head attention
mechanisms to capture both spatial and temporal dependencies with hierarchical
features using the Stack Transfer concept. In the proceed, firstly, we applied
a fully connected layer to make a embedding vector which has high expressive
power from the original dataset, then fed them a stack newly proposed
transformer to achieve hierarchical features with short-range and long-range
dependency. The network architecture is composed of several stages that process
spatial and temporal relationships sequentially, ensuring effective feature
extraction. After making the fully connected layer, the embedding vector is
processed by the Spatial Multi-Head Attention Transformer, which captures
spatial dependencies between joints. In the next stage, the Temporal Multi-Head
Attention Transformer captures long-range temporal dependencies, and again, the
features are concatenated with the output using another skip connection. The
processed features are then passed to the Feed-Forward Network (FFN), which
refines the feature representations further. After the FFN, additional skip
connections are applied to combine the output with earlier layers, followed by
a final normalization layer to produce the final output feature tensor. This
process is repeated for 10 transformer blocks. The extensive experiment shows
that the JSL, KSL and ASL datasets achieved good performance accuracy. Our
approach demonstrates improved performance in McSL, and it will be consider as
a novel work in this domain.
|
2503.16858 | Jialin Chen | Jialin Chen, Aosong Feng, Ziyu Zhao, Juan Garza, Gaukhar Nurbek, Cheng
Qin, Ali Maatouk, Leandros Tassiulas, Yifeng Gao, Rex Ying | MTBench: A Multimodal Time Series Benchmark for Temporal Reasoning and
Question Answering | 14 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding the relationship between textual news and time-series evolution
is a critical yet under-explored challenge in applied data science. While
multimodal learning has gained traction, existing multimodal time-series
datasets fall short in evaluating cross-modal reasoning and complex question
answering, which are essential for capturing complex interactions between
narrative information and temporal patterns. To bridge this gap, we introduce
Multimodal Time Series Benchmark (MTBench), a large-scale benchmark designed to
evaluate large language models (LLMs) on time series and text understanding
across financial and weather domains. MTbench comprises paired time series and
textual data, including financial news with corresponding stock price movements
and weather reports aligned with historical temperature records. Unlike
existing benchmarks that focus on isolated modalities, MTbench provides a
comprehensive testbed for models to jointly reason over structured numerical
trends and unstructured textual narratives. The richness of MTbench enables
formulation of diverse tasks that require a deep understanding of both text and
time-series data, including time-series forecasting, semantic and technical
trend analysis, and news-driven question answering (QA). These tasks target the
model's ability to capture temporal dependencies, extract key insights from
textual context, and integrate cross-modal information. We evaluate
state-of-the-art LLMs on MTbench, analyzing their effectiveness in modeling the
complex relationships between news narratives and temporal patterns. Our
findings reveal significant challenges in current models, including
difficulties in capturing long-term dependencies, interpreting causality in
financial and weather trends, and effectively fusing multimodal information.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 05:04:53 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Chen",
"Jialin",
""
],
[
"Feng",
"Aosong",
""
],
[
"Zhao",
"Ziyu",
""
],
[
"Garza",
"Juan",
""
],
[
"Nurbek",
"Gaukhar",
""
],
[
"Qin",
"Cheng",
""
],
[
"Maatouk",
"Ali",
""
],
[
"Tassiulas",
"Leandros",
""
],
[
"Gao",
"Yifeng",
""
],
[
"Ying",
"Rex",
""
]
] | TITLE: MTBench: A Multimodal Time Series Benchmark for Temporal Reasoning and
Question Answering
ABSTRACT: Understanding the relationship between textual news and time-series evolution
is a critical yet under-explored challenge in applied data science. While
multimodal learning has gained traction, existing multimodal time-series
datasets fall short in evaluating cross-modal reasoning and complex question
answering, which are essential for capturing complex interactions between
narrative information and temporal patterns. To bridge this gap, we introduce
Multimodal Time Series Benchmark (MTBench), a large-scale benchmark designed to
evaluate large language models (LLMs) on time series and text understanding
across financial and weather domains. MTbench comprises paired time series and
textual data, including financial news with corresponding stock price movements
and weather reports aligned with historical temperature records. Unlike
existing benchmarks that focus on isolated modalities, MTbench provides a
comprehensive testbed for models to jointly reason over structured numerical
trends and unstructured textual narratives. The richness of MTbench enables
formulation of diverse tasks that require a deep understanding of both text and
time-series data, including time-series forecasting, semantic and technical
trend analysis, and news-driven question answering (QA). These tasks target the
model's ability to capture temporal dependencies, extract key insights from
textual context, and integrate cross-modal information. We evaluate
state-of-the-art LLMs on MTbench, analyzing their effectiveness in modeling the
complex relationships between news narratives and temporal patterns. Our
findings reveal significant challenges in current models, including
difficulties in capturing long-term dependencies, interpreting causality in
financial and weather trends, and effectively fusing multimodal information.
|
2503.16860 | Honoka Anada | Honoka Anada, Sefutsu Ryu, Masayuki Usui, Tatsuya Kaneko, Shinya
Takamaeda-Yamazaki | PRIOT: Pruning-Based Integer-Only Transfer Learning for Embedded Systems | Accepted for publication in IEEE Embedded Systems Letters | null | 10.1109/LES.2024.3485003 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On-device transfer learning is crucial for adapting a common backbone model
to the unique environment of each edge device. Tiny microcontrollers, such as
the Raspberry Pi Pico, are key targets for on-device learning but often lack
floating-point units, necessitating integer-only training. Dynamic computation
of quantization scale factors, which is adopted in former studies, incurs high
computational costs. Therefore, this study focuses on integer-only training
with static scale factors, which is challenging with existing training methods.
We propose a new training method named PRIOT, which optimizes the network by
pruning selected edges rather than updating weights, allowing effective
training with static scale factors. The pruning pattern is determined by the
edge-popup algorithm, which trains a parameter named score assigned to each
edge instead of the original parameters and prunes the edges with low scores
before inference. Additionally, we introduce a memory-efficient variant,
PRIOT-S, which only assigns scores to a small fraction of edges. We implement
PRIOT and PRIOT-S on the Raspberry Pi Pico and evaluate their accuracy and
computational costs using a tiny CNN model on the rotated MNIST dataset and the
VGG11 model on the rotated CIFAR-10 dataset. Our results demonstrate that PRIOT
improves accuracy by 8.08 to 33.75 percentage points over existing methods,
while PRIOT-S reduces memory footprint with minimal accuracy loss.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 05:07:57 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Anada",
"Honoka",
""
],
[
"Ryu",
"Sefutsu",
""
],
[
"Usui",
"Masayuki",
""
],
[
"Kaneko",
"Tatsuya",
""
],
[
"Takamaeda-Yamazaki",
"Shinya",
""
]
] | TITLE: PRIOT: Pruning-Based Integer-Only Transfer Learning for Embedded Systems
ABSTRACT: On-device transfer learning is crucial for adapting a common backbone model
to the unique environment of each edge device. Tiny microcontrollers, such as
the Raspberry Pi Pico, are key targets for on-device learning but often lack
floating-point units, necessitating integer-only training. Dynamic computation
of quantization scale factors, which is adopted in former studies, incurs high
computational costs. Therefore, this study focuses on integer-only training
with static scale factors, which is challenging with existing training methods.
We propose a new training method named PRIOT, which optimizes the network by
pruning selected edges rather than updating weights, allowing effective
training with static scale factors. The pruning pattern is determined by the
edge-popup algorithm, which trains a parameter named score assigned to each
edge instead of the original parameters and prunes the edges with low scores
before inference. Additionally, we introduce a memory-efficient variant,
PRIOT-S, which only assigns scores to a small fraction of edges. We implement
PRIOT and PRIOT-S on the Raspberry Pi Pico and evaluate their accuracy and
computational costs using a tiny CNN model on the rotated MNIST dataset and the
VGG11 model on the rotated CIFAR-10 dataset. Our results demonstrate that PRIOT
improves accuracy by 8.08 to 33.75 percentage points over existing methods,
while PRIOT-S reduces memory footprint with minimal accuracy loss.
|
2503.16862 | Yiqiang Cai | Yiqiang Cai, Yizhou Tan, Peihong Zhang, Yuxuan Liu, Shengchen Li, Xi
Shao, Mark D. Plumbley | City2Scene: Improving Acoustic Scene Classification with City Features | null | null | null | null | cs.SD cs.CV eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acoustic scene recordings are often collected from a diverse range of cities.
Most existing acoustic scene classification (ASC) approaches focus on
identifying common acoustic scene patterns across cities to enhance
generalization. In contrast, we hypothesize that city-specific environmental
and cultural differences in acoustic features are beneficial for the ASC task.
In this paper, we introduce City2Scene, a novel framework that leverages city
features to improve ASC. City2Scene transfers the city-specific knowledge from
city classification models to a scene classification model using knowledge
distillation. We evaluated City2Scene on the DCASE Challenge Task 1 datasets,
where each audio clip is annotated with both scene and city labels.
Experimental results demonstrate that city features provide valuable
information for classifying scenes. By distilling the city-specific knowledge,
City2Scene effectively improves accuracy for various state-of-the-art ASC
backbone models, including both CNNs and Transformers.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 05:24:48 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Cai",
"Yiqiang",
""
],
[
"Tan",
"Yizhou",
""
],
[
"Zhang",
"Peihong",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Li",
"Shengchen",
""
],
[
"Shao",
"Xi",
""
],
[
"Plumbley",
"Mark D.",
""
]
] | TITLE: City2Scene: Improving Acoustic Scene Classification with City Features
ABSTRACT: Acoustic scene recordings are often collected from a diverse range of cities.
Most existing acoustic scene classification (ASC) approaches focus on
identifying common acoustic scene patterns across cities to enhance
generalization. In contrast, we hypothesize that city-specific environmental
and cultural differences in acoustic features are beneficial for the ASC task.
In this paper, we introduce City2Scene, a novel framework that leverages city
features to improve ASC. City2Scene transfers the city-specific knowledge from
city classification models to a scene classification model using knowledge
distillation. We evaluated City2Scene on the DCASE Challenge Task 1 datasets,
where each audio clip is annotated with both scene and city labels.
Experimental results demonstrate that city features provide valuable
information for classifying scenes. By distilling the city-specific knowledge,
City2Scene effectively improves accuracy for various state-of-the-art ASC
backbone models, including both CNNs and Transformers.
|
2503.16868 | Mengsay Loem | Mengsay Loem and Taiju Hosaka | Joint Extraction Matters: Prompt-Based Visual Question Answering for
Multi-Field Document Information Extraction | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual question answering (VQA) has emerged as a flexible approach for
extracting specific pieces of information from document images. However,
existing work typically queries each field in isolation, overlooking potential
dependencies across multiple items. This paper investigates the merits of
extracting multiple fields jointly versus separately. Through experiments on
multiple large vision language models and datasets, we show that jointly
extracting fields often improves accuracy, especially when the fields share
strong numeric or contextual dependencies. We further analyze how performance
scales with the number of requested items and use a regression based metric to
quantify inter field relationships. Our results suggest that multi field
prompts can mitigate confusion arising from similar surface forms and related
numeric values, providing practical methods for designing robust VQA systems in
document information extraction tasks.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 05:54:42 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Loem",
"Mengsay",
""
],
[
"Hosaka",
"Taiju",
""
]
] | TITLE: Joint Extraction Matters: Prompt-Based Visual Question Answering for
Multi-Field Document Information Extraction
ABSTRACT: Visual question answering (VQA) has emerged as a flexible approach for
extracting specific pieces of information from document images. However,
existing work typically queries each field in isolation, overlooking potential
dependencies across multiple items. This paper investigates the merits of
extracting multiple fields jointly versus separately. Through experiments on
multiple large vision language models and datasets, we show that jointly
extracting fields often improves accuracy, especially when the fields share
strong numeric or contextual dependencies. We further analyze how performance
scales with the number of requested items and use a regression based metric to
quantify inter field relationships. Our results suggest that multi field
prompts can mitigate confusion arising from similar surface forms and related
numeric values, providing practical methods for designing robust VQA systems in
document information extraction tasks.
|
2503.16873 | Dongseob Kim | Dongseob Kim, Hyunjung Shim | Classifier-guided CLIP Distillation for Unsupervised Multi-label
Classification | CVPR 2025 Accepted | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-label classification is crucial for comprehensive image understanding,
yet acquiring accurate annotations is challenging and costly. To address this,
a recent study suggests exploiting unsupervised multi-label classification
leveraging CLIP, a powerful vision-language model. Despite CLIP's proficiency,
it suffers from view-dependent predictions and inherent bias, limiting its
effectiveness. We propose a novel method that addresses these issues by
leveraging multiple views near target objects, guided by Class Activation
Mapping (CAM) of the classifier, and debiasing pseudo-labels derived from CLIP
predictions. Our Classifier-guided CLIP Distillation (CCD) enables selecting
multiple local views without extra labels and debiasing predictions to enhance
classification performance. Experimental results validate our method's
superiority over existing techniques across diverse datasets. The code is
available at https://github.com/k0u-id/CCD.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 06:12:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kim",
"Dongseob",
""
],
[
"Shim",
"Hyunjung",
""
]
] | TITLE: Classifier-guided CLIP Distillation for Unsupervised Multi-label
Classification
ABSTRACT: Multi-label classification is crucial for comprehensive image understanding,
yet acquiring accurate annotations is challenging and costly. To address this,
a recent study suggests exploiting unsupervised multi-label classification
leveraging CLIP, a powerful vision-language model. Despite CLIP's proficiency,
it suffers from view-dependent predictions and inherent bias, limiting its
effectiveness. We propose a novel method that addresses these issues by
leveraging multiple views near target objects, guided by Class Activation
Mapping (CAM) of the classifier, and debiasing pseudo-labels derived from CLIP
predictions. Our Classifier-guided CLIP Distillation (CCD) enables selecting
multiple local views without extra labels and debiasing predictions to enhance
classification performance. Experimental results validate our method's
superiority over existing techniques across diverse datasets. The code is
available at https://github.com/k0u-id/CCD.
|
2503.16874 | Jian Zhang | Jian Zhang, Zhangqi Wang, Haiping Zhu, Jun Liu, Qika Lin, Erik Cambria | MARS: A Multi-Agent Framework Incorporating Socratic Guidance for
Automated Prompt Optimization | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The basic question-answering format of large language models involves
inputting a prompt and receiving a response, and the quality of the prompt
directly impacts the effectiveness of the response. Automated Prompt
Optimization (APO) aims to break free from the cognitive biases of manually
designed prompts and explores a broader design space for prompts. However,
existing APO methods suffer from limited flexibility of fixed templates and
inefficient search in prompt spaces as key issues. To this end, we propose a
Multi-Agent framework Incorporating Socratic guidance (MARS), which utilizes
multi-agent fusion technology for automatic planning, with gradual continuous
optimization and evaluation. Specifically, MARS comprises seven agents, each
with distinct functionalities, which autonomously use the Planner to devise an
optimization path that ensures flexibility. Additionally, it employs a
Teacher-Critic-Student Socratic dialogue pattern to iteratively optimize the
prompts while conducting effective search. We conduct extensive experiments on
various datasets to validate the effectiveness of our method, and perform
additional analytical experiments to assess the model's advancement as well as
the interpretability.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 06:19:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Jian",
""
],
[
"Wang",
"Zhangqi",
""
],
[
"Zhu",
"Haiping",
""
],
[
"Liu",
"Jun",
""
],
[
"Lin",
"Qika",
""
],
[
"Cambria",
"Erik",
""
]
] | TITLE: MARS: A Multi-Agent Framework Incorporating Socratic Guidance for
Automated Prompt Optimization
ABSTRACT: The basic question-answering format of large language models involves
inputting a prompt and receiving a response, and the quality of the prompt
directly impacts the effectiveness of the response. Automated Prompt
Optimization (APO) aims to break free from the cognitive biases of manually
designed prompts and explores a broader design space for prompts. However,
existing APO methods suffer from limited flexibility of fixed templates and
inefficient search in prompt spaces as key issues. To this end, we propose a
Multi-Agent framework Incorporating Socratic guidance (MARS), which utilizes
multi-agent fusion technology for automatic planning, with gradual continuous
optimization and evaluation. Specifically, MARS comprises seven agents, each
with distinct functionalities, which autonomously use the Planner to devise an
optimization path that ensures flexibility. Additionally, it employs a
Teacher-Critic-Student Socratic dialogue pattern to iteratively optimize the
prompts while conducting effective search. We conduct extensive experiments on
various datasets to validate the effectiveness of our method, and perform
additional analytical experiments to assess the model's advancement as well as
the interpretability.
|
2503.16875 | Jiangcheng Qin | Jiangcheng Qin, Xueyuan Zhang, Baisong Liu, Jiangbo Qian, Yangyang
Wang | Federated Cross-Domain Click-Through Rate Prediction With Large Language
Model Augmentation | null | null | null | null | cs.IR cs.CL cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurately predicting click-through rates (CTR) under stringent privacy
constraints poses profound challenges, particularly when user-item interactions
are sparse and fragmented across domains. Conventional cross-domain CTR (CCTR)
methods frequently assume homogeneous feature spaces and rely on centralized
data sharing, neglecting complex inter-domain discrepancies and the subtle
trade-offs imposed by privacy-preserving protocols. Here, we present Federated
Cross-Domain CTR Prediction with Large Language Model Augmentation
(FedCCTR-LM), a federated framework engineered to address these limitations by
synchronizing data augmentation, representation disentanglement, and adaptive
privacy protection. Our approach integrates three core innovations. First, the
Privacy-Preserving Augmentation Network (PrivAugNet) employs large language
models to enrich user and item representations and expand interaction
sequences, mitigating data sparsity and feature incompleteness. Second, the
Independent Domain-Specific Transformer with Contrastive Learning (IDST-CL)
module disentangles domain-specific and shared user preferences, employing
intra-domain representation alignment (IDRA) and crossdomain representation
disentanglement (CDRD) to refine the learned embeddings and enhance knowledge
transfer across domains. Finally, the Adaptive Local Differential Privacy
(AdaLDP) mechanism dynamically calibrates noise injection to achieve an optimal
balance between rigorous privacy guarantees and predictive accuracy. Empirical
evaluations on four real-world datasets demonstrate that FedCCTR-LM
substantially outperforms existing baselines, offering robust,
privacy-preserving, and generalizable cross-domain CTR prediction in
heterogeneous, federated environments.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 06:22:42 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Qin",
"Jiangcheng",
""
],
[
"Zhang",
"Xueyuan",
""
],
[
"Liu",
"Baisong",
""
],
[
"Qian",
"Jiangbo",
""
],
[
"Wang",
"Yangyang",
""
]
] | TITLE: Federated Cross-Domain Click-Through Rate Prediction With Large Language
Model Augmentation
ABSTRACT: Accurately predicting click-through rates (CTR) under stringent privacy
constraints poses profound challenges, particularly when user-item interactions
are sparse and fragmented across domains. Conventional cross-domain CTR (CCTR)
methods frequently assume homogeneous feature spaces and rely on centralized
data sharing, neglecting complex inter-domain discrepancies and the subtle
trade-offs imposed by privacy-preserving protocols. Here, we present Federated
Cross-Domain CTR Prediction with Large Language Model Augmentation
(FedCCTR-LM), a federated framework engineered to address these limitations by
synchronizing data augmentation, representation disentanglement, and adaptive
privacy protection. Our approach integrates three core innovations. First, the
Privacy-Preserving Augmentation Network (PrivAugNet) employs large language
models to enrich user and item representations and expand interaction
sequences, mitigating data sparsity and feature incompleteness. Second, the
Independent Domain-Specific Transformer with Contrastive Learning (IDST-CL)
module disentangles domain-specific and shared user preferences, employing
intra-domain representation alignment (IDRA) and crossdomain representation
disentanglement (CDRD) to refine the learned embeddings and enhance knowledge
transfer across domains. Finally, the Adaptive Local Differential Privacy
(AdaLDP) mechanism dynamically calibrates noise injection to achieve an optimal
balance between rigorous privacy guarantees and predictive accuracy. Empirical
evaluations on four real-world datasets demonstrate that FedCCTR-LM
substantially outperforms existing baselines, offering robust,
privacy-preserving, and generalizable cross-domain CTR prediction in
heterogeneous, federated environments.
|
2503.16893 | Jingzhi Fang | Jingzhi Fang, Yanyan Shen, Yue Wang, Lei Chen | Improving the End-to-End Efficiency of Offline Inference for Multi-LLM
Applications Based on Sampling and Simulation | null | null | null | null | cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As large language models (LLMs) have shown great success in many tasks, they
are used in various applications. While a lot of works have focused on the
efficiency of single-LLM application (e.g., offloading, request scheduling,
parallelism strategy selection), multi-LLM applications receive less attention,
particularly in offline inference scenarios. In this work, we aim to improve
the offline end-to-end inference efficiency of multi-LLM applications in the
single-node multi-GPU environment. The problem involves two key decisions: (1)
determining which LLMs to run concurrently each time (we may not run all the
models at the same time), and (2) selecting a parallelism strategy to use for
each LLM. This problem is NP-hard. Naive solutions may not work well because
the running time for a model to complete a set of requests depends on the
request workload and the selected parallelism strategy, and they lack an
accurate model of the running time. As the LLM output lengths are unknown
before running, to estimate the model running time, we propose a
sampling-then-simulation method which first estimates the output lengths by
sampling from an empirical cumulative function we obtained from a large dataset
in advance, and then simulates the LLM inference process accordingly. Based on
the simulation, we estimate the per-iteration latencys to get the total
latency. A greedy method is proposed to optimize the scheduling of the LLMs in
the application across the GPUs. We then propose a framework SamuLLM which
contains two phases: planning, which calls the greedy method for an application
and running, which runs the application and dynamically adjust the model
scheduling based on the runtime information. Experiments on 3 applications and
a mixed application show that SamuLLM can achieve 1.0-2.4$\times$ end-to-end
speedups compared to the competitors.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 06:56:35 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Fang",
"Jingzhi",
""
],
[
"Shen",
"Yanyan",
""
],
[
"Wang",
"Yue",
""
],
[
"Chen",
"Lei",
""
]
] | TITLE: Improving the End-to-End Efficiency of Offline Inference for Multi-LLM
Applications Based on Sampling and Simulation
ABSTRACT: As large language models (LLMs) have shown great success in many tasks, they
are used in various applications. While a lot of works have focused on the
efficiency of single-LLM application (e.g., offloading, request scheduling,
parallelism strategy selection), multi-LLM applications receive less attention,
particularly in offline inference scenarios. In this work, we aim to improve
the offline end-to-end inference efficiency of multi-LLM applications in the
single-node multi-GPU environment. The problem involves two key decisions: (1)
determining which LLMs to run concurrently each time (we may not run all the
models at the same time), and (2) selecting a parallelism strategy to use for
each LLM. This problem is NP-hard. Naive solutions may not work well because
the running time for a model to complete a set of requests depends on the
request workload and the selected parallelism strategy, and they lack an
accurate model of the running time. As the LLM output lengths are unknown
before running, to estimate the model running time, we propose a
sampling-then-simulation method which first estimates the output lengths by
sampling from an empirical cumulative function we obtained from a large dataset
in advance, and then simulates the LLM inference process accordingly. Based on
the simulation, we estimate the per-iteration latencys to get the total
latency. A greedy method is proposed to optimize the scheduling of the LLMs in
the application across the GPUs. We then propose a framework SamuLLM which
contains two phases: planning, which calls the greedy method for an application
and running, which runs the application and dynamically adjust the model
scheduling based on the runtime information. Experiments on 3 applications and
a mixed application show that SamuLLM can achieve 1.0-2.4$\times$ end-to-end
speedups compared to the competitors.
|
2503.16904 | Omar Coser | Omar Coser, Christian Tamantini, Matteo Tortora, Leonardo Furia, Rosa
Sicilia, Loredana Zollo, Paolo Soda | Deep Learning for Human Locomotion Analysis in Lower-Limb Exoskeletons:
A Comparative Study | 26 pages, 6 figures | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Wearable robotics for lower-limb assistance have become a pivotal area of
research, aiming to enhance mobility for individuals with physical impairments
or augment the performance of able-bodied users. Accurate and adaptive control
systems are essential to ensure seamless interaction between the wearer and the
robotic device, particularly when navigating diverse and dynamic terrains.
Despite the recent advances in neural networks for time series analysis, no
attempts have been directed towards the classification of ground conditions,
categorized into five classes and subsequently determining the ramp's slope and
stair's height. In this respect, this paper presents an experimental comparison
between eight deep neural network backbones to predict high-level locomotion
parameters across diverse terrains.
All the models are trained on the publicly available CAMARGO 2021 dataset.
IMU-only data equally or outperformed IMU+EMG inputs, promoting a
cost-effective and efficient design. Indeeds, using three IMU sensors, the LSTM
achieved high terrain classification accuracy (0.94 +- 0.04) and precise ramp
slope (1.95 +- 0.58{\deg}) and the CNN-LSTM a stair height (15.65 +- 7.40 mm)
estimations. As a further contribution, SHAP analysis justified sensor
reduction without performance loss, ensuring a lightweight setup. The system
operates with ~2 ms inference time, supporting real-time applications. The code
is code available at
https://github.com/cosbidev/Human-Locomotion-Identification.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:12:44 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Coser",
"Omar",
""
],
[
"Tamantini",
"Christian",
""
],
[
"Tortora",
"Matteo",
""
],
[
"Furia",
"Leonardo",
""
],
[
"Sicilia",
"Rosa",
""
],
[
"Zollo",
"Loredana",
""
],
[
"Soda",
"Paolo",
""
]
] | TITLE: Deep Learning for Human Locomotion Analysis in Lower-Limb Exoskeletons:
A Comparative Study
ABSTRACT: Wearable robotics for lower-limb assistance have become a pivotal area of
research, aiming to enhance mobility for individuals with physical impairments
or augment the performance of able-bodied users. Accurate and adaptive control
systems are essential to ensure seamless interaction between the wearer and the
robotic device, particularly when navigating diverse and dynamic terrains.
Despite the recent advances in neural networks for time series analysis, no
attempts have been directed towards the classification of ground conditions,
categorized into five classes and subsequently determining the ramp's slope and
stair's height. In this respect, this paper presents an experimental comparison
between eight deep neural network backbones to predict high-level locomotion
parameters across diverse terrains.
All the models are trained on the publicly available CAMARGO 2021 dataset.
IMU-only data equally or outperformed IMU+EMG inputs, promoting a
cost-effective and efficient design. Indeeds, using three IMU sensors, the LSTM
achieved high terrain classification accuracy (0.94 +- 0.04) and precise ramp
slope (1.95 +- 0.58{\deg}) and the CNN-LSTM a stair height (15.65 +- 7.40 mm)
estimations. As a further contribution, SHAP analysis justified sensor
reduction without performance loss, ensuring a lightweight setup. The system
operates with ~2 ms inference time, supporting real-time applications. The code
is code available at
https://github.com/cosbidev/Human-Locomotion-Identification.
|
2503.16905 | Jian Zhang | Jian Zhang, Zhiyuan Wang, Zhangqi Wang, Xinyu Zhang, Fangzhi Xu, Qika
Lin, Rui Mao, Erik Cambria, Jun Liu | MAPS: A Multi-Agent Framework Based on Big Seven Personality and
Socratic Guidance for Multimodal Scientific Problem Solving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal scientific problems (MSPs) involve complex issues that require the
integration of multiple modalities, such as text and diagrams, presenting a
significant challenge in artificial intelligence. While progress has been made
in addressing traditional scientific problems, MSPs still face two primary
issues: the challenge of multi-modal comprehensive reasoning in scientific
problem-solving and the lack of reflective and rethinking capabilities. To
address these issues, we introduce a Multi-Agent framework based on the Big
Seven Personality and Socratic guidance (MAPS). This framework employs seven
distinct agents that leverage feedback mechanisms and the Socratic method to
guide the resolution of MSPs. To tackle the first issue, we propose a
progressive four-agent solving strategy, where each agent focuses on a specific
stage of the problem-solving process. For the second issue, we introduce a
Critic agent, inspired by Socratic questioning, which prompts critical thinking
and stimulates autonomous learning. We conduct extensive experiments on the
EMMA, Olympiad, and MathVista datasets, achieving promising results that
outperform the current SOTA model by 15.84% across all tasks. Meanwhile, the
additional analytical experiments also verify the model's progress as well as
generalization ability.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:13:45 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Jian",
""
],
[
"Wang",
"Zhiyuan",
""
],
[
"Wang",
"Zhangqi",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Xu",
"Fangzhi",
""
],
[
"Lin",
"Qika",
""
],
[
"Mao",
"Rui",
""
],
[
"Cambria",
"Erik",
""
],
[
"Liu",
"Jun",
""
]
] | TITLE: MAPS: A Multi-Agent Framework Based on Big Seven Personality and
Socratic Guidance for Multimodal Scientific Problem Solving
ABSTRACT: Multimodal scientific problems (MSPs) involve complex issues that require the
integration of multiple modalities, such as text and diagrams, presenting a
significant challenge in artificial intelligence. While progress has been made
in addressing traditional scientific problems, MSPs still face two primary
issues: the challenge of multi-modal comprehensive reasoning in scientific
problem-solving and the lack of reflective and rethinking capabilities. To
address these issues, we introduce a Multi-Agent framework based on the Big
Seven Personality and Socratic guidance (MAPS). This framework employs seven
distinct agents that leverage feedback mechanisms and the Socratic method to
guide the resolution of MSPs. To tackle the first issue, we propose a
progressive four-agent solving strategy, where each agent focuses on a specific
stage of the problem-solving process. For the second issue, we introduce a
Critic agent, inspired by Socratic questioning, which prompts critical thinking
and stimulates autonomous learning. We conduct extensive experiments on the
EMMA, Olympiad, and MathVista datasets, achieving promising results that
outperform the current SOTA model by 15.84% across all tasks. Meanwhile, the
additional analytical experiments also verify the model's progress as well as
generalization ability.
|
2503.16910 | Jie Mei | Yu Qiu, Yuhang Sun, Jie Mei, Lin Xiao, Jing Xu | Salient Object Detection in Traffic Scene through the TSOD10K Dataset | 12 pages, 12 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic Salient Object Detection (TSOD) aims to segment the objects critical
to driving safety by combining semantic (e.g., collision risks) and visual
saliency. Unlike SOD in natural scene images (NSI-SOD), which prioritizes
visually distinctive regions, TSOD emphasizes the objects that demand immediate
driver attention due to their semantic impact, even with low visual contrast.
This dual criterion, i.e., bridging perception and contextual risk, re-defines
saliency for autonomous and assisted driving systems. To address the lack of
task-specific benchmarks, we collect the first large-scale TSOD dataset with
pixel-wise saliency annotations, named TSOD10K. TSOD10K covers the diverse
object categories in various real-world traffic scenes under various
challenging weather/illumination variations (e.g., fog, snowstorms,
low-contrast, and low-light). Methodologically, we propose a Mamba-based TSOD
model, termed Tramba. Considering the challenge of distinguishing inconspicuous
visual information from complex traffic backgrounds, Tramba introduces a novel
Dual-Frequency Visual State Space module equipped with shifted window
partitioning and dilated scanning to enhance the perception of fine details and
global structure by hierarchically decomposing high/low-frequency components.
To emphasize critical regions in traffic scenes, we propose a traffic-oriented
Helix 2D-Selective-Scan (Helix-SS2D) mechanism that injects driving attention
priors while effectively capturing global multi-direction spatial dependencies.
We establish a comprehensive benchmark by evaluating Tramba and 22 existing
NSI-SOD models on TSOD10K, demonstrating Tramba's superiority. Our research
establishes the first foundation for safety-aware saliency analysis in
intelligent transportation systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:21:24 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Qiu",
"Yu",
""
],
[
"Sun",
"Yuhang",
""
],
[
"Mei",
"Jie",
""
],
[
"Xiao",
"Lin",
""
],
[
"Xu",
"Jing",
""
]
] | TITLE: Salient Object Detection in Traffic Scene through the TSOD10K Dataset
ABSTRACT: Traffic Salient Object Detection (TSOD) aims to segment the objects critical
to driving safety by combining semantic (e.g., collision risks) and visual
saliency. Unlike SOD in natural scene images (NSI-SOD), which prioritizes
visually distinctive regions, TSOD emphasizes the objects that demand immediate
driver attention due to their semantic impact, even with low visual contrast.
This dual criterion, i.e., bridging perception and contextual risk, re-defines
saliency for autonomous and assisted driving systems. To address the lack of
task-specific benchmarks, we collect the first large-scale TSOD dataset with
pixel-wise saliency annotations, named TSOD10K. TSOD10K covers the diverse
object categories in various real-world traffic scenes under various
challenging weather/illumination variations (e.g., fog, snowstorms,
low-contrast, and low-light). Methodologically, we propose a Mamba-based TSOD
model, termed Tramba. Considering the challenge of distinguishing inconspicuous
visual information from complex traffic backgrounds, Tramba introduces a novel
Dual-Frequency Visual State Space module equipped with shifted window
partitioning and dilated scanning to enhance the perception of fine details and
global structure by hierarchically decomposing high/low-frequency components.
To emphasize critical regions in traffic scenes, we propose a traffic-oriented
Helix 2D-Selective-Scan (Helix-SS2D) mechanism that injects driving attention
priors while effectively capturing global multi-direction spatial dependencies.
We establish a comprehensive benchmark by evaluating Tramba and 22 existing
NSI-SOD models on TSOD10K, demonstrating Tramba's superiority. Our research
establishes the first foundation for safety-aware saliency analysis in
intelligent transportation systems.
|
2503.16916 | Xiaoyong Chen | Xiaoyong Chen, Yong Guo, Jiaming Liang, Sitong Zhuang, Runhao Zeng,
Xiping Hu | Temporal Action Detection Model Compression by Progressive Block Drop | Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal action detection (TAD) aims to identify and localize action
instances in untrimmed videos, which is essential for various video
understanding tasks. However, recent improvements in model performance, driven
by larger feature extractors and datasets, have led to increased computational
demands. This presents a challenge for applications like autonomous driving and
robotics, which rely on limited computational resources. While existing channel
pruning methods can compress these models, reducing the number of channels
often hinders the parallelization efficiency of GPU, due to the inefficient
multiplication between small matrices. Instead of pruning channels, we propose
a Progressive Block Drop method that reduces model depth while retaining layer
width. In this way, we still use large matrices for computation but reduce the
number of multiplications. Our approach iteratively removes redundant blocks in
two steps: first, we drop blocks with minimal impact on model performance; and
second, we employ a parameter-efficient cross-depth alignment technique,
fine-tuning the pruned model to restore model accuracy. Our method achieves a
25% reduction in computational overhead on two TAD benchmarks (THUMOS14 and
ActivityNet-1.3) to achieve lossless compression. More critically, we
empirically show that our method is orthogonal to channel pruning methods and
can be combined with it to yield further efficiency gains.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:26:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Chen",
"Xiaoyong",
""
],
[
"Guo",
"Yong",
""
],
[
"Liang",
"Jiaming",
""
],
[
"Zhuang",
"Sitong",
""
],
[
"Zeng",
"Runhao",
""
],
[
"Hu",
"Xiping",
""
]
] | TITLE: Temporal Action Detection Model Compression by Progressive Block Drop
ABSTRACT: Temporal action detection (TAD) aims to identify and localize action
instances in untrimmed videos, which is essential for various video
understanding tasks. However, recent improvements in model performance, driven
by larger feature extractors and datasets, have led to increased computational
demands. This presents a challenge for applications like autonomous driving and
robotics, which rely on limited computational resources. While existing channel
pruning methods can compress these models, reducing the number of channels
often hinders the parallelization efficiency of GPU, due to the inefficient
multiplication between small matrices. Instead of pruning channels, we propose
a Progressive Block Drop method that reduces model depth while retaining layer
width. In this way, we still use large matrices for computation but reduce the
number of multiplications. Our approach iteratively removes redundant blocks in
two steps: first, we drop blocks with minimal impact on model performance; and
second, we employ a parameter-efficient cross-depth alignment technique,
fine-tuning the pruned model to restore model accuracy. Our method achieves a
25% reduction in computational overhead on two TAD benchmarks (THUMOS14 and
ActivityNet-1.3) to achieve lossless compression. More critically, we
empirically show that our method is orthogonal to channel pruning methods and
can be combined with it to yield further efficiency gains.
|
2503.16921 | Lingfan Zhang | Lingfan Zhang, Chen Liu, Chengming Xu, Kai Hu, Donghao Luo, Chengjie
Wang, Yanwei Fu, Yuan Yao | When Preferences Diverge: Aligning Diffusion Models with Minority-Aware
Adaptive DPO | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the field of image generation has witnessed significant
advancements, particularly in fine-tuning methods that align models with
universal human preferences. This paper explores the critical role of
preference data in the training process of diffusion models, particularly in
the context of Diffusion-DPO and its subsequent adaptations. We investigate the
complexities surrounding universal human preferences in image generation,
highlighting the subjective nature of these preferences and the challenges
posed by minority samples in preference datasets. Through pilot experiments, we
demonstrate the existence of minority samples and their detrimental effects on
model performance. We propose Adaptive-DPO -- a novel approach that
incorporates a minority-instance-aware metric into the DPO objective. This
metric, which includes intra-annotator confidence and inter-annotator
stability, distinguishes between majority and minority samples. We introduce an
Adaptive-DPO loss function which improves the DPO loss in two ways: enhancing
the model's learning of majority labels while mitigating the negative impact of
minority samples. Our experiments demonstrate that this method effectively
handles both synthetic minority data and real-world preference data, paving the
way for more effective training methodologies in image generation tasks.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:33:44 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Lingfan",
""
],
[
"Liu",
"Chen",
""
],
[
"Xu",
"Chengming",
""
],
[
"Hu",
"Kai",
""
],
[
"Luo",
"Donghao",
""
],
[
"Wang",
"Chengjie",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Yao",
"Yuan",
""
]
] | TITLE: When Preferences Diverge: Aligning Diffusion Models with Minority-Aware
Adaptive DPO
ABSTRACT: In recent years, the field of image generation has witnessed significant
advancements, particularly in fine-tuning methods that align models with
universal human preferences. This paper explores the critical role of
preference data in the training process of diffusion models, particularly in
the context of Diffusion-DPO and its subsequent adaptations. We investigate the
complexities surrounding universal human preferences in image generation,
highlighting the subjective nature of these preferences and the challenges
posed by minority samples in preference datasets. Through pilot experiments, we
demonstrate the existence of minority samples and their detrimental effects on
model performance. We propose Adaptive-DPO -- a novel approach that
incorporates a minority-instance-aware metric into the DPO objective. This
metric, which includes intra-annotator confidence and inter-annotator
stability, distinguishes between majority and minority samples. We introduce an
Adaptive-DPO loss function which improves the DPO loss in two ways: enhancing
the model's learning of majority labels while mitigating the negative impact of
minority samples. Our experiments demonstrate that this method effectively
handles both synthetic minority data and real-world preference data, paving the
way for more effective training methodologies in image generation tasks.
|
2503.16922 | Jing Gong | Linxi Liang, Jing Gong, Mingwei Liu, Chong Wang, Guangsheng Ou, Yanlin
Wang, Xin Peng, Zibin Zheng | RustEvo^2: An Evolving Benchmark for API Evolution in LLM-based Rust
Code Generation | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Large Language Models (LLMs) have become pivotal tools for automating code
generation in software development. However, these models face significant
challenges in producing version-aware code for rapidly evolving languages like
Rust, where frequent Application Programming Interfaces (API) changes across
versions lead to compatibility issues and correctness errors. Existing
benchmarks lack systematic evaluation of how models navigate API transitions,
relying on labor-intensive manual curation and offering limited
version-specific insights. To address this gap, we present RustEvo, a novel
framework for constructing dynamic benchmarks that evaluate the ability of LLMs
to adapt to evolving Rust APIs. RustEvo automates dataset creation by
synthesizing 588 API changes (380 from Rust standard libraries, 208 from 15
third-party crates) into programming tasks mirroring real-world challenges.
These tasks cover four API evolution categories: Stabilizations, Signature
Changes, Behavioral Changes, and Deprecations, reflecting their actual
distribution in the Rust ecosystem.
Experiments on state-of-the-art (SOTA) LLMs reveal significant performance
variations: models achieve a 65.8% average success rate on stabilized APIs but
only 38.0% on behavioral changes, highlighting difficulties in detecting
semantic shifts without signature alterations. Knowledge cutoff dates strongly
influence performance, with models scoring 56.1% on before-cutoff APIs versus
32.5% on after-cutoff tasks. Retrieval-Augmented Generation (RAG) mitigates
this gap, improving success rates by 13.5% on average for APIs released after
model training. Our findings underscore the necessity of our evolution-aware
benchmarks to advance the adaptability of LLMs in fast-paced software
ecosystems. The framework and the benchmarks are publicly released at
https://github.com/SYSUSELab/RustEvo.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 07:33:59 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liang",
"Linxi",
""
],
[
"Gong",
"Jing",
""
],
[
"Liu",
"Mingwei",
""
],
[
"Wang",
"Chong",
""
],
[
"Ou",
"Guangsheng",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Peng",
"Xin",
""
],
[
"Zheng",
"Zibin",
""
]
] | TITLE: RustEvo^2: An Evolving Benchmark for API Evolution in LLM-based Rust
Code Generation
ABSTRACT: Large Language Models (LLMs) have become pivotal tools for automating code
generation in software development. However, these models face significant
challenges in producing version-aware code for rapidly evolving languages like
Rust, where frequent Application Programming Interfaces (API) changes across
versions lead to compatibility issues and correctness errors. Existing
benchmarks lack systematic evaluation of how models navigate API transitions,
relying on labor-intensive manual curation and offering limited
version-specific insights. To address this gap, we present RustEvo, a novel
framework for constructing dynamic benchmarks that evaluate the ability of LLMs
to adapt to evolving Rust APIs. RustEvo automates dataset creation by
synthesizing 588 API changes (380 from Rust standard libraries, 208 from 15
third-party crates) into programming tasks mirroring real-world challenges.
These tasks cover four API evolution categories: Stabilizations, Signature
Changes, Behavioral Changes, and Deprecations, reflecting their actual
distribution in the Rust ecosystem.
Experiments on state-of-the-art (SOTA) LLMs reveal significant performance
variations: models achieve a 65.8% average success rate on stabilized APIs but
only 38.0% on behavioral changes, highlighting difficulties in detecting
semantic shifts without signature alterations. Knowledge cutoff dates strongly
influence performance, with models scoring 56.1% on before-cutoff APIs versus
32.5% on after-cutoff tasks. Retrieval-Augmented Generation (RAG) mitigates
this gap, improving success rates by 13.5% on average for APIs released after
model training. Our findings underscore the necessity of our evolution-aware
benchmarks to advance the adaptability of LLMs in fast-paced software
ecosystems. The framework and the benchmarks are publicly released at
https://github.com/SYSUSELab/RustEvo.
|
2503.16930 | Xiangming Wang | Haijin Zeng, Xiangming Wang, Yongyong Chen, Jingyong Su, Jie Liu | Vision-Language Gradient Descent-driven All-in-One Deep Unfolding
Networks | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic image degradations, including noise, blur and lighting
inconsistencies, pose significant challenges in image restoration, often due to
sensor limitations or adverse environmental conditions. Existing Deep Unfolding
Networks (DUNs) offer stable restoration performance but require manual
selection of degradation matrices for each degradation type, limiting their
adaptability across diverse scenarios. To address this issue, we propose the
Vision-Language-guided Unfolding Network (VLU-Net), a unified DUN framework for
handling multiple degradation types simultaneously. VLU-Net leverages a
Vision-Language Model (VLM) refined on degraded image-text pairs to align image
features with degradation descriptions, selecting the appropriate transform for
target degradation. By integrating an automatic VLM-based gradient estimation
strategy into the Proximal Gradient Descent (PGD) algorithm, VLU-Net
effectively tackles complex multi-degradation restoration tasks while
maintaining interpretability. Furthermore, we design a hierarchical feature
unfolding structure to enhance VLU-Net framework, efficiently synthesizing
degradation patterns across various levels. VLU-Net is the first all-in-one DUN
framework and outperforms current leading one-by-one and all-in-one end-to-end
methods by 3.74 dB on the SOTS dehazing dataset and 1.70 dB on the Rain100L
deraining dataset.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 08:02:48 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zeng",
"Haijin",
""
],
[
"Wang",
"Xiangming",
""
],
[
"Chen",
"Yongyong",
""
],
[
"Su",
"Jingyong",
""
],
[
"Liu",
"Jie",
""
]
] | TITLE: Vision-Language Gradient Descent-driven All-in-One Deep Unfolding
Networks
ABSTRACT: Dynamic image degradations, including noise, blur and lighting
inconsistencies, pose significant challenges in image restoration, often due to
sensor limitations or adverse environmental conditions. Existing Deep Unfolding
Networks (DUNs) offer stable restoration performance but require manual
selection of degradation matrices for each degradation type, limiting their
adaptability across diverse scenarios. To address this issue, we propose the
Vision-Language-guided Unfolding Network (VLU-Net), a unified DUN framework for
handling multiple degradation types simultaneously. VLU-Net leverages a
Vision-Language Model (VLM) refined on degraded image-text pairs to align image
features with degradation descriptions, selecting the appropriate transform for
target degradation. By integrating an automatic VLM-based gradient estimation
strategy into the Proximal Gradient Descent (PGD) algorithm, VLU-Net
effectively tackles complex multi-degradation restoration tasks while
maintaining interpretability. Furthermore, we design a hierarchical feature
unfolding structure to enhance VLU-Net framework, efficiently synthesizing
degradation patterns across various levels. VLU-Net is the first all-in-one DUN
framework and outperforms current leading one-by-one and all-in-one end-to-end
methods by 3.74 dB on the SOTS dehazing dataset and 1.70 dB on the Rain100L
deraining dataset.
|
2503.16943 | Daniel Brunner | Anas Skalli, Satoshi Sunada, Mirko Goldmann, Marcin Gebski, Stephan
Reitzenstein, James A. Lott, Tomasz Czyszanowski, Daniel Brunner | Model-free front-to-end training of a large high performance laser
neural network | null | null | null | null | cs.LG cs.ET | http://creativecommons.org/licenses/by/4.0/ | Artificial neural networks (ANNs), have become ubiquitous and revolutionized
many applications ranging from computer vision to medical diagnoses. However,
they offer a fundamentally connectionist and distributed approach to computing,
in stark contrast to classical computers that use the von Neumann architecture.
This distinction has sparked renewed interest in developing unconventional
hardware to support more efficient implementations of ANNs, rather than merely
emulating them on traditional systems. Photonics stands out as a particularly
promising platform, providing scalability, high speed, energy efficiency, and
the ability for parallel information processing. However, fully realized
autonomous optical neural networks (ONNs) with in-situ learning capabilities
are still rare. In this work, we demonstrate a fully autonomous and parallel
ONN using a multimode vertical cavity surface emitting laser (VCSEL) using
off-the-shelf components. Our ONN is highly efficient and is scalable both in
network size and inference bandwidth towards the GHz range. High performance
hardware-compatible optimization algorithms are necessary in order to minimize
reliance on external von Neumann computers to fully exploit the potential of
ONNs. As such we present and extensively study several algorithms which are
broadly compatible with a wide range of systems. We then apply these algorithms
to optimize our ONN, and benchmark them using the MNIST dataset. We show that
our ONN can achieve high accuracy and convergence efficiency, even under
limited hardware resources. Crucially, we compare these different algorithms in
terms of scaling and optimization efficiency in term of convergence time which
is crucial when working with limited external resources. Our work provides some
guidance for the design of future ONNs as well as a simple and flexible way to
train them.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 08:43:02 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Skalli",
"Anas",
""
],
[
"Sunada",
"Satoshi",
""
],
[
"Goldmann",
"Mirko",
""
],
[
"Gebski",
"Marcin",
""
],
[
"Reitzenstein",
"Stephan",
""
],
[
"Lott",
"James A.",
""
],
[
"Czyszanowski",
"Tomasz",
""
],
[
"Brunner",
"Daniel",
""
]
] | TITLE: Model-free front-to-end training of a large high performance laser
neural network
ABSTRACT: Artificial neural networks (ANNs), have become ubiquitous and revolutionized
many applications ranging from computer vision to medical diagnoses. However,
they offer a fundamentally connectionist and distributed approach to computing,
in stark contrast to classical computers that use the von Neumann architecture.
This distinction has sparked renewed interest in developing unconventional
hardware to support more efficient implementations of ANNs, rather than merely
emulating them on traditional systems. Photonics stands out as a particularly
promising platform, providing scalability, high speed, energy efficiency, and
the ability for parallel information processing. However, fully realized
autonomous optical neural networks (ONNs) with in-situ learning capabilities
are still rare. In this work, we demonstrate a fully autonomous and parallel
ONN using a multimode vertical cavity surface emitting laser (VCSEL) using
off-the-shelf components. Our ONN is highly efficient and is scalable both in
network size and inference bandwidth towards the GHz range. High performance
hardware-compatible optimization algorithms are necessary in order to minimize
reliance on external von Neumann computers to fully exploit the potential of
ONNs. As such we present and extensively study several algorithms which are
broadly compatible with a wide range of systems. We then apply these algorithms
to optimize our ONN, and benchmark them using the MNIST dataset. We show that
our ONN can achieve high accuracy and convergence efficiency, even under
limited hardware resources. Crucially, we compare these different algorithms in
terms of scaling and optimization efficiency in term of convergence time which
is crucial when working with limited external resources. Our work provides some
guidance for the design of future ONNs as well as a simple and flexible way to
train them.
|
2503.16945 | Ibtissam Saadi | Ibtissam Saadi, Abdenour Hadid, Douglas W. Cunningham, Abdelmalik
Taleb-Ahmed, and Yassin El Hillali | PE-CLIP: A Parameter-Efficient Fine-Tuning of Vision Language Models for
Dynamic Facial Expression Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) like CLIP offer promising solutions for Dynamic
Facial Expression Recognition (DFER) but face challenges such as inefficient
full fine-tuning, high complexity, and poor alignment between textual and
visual representations. Additionally, existing methods struggle with
ineffective temporal modeling. To address these issues, we propose PE-CLIP, a
parameter-efficient fine-tuning (PEFT) framework that adapts CLIP for DFER
while significantly reducing trainable parameters while maintaining high
accuracy. PE-CLIP introduces two specialized adapters: a Temporal Dynamic
Adapter (TDA) and a Shared Adapter (ShA). The TDA is a GRU-based module with
dynamic scaling that captures sequential dependencies while emphasizing
informative temporal features and suppressing irrelevant variations. The ShA is
a lightweight adapter that refines representations within both textual and
visual encoders, ensuring consistency and efficiency. Additionally, we
integrate Multi-modal Prompt Learning (MaPLe), introducing learnable prompts
for visual and action unit-based textual inputs, enhancing semantic alignment
between modalities and enabling efficient CLIP adaptation for dynamic tasks. We
evaluate PE-CLIP on two benchmark datasets, DFEW and FERV39K, achieving
competitive performance compared to state-of-the-art methods while requiring
fewer trainable parameters. By balancing efficiency and accuracy, PE-CLIP sets
a new benchmark in resource-efficient DFER. The source code of the proposed
PE-CLIP will be publicly available at https://github.com/Ibtissam-SAADI/PE-CLIP .
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 08:45:50 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Saadi",
"Ibtissam",
""
],
[
"Hadid",
"Abdenour",
""
],
[
"Cunningham",
"Douglas W.",
""
],
[
"Taleb-Ahmed",
"Abdelmalik",
""
],
[
"Hillali",
"Yassin El",
""
]
] | TITLE: PE-CLIP: A Parameter-Efficient Fine-Tuning of Vision Language Models for
Dynamic Facial Expression Recognition
ABSTRACT: Vision-Language Models (VLMs) like CLIP offer promising solutions for Dynamic
Facial Expression Recognition (DFER) but face challenges such as inefficient
full fine-tuning, high complexity, and poor alignment between textual and
visual representations. Additionally, existing methods struggle with
ineffective temporal modeling. To address these issues, we propose PE-CLIP, a
parameter-efficient fine-tuning (PEFT) framework that adapts CLIP for DFER
while significantly reducing trainable parameters while maintaining high
accuracy. PE-CLIP introduces two specialized adapters: a Temporal Dynamic
Adapter (TDA) and a Shared Adapter (ShA). The TDA is a GRU-based module with
dynamic scaling that captures sequential dependencies while emphasizing
informative temporal features and suppressing irrelevant variations. The ShA is
a lightweight adapter that refines representations within both textual and
visual encoders, ensuring consistency and efficiency. Additionally, we
integrate Multi-modal Prompt Learning (MaPLe), introducing learnable prompts
for visual and action unit-based textual inputs, enhancing semantic alignment
between modalities and enabling efficient CLIP adaptation for dynamic tasks. We
evaluate PE-CLIP on two benchmark datasets, DFEW and FERV39K, achieving
competitive performance compared to state-of-the-art methods while requiring
fewer trainable parameters. By balancing efficiency and accuracy, PE-CLIP sets
a new benchmark in resource-efficient DFER. The source code of the proposed
PE-CLIP will be publicly available at https://github.com/Ibtissam-SAADI/PE-CLIP .
|
2503.16948 | Yinhan Zhang | Yinhan Zhang, Yue Ma, Bingyuan Wang, Qifeng Chen, Zeyu Wang | MagicColor: Multi-Instance Sketch Colorization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present \textit{MagicColor}, a diffusion-based framework for
multi-instance sketch colorization. The production of multi-instance 2D line
art colorization adheres to an industry-standard workflow, which consists of
three crucial stages: the design of line art characters, the coloring of
individual objects, and the refinement process. The artists are required to
repeat the process of coloring each instance one by one, which is inaccurate
and inefficient. Meanwhile, current generative methods fail to solve this task
due to the challenge of multi-instance pair data collection. To tackle these
challenges, we incorporate three technical designs to ensure precise character
detail transcription and achieve multi-instance sketch colorization in a single
forward. Specifically, we first propose the self-play training strategy to
solve the lack of training data. Then we introduce an instance guider to feed
the color of the instance. To achieve accurate color matching, we present
fine-grained color matching with edge loss to enhance visual quality. Equipped
with the proposed modules, MagicColor enables automatically transforming
sketches into vividly-colored images with accurate consistency and
multi-instance control. Experiments on our collected datasets show that our
model outperforms existing methods regarding chromatic precision. Specifically,
our model critically automates the colorization process with zero manual
adjustments, so novice users can produce stylistically consistent artwork by
providing reference instances and the original line art. Our code and
additional details are available at https://yinhan-zhang.github.io/color
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 08:53:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Yinhan",
""
],
[
"Ma",
"Yue",
""
],
[
"Wang",
"Bingyuan",
""
],
[
"Chen",
"Qifeng",
""
],
[
"Wang",
"Zeyu",
""
]
] | TITLE: MagicColor: Multi-Instance Sketch Colorization
ABSTRACT: We present \textit{MagicColor}, a diffusion-based framework for
multi-instance sketch colorization. The production of multi-instance 2D line
art colorization adheres to an industry-standard workflow, which consists of
three crucial stages: the design of line art characters, the coloring of
individual objects, and the refinement process. The artists are required to
repeat the process of coloring each instance one by one, which is inaccurate
and inefficient. Meanwhile, current generative methods fail to solve this task
due to the challenge of multi-instance pair data collection. To tackle these
challenges, we incorporate three technical designs to ensure precise character
detail transcription and achieve multi-instance sketch colorization in a single
forward. Specifically, we first propose the self-play training strategy to
solve the lack of training data. Then we introduce an instance guider to feed
the color of the instance. To achieve accurate color matching, we present
fine-grained color matching with edge loss to enhance visual quality. Equipped
with the proposed modules, MagicColor enables automatically transforming
sketches into vividly-colored images with accurate consistency and
multi-instance control. Experiments on our collected datasets show that our
model outperforms existing methods regarding chromatic precision. Specifically,
our model critically automates the colorization process with zero manual
adjustments, so novice users can produce stylistically consistent artwork by
providing reference instances and the original line art. Our code and
additional details are available at https://yinhan-zhang.github.io/color
|
2503.16953 | Jannis Brugger | Jannis Brugger, Mattia Cerrato, David Richter, Cedric Derstroff,
Daniel Maninger, Mira Mezini, Stefan Kramer | Neural-Guided Equation Discovery | 32 pages + 4 pages appendix, 9 figures, book chapter | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Deep learning approaches are becoming increasingly attractive for equation
discovery. We show the advantages and disadvantages of using neural-guided
equation discovery by giving an overview of recent papers and the results of
experiments using our modular equation discovery system MGMT
($\textbf{M}$ulti-Task $\textbf{G}$rammar-Guided $\textbf{M}$onte-Carlo
$\textbf{T}$ree Search for Equation Discovery). The system uses neural-guided
Monte-Carlo Tree Search (MCTS) and supports both supervised and reinforcement
learning, with a search space defined by a context-free grammar. We summarize
seven desirable properties of equation discovery systems, emphasizing the
importance of embedding tabular data sets for such learning approaches. Using
the modular structure of MGMT, we compare seven architectures (among them,
RNNs, CNNs, and Transformers) for embedding tabular datasets on the auxiliary
task of contrastive learning for tabular data sets on an equation discovery
task. For almost all combinations of modules, supervised learning outperforms
reinforcement learning. Moreover, our experiments indicate an advantage of
using grammar rules as action space instead of tokens. Two adaptations of MCTS
-- risk-seeking MCTS and AmEx-MCTS -- can improve equation discovery with that
kind of search.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 08:55:51 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Brugger",
"Jannis",
""
],
[
"Cerrato",
"Mattia",
""
],
[
"Richter",
"David",
""
],
[
"Derstroff",
"Cedric",
""
],
[
"Maninger",
"Daniel",
""
],
[
"Mezini",
"Mira",
""
],
[
"Kramer",
"Stefan",
""
]
] | TITLE: Neural-Guided Equation Discovery
ABSTRACT: Deep learning approaches are becoming increasingly attractive for equation
discovery. We show the advantages and disadvantages of using neural-guided
equation discovery by giving an overview of recent papers and the results of
experiments using our modular equation discovery system MGMT
($\textbf{M}$ulti-Task $\textbf{G}$rammar-Guided $\textbf{M}$onte-Carlo
$\textbf{T}$ree Search for Equation Discovery). The system uses neural-guided
Monte-Carlo Tree Search (MCTS) and supports both supervised and reinforcement
learning, with a search space defined by a context-free grammar. We summarize
seven desirable properties of equation discovery systems, emphasizing the
importance of embedding tabular data sets for such learning approaches. Using
the modular structure of MGMT, we compare seven architectures (among them,
RNNs, CNNs, and Transformers) for embedding tabular datasets on the auxiliary
task of contrastive learning for tabular data sets on an equation discovery
task. For almost all combinations of modules, supervised learning outperforms
reinforcement learning. Moreover, our experiments indicate an advantage of
using grammar rules as action space instead of tokens. Two adaptations of MCTS
-- risk-seeking MCTS and AmEx-MCTS -- can improve equation discovery with that
kind of search.
|
2503.16957 | Muhammad Risha | Muhammad Risha, Mohamed Elsaadany, Paul Liu | Uncertainty-Driven Modeling of Microporosity and Permeability in Clastic
Reservoirs Using Random Forest | 13 pages, 7 figures | null | null | null | physics.geo-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting microporosity and permeability in clastic reservoirs is a
challenge in reservoir quality assessment, especially in formations where
direct measurements are difficult or expensive. These reservoir properties are
fundamental in determining a reservoir's capacity for fluid storage and
transmission, yet conventional methods for evaluating them, such as Mercury
Injection Capillary Pressure (MICP) and Scanning Electron Microscopy (SEM), are
resource-intensive. The aim of this study is to develop a cost-effective
machine learning model to predict complex reservoir properties using readily
available field data and basic laboratory analyses. A Random Forest classifier
was employed, utilizing key geological parameters such as porosity, grain size
distribution, and spectral gamma-ray (SGR) measurements. An uncertainty
analysis was applied to account for natural variability, expanding the dataset,
and enhancing the model's robustness. The model achieved a high level of
accuracy in predicting microporosity (93%) and permeability levels (88%). By
using easily obtainable data, this model reduces the reliance on expensive
laboratory methods, making it a valuable tool for early-stage exploration,
especially in remote or offshore environments. The integration of machine
learning with uncertainty analysis provides a reliable and cost-effective
approach for evaluating key reservoir properties in siliciclastic formations.
This model offers a practical solution to improve reservoir quality
assessments, enabling more informed decision-making and optimizing exploration
efforts.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:05:04 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Risha",
"Muhammad",
""
],
[
"Elsaadany",
"Mohamed",
""
],
[
"Liu",
"Paul",
""
]
] | TITLE: Uncertainty-Driven Modeling of Microporosity and Permeability in Clastic
Reservoirs Using Random Forest
ABSTRACT: Predicting microporosity and permeability in clastic reservoirs is a
challenge in reservoir quality assessment, especially in formations where
direct measurements are difficult or expensive. These reservoir properties are
fundamental in determining a reservoir's capacity for fluid storage and
transmission, yet conventional methods for evaluating them, such as Mercury
Injection Capillary Pressure (MICP) and Scanning Electron Microscopy (SEM), are
resource-intensive. The aim of this study is to develop a cost-effective
machine learning model to predict complex reservoir properties using readily
available field data and basic laboratory analyses. A Random Forest classifier
was employed, utilizing key geological parameters such as porosity, grain size
distribution, and spectral gamma-ray (SGR) measurements. An uncertainty
analysis was applied to account for natural variability, expanding the dataset,
and enhancing the model's robustness. The model achieved a high level of
accuracy in predicting microporosity (93%) and permeability levels (88%). By
using easily obtainable data, this model reduces the reliance on expensive
laboratory methods, making it a valuable tool for early-stage exploration,
especially in remote or offshore environments. The integration of machine
learning with uncertainty analysis provides a reliable and cost-effective
approach for evaluating key reservoir properties in siliciclastic formations.
This model offers a practical solution to improve reservoir quality
assessments, enabling more informed decision-making and optimizing exploration
efforts.
|
2503.16963 | Yizhen Jiang | Wei Zhang, Mengting Ma, Yizhen Jiang, Rongrong Lian, Zhenkai Wu,
Kangning Cui, Xiaowen Ma | Center-guided Classifier for Semantic Segmentation of Remote Sensing
Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Compared with natural images, remote sensing images (RSIs) have the unique
characteristic. i.e., larger intraclass variance, which makes semantic
segmentation for remote sensing images more challenging. Moreover, existing
semantic segmentation models for remote sensing images usually employ a vanilla
softmax classifier, which has three drawbacks: (1) non-direct supervision for
the pixel representations during training; (2) inadequate modeling ability of
parametric softmax classifiers under large intraclass variance; and (3) opaque
process of classification decision. In this paper, we propose a novel
classifier (called CenterSeg) customized for RSI semantic segmentation, which
solves the abovementioned problems with multiple prototypes, direct supervision
under Grassmann manifold, and interpretability strategy. Specifically, for each
class, our CenterSeg obtains local class centers by aggregating corresponding
pixel features based on ground-truth masks, and generates multiple prototypes
through hard attention assignment and momentum updating. In addition, we
introduce the Grassmann manifold and constrain the joint embedding space of
pixel features and prototypes based on two additional regularization terms.
Especially, during the inference, CenterSeg can further provide
interpretability to the model by restricting the prototype as a sample of the
training set. Experimental results on three remote sensing segmentation
datasets validate the effectiveness of the model. Besides the superior
performance, CenterSeg has the advantages of simplicity, lightweight,
compatibility, and interpretability. Code is available at
https://github.com/xwmaxwma/rssegmentation.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:21:37 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Wei",
""
],
[
"Ma",
"Mengting",
""
],
[
"Jiang",
"Yizhen",
""
],
[
"Lian",
"Rongrong",
""
],
[
"Wu",
"Zhenkai",
""
],
[
"Cui",
"Kangning",
""
],
[
"Ma",
"Xiaowen",
""
]
] | TITLE: Center-guided Classifier for Semantic Segmentation of Remote Sensing
Images
ABSTRACT: Compared with natural images, remote sensing images (RSIs) have the unique
characteristic. i.e., larger intraclass variance, which makes semantic
segmentation for remote sensing images more challenging. Moreover, existing
semantic segmentation models for remote sensing images usually employ a vanilla
softmax classifier, which has three drawbacks: (1) non-direct supervision for
the pixel representations during training; (2) inadequate modeling ability of
parametric softmax classifiers under large intraclass variance; and (3) opaque
process of classification decision. In this paper, we propose a novel
classifier (called CenterSeg) customized for RSI semantic segmentation, which
solves the abovementioned problems with multiple prototypes, direct supervision
under Grassmann manifold, and interpretability strategy. Specifically, for each
class, our CenterSeg obtains local class centers by aggregating corresponding
pixel features based on ground-truth masks, and generates multiple prototypes
through hard attention assignment and momentum updating. In addition, we
introduce the Grassmann manifold and constrain the joint embedding space of
pixel features and prototypes based on two additional regularization terms.
Especially, during the inference, CenterSeg can further provide
interpretability to the model by restricting the prototype as a sample of the
training set. Experimental results on three remote sensing segmentation
datasets validate the effectiveness of the model. Besides the superior
performance, CenterSeg has the advantages of simplicity, lightweight,
compatibility, and interpretability. Code is available at
https://github.com/xwmaxwma/rssegmentation.
|
2503.16964 | Jiadong Tang | Jiadong Tang, Yu Gao, Dianyi Yang, Liqi Yan, Yufeng Yue, Yi Yang | DroneSplat: 3D Gaussian Splatting for Robust 3D Reconstruction from
In-the-Wild Drone Imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drones have become essential tools for reconstructing wild scenes due to
their outstanding maneuverability. Recent advances in radiance field methods
have achieved remarkable rendering quality, providing a new avenue for 3D
reconstruction from drone imagery. However, dynamic distractors in wild
environments challenge the static scene assumption in radiance fields, while
limited view constraints hinder the accurate capture of underlying scene
geometry. To address these challenges, we introduce DroneSplat, a novel
framework designed for robust 3D reconstruction from in-the-wild drone imagery.
Our method adaptively adjusts masking thresholds by integrating local-global
segmentation heuristics with statistical approaches, enabling precise
identification and elimination of dynamic distractors in static scenes. We
enhance 3D Gaussian Splatting with multi-view stereo predictions and a
voxel-guided optimization strategy, supporting high-quality rendering under
limited view constraints. For comprehensive evaluation, we provide a
drone-captured 3D reconstruction dataset encompassing both dynamic and static
scenes. Extensive experiments demonstrate that DroneSplat outperforms both 3DGS
and NeRF baselines in handling in-the-wild drone imagery.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:21:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Tang",
"Jiadong",
""
],
[
"Gao",
"Yu",
""
],
[
"Yang",
"Dianyi",
""
],
[
"Yan",
"Liqi",
""
],
[
"Yue",
"Yufeng",
""
],
[
"Yang",
"Yi",
""
]
] | TITLE: DroneSplat: 3D Gaussian Splatting for Robust 3D Reconstruction from
In-the-Wild Drone Imagery
ABSTRACT: Drones have become essential tools for reconstructing wild scenes due to
their outstanding maneuverability. Recent advances in radiance field methods
have achieved remarkable rendering quality, providing a new avenue for 3D
reconstruction from drone imagery. However, dynamic distractors in wild
environments challenge the static scene assumption in radiance fields, while
limited view constraints hinder the accurate capture of underlying scene
geometry. To address these challenges, we introduce DroneSplat, a novel
framework designed for robust 3D reconstruction from in-the-wild drone imagery.
Our method adaptively adjusts masking thresholds by integrating local-global
segmentation heuristics with statistical approaches, enabling precise
identification and elimination of dynamic distractors in static scenes. We
enhance 3D Gaussian Splatting with multi-view stereo predictions and a
voxel-guided optimization strategy, supporting high-quality rendering under
limited view constraints. For comprehensive evaluation, we provide a
drone-captured 3D reconstruction dataset encompassing both dynamic and static
scenes. Extensive experiments demonstrate that DroneSplat outperforms both 3DGS
and NeRF baselines in handling in-the-wild drone imagery.
|
2503.16970 | Yingping Liang | Yingping Liang, Yutao Hu, Wenqi Shao, Ying Fu | Distilling Monocular Foundation Model for Fine-grained Depth Completion | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Depth completion involves predicting dense depth maps from sparse LiDAR
inputs. However, sparse depth annotations from sensors limit the availability
of dense supervision, which is necessary for learning detailed geometric
features. In this paper, we propose a two-stage knowledge distillation
framework that leverages powerful monocular foundation models to provide dense
supervision for depth completion. In the first stage, we introduce a
pre-training strategy that generates diverse training data from natural images,
which distills geometric knowledge to depth completion. Specifically, we
simulate LiDAR scans by utilizing monocular depth and mesh reconstruction,
thereby creating training data without requiring ground-truth depth. Besides,
monocular depth estimation suffers from inherent scale ambiguity in real-world
settings. To address this, in the second stage, we employ a scale- and
shift-invariant loss (SSI Loss) to learn real-world scales when fine-tuning on
real-world datasets. Our two-stage distillation framework enables depth
completion models to harness the strengths of monocular foundation models.
Experimental results demonstrate that models trained with our two-stage
distillation framework achieve state-of-the-art performance, ranking
\textbf{first place} on the KITTI benchmark. Code is available at
https://github.com/Sharpiless/DMD3C
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:34:01 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liang",
"Yingping",
""
],
[
"Hu",
"Yutao",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Fu",
"Ying",
""
]
] | TITLE: Distilling Monocular Foundation Model for Fine-grained Depth Completion
ABSTRACT: Depth completion involves predicting dense depth maps from sparse LiDAR
inputs. However, sparse depth annotations from sensors limit the availability
of dense supervision, which is necessary for learning detailed geometric
features. In this paper, we propose a two-stage knowledge distillation
framework that leverages powerful monocular foundation models to provide dense
supervision for depth completion. In the first stage, we introduce a
pre-training strategy that generates diverse training data from natural images,
which distills geometric knowledge to depth completion. Specifically, we
simulate LiDAR scans by utilizing monocular depth and mesh reconstruction,
thereby creating training data without requiring ground-truth depth. Besides,
monocular depth estimation suffers from inherent scale ambiguity in real-world
settings. To address this, in the second stage, we employ a scale- and
shift-invariant loss (SSI Loss) to learn real-world scales when fine-tuning on
real-world datasets. Our two-stage distillation framework enables depth
completion models to harness the strengths of monocular foundation models.
Experimental results demonstrate that models trained with our two-stage
distillation framework achieve state-of-the-art performance, ranking
\textbf{first place} on the KITTI benchmark. Code is available at
https://github.com/Sharpiless/DMD3C
|
2503.16976 | Weihao Yu | Weihao Yu, Xiaoqing Guo, Chenxin Li, Yifan Liu, Yixuan Yuan | GeoT: Geometry-guided Instance-dependent Transition Matrix for
Semi-supervised Tooth Point Cloud Segmentation | IPMI2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Achieving meticulous segmentation of tooth point clouds from intra-oral scans
stands as an indispensable prerequisite for various orthodontic applications.
Given the labor-intensive nature of dental annotation, a significant amount of
data remains unlabeled, driving increasing interest in semi-supervised
approaches. One primary challenge of existing semi-supervised medical
segmentation methods lies in noisy pseudo labels generated for unlabeled data.
To address this challenge, we propose GeoT, the first framework that employs
instance-dependent transition matrix (IDTM) to explicitly model noise in pseudo
labels for semi-supervised dental segmentation. Specifically, to handle the
extensive solution space of IDTM arising from tens of thousands of dental
points, we introduce tooth geometric priors through two key components:
point-level geometric regularization (PLGR) to enhance consistency between
point adjacency relationships in 3D and IDTM spaces, and class-level geometric
smoothing (CLGS) to leverage the fixed spatial distribution of tooth categories
for optimal IDTM estimation. Extensive experiments performed on the public
Teeth3DS dataset and private dataset demonstrate that our method can make full
utilization of unlabeled data to facilitate segmentation, achieving performance
comparable to fully supervised methods with only $20\%$ of the labeled data.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:43:57 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yu",
"Weihao",
""
],
[
"Guo",
"Xiaoqing",
""
],
[
"Li",
"Chenxin",
""
],
[
"Liu",
"Yifan",
""
],
[
"Yuan",
"Yixuan",
""
]
] | TITLE: GeoT: Geometry-guided Instance-dependent Transition Matrix for
Semi-supervised Tooth Point Cloud Segmentation
ABSTRACT: Achieving meticulous segmentation of tooth point clouds from intra-oral scans
stands as an indispensable prerequisite for various orthodontic applications.
Given the labor-intensive nature of dental annotation, a significant amount of
data remains unlabeled, driving increasing interest in semi-supervised
approaches. One primary challenge of existing semi-supervised medical
segmentation methods lies in noisy pseudo labels generated for unlabeled data.
To address this challenge, we propose GeoT, the first framework that employs
instance-dependent transition matrix (IDTM) to explicitly model noise in pseudo
labels for semi-supervised dental segmentation. Specifically, to handle the
extensive solution space of IDTM arising from tens of thousands of dental
points, we introduce tooth geometric priors through two key components:
point-level geometric regularization (PLGR) to enhance consistency between
point adjacency relationships in 3D and IDTM spaces, and class-level geometric
smoothing (CLGS) to leverage the fixed spatial distribution of tooth categories
for optimal IDTM estimation. Extensive experiments performed on the public
Teeth3DS dataset and private dataset demonstrate that our method can make full
utilization of unlabeled data to facilitate segmentation, achieving performance
comparable to fully supervised methods with only $20\%$ of the labeled data.
|
2503.16991 | Yuze Li | Yuze Li and Wei Zhu | TRACE: Time SeRies PArameter EffiCient FinE-tuning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an efficient fine-tuning method for time series foundation models,
termed TRACE: Time Series Parameter Efficient Fine-tuning. While pretrained
time series foundation models are gaining popularity, they face the following
challenges: (1) Unlike natural language tasks, time series data vary in
frequency, channel numbers, historical/prediction lengths. For long-term
forecasting tasks in particular, tailored fine-tuning can significantly enhance
performance.(2) Existing parameter-efficient tuning methods like LoRA remain
applicable but require adaptation to temporal characteristics.
To address these challenges, our TRACE framework introduces two key
innovations: (1) Gated DSIC (Gated Dynamic Simulation Importance Calculation),
an unbiased LoRA module importance selection mechanism that ensures conditional
parameter consistency before and after masking. Experiments demonstrate that
Gated DSIC outperforms common fine-tuning. (2) Reconstructed prediction heads
for long-term forecasting tasks, which achieve comparable or superior
performance to linear probing heads while drastically reducing parameter
counts.
Extensive experiments on long-/short-term forecasting and anomaly detection
tasks across diverse datasets, coupled with ablation studies, validate the
effectiveness of our method.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:55:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Li",
"Yuze",
""
],
[
"Zhu",
"Wei",
""
]
] | TITLE: TRACE: Time SeRies PArameter EffiCient FinE-tuning
ABSTRACT: We propose an efficient fine-tuning method for time series foundation models,
termed TRACE: Time Series Parameter Efficient Fine-tuning. While pretrained
time series foundation models are gaining popularity, they face the following
challenges: (1) Unlike natural language tasks, time series data vary in
frequency, channel numbers, historical/prediction lengths. For long-term
forecasting tasks in particular, tailored fine-tuning can significantly enhance
performance.(2) Existing parameter-efficient tuning methods like LoRA remain
applicable but require adaptation to temporal characteristics.
To address these challenges, our TRACE framework introduces two key
innovations: (1) Gated DSIC (Gated Dynamic Simulation Importance Calculation),
an unbiased LoRA module importance selection mechanism that ensures conditional
parameter consistency before and after masking. Experiments demonstrate that
Gated DSIC outperforms common fine-tuning. (2) Reconstructed prediction heads
for long-term forecasting tasks, which achieve comparable or superior
performance to linear probing heads while drastically reducing parameter
counts.
Extensive experiments on long-/short-term forecasting and anomaly detection
tasks across diverse datasets, coupled with ablation studies, validate the
effectiveness of our method.
|
2503.16993 | Tobias Brudermueller | Tobias Brudermueller, Elgar Fleisch, Marina Gonz\'alez Vay\'a,
Thorsten Staake | HEAPO -- An Open Dataset for Heat Pump Optimization with Smart
Electricity Meter Data and On-Site Inspection Protocols | Please note that this manuscript on arXiv is a preprint. The dataset
and dataloader are already available in their initial version, but updates
may occur in future releases as the manuscript is currently under peer
review. If you use the dataset in its initial form, please cite this arXiv
paper. Related GitHub repository: https://github.com/tbrumue/heapo | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Heat pumps are essential for decarbonizing residential heating but consume
substantial electrical energy, impacting operational costs and grid demand.
Many systems run inefficiently due to planning flaws, operational faults, or
misconfigurations. While optimizing performance requires skilled professionals,
labor shortages hinder large-scale interventions. However, digital tools and
improved data availability create new service opportunities for energy
efficiency, predictive maintenance, and demand-side management. To support
research and practical solutions, we present an open-source dataset of
electricity consumption from 1,408 households with heat pumps and smart
electricity meters in the canton of Zurich, Switzerland, recorded at 15-minute
and daily resolutions between 2018-11-03 and 2024-03-21. The dataset includes
household metadata, weather data from 8 stations, and ground truth data from
410 field visit protocols collected by energy consultants during system
optimizations. Additionally, the dataset includes a Python-based data loader to
facilitate seamless data processing and exploration.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 09:58:01 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Brudermueller",
"Tobias",
""
],
[
"Fleisch",
"Elgar",
""
],
[
"Vayá",
"Marina González",
""
],
[
"Staake",
"Thorsten",
""
]
] | TITLE: HEAPO -- An Open Dataset for Heat Pump Optimization with Smart
Electricity Meter Data and On-Site Inspection Protocols
ABSTRACT: Heat pumps are essential for decarbonizing residential heating but consume
substantial electrical energy, impacting operational costs and grid demand.
Many systems run inefficiently due to planning flaws, operational faults, or
misconfigurations. While optimizing performance requires skilled professionals,
labor shortages hinder large-scale interventions. However, digital tools and
improved data availability create new service opportunities for energy
efficiency, predictive maintenance, and demand-side management. To support
research and practical solutions, we present an open-source dataset of
electricity consumption from 1,408 households with heat pumps and smart
electricity meters in the canton of Zurich, Switzerland, recorded at 15-minute
and daily resolutions between 2018-11-03 and 2024-03-21. The dataset includes
household metadata, weather data from 8 stations, and ground truth data from
410 field visit protocols collected by energy consultants during system
optimizations. Additionally, the dataset includes a Python-based data loader to
facilitate seamless data processing and exploration.
|
2503.16997 | Qinghe Ma | Qinghe Ma, Jian Zhang, Zekun Li, Lei Qi, Qian Yu and Yinghuan Shi | Steady Progress Beats Stagnation: Mutual Aid of Foundation and
Conventional Models in Mixed Domain Semi-Supervised Medical Image
Segmentation | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pretrained visual foundation models exhibit impressive general
capabilities. However, the extensive prior knowledge inherent in these models
can sometimes be a double-edged sword when adapting them to downstream tasks in
specific domains. In the context of semi-supervised medical image segmentation
with domain shift, foundation models like MedSAM tend to make overconfident
predictions, some of which are incorrect. The error accumulation hinders the
effective utilization of unlabeled data and limits further improvements. In
this paper, we introduce a Synergistic training framework for Foundation and
Conventional models (SynFoC) to address the issue. We observe that a
conventional model trained from scratch has the ability to correct the
high-confidence mispredictions of the foundation model, while the foundation
model can supervise it with high-quality pseudo-labels in the early training
stages. Furthermore, to enhance the collaborative training effectiveness of
both models and promote reliable convergence towards optimization, the
consensus-divergence consistency regularization is proposed. We demonstrate the
superiority of our method across four public multi-domain datasets. In
particular, our method improves the Dice score by 10.31\% on the Prostate
dataset. Our code is available at https://github.com/MQinghe/SynFoC .
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:03:32 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ma",
"Qinghe",
""
],
[
"Zhang",
"Jian",
""
],
[
"Li",
"Zekun",
""
],
[
"Qi",
"Lei",
""
],
[
"Yu",
"Qian",
""
],
[
"Shi",
"Yinghuan",
""
]
] | TITLE: Steady Progress Beats Stagnation: Mutual Aid of Foundation and
Conventional Models in Mixed Domain Semi-Supervised Medical Image
Segmentation
ABSTRACT: Large pretrained visual foundation models exhibit impressive general
capabilities. However, the extensive prior knowledge inherent in these models
can sometimes be a double-edged sword when adapting them to downstream tasks in
specific domains. In the context of semi-supervised medical image segmentation
with domain shift, foundation models like MedSAM tend to make overconfident
predictions, some of which are incorrect. The error accumulation hinders the
effective utilization of unlabeled data and limits further improvements. In
this paper, we introduce a Synergistic training framework for Foundation and
Conventional models (SynFoC) to address the issue. We observe that a
conventional model trained from scratch has the ability to correct the
high-confidence mispredictions of the foundation model, while the foundation
model can supervise it with high-quality pseudo-labels in the early training
stages. Furthermore, to enhance the collaborative training effectiveness of
both models and promote reliable convergence towards optimization, the
consensus-divergence consistency regularization is proposed. We demonstrate the
superiority of our method across four public multi-domain datasets. In
particular, our method improves the Dice score by 10.31\% on the Prostate
dataset. Our code is available at https://github.com/MQinghe/SynFoC .
|
2503.17002 | Weimin Wang | Weimin Wang, Yu Du, Ting Yang, Yu Liu | Targetless 6DoF Calibration of LiDAR and 2D Scanning Radar Based on
Cylindrical Occupancy | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Owing to the capability for reliable and all-weather long-range sensing, the
fusion of LiDAR and Radar has been widely applied to autonomous vehicles for
robust perception. In practical operation, well manually calibrated extrinsic
parameters, which are crucial for the fusion of multi-modal sensors, may drift
due to the vibration. To address this issue, we present a novel targetless
calibration approach, termed LiRaCo, for the extrinsic 6DoF calibration of
LiDAR and Radar sensors. Although both types of sensors can obtain geometric
information, bridging the geometric correspondences between multi-modal data
without any clues of explicit artificial markers is nontrivial, mainly due to
the low vertical resolution of scanning Radar. To achieve the targetless
calibration, LiRaCo leverages a spatial occupancy consistency between LiDAR
point clouds and Radar scans in a common cylindrical representation,
considering the increasing data sparsity with distance for both sensors.
Specifically, LiRaCo expands the valid Radar scanned pixels into 3D occupancy
grids to constrain LiDAR point clouds based on spatial consistency.
Consequently, a cost function involving extrinsic calibration parameters is
formulated based on the spatial overlap of 3D grids and LiDAR points. Extrinsic
parameters are finally estimated by optimizing the cost function. Comprehensive
quantitative and qualitative experiments on two real outdoor datasets with
different LiDAR sensors demonstrate the feasibility and accuracy of the
proposed method. The source code will be publicly available.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:09:04 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Weimin",
""
],
[
"Du",
"Yu",
""
],
[
"Yang",
"Ting",
""
],
[
"Liu",
"Yu",
""
]
] | TITLE: Targetless 6DoF Calibration of LiDAR and 2D Scanning Radar Based on
Cylindrical Occupancy
ABSTRACT: Owing to the capability for reliable and all-weather long-range sensing, the
fusion of LiDAR and Radar has been widely applied to autonomous vehicles for
robust perception. In practical operation, well manually calibrated extrinsic
parameters, which are crucial for the fusion of multi-modal sensors, may drift
due to the vibration. To address this issue, we present a novel targetless
calibration approach, termed LiRaCo, for the extrinsic 6DoF calibration of
LiDAR and Radar sensors. Although both types of sensors can obtain geometric
information, bridging the geometric correspondences between multi-modal data
without any clues of explicit artificial markers is nontrivial, mainly due to
the low vertical resolution of scanning Radar. To achieve the targetless
calibration, LiRaCo leverages a spatial occupancy consistency between LiDAR
point clouds and Radar scans in a common cylindrical representation,
considering the increasing data sparsity with distance for both sensors.
Specifically, LiRaCo expands the valid Radar scanned pixels into 3D occupancy
grids to constrain LiDAR point clouds based on spatial consistency.
Consequently, a cost function involving extrinsic calibration parameters is
formulated based on the spatial overlap of 3D grids and LiDAR points. Extrinsic
parameters are finally estimated by optimizing the cost function. Comprehensive
quantitative and qualitative experiments on two real outdoor datasets with
different LiDAR sensors demonstrate the feasibility and accuracy of the
proposed method. The source code will be publicly available.
|
2503.17012 | Ziqi Ji | Ziqi Ji, Gang Du, Penghao Duan | Learning Non-Ideal Single Vortex Flows Using the Differentiable Vortex
Particle Method | null | null | null | null | physics.flu-dyn physics.comp-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study extends the differentiable vortex particle method (DVPM) beyond
idealized flow scenarios to encompass more realistic, non-ideal conditions,
including viscous flow and flow subjected to non-conservative body forces. We
establish the Lamb-Oseen vortex as a benchmark case, representing a fundamental
viscous single vortex flow in fluid mechanics. This selection offers
significant analytical advantages, as the Lamb-Oseen vortex possesses an exact
analytical solution derived from the Navier-Stokes (NS) equations, thereby
providing definitive ground truth data for training and validation purposes.
Through rigorous evaluation across a spectrum of Reynolds numbers, we
demonstrate that DVPM achieves superior accuracy in modeling the Lamb-Oseen
vortex compared to conventional convolutional neural networks (CNNs) and
physics-informed neural networks (PINNs). Our results substantiate DVPM's
robust capabilities in modeling non-ideal single vortex flows, establishing its
distinct advantages over contemporary deep learning methodologies in fluid
dynamics applications. The dataset and source code are publicly available on
GitHub at the following link:
https://github.com/jh36714753/Learning_Non-Ideal_Single_Vortex_Flows.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:22:34 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Ji",
"Ziqi",
""
],
[
"Du",
"Gang",
""
],
[
"Duan",
"Penghao",
""
]
] | TITLE: Learning Non-Ideal Single Vortex Flows Using the Differentiable Vortex
Particle Method
ABSTRACT: This study extends the differentiable vortex particle method (DVPM) beyond
idealized flow scenarios to encompass more realistic, non-ideal conditions,
including viscous flow and flow subjected to non-conservative body forces. We
establish the Lamb-Oseen vortex as a benchmark case, representing a fundamental
viscous single vortex flow in fluid mechanics. This selection offers
significant analytical advantages, as the Lamb-Oseen vortex possesses an exact
analytical solution derived from the Navier-Stokes (NS) equations, thereby
providing definitive ground truth data for training and validation purposes.
Through rigorous evaluation across a spectrum of Reynolds numbers, we
demonstrate that DVPM achieves superior accuracy in modeling the Lamb-Oseen
vortex compared to conventional convolutional neural networks (CNNs) and
physics-informed neural networks (PINNs). Our results substantiate DVPM's
robust capabilities in modeling non-ideal single vortex flows, establishing its
distinct advantages over contemporary deep learning methodologies in fluid
dynamics applications. The dataset and source code are publicly available on
GitHub at the following link:
https://github.com/jh36714753/Learning_Non-Ideal_Single_Vortex_Flows.
|
2503.17015 | Sonali Parbhoo | Haoyang Hong, Ioanna Papanikolaou, Sonali Parbhoo | Do regularization methods for shortcut mitigation work as intended? | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | Mitigating shortcuts, where models exploit spurious correlations in training
data, remains a significant challenge for improving generalization.
Regularization methods have been proposed to address this issue by enhancing
model generalizability. However, we demonstrate that these methods can
sometimes overregularize, inadvertently suppressing causal features along with
spurious ones. In this work, we analyze the theoretical mechanisms by which
regularization mitigates shortcuts and explore the limits of its effectiveness.
Additionally, we identify the conditions under which regularization can
successfully eliminate shortcuts without compromising causal features. Through
experiments on synthetic and real-world datasets, our comprehensive analysis
provides valuable insights into the strengths and limitations of regularization
techniques for addressing shortcuts, offering guidance for developing more
robust models.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:24:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hong",
"Haoyang",
""
],
[
"Papanikolaou",
"Ioanna",
""
],
[
"Parbhoo",
"Sonali",
""
]
] | TITLE: Do regularization methods for shortcut mitigation work as intended?
ABSTRACT: Mitigating shortcuts, where models exploit spurious correlations in training
data, remains a significant challenge for improving generalization.
Regularization methods have been proposed to address this issue by enhancing
model generalizability. However, we demonstrate that these methods can
sometimes overregularize, inadvertently suppressing causal features along with
spurious ones. In this work, we analyze the theoretical mechanisms by which
regularization mitigates shortcuts and explore the limits of its effectiveness.
Additionally, we identify the conditions under which regularization can
successfully eliminate shortcuts without compromising causal features. Through
experiments on synthetic and real-world datasets, our comprehensive analysis
provides valuable insights into the strengths and limitations of regularization
techniques for addressing shortcuts, offering guidance for developing more
robust models.
|
2503.17024 | Paul Hager | David Mildenberger, Paul Hager, Daniel Rueckert, Martin J Menten | A Tale of Two Classes: Adapting Supervised Contrastive Learning to
Binary Imbalanced Datasets | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised contrastive learning (SupCon) has proven to be a powerful
alternative to the standard cross-entropy loss for classification of
multi-class balanced datasets. However, it struggles to learn well-conditioned
representations of datasets with long-tailed class distributions. This problem
is potentially exacerbated for binary imbalanced distributions, which are
commonly encountered during many real-world problems such as medical diagnosis.
In experiments on seven binary datasets of natural and medical images, we show
that the performance of SupCon decreases with increasing class imbalance. To
substantiate these findings, we introduce two novel metrics that evaluate the
quality of the learned representation space. By measuring the class
distribution in local neighborhoods, we are able to uncover structural
deficiencies of the representation space that classical metrics cannot detect.
Informed by these insights, we propose two new supervised contrastive learning
strategies tailored to binary imbalanced datasets that improve the structure of
the representation space and increase downstream classification accuracy over
standard SupCon by up to 35%. We make our code available.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:34:51 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Mildenberger",
"David",
""
],
[
"Hager",
"Paul",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Menten",
"Martin J",
""
]
] | TITLE: A Tale of Two Classes: Adapting Supervised Contrastive Learning to
Binary Imbalanced Datasets
ABSTRACT: Supervised contrastive learning (SupCon) has proven to be a powerful
alternative to the standard cross-entropy loss for classification of
multi-class balanced datasets. However, it struggles to learn well-conditioned
representations of datasets with long-tailed class distributions. This problem
is potentially exacerbated for binary imbalanced distributions, which are
commonly encountered during many real-world problems such as medical diagnosis.
In experiments on seven binary datasets of natural and medical images, we show
that the performance of SupCon decreases with increasing class imbalance. To
substantiate these findings, we introduce two novel metrics that evaluate the
quality of the learned representation space. By measuring the class
distribution in local neighborhoods, we are able to uncover structural
deficiencies of the representation space that classical metrics cannot detect.
Informed by these insights, we propose two new supervised contrastive learning
strategies tailored to binary imbalanced datasets that improve the structure of
the representation space and increase downstream classification accuracy over
standard SupCon by up to 35%. We make our code available.
|
2503.17029 | Hu Junjie | Junjie Hu, Shuyong Gao, Qianyu Guo, Yan Wang, Qishan Wang, Yuang Feng,
Wenqiang Zhang | AnimatePainter: A Self-Supervised Rendering Framework for Reconstructing
Painting Process | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Humans can intuitively decompose an image into a sequence of strokes to
create a painting, yet existing methods for generating drawing processes are
limited to specific data types and often rely on expensive human-annotated
datasets. We propose a novel self-supervised framework for generating drawing
processes from any type of image, treating the task as a video generation
problem. Our approach reverses the drawing process by progressively removing
strokes from a reference image, simulating a human-like creation sequence.
Crucially, our method does not require costly datasets of real human drawing
processes; instead, we leverage depth estimation and stroke rendering to
construct a self-supervised dataset. We model human drawings as "refinement"
and "layering" processes and introduce depth fusion layers to enable video
generation models to learn and replicate human drawing behavior. Extensive
experiments validate the effectiveness of our approach, demonstrating its
ability to generate realistic drawings without the need for real drawing
process data.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:39:04 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hu",
"Junjie",
""
],
[
"Gao",
"Shuyong",
""
],
[
"Guo",
"Qianyu",
""
],
[
"Wang",
"Yan",
""
],
[
"Wang",
"Qishan",
""
],
[
"Feng",
"Yuang",
""
],
[
"Zhang",
"Wenqiang",
""
]
] | TITLE: AnimatePainter: A Self-Supervised Rendering Framework for Reconstructing
Painting Process
ABSTRACT: Humans can intuitively decompose an image into a sequence of strokes to
create a painting, yet existing methods for generating drawing processes are
limited to specific data types and often rely on expensive human-annotated
datasets. We propose a novel self-supervised framework for generating drawing
processes from any type of image, treating the task as a video generation
problem. Our approach reverses the drawing process by progressively removing
strokes from a reference image, simulating a human-like creation sequence.
Crucially, our method does not require costly datasets of real human drawing
processes; instead, we leverage depth estimation and stroke rendering to
construct a self-supervised dataset. We model human drawings as "refinement"
and "layering" processes and introduce depth fusion layers to enable video
generation models to learn and replicate human drawing behavior. Extensive
experiments validate the effectiveness of our approach, demonstrating its
ability to generate realistic drawings without the need for real drawing
process data.
|
2503.17034 | Stephen Lloyd-Brown | Stephen Lloyd-Brown, Susan Francis, Caroline Hoad, Penny Gowland,
Karen Mullinger, Andrew French and Xin Chen | An Attentive Representative Sample Selection Strategy Combined with
Balanced Batch Training for Skin Lesion Segmentation | Accepted to ISBI 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An often overlooked problem in medical image segmentation research is the
effective selection of training subsets to annotate from a complete set of
unlabelled data. Many studies select their training sets at random, which may
lead to suboptimal model performance, especially in the minimal supervision
setting where each training image has a profound effect on performance
outcomes. This work aims to address this issue. We use prototypical contrasting
learning and clustering to extract representative and diverse samples for
annotation. We improve upon prior works with a bespoke cluster-based image
selection process. Additionally, we introduce the concept of unsupervised
balanced batch dataloading to medical image segmentation, which aims to improve
model learning with minimally annotated data. We evaluated our method on a
public skin lesion dataset (ISIC 2018) and compared it to another
state-of-the-art data sampling method. Our method achieved superior performance
in a low annotation budget scenario.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:42:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lloyd-Brown",
"Stephen",
""
],
[
"Francis",
"Susan",
""
],
[
"Hoad",
"Caroline",
""
],
[
"Gowland",
"Penny",
""
],
[
"Mullinger",
"Karen",
""
],
[
"French",
"Andrew",
""
],
[
"Chen",
"Xin",
""
]
] | TITLE: An Attentive Representative Sample Selection Strategy Combined with
Balanced Batch Training for Skin Lesion Segmentation
ABSTRACT: An often overlooked problem in medical image segmentation research is the
effective selection of training subsets to annotate from a complete set of
unlabelled data. Many studies select their training sets at random, which may
lead to suboptimal model performance, especially in the minimal supervision
setting where each training image has a profound effect on performance
outcomes. This work aims to address this issue. We use prototypical contrasting
learning and clustering to extract representative and diverse samples for
annotation. We improve upon prior works with a bespoke cluster-based image
selection process. Additionally, we introduce the concept of unsupervised
balanced batch dataloading to medical image segmentation, which aims to improve
model learning with minimally annotated data. We evaluated our method on a
public skin lesion dataset (ISIC 2018) and compared it to another
state-of-the-art data sampling method. Our method achieved superior performance
in a low annotation budget scenario.
|
2503.17039 | Jeremy Barnes | Jeremy Barnes, Naiara Perez, Alba Bonet-Jover, Bego\~na Altuna | Summarization Metrics for Spanish and Basque: Do Automatic Scores and
LLM-Judges Correlate with Humans? | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Studies on evaluation metrics and LLM-as-a-Judge models for automatic text
summarization have largely been focused on English, limiting our understanding
of their effectiveness in other languages. Through our new dataset BASSE
(BAsque and Spanish Summarization Evaluation), we address this situation by
collecting human judgments on 2,040 abstractive summaries in Basque and
Spanish, generated either manually or by five LLMs with four different prompts.
For each summary, annotators evaluated five criteria on a 5-point Likert scale:
coherence, consistency, fluency, relevance, and 5W1H. We use these data to
reevaluate traditional automatic metrics used for evaluating summaries, as well
as several LLM-as-a-Judge models that show strong performance on this task in
English. Our results show that currently proprietary judge LLMs have the
highest correlation with human judgments, followed by criteria-specific
automatic metrics, while open-sourced judge LLMs perform poorly. We release
BASSE and our code publicly, along with the first large-scale Basque
summarization dataset containing 22,525 news articles with their subheads.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 10:52:20 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Barnes",
"Jeremy",
""
],
[
"Perez",
"Naiara",
""
],
[
"Bonet-Jover",
"Alba",
""
],
[
"Altuna",
"Begoña",
""
]
] | TITLE: Summarization Metrics for Spanish and Basque: Do Automatic Scores and
LLM-Judges Correlate with Humans?
ABSTRACT: Studies on evaluation metrics and LLM-as-a-Judge models for automatic text
summarization have largely been focused on English, limiting our understanding
of their effectiveness in other languages. Through our new dataset BASSE
(BAsque and Spanish Summarization Evaluation), we address this situation by
collecting human judgments on 2,040 abstractive summaries in Basque and
Spanish, generated either manually or by five LLMs with four different prompts.
For each summary, annotators evaluated five criteria on a 5-point Likert scale:
coherence, consistency, fluency, relevance, and 5W1H. We use these data to
reevaluate traditional automatic metrics used for evaluating summaries, as well
as several LLM-as-a-Judge models that show strong performance on this task in
English. Our results show that currently proprietary judge LLMs have the
highest correlation with human judgments, followed by criteria-specific
automatic metrics, while open-sourced judge LLMs perform poorly. We release
BASSE and our code publicly, along with the first large-scale Basque
summarization dataset containing 22,525 news articles with their subheads.
|
2503.17044 | Chandan Yeshwanth | Chandan Yeshwanth, David Rozenberszki, Angela Dai | ExCap3D: Expressive 3D Scene Understanding via Object Captioning with
Varying Detail | Project page: https://cy94.github.io/excap3d/, Video:
https://www.youtube.com/watch?v=SQRV1l_0oY0 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating text descriptions of objects in 3D indoor scenes is an important
building block of embodied understanding. Existing methods do this by
describing objects at a single level of detail, which often does not capture
fine-grained details such as varying textures, materials, and shapes of the
parts of objects. We propose the task of expressive 3D captioning: given an
input 3D scene, describe objects at multiple levels of detail: a high-level
object description, and a low-level description of the properties of its parts.
To produce such captions, we present ExCap3D, an expressive 3D captioning model
which takes as input a 3D scan, and for each detected object in the scan,
generates a fine-grained collective description of the parts of the object,
along with an object-level description conditioned on the part-level
description. We design ExCap3D to encourage semantic consistency between the
generated text descriptions, as well as textual similarity in the latent space,
to further increase the quality of the generated captions. To enable this task,
we generated the ExCap3D Dataset by leveraging a visual-language model (VLM)
for multi-view captioning. The ExCap3D Dataset contains captions on the
ScanNet++ dataset with varying levels of detail, comprising 190k text
descriptions of 34k 3D objects in 947 indoor scenes. Our experiments show that
the object- and part-level of detail captions generated by ExCap3D are of
higher quality than those produced by state-of-the-art methods, with a Cider
score improvement of 17% and 124% for object- and part-level details
respectively. Our code, dataset and models will be made publicly available.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:00:12 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yeshwanth",
"Chandan",
""
],
[
"Rozenberszki",
"David",
""
],
[
"Dai",
"Angela",
""
]
] | TITLE: ExCap3D: Expressive 3D Scene Understanding via Object Captioning with
Varying Detail
ABSTRACT: Generating text descriptions of objects in 3D indoor scenes is an important
building block of embodied understanding. Existing methods do this by
describing objects at a single level of detail, which often does not capture
fine-grained details such as varying textures, materials, and shapes of the
parts of objects. We propose the task of expressive 3D captioning: given an
input 3D scene, describe objects at multiple levels of detail: a high-level
object description, and a low-level description of the properties of its parts.
To produce such captions, we present ExCap3D, an expressive 3D captioning model
which takes as input a 3D scan, and for each detected object in the scan,
generates a fine-grained collective description of the parts of the object,
along with an object-level description conditioned on the part-level
description. We design ExCap3D to encourage semantic consistency between the
generated text descriptions, as well as textual similarity in the latent space,
to further increase the quality of the generated captions. To enable this task,
we generated the ExCap3D Dataset by leveraging a visual-language model (VLM)
for multi-view captioning. The ExCap3D Dataset contains captions on the
ScanNet++ dataset with varying levels of detail, comprising 190k text
descriptions of 34k 3D objects in 947 indoor scenes. Our experiments show that
the object- and part-level of detail captions generated by ExCap3D are of
higher quality than those produced by state-of-the-art methods, with a Cider
score improvement of 17% and 124% for object- and part-level details
respectively. Our code, dataset and models will be made publicly available.
|
2503.17050 | Yuang Feng | Yuang Feng, Shuyong Gao, Fuzhen Yan, Yicheng Song, Lingyi Hong, Junjie
Hu, Wenqiang Zhang | Scoring, Remember, and Reference: Catching Camouflaged Objects in Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Video Camouflaged Object Detection (VCOD) aims to segment objects whose
appearances closely resemble their surroundings, posing a challenging and
emerging task. Existing vision models often struggle in such scenarios due to
the indistinguishable appearance of camouflaged objects and the insufficient
exploitation of dynamic information in videos. To address these challenges, we
propose an end-to-end VCOD framework inspired by human memory-recognition,
which leverages historical video information by integrating memory reference
frames for camouflaged sequence processing. Specifically, we design a
dual-purpose decoder that simultaneously generates predicted masks and scores,
enabling reference frame selection based on scores while introducing auxiliary
supervision to enhance feature extraction.Furthermore, this study introduces a
novel reference-guided multilevel asymmetric attention mechanism, effectively
integrating long-term reference information with short-term motion cues for
comprehensive feature extraction. By combining these modules, we develop the
Scoring, Remember, and Reference (SRR) framework, which efficiently extracts
information to locate targets and employs memory guidance to improve subsequent
processing. With its optimized module design and effective utilization of video
data, our model achieves significant performance improvements, surpassing
existing approaches by 10% on benchmark datasets while requiring fewer
parameters (54M) and only a single pass through the video. The code will be
made publicly available.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:08:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Feng",
"Yuang",
""
],
[
"Gao",
"Shuyong",
""
],
[
"Yan",
"Fuzhen",
""
],
[
"Song",
"Yicheng",
""
],
[
"Hong",
"Lingyi",
""
],
[
"Hu",
"Junjie",
""
],
[
"Zhang",
"Wenqiang",
""
]
] | TITLE: Scoring, Remember, and Reference: Catching Camouflaged Objects in Videos
ABSTRACT: Video Camouflaged Object Detection (VCOD) aims to segment objects whose
appearances closely resemble their surroundings, posing a challenging and
emerging task. Existing vision models often struggle in such scenarios due to
the indistinguishable appearance of camouflaged objects and the insufficient
exploitation of dynamic information in videos. To address these challenges, we
propose an end-to-end VCOD framework inspired by human memory-recognition,
which leverages historical video information by integrating memory reference
frames for camouflaged sequence processing. Specifically, we design a
dual-purpose decoder that simultaneously generates predicted masks and scores,
enabling reference frame selection based on scores while introducing auxiliary
supervision to enhance feature extraction.Furthermore, this study introduces a
novel reference-guided multilevel asymmetric attention mechanism, effectively
integrating long-term reference information with short-term motion cues for
comprehensive feature extraction. By combining these modules, we develop the
Scoring, Remember, and Reference (SRR) framework, which efficiently extracts
information to locate targets and employs memory guidance to improve subsequent
processing. With its optimized module design and effective utilization of video
data, our model achieves significant performance improvements, surpassing
existing approaches by 10% on benchmark datasets while requiring fewer
parameters (54M) and only a single pass through the video. The code will be
made publicly available.
|
2503.17060 | Zhibin Gao | Yujie Liu, Xiaoying Wang, Yuzhou Hao, Xuejie Li, Jun Sun, Turab
Lookman, Xiangdong Ding, Zhibin Gao | PINK: physical-informed machine learning for lattice thermal
conductivity | 21 pages, 10 figures | null | 10.20517/jmi.2024.86 | null | cond-mat.mtrl-sci cond-mat.mes-hall physics.app-ph physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Lattice thermal conductivity ($\kappa_L$) is crucial for efficient thermal
management in electronics and energy conversion technologies. Traditional
methods for predicting \k{appa}L are often computationally expensive, limiting
their scalability for large-scale material screening. Empirical models, such as
the Slack model, offer faster alternatives but require time-consuming
calculations for key parameters such as sound velocity and the Gruneisen
parameter. This work presents a high-throughput framework, physical-informed
kappa (PINK), which combines the predictive power of crystal graph
convolutional neural networks (CGCNNs) with the physical interpretability of
the Slack model to predict \k{appa}L directly from crystallographic information
files (CIFs). Unlike previous approaches, PINK enables rapid, batch predictions
by extracting material properties such as bulk and shear modulus from CIFs
using a well-trained CGCNN model. These properties are then used to compute the
necessary parameters for $\kappa_L$ calculation through a simplified physical
formula. PINK was applied to a dataset of 377,221 stable materials, enabling
the efficient identification of promising candidates with ultralow $\kappa_L$
values, such as Ag$_3$Te$_4$W and Ag$_3$Te$_4$Ta. The platform, accessible via
a user-friendly interface, offers an unprecedented combination of speed,
accuracy, and scalability, significantly accelerating material discovery for
thermal management and energy conversion applications.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:27:28 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Yujie",
""
],
[
"Wang",
"Xiaoying",
""
],
[
"Hao",
"Yuzhou",
""
],
[
"Li",
"Xuejie",
""
],
[
"Sun",
"Jun",
""
],
[
"Lookman",
"Turab",
""
],
[
"Ding",
"Xiangdong",
""
],
[
"Gao",
"Zhibin",
""
]
] | TITLE: PINK: physical-informed machine learning for lattice thermal
conductivity
ABSTRACT: Lattice thermal conductivity ($\kappa_L$) is crucial for efficient thermal
management in electronics and energy conversion technologies. Traditional
methods for predicting \k{appa}L are often computationally expensive, limiting
their scalability for large-scale material screening. Empirical models, such as
the Slack model, offer faster alternatives but require time-consuming
calculations for key parameters such as sound velocity and the Gruneisen
parameter. This work presents a high-throughput framework, physical-informed
kappa (PINK), which combines the predictive power of crystal graph
convolutional neural networks (CGCNNs) with the physical interpretability of
the Slack model to predict \k{appa}L directly from crystallographic information
files (CIFs). Unlike previous approaches, PINK enables rapid, batch predictions
by extracting material properties such as bulk and shear modulus from CIFs
using a well-trained CGCNN model. These properties are then used to compute the
necessary parameters for $\kappa_L$ calculation through a simplified physical
formula. PINK was applied to a dataset of 377,221 stable materials, enabling
the efficient identification of promising candidates with ultralow $\kappa_L$
values, such as Ag$_3$Te$_4$W and Ag$_3$Te$_4$Ta. The platform, accessible via
a user-friendly interface, offers an unprecedented combination of speed,
accuracy, and scalability, significantly accelerating material discovery for
thermal management and energy conversion applications.
|
2503.17061 | Rachmad Vidya Wicaksana Putra | Mishal Fatima Minhas, Rachmad Vidya Wicaksana Putra, Falah Awwad,
Osman Hasan, Muhammad Shafique | Replay4NCL: An Efficient Memory Replay-based Methodology for
Neuromorphic Continual Learning in Embedded AI Systems | Accepted at the 62th Design Automation Conference (DAC) 2025, June
2025, San Francisco, CA, USA | null | null | null | cs.NE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromorphic Continual Learning (NCL) paradigm leverages Spiking Neural
Networks (SNNs) to enable continual learning (CL) capabilities for AI systems
to adapt to dynamically changing environments. Currently, the state-of-the-art
employ a memory replay-based method to maintain the old knowledge. However,
this technique relies on long timesteps and compression-decompression steps,
thereby incurring significant latency and energy overheads, which are not
suitable for tightly-constrained embedded AI systems (e.g., mobile
agents/robotics). To address this, we propose Replay4NCL, a novel efficient
memory replay-based methodology for enabling NCL in embedded AI systems.
Specifically, Replay4NCL compresses the latent data (old knowledge), then
replays them during the NCL training phase with small timesteps, to minimize
the processing latency and energy consumption. To compensate the information
loss from reduced spikes, we adjust the neuron threshold potential and learning
rate settings. Experimental results on the class-incremental scenario with the
Spiking Heidelberg Digits (SHD) dataset show that Replay4NCL can preserve old
knowledge with Top-1 accuracy of 90.43% compared to 86.22% from the
state-of-the-art, while effectively learning new tasks, achieving 4.88x latency
speed-up, 20% latent memory saving, and 36.43% energy saving. These results
highlight the potential of our Replay4NCL methodology to further advances NCL
capabilities for embedded AI systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:33:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Minhas",
"Mishal Fatima",
""
],
[
"Putra",
"Rachmad Vidya Wicaksana",
""
],
[
"Awwad",
"Falah",
""
],
[
"Hasan",
"Osman",
""
],
[
"Shafique",
"Muhammad",
""
]
] | TITLE: Replay4NCL: An Efficient Memory Replay-based Methodology for
Neuromorphic Continual Learning in Embedded AI Systems
ABSTRACT: Neuromorphic Continual Learning (NCL) paradigm leverages Spiking Neural
Networks (SNNs) to enable continual learning (CL) capabilities for AI systems
to adapt to dynamically changing environments. Currently, the state-of-the-art
employ a memory replay-based method to maintain the old knowledge. However,
this technique relies on long timesteps and compression-decompression steps,
thereby incurring significant latency and energy overheads, which are not
suitable for tightly-constrained embedded AI systems (e.g., mobile
agents/robotics). To address this, we propose Replay4NCL, a novel efficient
memory replay-based methodology for enabling NCL in embedded AI systems.
Specifically, Replay4NCL compresses the latent data (old knowledge), then
replays them during the NCL training phase with small timesteps, to minimize
the processing latency and energy consumption. To compensate the information
loss from reduced spikes, we adjust the neuron threshold potential and learning
rate settings. Experimental results on the class-incremental scenario with the
Spiking Heidelberg Digits (SHD) dataset show that Replay4NCL can preserve old
knowledge with Top-1 accuracy of 90.43% compared to 86.22% from the
state-of-the-art, while effectively learning new tasks, achieving 4.88x latency
speed-up, 20% latent memory saving, and 36.43% energy saving. These results
highlight the potential of our Replay4NCL methodology to further advances NCL
capabilities for embedded AI systems.
|
2503.17067 | Kai Wang | Kai Wang, Zhen Sun, Bailing Wang, Qilin Fan, Ming Li, Hongke Zhang | ATHENA: An In-vehicle CAN Intrusion Detection Framework Based on
Physical Characteristics of Vehicle Systems | 13 pages, 9 figures, 4 tables | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing interconnection between In-Vehicle Networks (IVNs) and
external environments, intelligent vehicles are increasingly vulnerable to
sophisticated external network attacks. This paper proposes ATHENA, the first
IVN intrusion detection framework that adopts a vehicle-cloud integrated
architecture to achieve better security performance for the
resource-constrained vehicular environment. Specifically, in the cloud with
sufficient resources, ATHENA uses the clustering method of multi-distribution
mixture model combined with deep data mining technology to generate the raw
Payload Rule Bank of IVN CAN messages, and then improves the rule quality with
the help of exploitation on the first-principled physical knowledge of the
vehicle system, after which the payload rules are periodically sent to the
vehicle terminal. At the vehicle terminal, a simple LSTM component is used to
generate the Time Rule Bank representing the long-term time series dependencies
and the periodic characteristics of CAN messages, but not for any detection
tasks as in traditional usage scenarios, where only the generated time rules
are the candidates for further IVN intrusion detection tasks. Based on both the
payload and time rules generated from cloud and vehicle terminal, ATHENA can
achieve efficient intrusion detection capability by simple rule-base matching
operations, rather than using complex black-box reasoning of resource-intensive
neural network models, which is in fact only used for rule logic generation
phase instead of the actual intrusion detection phase in our framework.
Comparative experimental results on the ROAD dataset, which is current the most
outstanding real-world in-vehicle CAN dataset covering new instances of
sophisticated and stealthy masquerade attacks, demonstrate ATHENA significantly
outperforms the state-of-the-art IVN intrusion detection methods in detecting
complex attacks.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:49:08 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Kai",
""
],
[
"Sun",
"Zhen",
""
],
[
"Wang",
"Bailing",
""
],
[
"Fan",
"Qilin",
""
],
[
"Li",
"Ming",
""
],
[
"Zhang",
"Hongke",
""
]
] | TITLE: ATHENA: An In-vehicle CAN Intrusion Detection Framework Based on
Physical Characteristics of Vehicle Systems
ABSTRACT: With the growing interconnection between In-Vehicle Networks (IVNs) and
external environments, intelligent vehicles are increasingly vulnerable to
sophisticated external network attacks. This paper proposes ATHENA, the first
IVN intrusion detection framework that adopts a vehicle-cloud integrated
architecture to achieve better security performance for the
resource-constrained vehicular environment. Specifically, in the cloud with
sufficient resources, ATHENA uses the clustering method of multi-distribution
mixture model combined with deep data mining technology to generate the raw
Payload Rule Bank of IVN CAN messages, and then improves the rule quality with
the help of exploitation on the first-principled physical knowledge of the
vehicle system, after which the payload rules are periodically sent to the
vehicle terminal. At the vehicle terminal, a simple LSTM component is used to
generate the Time Rule Bank representing the long-term time series dependencies
and the periodic characteristics of CAN messages, but not for any detection
tasks as in traditional usage scenarios, where only the generated time rules
are the candidates for further IVN intrusion detection tasks. Based on both the
payload and time rules generated from cloud and vehicle terminal, ATHENA can
achieve efficient intrusion detection capability by simple rule-base matching
operations, rather than using complex black-box reasoning of resource-intensive
neural network models, which is in fact only used for rule logic generation
phase instead of the actual intrusion detection phase in our framework.
Comparative experimental results on the ROAD dataset, which is current the most
outstanding real-world in-vehicle CAN dataset covering new instances of
sophisticated and stealthy masquerade attacks, demonstrate ATHENA significantly
outperforms the state-of-the-art IVN intrusion detection methods in detecting
complex attacks.
|
2503.17069 | Yufei Shi | Yufei Shi, Weilong Yan, Gang Xu, Yumeng Li, Yuchen Li, Zhenxi Li, Fei
Richard Yu, Ming Li, Si Yong Yeo | PVChat: Personalized Video Chat with One-Shot Learning | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video large language models (ViLLMs) excel in general video understanding,
e.g., recognizing activities like talking and eating, but struggle with
identity-aware comprehension, such as "Wilson is receiving chemotherapy" or
"Tom is discussing with Sarah", limiting their applicability in smart
healthcare and smart home environments. To address this limitation, we propose
a one-shot learning framework PVChat, the first personalized ViLLM that enables
subject-aware question answering (QA) from a single video for each subject. Our
approach optimizes a Mixture-of-Heads (MoH) enhanced ViLLM on a synthetically
augmented video-QA dataset, leveraging a progressive image-to-video learning
strategy. Specifically, we introduce an automated augmentation pipeline that
synthesizes identity-preserving positive samples and retrieves hard negatives
from existing video corpora, generating a diverse training dataset with four QA
types: existence, appearance, action, and location inquiries. To enhance
subject-specific learning, we propose a ReLU Routing MoH attention mechanism,
alongside two novel objectives: (1) Smooth Proximity Regularization for
progressive learning through exponential distance scaling and (2) Head
Activation Enhancement for balanced attention routing. Finally, we adopt a
two-stage training strategy, transitioning from image pre-training to video
fine-tuning, enabling a gradual learning process from static attributes to
dynamic representations. We evaluate PVChat on diverse datasets covering
medical scenarios, TV series, anime, and real-world footage, demonstrating its
superiority in personalized feature understanding after learning from a single
video, compared to state-of-the-art ViLLMs.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:50:06 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Shi",
"Yufei",
""
],
[
"Yan",
"Weilong",
""
],
[
"Xu",
"Gang",
""
],
[
"Li",
"Yumeng",
""
],
[
"Li",
"Yuchen",
""
],
[
"Li",
"Zhenxi",
""
],
[
"Yu",
"Fei Richard",
""
],
[
"Li",
"Ming",
""
],
[
"Yeo",
"Si Yong",
""
]
] | TITLE: PVChat: Personalized Video Chat with One-Shot Learning
ABSTRACT: Video large language models (ViLLMs) excel in general video understanding,
e.g., recognizing activities like talking and eating, but struggle with
identity-aware comprehension, such as "Wilson is receiving chemotherapy" or
"Tom is discussing with Sarah", limiting their applicability in smart
healthcare and smart home environments. To address this limitation, we propose
a one-shot learning framework PVChat, the first personalized ViLLM that enables
subject-aware question answering (QA) from a single video for each subject. Our
approach optimizes a Mixture-of-Heads (MoH) enhanced ViLLM on a synthetically
augmented video-QA dataset, leveraging a progressive image-to-video learning
strategy. Specifically, we introduce an automated augmentation pipeline that
synthesizes identity-preserving positive samples and retrieves hard negatives
from existing video corpora, generating a diverse training dataset with four QA
types: existence, appearance, action, and location inquiries. To enhance
subject-specific learning, we propose a ReLU Routing MoH attention mechanism,
alongside two novel objectives: (1) Smooth Proximity Regularization for
progressive learning through exponential distance scaling and (2) Head
Activation Enhancement for balanced attention routing. Finally, we adopt a
two-stage training strategy, transitioning from image pre-training to video
fine-tuning, enabling a gradual learning process from static attributes to
dynamic representations. We evaluate PVChat on diverse datasets covering
medical scenarios, TV series, anime, and real-world footage, demonstrating its
superiority in personalized feature understanding after learning from a single
video, compared to state-of-the-art ViLLMs.
|
2503.17071 | Pablo Garcia-Fernandez | Pablo Garcia-Fernandez, Lorenzo Vaquero, Mingxuan Liu, Feng Xue,
Daniel Cores, Nicu Sebe, Manuel Mucientes, Elisa Ricci | Superpowering Open-Vocabulary Object Detectors for X-ray Vision | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Open-vocabulary object detection (OvOD) is set to revolutionize security
screening by enabling systems to recognize any item in X-ray scans. However,
developing effective OvOD models for X-ray imaging presents unique challenges
due to data scarcity and the modality gap that prevents direct adoption of
RGB-based solutions. To overcome these limitations, we propose RAXO, a
training-free framework that repurposes off-the-shelf RGB OvOD detectors for
robust X-ray detection. RAXO builds high-quality X-ray class descriptors using
a dual-source retrieval strategy. It gathers relevant RGB images from the web
and enriches them via a novel X-ray material transfer mechanism, eliminating
the need for labeled databases. These visual descriptors replace text-based
classification in OvOD, leveraging intra-modal feature distances for robust
detection. Extensive experiments demonstrate that RAXO consistently improves
OvOD performance, providing an average mAP increase of up to 17.0 points over
base detectors. To further support research in this emerging field, we also
introduce DET-COMPASS, a new benchmark featuring bounding box annotations for
over 300 object categories, enabling large-scale evaluation of OvOD in X-ray.
Code and dataset available at: https://github.com/PAGF188/RAXO.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 11:54:16 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Garcia-Fernandez",
"Pablo",
""
],
[
"Vaquero",
"Lorenzo",
""
],
[
"Liu",
"Mingxuan",
""
],
[
"Xue",
"Feng",
""
],
[
"Cores",
"Daniel",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Mucientes",
"Manuel",
""
],
[
"Ricci",
"Elisa",
""
]
] | TITLE: Superpowering Open-Vocabulary Object Detectors for X-ray Vision
ABSTRACT: Open-vocabulary object detection (OvOD) is set to revolutionize security
screening by enabling systems to recognize any item in X-ray scans. However,
developing effective OvOD models for X-ray imaging presents unique challenges
due to data scarcity and the modality gap that prevents direct adoption of
RGB-based solutions. To overcome these limitations, we propose RAXO, a
training-free framework that repurposes off-the-shelf RGB OvOD detectors for
robust X-ray detection. RAXO builds high-quality X-ray class descriptors using
a dual-source retrieval strategy. It gathers relevant RGB images from the web
and enriches them via a novel X-ray material transfer mechanism, eliminating
the need for labeled databases. These visual descriptors replace text-based
classification in OvOD, leveraging intra-modal feature distances for robust
detection. Extensive experiments demonstrate that RAXO consistently improves
OvOD performance, providing an average mAP increase of up to 17.0 points over
base detectors. To further support research in this emerging field, we also
introduce DET-COMPASS, a new benchmark featuring bounding box annotations for
over 300 object categories, enabling large-scale evaluation of OvOD in X-ray.
Code and dataset available at: https://github.com/PAGF188/RAXO.
|
2503.17076 | Victor Besnier | Victor Besnier, Mickael Chen, David Hurych, Eduardo Valle, Matthieu
Cord | Halton Scheduler For Masked Generative Image Transformer | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Masked Generative Image Transformers (MaskGIT) have emerged as a scalable and
efficient image generation framework, able to deliver high-quality visuals with
low inference costs. However, MaskGIT's token unmasking scheduler, an essential
component of the framework, has not received the attention it deserves. We
analyze the sampling objective in MaskGIT, based on the mutual information
between tokens, and elucidate its shortcomings. We then propose a new sampling
strategy based on our Halton scheduler instead of the original Confidence
scheduler. More precisely, our method selects the token's position according to
a quasi-random, low-discrepancy Halton sequence. Intuitively, that method
spreads the tokens spatially, progressively covering the image uniformly at
each step. Our analysis shows that it allows reducing non-recoverable sampling
errors, leading to simpler hyper-parameters tuning and better quality images.
Our scheduler does not require retraining or noise injection and may serve as a
simple drop-in replacement for the original sampling strategy. Evaluation of
both class-to-image synthesis on ImageNet and text-to-image generation on the
COCO dataset demonstrates that the Halton scheduler outperforms the Confidence
scheduler quantitatively by reducing the FID and qualitatively by generating
more diverse and more detailed images. Our code is at
https://github.com/valeoai/Halton-MaskGIT.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:00:59 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Besnier",
"Victor",
""
],
[
"Chen",
"Mickael",
""
],
[
"Hurych",
"David",
""
],
[
"Valle",
"Eduardo",
""
],
[
"Cord",
"Matthieu",
""
]
] | TITLE: Halton Scheduler For Masked Generative Image Transformer
ABSTRACT: Masked Generative Image Transformers (MaskGIT) have emerged as a scalable and
efficient image generation framework, able to deliver high-quality visuals with
low inference costs. However, MaskGIT's token unmasking scheduler, an essential
component of the framework, has not received the attention it deserves. We
analyze the sampling objective in MaskGIT, based on the mutual information
between tokens, and elucidate its shortcomings. We then propose a new sampling
strategy based on our Halton scheduler instead of the original Confidence
scheduler. More precisely, our method selects the token's position according to
a quasi-random, low-discrepancy Halton sequence. Intuitively, that method
spreads the tokens spatially, progressively covering the image uniformly at
each step. Our analysis shows that it allows reducing non-recoverable sampling
errors, leading to simpler hyper-parameters tuning and better quality images.
Our scheduler does not require retraining or noise injection and may serve as a
simple drop-in replacement for the original sampling strategy. Evaluation of
both class-to-image synthesis on ImageNet and text-to-image generation on the
COCO dataset demonstrates that the Halton scheduler outperforms the Confidence
scheduler quantitatively by reducing the FID and qualitatively by generating
more diverse and more detailed images. Our code is at
https://github.com/valeoai/Halton-MaskGIT.
|
2503.17089 | Tiarna Lee | Tiarna Lee, Esther Puyol-Ant\'on, Bram Ruijsink, Miaojing Shi, Andrew
P. King | Does a Rising Tide Lift All Boats? Bias Mitigation for AI-based CMR
Segmentation | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Artificial intelligence (AI) is increasingly being used for medical imaging
tasks. However, there can be biases in the resulting models, particularly when
they were trained using imbalanced training datasets. One such example has been
the strong race bias effect in cardiac magnetic resonance (CMR) image
segmentation models. Although this phenomenon has been reported in a number of
publications, little is known about the effectiveness of bias mitigation
algorithms in this domain. We aim to investigate the impact of common bias
mitigation methods to address bias between Black and White subjects in AI-based
CMR segmentation models. Specifically, we use oversampling, importance
reweighing and Group DRO as well as combinations of these techniques to
mitigate the race bias. Furthermore, motivated by recent findings on the root
causes of AI-based CMR segmentation bias, we evaluate the same methods using
models trained and evaluated on cropped CMR images. We find that bias can be
mitigated using oversampling, significantly improving performance for the
underrepresented Black subjects whilst not significantly reducing the majority
White subjects' performance. Group DRO also improves performance for Black
subjects but not significantly, while reweighing decreases performance for
Black subjects. Using a combination of oversampling and Group DRO also improves
performance for Black subjects but not significantly. Using cropped images
increases performance for both races and reduces the bias, whilst adding
oversampling as a bias mitigation technique with cropped images reduces the
bias further.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:17:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lee",
"Tiarna",
""
],
[
"Puyol-Antón",
"Esther",
""
],
[
"Ruijsink",
"Bram",
""
],
[
"Shi",
"Miaojing",
""
],
[
"King",
"Andrew P.",
""
]
] | TITLE: Does a Rising Tide Lift All Boats? Bias Mitigation for AI-based CMR
Segmentation
ABSTRACT: Artificial intelligence (AI) is increasingly being used for medical imaging
tasks. However, there can be biases in the resulting models, particularly when
they were trained using imbalanced training datasets. One such example has been
the strong race bias effect in cardiac magnetic resonance (CMR) image
segmentation models. Although this phenomenon has been reported in a number of
publications, little is known about the effectiveness of bias mitigation
algorithms in this domain. We aim to investigate the impact of common bias
mitigation methods to address bias between Black and White subjects in AI-based
CMR segmentation models. Specifically, we use oversampling, importance
reweighing and Group DRO as well as combinations of these techniques to
mitigate the race bias. Furthermore, motivated by recent findings on the root
causes of AI-based CMR segmentation bias, we evaluate the same methods using
models trained and evaluated on cropped CMR images. We find that bias can be
mitigated using oversampling, significantly improving performance for the
underrepresented Black subjects whilst not significantly reducing the majority
White subjects' performance. Group DRO also improves performance for Black
subjects but not significantly, while reweighing decreases performance for
Black subjects. Using a combination of oversampling and Group DRO also improves
performance for Black subjects but not significantly. Using cropped images
increases performance for both races and reduces the bias, whilst adding
oversampling as a bias mitigation technique with cropped images reduces the
bias further.
|
2503.17093 | Johan Edstedt | Johan Edstedt, Andr\'e Mateus, Alberto Jaenal | ColabSfM: Collaborative Structure-from-Motion by Point Cloud
Registration | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structure-from-Motion (SfM) is the task of estimating 3D structure and camera
poses from images. We define Collaborative SfM (ColabSfM) as sharing
distributed SfM reconstructions. Sharing maps requires estimating a joint
reference frame, which is typically referred to as registration. However, there
is a lack of scalable methods and training datasets for registering SfM
reconstructions. In this paper, we tackle this challenge by proposing the
scalable task of point cloud registration for SfM reconstructions. We find that
current registration methods cannot register SfM point clouds when trained on
existing datasets. To this end, we propose a SfM registration dataset
generation pipeline, leveraging partial reconstructions from synthetically
generated camera trajectories for each scene. Finally, we propose a simple but
impactful neural refiner on top of the SotA registration method RoITr that
yields significant improvements, which we call RefineRoITr. Our extensive
experimental evaluation shows that our proposed pipeline and model enables
ColabSfM. Code is available at https://github.com/EricssonResearch/ColabSfM
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:21:48 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Edstedt",
"Johan",
""
],
[
"Mateus",
"André",
""
],
[
"Jaenal",
"Alberto",
""
]
] | TITLE: ColabSfM: Collaborative Structure-from-Motion by Point Cloud
Registration
ABSTRACT: Structure-from-Motion (SfM) is the task of estimating 3D structure and camera
poses from images. We define Collaborative SfM (ColabSfM) as sharing
distributed SfM reconstructions. Sharing maps requires estimating a joint
reference frame, which is typically referred to as registration. However, there
is a lack of scalable methods and training datasets for registering SfM
reconstructions. In this paper, we tackle this challenge by proposing the
scalable task of point cloud registration for SfM reconstructions. We find that
current registration methods cannot register SfM point clouds when trained on
existing datasets. To this end, we propose a SfM registration dataset
generation pipeline, leveraging partial reconstructions from synthetically
generated camera trajectories for each scene. Finally, we propose a simple but
impactful neural refiner on top of the SotA registration method RoITr that
yields significant improvements, which we call RefineRoITr. Our extensive
experimental evaluation shows that our proposed pipeline and model enables
ColabSfM. Code is available at https://github.com/EricssonResearch/ColabSfM
|
2503.17095 | Kwan Yun | Kwan Yun, Chaelin Kim, Hangyeul Shin, and Junyong Noh | FFaceNeRF: Few-shot Face Editing in Neural Radiance Fields | CVPR2025, 11 pages, 14 figures | null | null | null | cs.GR cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent 3D face editing methods using masks have produced high-quality edited
images by leveraging Neural Radiance Fields (NeRF). Despite their impressive
performance, existing methods often provide limited user control due to the use
of pre-trained segmentation masks. To utilize masks with a desired layout, an
extensive training dataset is required, which is challenging to gather. We
present FFaceNeRF, a NeRF-based face editing technique that can overcome the
challenge of limited user control due to the use of fixed mask layouts. Our
method employs a geometry adapter with feature injection, allowing for
effective manipulation of geometry attributes. Additionally, we adopt latent
mixing for tri-plane augmentation, which enables training with a few samples.
This facilitates rapid model adaptation to desired mask layouts, crucial for
applications in fields like personalized medical imaging or creative face
editing. Our comparative evaluations demonstrate that FFaceNeRF surpasses
existing mask based face editing methods in terms of flexibility, control, and
generated image quality, paving the way for future advancements in customized
and high-fidelity 3D face editing. The code is available on the
{\href{https://kwanyun.github.io/FFaceNeRF_page/}{project-page}}.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:24:58 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Yun",
"Kwan",
""
],
[
"Kim",
"Chaelin",
""
],
[
"Shin",
"Hangyeul",
""
],
[
"Noh",
"Junyong",
""
]
] | TITLE: FFaceNeRF: Few-shot Face Editing in Neural Radiance Fields
ABSTRACT: Recent 3D face editing methods using masks have produced high-quality edited
images by leveraging Neural Radiance Fields (NeRF). Despite their impressive
performance, existing methods often provide limited user control due to the use
of pre-trained segmentation masks. To utilize masks with a desired layout, an
extensive training dataset is required, which is challenging to gather. We
present FFaceNeRF, a NeRF-based face editing technique that can overcome the
challenge of limited user control due to the use of fixed mask layouts. Our
method employs a geometry adapter with feature injection, allowing for
effective manipulation of geometry attributes. Additionally, we adopt latent
mixing for tri-plane augmentation, which enables training with a few samples.
This facilitates rapid model adaptation to desired mask layouts, crucial for
applications in fields like personalized medical imaging or creative face
editing. Our comparative evaluations demonstrate that FFaceNeRF surpasses
existing mask based face editing methods in terms of flexibility, control, and
generated image quality, paving the way for future advancements in customized
and high-fidelity 3D face editing. The code is available on the
{\href{https://kwanyun.github.io/FFaceNeRF_page/}{project-page}}.
|
2503.17097 | Boyuan Zheng | Boyuan Zheng, Shouyi Lu, Renbo Huang, Minqing Huang, Fan Lu, Wei Tian,
Guirong Zhuo and Lu Xiong | R2LDM: An Efficient 4D Radar Super-Resolution Framework Leveraging
Diffusion Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce R2LDM, an innovative approach for generating dense and accurate
4D radar point clouds, guided by corresponding LiDAR point clouds. Instead of
utilizing range images or bird's eye view (BEV) images, we represent both LiDAR
and 4D radar point clouds using voxel features, which more effectively capture
3D shape information. Subsequently, we propose the Latent Voxel Diffusion Model
(LVDM), which performs the diffusion process in the latent space. Additionally,
a novel Latent Point Cloud Reconstruction (LPCR) module is utilized to
reconstruct point clouds from high-dimensional latent voxel features. As a
result, R2LDM effectively generates LiDAR-like point clouds from paired raw
radar data. We evaluate our approach on two different datasets, and the
experimental results demonstrate that our model achieves 6- to 10-fold
densification of radar point clouds, outperforming state-of-the-art baselines
in 4D radar point cloud super-resolution. Furthermore, the enhanced radar point
clouds generated by our method significantly improve downstream tasks,
achieving up to 31.7% improvement in point cloud registration recall rate and
24.9% improvement in object detection accuracy.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:30:33 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zheng",
"Boyuan",
""
],
[
"Lu",
"Shouyi",
""
],
[
"Huang",
"Renbo",
""
],
[
"Huang",
"Minqing",
""
],
[
"Lu",
"Fan",
""
],
[
"Tian",
"Wei",
""
],
[
"Zhuo",
"Guirong",
""
],
[
"Xiong",
"Lu",
""
]
] | TITLE: R2LDM: An Efficient 4D Radar Super-Resolution Framework Leveraging
Diffusion Model
ABSTRACT: We introduce R2LDM, an innovative approach for generating dense and accurate
4D radar point clouds, guided by corresponding LiDAR point clouds. Instead of
utilizing range images or bird's eye view (BEV) images, we represent both LiDAR
and 4D radar point clouds using voxel features, which more effectively capture
3D shape information. Subsequently, we propose the Latent Voxel Diffusion Model
(LVDM), which performs the diffusion process in the latent space. Additionally,
a novel Latent Point Cloud Reconstruction (LPCR) module is utilized to
reconstruct point clouds from high-dimensional latent voxel features. As a
result, R2LDM effectively generates LiDAR-like point clouds from paired raw
radar data. We evaluate our approach on two different datasets, and the
experimental results demonstrate that our model achieves 6- to 10-fold
densification of radar point clouds, outperforming state-of-the-art baselines
in 4D radar point cloud super-resolution. Furthermore, the enhanced radar point
clouds generated by our method significantly improve downstream tasks,
achieving up to 31.7% improvement in point cloud registration recall rate and
24.9% improvement in object detection accuracy.
|
2503.17101 | Jun Lu | Jun Lu, Tianyi Xu, Bill Ding, David Li, Yu Kang | Large Language Model Compression via the Nested Activation-Aware
Decomposition | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we tackle the critical challenge of compressing large language
models (LLMs) to facilitate their practical deployment and broader adoption. We
introduce a novel post-training compression paradigm that focuses on low-rank
decomposition of LLM weights. Our analysis identifies two main challenges in
this task: the variability in LLM activation distributions and handling unseen
activations from different datasets and models.
To address these challenges, we propose a nested activation-aware framework
(NSVD) for LLMs, a training-free approach designed to enhance the accuracy of
low-rank decompositions by managing activation outliers through transforming
the weight matrix based on activation distribution and the original weight
matrix. This method allows for the absorption of outliers into the transformed
weight matrix, improving decomposition accuracy. Our comprehensive evaluation
across eight datasets and six models from three distinct LLM families
demonstrates the superiority of NSVD over current state-of-the-art methods,
especially at medium to large compression ratios or in multilingual and
multitask settings.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:39:16 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lu",
"Jun",
""
],
[
"Xu",
"Tianyi",
""
],
[
"Ding",
"Bill",
""
],
[
"Li",
"David",
""
],
[
"Kang",
"Yu",
""
]
] | TITLE: Large Language Model Compression via the Nested Activation-Aware
Decomposition
ABSTRACT: In this paper, we tackle the critical challenge of compressing large language
models (LLMs) to facilitate their practical deployment and broader adoption. We
introduce a novel post-training compression paradigm that focuses on low-rank
decomposition of LLM weights. Our analysis identifies two main challenges in
this task: the variability in LLM activation distributions and handling unseen
activations from different datasets and models.
To address these challenges, we propose a nested activation-aware framework
(NSVD) for LLMs, a training-free approach designed to enhance the accuracy of
low-rank decompositions by managing activation outliers through transforming
the weight matrix based on activation distribution and the original weight
matrix. This method allows for the absorption of outliers into the transformed
weight matrix, improving decomposition accuracy. Our comprehensive evaluation
across eight datasets and six models from three distinct LLM families
demonstrates the superiority of NSVD over current state-of-the-art methods,
especially at medium to large compression ratios or in multilingual and
multitask settings.
|
2503.17105 | Andrea Loddo | Marco Usai, Andrea Loddo, Alessandra Perniciano, Maurizio Atzori,
Cecilia Di Ruberto | A Comparative Analysis of Image Descriptors for Histopathological
Classification of Gastric Cancer | null | null | null | ITADATA/2024/14 | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gastric cancer ranks as the fifth most common and fourth most lethal cancer
globally, with a dismal 5-year survival rate of approximately 20%. Despite
extensive research on its pathobiology, the prognostic predictability remains
inadequate, compounded by pathologists' high workload and potential diagnostic
errors. Thus, automated, accurate histopathological diagnosis tools are
crucial. This study employs Machine Learning and Deep Learning techniques to
classify histopathological images into healthy and cancerous categories. Using
handcrafted and deep features with shallow learning classifiers on the
GasHisSDB dataset, we offer a comparative analysis and insights into the most
robust and high-performing combinations of features and classifiers for
distinguishing between normal and abnormal histopathological images without
fine-tuning strategies. With the RF classifier, our approach can reach F1 of
93.4%, demonstrating its validity.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:46:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Usai",
"Marco",
""
],
[
"Loddo",
"Andrea",
""
],
[
"Perniciano",
"Alessandra",
""
],
[
"Atzori",
"Maurizio",
""
],
[
"Di Ruberto",
"Cecilia",
""
]
] | TITLE: A Comparative Analysis of Image Descriptors for Histopathological
Classification of Gastric Cancer
ABSTRACT: Gastric cancer ranks as the fifth most common and fourth most lethal cancer
globally, with a dismal 5-year survival rate of approximately 20%. Despite
extensive research on its pathobiology, the prognostic predictability remains
inadequate, compounded by pathologists' high workload and potential diagnostic
errors. Thus, automated, accurate histopathological diagnosis tools are
crucial. This study employs Machine Learning and Deep Learning techniques to
classify histopathological images into healthy and cancerous categories. Using
handcrafted and deep features with shallow learning classifiers on the
GasHisSDB dataset, we offer a comparative analysis and insights into the most
robust and high-performing combinations of features and classifiers for
distinguishing between normal and abnormal histopathological images without
fine-tuning strategies. With the RF classifier, our approach can reach F1 of
93.4%, demonstrating its validity.
|
2503.17106 | Yizhe Liu | Yizhe Liu, Tong Jia, Da Cai, Hao Wang, Dongyue Chen | GAA-TSO: Geometry-Aware Assisted Depth Completion for Transparent and
Specular Objects | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transparent and specular objects are frequently encountered in daily life,
factories, and laboratories. However, due to the unique optical properties, the
depth information on these objects is usually incomplete and inaccurate, which
poses significant challenges for downstream robotics tasks. Therefore, it is
crucial to accurately restore the depth information of transparent and specular
objects. Previous depth completion methods for these objects usually use RGB
information as an additional channel of the depth image to perform depth
prediction. Due to the poor-texture characteristics of transparent and specular
objects, these methods that rely heavily on color information tend to generate
structure-less depth predictions. Moreover, these 2D methods cannot effectively
explore the 3D structure hidden in the depth channel, resulting in depth
ambiguity. To this end, we propose a geometry-aware assisted depth completion
method for transparent and specular objects, which focuses on exploring the 3D
structural cues of the scene. Specifically, besides extracting 2D features from
RGB-D input, we back-project the input depth to a point cloud and build the 3D
branch to extract hierarchical scene-level 3D structural features. To exploit
3D geometric information, we design several gated cross-modal fusion modules to
effectively propagate multi-level 3D geometric features to the image branch. In
addition, we propose an adaptive correlation aggregation strategy to
appropriately assign 3D features to the corresponding 2D features. Extensive
experiments on ClearGrasp, OOD, TransCG, and STD datasets show that our method
outperforms other state-of-the-art methods. We further demonstrate that our
method significantly enhances the performance of downstream robotic grasping
tasks.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:46:38 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Liu",
"Yizhe",
""
],
[
"Jia",
"Tong",
""
],
[
"Cai",
"Da",
""
],
[
"Wang",
"Hao",
""
],
[
"Chen",
"Dongyue",
""
]
] | TITLE: GAA-TSO: Geometry-Aware Assisted Depth Completion for Transparent and
Specular Objects
ABSTRACT: Transparent and specular objects are frequently encountered in daily life,
factories, and laboratories. However, due to the unique optical properties, the
depth information on these objects is usually incomplete and inaccurate, which
poses significant challenges for downstream robotics tasks. Therefore, it is
crucial to accurately restore the depth information of transparent and specular
objects. Previous depth completion methods for these objects usually use RGB
information as an additional channel of the depth image to perform depth
prediction. Due to the poor-texture characteristics of transparent and specular
objects, these methods that rely heavily on color information tend to generate
structure-less depth predictions. Moreover, these 2D methods cannot effectively
explore the 3D structure hidden in the depth channel, resulting in depth
ambiguity. To this end, we propose a geometry-aware assisted depth completion
method for transparent and specular objects, which focuses on exploring the 3D
structural cues of the scene. Specifically, besides extracting 2D features from
RGB-D input, we back-project the input depth to a point cloud and build the 3D
branch to extract hierarchical scene-level 3D structural features. To exploit
3D geometric information, we design several gated cross-modal fusion modules to
effectively propagate multi-level 3D geometric features to the image branch. In
addition, we propose an adaptive correlation aggregation strategy to
appropriately assign 3D features to the corresponding 2D features. Extensive
experiments on ClearGrasp, OOD, TransCG, and STD datasets show that our method
outperforms other state-of-the-art methods. We further demonstrate that our
method significantly enhances the performance of downstream robotic grasping
tasks.
|
2503.17107 | Andrea Loddo | Davide Antonio Mura, Michela Pinna, Lorenzo Putzu, Andrea Loddo,
Alessandra Perniciano, Olga Mulas, Cecilia Di Ruberto | Exploring Few-Shot Object Detection on Blood Smear Images: A Case Study
of Leukocytes and Schistocytes | null | null | null | ITADATA/2024/15 | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of blood disorders often hinges upon the quantification of
specific blood cell types. Variations in cell counts may indicate the presence
of pathological conditions. Thus, the significance of developing precise
automatic systems for blood cell enumeration is underscored. The investigation
focuses on a novel approach termed DE-ViT. This methodology is employed in a
Few-Shot paradigm, wherein training relies on a limited number of images. Two
distinct datasets are utilised for experimental purposes: the Raabin-WBC
dataset for Leukocyte detection and a local dataset for Schistocyte
identification. In addition to the DE-ViT model, two baseline models, Faster
R-CNN 50 and Faster R-CNN X 101, are employed, with their outcomes being
compared against those of the proposed model. While DE-ViT has demonstrated
state-of-the-art performance on the COCO and LVIS datasets, both baseline
models surpassed its performance on the Raabin-WBC dataset. Moreover, only
Faster R-CNN X 101 yielded satisfactory results on the SC-IDB. The observed
disparities in performance may possibly be attributed to domain shift
phenomena.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:46:49 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Mura",
"Davide Antonio",
""
],
[
"Pinna",
"Michela",
""
],
[
"Putzu",
"Lorenzo",
""
],
[
"Loddo",
"Andrea",
""
],
[
"Perniciano",
"Alessandra",
""
],
[
"Mulas",
"Olga",
""
],
[
"Di Ruberto",
"Cecilia",
""
]
] | TITLE: Exploring Few-Shot Object Detection on Blood Smear Images: A Case Study
of Leukocytes and Schistocytes
ABSTRACT: The detection of blood disorders often hinges upon the quantification of
specific blood cell types. Variations in cell counts may indicate the presence
of pathological conditions. Thus, the significance of developing precise
automatic systems for blood cell enumeration is underscored. The investigation
focuses on a novel approach termed DE-ViT. This methodology is employed in a
Few-Shot paradigm, wherein training relies on a limited number of images. Two
distinct datasets are utilised for experimental purposes: the Raabin-WBC
dataset for Leukocyte detection and a local dataset for Schistocyte
identification. In addition to the DE-ViT model, two baseline models, Faster
R-CNN 50 and Faster R-CNN X 101, are employed, with their outcomes being
compared against those of the proposed model. While DE-ViT has demonstrated
state-of-the-art performance on the COCO and LVIS datasets, both baseline
models surpassed its performance on the Raabin-WBC dataset. Moreover, only
Faster R-CNN X 101 yielded satisfactory results on the SC-IDB. The observed
disparities in performance may possibly be attributed to domain shift
phenomena.
|
2503.17110 | Robin Hesse | Robin Hesse, Do\u{g}ukan Ba\u{g}c{\i}, Bernt Schiele, Simone
Schaub-Meyer, Stefan Roth | Beyond Accuracy: What Matters in Designing Well-Behaved Models? | Code: https://github.com/visinf/beyond-accuracy | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has become an essential part of computer vision, with deep
neural networks (DNNs) excelling in predictive performance. However, they often
fall short in other critical quality dimensions, such as robustness,
calibration, or fairness. While existing studies have focused on a subset of
these quality dimensions, none have explored a more general form of
"well-behavedness" of DNNs. With this work, we address this gap by
simultaneously studying nine different quality dimensions for image
classification. Through a large-scale study, we provide a bird's-eye view by
analyzing 326 backbone models and how different training paradigms and model
architectures affect the quality dimensions. We reveal various new insights
such that (i) vision-language models exhibit high fairness on ImageNet-1k
classification and strong robustness against domain changes; (ii)
self-supervised learning is an effective training paradigm to improve almost
all considered quality dimensions; and (iii) the training dataset size is a
major driver for most of the quality dimensions. We conclude our study by
introducing the QUBA score (Quality Understanding Beyond Accuracy), a novel
metric that ranks models across multiple dimensions of quality, enabling
tailored recommendations based on specific user needs.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:54:18 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hesse",
"Robin",
""
],
[
"Bağcı",
"Doğukan",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Schaub-Meyer",
"Simone",
""
],
[
"Roth",
"Stefan",
""
]
] | TITLE: Beyond Accuracy: What Matters in Designing Well-Behaved Models?
ABSTRACT: Deep learning has become an essential part of computer vision, with deep
neural networks (DNNs) excelling in predictive performance. However, they often
fall short in other critical quality dimensions, such as robustness,
calibration, or fairness. While existing studies have focused on a subset of
these quality dimensions, none have explored a more general form of
"well-behavedness" of DNNs. With this work, we address this gap by
simultaneously studying nine different quality dimensions for image
classification. Through a large-scale study, we provide a bird's-eye view by
analyzing 326 backbone models and how different training paradigms and model
architectures affect the quality dimensions. We reveal various new insights
such that (i) vision-language models exhibit high fairness on ImageNet-1k
classification and strong robustness against domain changes; (ii)
self-supervised learning is an effective training paradigm to improve almost
all considered quality dimensions; and (iii) the training dataset size is a
major driver for most of the quality dimensions. We conclude our study by
introducing the QUBA score (Quality Understanding Beyond Accuracy), a novel
metric that ranks models across multiple dimensions of quality, enabling
tailored recommendations based on specific user needs.
|
2503.17111 | Mikhail Kiselev | Mikhail Kiselev | A Digital Machine Learning Algorithm Simulating Spiking Neural Network
CoLaNET | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | During last several years, our research team worked on development of a
spiking neural network (SNN) architecture, which could be used in the wide
range of supervised learning classification tasks. It should work under the
condition, that all participating signals (the classified object description,
correct class label and SNN decision) should have spiking nature. As a result,
the CoLaNET (columnar layered network) SNN architecture was invented. The
distinctive feature of this architecture is a combination of prototypical
network structures corresponding to different classes and significantly
distinctive instances of one class (=columns) and functionally differing
populations of neurons inside columns (=layers). The other distinctive feature
is a novel combination of anti-Hebbian and dopamine-modulated plasticity. While
CoLaNET is relatively simple, it includes several hyperparameters. Their choice
for particular classification tasks is not trivial. Besides that, specific
features of the data classified (e.g. classification of separate pictures like
in MNIST dataset vs. classifying objects in a continuous video stream) require
certain modifications of CoLaNET structure. To solve these problems, the deep
mathematical exploration of CoLaNET should be carried out. However, SNNs, being
stochastic discrete systems, are usually very hard for exact mathematical
analysis. To make it easier, I developed a continuous numeric (non-spiking)
machine learning algorithm which approximates CoLaNET behavior with
satisfactory accuracy. It is described in the paper. At present, it is being
studied by exact analytic methods. We hope that the results of this study could
be applied to direct calculation of CoLaNET hyperparameters and optimization of
its structure.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 12:55:24 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kiselev",
"Mikhail",
""
]
] | TITLE: A Digital Machine Learning Algorithm Simulating Spiking Neural Network
CoLaNET
ABSTRACT: During last several years, our research team worked on development of a
spiking neural network (SNN) architecture, which could be used in the wide
range of supervised learning classification tasks. It should work under the
condition, that all participating signals (the classified object description,
correct class label and SNN decision) should have spiking nature. As a result,
the CoLaNET (columnar layered network) SNN architecture was invented. The
distinctive feature of this architecture is a combination of prototypical
network structures corresponding to different classes and significantly
distinctive instances of one class (=columns) and functionally differing
populations of neurons inside columns (=layers). The other distinctive feature
is a novel combination of anti-Hebbian and dopamine-modulated plasticity. While
CoLaNET is relatively simple, it includes several hyperparameters. Their choice
for particular classification tasks is not trivial. Besides that, specific
features of the data classified (e.g. classification of separate pictures like
in MNIST dataset vs. classifying objects in a continuous video stream) require
certain modifications of CoLaNET structure. To solve these problems, the deep
mathematical exploration of CoLaNET should be carried out. However, SNNs, being
stochastic discrete systems, are usually very hard for exact mathematical
analysis. To make it easier, I developed a continuous numeric (non-spiking)
machine learning algorithm which approximates CoLaNET behavior with
satisfactory accuracy. It is described in the paper. At present, it is being
studied by exact analytic methods. We hope that the results of this study could
be applied to direct calculation of CoLaNET hyperparameters and optimization of
its structure.
|
2503.17116 | Luca Rossetto PhD | Luca Rossetto, Werner Bailer, Duc-Tien Dang-Nguyen, Graham Healy,
Bj\"orn {\TH}\'or J\'onsson, Onanong Kongmeesub, Hoang-Bao Le, Stevan
Rudinac, Klaus Sch\"offmann, Florian Spiess, Allie Tran, Minh-Triet Tran,
Quang-Linh Tran, Cathal Gurrin | The CASTLE 2024 Dataset: Advancing the Art of Multimodal Understanding | 7 pages, 6 figures, dataset available via
https://castle-dataset.github.io/ | null | null | null | cs.MM cs.AI cs.CV cs.IR | http://creativecommons.org/licenses/by/4.0/ | Egocentric video has seen increased interest in recent years, as it is used
in a range of areas. However, most existing datasets are limited to a single
perspective. In this paper, we present the CASTLE 2024 dataset, a multimodal
collection containing ego- and exo-centric (i.e., first- and third-person
perspective) video and audio from 15 time-aligned sources, as well as other
sensor streams and auxiliary data. The dataset was recorded by volunteer
participants over four days in a fixed location and includes the point of view
of 10 participants, with an additional 5 fixed cameras providing an exocentric
perspective. The entire dataset contains over 600 hours of UHD video recorded
at 50 frames per second. In contrast to other datasets, CASTLE 2024 does not
contain any partial censoring, such as blurred faces or distorted audio. The
dataset is available via https://castle-dataset.github.io/.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:01:07 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Rossetto",
"Luca",
""
],
[
"Bailer",
"Werner",
""
],
[
"Dang-Nguyen",
"Duc-Tien",
""
],
[
"Healy",
"Graham",
""
],
[
"Jónsson",
"Björn Þór",
""
],
[
"Kongmeesub",
"Onanong",
""
],
[
"Le",
"Hoang-Bao",
""
],
[
"Rudinac",
"Stevan",
""
],
[
"Schöffmann",
"Klaus",
""
],
[
"Spiess",
"Florian",
""
],
[
"Tran",
"Allie",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Tran",
"Quang-Linh",
""
],
[
"Gurrin",
"Cathal",
""
]
] | TITLE: The CASTLE 2024 Dataset: Advancing the Art of Multimodal Understanding
ABSTRACT: Egocentric video has seen increased interest in recent years, as it is used
in a range of areas. However, most existing datasets are limited to a single
perspective. In this paper, we present the CASTLE 2024 dataset, a multimodal
collection containing ego- and exo-centric (i.e., first- and third-person
perspective) video and audio from 15 time-aligned sources, as well as other
sensor streams and auxiliary data. The dataset was recorded by volunteer
participants over four days in a fixed location and includes the point of view
of 10 participants, with an additional 5 fixed cameras providing an exocentric
perspective. The entire dataset contains over 600 hours of UHD video recorded
at 50 frames per second. In contrast to other datasets, CASTLE 2024 does not
contain any partial censoring, such as blurred faces or distorted audio. The
dataset is available via https://castle-dataset.github.io/.
|
2503.17117 | Th\'eo Bodrito | Th\'eo Bodrito, Olivier Flasseur, Julien Mairal, Jean Ponce, Maud
Langlois, Anne-Marie Lagrange | A New Statistical Model of Star Speckles for Learning to Detect and
Characterize Exoplanets in Direct Imaging Observations | Accepted to CVPR 2025 | null | null | null | astro-ph.IM astro-ph.EP cs.CV cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | The search for exoplanets is an active field in astronomy, with direct
imaging as one of the most challenging methods due to faint exoplanet signals
buried within stronger residual starlight. Successful detection requires
advanced image processing to separate the exoplanet signal from this nuisance
component. This paper presents a novel statistical model that captures nuisance
fluctuations using a multi-scale approach, leveraging problem symmetries and a
joint spectral channel representation grounded in physical principles. Our
model integrates into an interpretable, end-to-end learnable framework for
simultaneous exoplanet detection and flux estimation. The proposed algorithm is
evaluated against the state of the art using datasets from the SPHERE
instrument operating at the Very Large Telescope (VLT). It significantly
improves the precision-recall trade-off, notably on challenging datasets that
are otherwise unusable by astronomers. The proposed approach is computationally
efficient, robust to varying data quality, and well suited for large-scale
observational surveys.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:07:55 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Bodrito",
"Théo",
""
],
[
"Flasseur",
"Olivier",
""
],
[
"Mairal",
"Julien",
""
],
[
"Ponce",
"Jean",
""
],
[
"Langlois",
"Maud",
""
],
[
"Lagrange",
"Anne-Marie",
""
]
] | TITLE: A New Statistical Model of Star Speckles for Learning to Detect and
Characterize Exoplanets in Direct Imaging Observations
ABSTRACT: The search for exoplanets is an active field in astronomy, with direct
imaging as one of the most challenging methods due to faint exoplanet signals
buried within stronger residual starlight. Successful detection requires
advanced image processing to separate the exoplanet signal from this nuisance
component. This paper presents a novel statistical model that captures nuisance
fluctuations using a multi-scale approach, leveraging problem symmetries and a
joint spectral channel representation grounded in physical principles. Our
model integrates into an interpretable, end-to-end learnable framework for
simultaneous exoplanet detection and flux estimation. The proposed algorithm is
evaluated against the state of the art using datasets from the SPHERE
instrument operating at the Very Large Telescope (VLT). It significantly
improves the precision-recall trade-off, notably on challenging datasets that
are otherwise unusable by astronomers. The proposed approach is computationally
efficient, robust to varying data quality, and well suited for large-scale
observational surveys.
|
2503.17126 | John Chung | John Joon Young Chung, Vishakh Padmakumar, Melissa Roemmele, Yuqian
Sun, Max Kreminski | Modifying Large Language Model Post-Training for Diverse Creative
Writing | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As creative writing tasks do not have singular correct answers, large
language models (LLMs) trained to perform these tasks should be able to
generate diverse valid outputs. However, LLM post-training often focuses on
improving generation quality but neglects to facilitate output diversity.
Hence, in creative writing generation, we investigate post-training approaches
to promote both output diversity and quality. Our core idea is to include
deviation -- the degree of difference between a training sample and all other
samples with the same prompt -- in the training objective to facilitate
learning from rare high-quality instances. By adopting our approach to direct
preference optimization (DPO) and odds ratio preference optimization (ORPO), we
demonstrate that we can promote the output diversity of trained models while
minimally decreasing quality. Our best model with 8B parameters could achieve
on-par diversity as a human-created dataset while having output quality similar
to the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We
further validate our approaches with a human evaluation, an ablation, and a
comparison to an existing diversification approach, DivPO.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:21:45 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Chung",
"John Joon Young",
""
],
[
"Padmakumar",
"Vishakh",
""
],
[
"Roemmele",
"Melissa",
""
],
[
"Sun",
"Yuqian",
""
],
[
"Kreminski",
"Max",
""
]
] | TITLE: Modifying Large Language Model Post-Training for Diverse Creative
Writing
ABSTRACT: As creative writing tasks do not have singular correct answers, large
language models (LLMs) trained to perform these tasks should be able to
generate diverse valid outputs. However, LLM post-training often focuses on
improving generation quality but neglects to facilitate output diversity.
Hence, in creative writing generation, we investigate post-training approaches
to promote both output diversity and quality. Our core idea is to include
deviation -- the degree of difference between a training sample and all other
samples with the same prompt -- in the training objective to facilitate
learning from rare high-quality instances. By adopting our approach to direct
preference optimization (DPO) and odds ratio preference optimization (ORPO), we
demonstrate that we can promote the output diversity of trained models while
minimally decreasing quality. Our best model with 8B parameters could achieve
on-par diversity as a human-created dataset while having output quality similar
to the best instruction-tuned models we examined, GPT-4o and DeepSeek-R1. We
further validate our approaches with a human evaluation, an ablation, and a
comparison to an existing diversification approach, DivPO.
|
2503.17136 | Haw-Shiuan Chang | Brihi Joshi, Sriram Venkatapathy, Mohit Bansal, Nanyun Peng,
Haw-Shiuan Chang | CoKe: Customizable Fine-Grained Story Evaluation via Chain-of-Keyword
Rationalization | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Evaluating creative text such as human-written stories using language models
has always been a challenging task -- owing to the subjectivity of
multi-annotator ratings. To mimic the thinking process of humans, chain of
thought (CoT) generates free-text explanations that help guide a model's
predictions and Self-Consistency (SC) marginalizes predictions over multiple
generated explanations. In this study, we discover that the widely-used
self-consistency reasoning methods cause suboptimal results due to an objective
mismatch between generating 'fluent-looking' explanations vs. actually leading
to a good rating prediction for an aspect of a story. To overcome this
challenge, we propose $\textbf{C}$hain-$\textbf{o}$f-$\textbf{Ke}$ywords
(CoKe), that generates a sequence of keywords $\textit{before}$ generating a
free-text rationale, that guide the rating prediction of our evaluation
language model. Then, we generate a diverse set of such keywords, and aggregate
the scores corresponding to these generations. On the StoryER dataset, CoKe
based on our small fine-tuned evaluation models not only reach human-level
performance and significantly outperform GPT-4 with a 2x boost in correlation
with human annotators, but also requires drastically less number of parameters.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:37:46 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Joshi",
"Brihi",
""
],
[
"Venkatapathy",
"Sriram",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Chang",
"Haw-Shiuan",
""
]
] | TITLE: CoKe: Customizable Fine-Grained Story Evaluation via Chain-of-Keyword
Rationalization
ABSTRACT: Evaluating creative text such as human-written stories using language models
has always been a challenging task -- owing to the subjectivity of
multi-annotator ratings. To mimic the thinking process of humans, chain of
thought (CoT) generates free-text explanations that help guide a model's
predictions and Self-Consistency (SC) marginalizes predictions over multiple
generated explanations. In this study, we discover that the widely-used
self-consistency reasoning methods cause suboptimal results due to an objective
mismatch between generating 'fluent-looking' explanations vs. actually leading
to a good rating prediction for an aspect of a story. To overcome this
challenge, we propose $\textbf{C}$hain-$\textbf{o}$f-$\textbf{Ke}$ywords
(CoKe), that generates a sequence of keywords $\textit{before}$ generating a
free-text rationale, that guide the rating prediction of our evaluation
language model. Then, we generate a diverse set of such keywords, and aggregate
the scores corresponding to these generations. On the StoryER dataset, CoKe
based on our small fine-tuned evaluation models not only reach human-level
performance and significantly outperform GPT-4 with a 2x boost in correlation
with human annotators, but also requires drastically less number of parameters.
|
2503.17153 | Fouad Makiyeh | Fouad Makiyeh, Huy-Dung Nguyen, Patrick Chareyre, Ramin Hasani, Marc
Blanchon, Daniela Rus | Enhancing Steering Estimation with Semantic-Aware GNNs | Submitted to ICCV 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Steering estimation is a critical task in autonomous driving, traditionally
relying on 2D image-based models. In this work, we explore the advantages of
incorporating 3D spatial information through hybrid architectures that combine
3D neural network models with recurrent neural networks (RNNs) for temporal
modeling, using LiDAR-based point clouds as input. We systematically evaluate
four hybrid 3D models, all of which outperform the 2D-only baseline, with the
Graph Neural Network (GNN) - RNN model yielding the best results.
To reduce reliance on LiDAR, we leverage a pretrained unified model to
estimate depth from monocular images, reconstructing pseudo-3D point clouds. We
then adapt the GNN-RNN model, originally designed for LiDAR-based point clouds,
to work with these pseudo-3D representations, achieving comparable or even
superior performance compared to the LiDAR-based model. Additionally, the
unified model provides semantic labels for each point, enabling a more
structured scene representation. To further optimize graph construction, we
introduce an efficient connectivity strategy where connections are
predominantly formed between points of the same semantic class, with only 20\%
of inter-class connections retained. This targeted approach reduces graph
complexity and computational cost while preserving critical spatial
relationships.
Finally, we validate our approach on the KITTI dataset, achieving a 71%
improvement over 2D-only models. Our findings highlight the advantages of 3D
spatial information and efficient graph construction for steering estimation,
while maintaining the cost-effectiveness of monocular images and avoiding the
expense of LiDAR-based systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:58:08 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Makiyeh",
"Fouad",
""
],
[
"Nguyen",
"Huy-Dung",
""
],
[
"Chareyre",
"Patrick",
""
],
[
"Hasani",
"Ramin",
""
],
[
"Blanchon",
"Marc",
""
],
[
"Rus",
"Daniela",
""
]
] | TITLE: Enhancing Steering Estimation with Semantic-Aware GNNs
ABSTRACT: Steering estimation is a critical task in autonomous driving, traditionally
relying on 2D image-based models. In this work, we explore the advantages of
incorporating 3D spatial information through hybrid architectures that combine
3D neural network models with recurrent neural networks (RNNs) for temporal
modeling, using LiDAR-based point clouds as input. We systematically evaluate
four hybrid 3D models, all of which outperform the 2D-only baseline, with the
Graph Neural Network (GNN) - RNN model yielding the best results.
To reduce reliance on LiDAR, we leverage a pretrained unified model to
estimate depth from monocular images, reconstructing pseudo-3D point clouds. We
then adapt the GNN-RNN model, originally designed for LiDAR-based point clouds,
to work with these pseudo-3D representations, achieving comparable or even
superior performance compared to the LiDAR-based model. Additionally, the
unified model provides semantic labels for each point, enabling a more
structured scene representation. To further optimize graph construction, we
introduce an efficient connectivity strategy where connections are
predominantly formed between points of the same semantic class, with only 20\%
of inter-class connections retained. This targeted approach reduces graph
complexity and computational cost while preserving critical spatial
relationships.
Finally, we validate our approach on the KITTI dataset, achieving a 71%
improvement over 2D-only models. Our findings highlight the advantages of 3D
spatial information and efficient graph construction for steering estimation,
while maintaining the cost-effectiveness of monocular images and avoiding the
expense of LiDAR-based systems.
|
2503.17156 | Dominik Peters | Th\'eo Delemazure and Rupert Freeman and J\'er\^ome Lang and
Jean-Fran\c{c}ois Laslier and Dominik Peters | Reallocating Wasted Votes in Proportional Parliamentary Elections with
Thresholds | 37 pages | null | null | null | cs.GT econ.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many proportional parliamentary elections, electoral thresholds (typically
3-5%) are used to promote stability and governability by preventing the
election of parties with very small representation. However, these thresholds
often result in a significant number of "wasted votes" cast for parties that
fail to meet the threshold, which reduces representativeness. One proposal is
to allow voters to specify replacement votes, by either indicating a second
choice party or by ranking a subset of the parties, but there are several ways
of deciding on the scores of the parties (and thus the composition of the
parliament) given those votes. We introduce a formal model of party voting with
thresholds, and compare a variety of party selection rules axiomatically, and
experimentally using a dataset we collected during the 2024 European election
in France. We identify three particularly attractive rules, called Direct
Winners Only (DO), Single Transferable Vote (STV) and Greedy Plurality (GP).
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 13:59:49 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Delemazure",
"Théo",
""
],
[
"Freeman",
"Rupert",
""
],
[
"Lang",
"Jérôme",
""
],
[
"Laslier",
"Jean-François",
""
],
[
"Peters",
"Dominik",
""
]
] | TITLE: Reallocating Wasted Votes in Proportional Parliamentary Elections with
Thresholds
ABSTRACT: In many proportional parliamentary elections, electoral thresholds (typically
3-5%) are used to promote stability and governability by preventing the
election of parties with very small representation. However, these thresholds
often result in a significant number of "wasted votes" cast for parties that
fail to meet the threshold, which reduces representativeness. One proposal is
to allow voters to specify replacement votes, by either indicating a second
choice party or by ranking a subset of the parties, but there are several ways
of deciding on the scores of the parties (and thus the composition of the
parliament) given those votes. We introduce a formal model of party voting with
thresholds, and compare a variety of party selection rules axiomatically, and
experimentally using a dataset we collected during the 2024 European election
in France. We identify three particularly attractive rules, called Direct
Winners Only (DO), Single Transferable Vote (STV) and Greedy Plurality (GP).
|
2503.17172 | Gaojie Jin | Gaojie Jin, Tianjin Huang, Ronghui Mu, Xiaowei Huang | Principal Eigenvalue Regularization for Improved Worst-Class Certified
Robustness of Smoothed Classifiers | Under Review | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have identified a critical challenge in deep neural networks
(DNNs) known as ``robust fairness", where models exhibit significant
disparities in robust accuracy across different classes. While prior work has
attempted to address this issue in adversarial robustness, the study of
worst-class certified robustness for smoothed classifiers remains unexplored.
Our work bridges this gap by developing a PAC-Bayesian bound for the
worst-class error of smoothed classifiers. Through theoretical analysis, we
demonstrate that the largest eigenvalue of the smoothed confusion matrix
fundamentally influences the worst-class error of smoothed classifiers. Based
on this insight, we introduce a regularization method that optimizes the
largest eigenvalue of smoothed confusion matrix to enhance worst-class accuracy
of the smoothed classifier and further improve its worst-class certified
robustness. We provide extensive experimental validation across multiple
datasets and model architectures to demonstrate the effectiveness of our
approach.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:18:18 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Jin",
"Gaojie",
""
],
[
"Huang",
"Tianjin",
""
],
[
"Mu",
"Ronghui",
""
],
[
"Huang",
"Xiaowei",
""
]
] | TITLE: Principal Eigenvalue Regularization for Improved Worst-Class Certified
Robustness of Smoothed Classifiers
ABSTRACT: Recent studies have identified a critical challenge in deep neural networks
(DNNs) known as ``robust fairness", where models exhibit significant
disparities in robust accuracy across different classes. While prior work has
attempted to address this issue in adversarial robustness, the study of
worst-class certified robustness for smoothed classifiers remains unexplored.
Our work bridges this gap by developing a PAC-Bayesian bound for the
worst-class error of smoothed classifiers. Through theoretical analysis, we
demonstrate that the largest eigenvalue of the smoothed confusion matrix
fundamentally influences the worst-class error of smoothed classifiers. Based
on this insight, we introduce a regularization method that optimizes the
largest eigenvalue of smoothed confusion matrix to enhance worst-class accuracy
of the smoothed classifier and further improve its worst-class certified
robustness. We provide extensive experimental validation across multiple
datasets and model architectures to demonstrate the effectiveness of our
approach.
|
2503.17173 | Sanjif Shanmugavelu Mr. | Sanjif Shanmugavelu, Mathieu Taillefumier, Christopher Culver, Vijay
Ganesh, Oscar Hernandez, Ada Sedova | Robustness of deep learning classification to adversarial input on GPUs:
asynchronous parallel accumulation is a source of vulnerability | Under review at EuroPar 2025 | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | The ability of machine learning (ML) classification models to resist small,
targeted input perturbations - known as adversarial attacks - is a key measure
of their safety and reliability. We show that floating-point non associativity
(FPNA) coupled with asynchronous parallel programming on GPUs is sufficient to
result in misclassification, without any perturbation to the input.
Additionally, we show this misclassification is particularly significant for
inputs close to the decision boundary and that standard adversarial robustness
results may be overestimated up to 4.6% when not considering machine-level
details. We first study a linear classifier, before focusing on standard Graph
Neural Network (GNN) architectures and datasets. We present a novel black-box
attack using Bayesian optimization to determine external workloads that bias
the output of reductions on GPUs and reliably lead to misclassification.
Motivated by these results, we present a new learnable permutation (LP)
gradient-based approach, to learn floating point operation orderings that lead
to misclassifications, making the assumption that any reduction or permutation
ordering is possible. This LP approach provides a worst-case estimate in a
computationally efficient manner, avoiding the need to run identical
experiments tens of thousands of times over a potentially large set of possible
GPU states or architectures. Finally, we investigate parallel reduction
ordering across different GPU architectures for a reduction under three
conditions: (1) executing external background workloads, (2) utilizing
multi-GPU virtualization, and (3) applying power capping. Our results
demonstrate that parallel reduction ordering varies significantly across
architectures under the first two conditions. The results and methods developed
here can help to include machine-level considerations into adversarial
robustness assessments.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:19:45 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Shanmugavelu",
"Sanjif",
""
],
[
"Taillefumier",
"Mathieu",
""
],
[
"Culver",
"Christopher",
""
],
[
"Ganesh",
"Vijay",
""
],
[
"Hernandez",
"Oscar",
""
],
[
"Sedova",
"Ada",
""
]
] | TITLE: Robustness of deep learning classification to adversarial input on GPUs:
asynchronous parallel accumulation is a source of vulnerability
ABSTRACT: The ability of machine learning (ML) classification models to resist small,
targeted input perturbations - known as adversarial attacks - is a key measure
of their safety and reliability. We show that floating-point non associativity
(FPNA) coupled with asynchronous parallel programming on GPUs is sufficient to
result in misclassification, without any perturbation to the input.
Additionally, we show this misclassification is particularly significant for
inputs close to the decision boundary and that standard adversarial robustness
results may be overestimated up to 4.6% when not considering machine-level
details. We first study a linear classifier, before focusing on standard Graph
Neural Network (GNN) architectures and datasets. We present a novel black-box
attack using Bayesian optimization to determine external workloads that bias
the output of reductions on GPUs and reliably lead to misclassification.
Motivated by these results, we present a new learnable permutation (LP)
gradient-based approach, to learn floating point operation orderings that lead
to misclassifications, making the assumption that any reduction or permutation
ordering is possible. This LP approach provides a worst-case estimate in a
computationally efficient manner, avoiding the need to run identical
experiments tens of thousands of times over a potentially large set of possible
GPU states or architectures. Finally, we investigate parallel reduction
ordering across different GPU architectures for a reduction under three
conditions: (1) executing external background workloads, (2) utilizing
multi-GPU virtualization, and (3) applying power capping. Our results
demonstrate that parallel reduction ordering varies significantly across
architectures under the first two conditions. The results and methods developed
here can help to include machine-level considerations into adversarial
robustness assessments.
|
2503.17182 | Patrick Rim | Patrick Rim, Hyoungseob Park, Vadim Ezhov, Jeffrey Moon, Alex Wong | Radar-Guided Polynomial Fitting for Metric Depth Estimation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose PolyRad, a novel radar-guided depth estimation method that
introduces polynomial fitting to transform scaleless depth predictions from
pretrained monocular depth estimation (MDE) models into metric depth maps.
Unlike existing approaches that rely on complex architectures or expensive
sensors, our method is grounded in a simple yet fundamental insight: using
polynomial coefficients predicted from cheap, ubiquitous radar data to
adaptively adjust depth predictions non-uniformly across depth ranges. Although
MDE models often infer reasonably accurate local depth structure within each
object or local region, they may misalign these regions relative to one
another, making a linear scale-and-shift transformation insufficient given
three or more of these regions. In contrast, PolyRad generalizes beyond linear
transformations and is able to correct such misalignments by introducing
inflection points. Importantly, our polynomial fitting framework preserves
structural consistency through a novel training objective that enforces
monotonicity via first-derivative regularization. PolyRad achieves
state-of-the-art performance on the nuScenes, ZJU-4DRadarCam, and View-of-Delft
datasets, outperforming existing methods by 30.3% in MAE and 37.2% in RMSE.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:29:42 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Rim",
"Patrick",
""
],
[
"Park",
"Hyoungseob",
""
],
[
"Ezhov",
"Vadim",
""
],
[
"Moon",
"Jeffrey",
""
],
[
"Wong",
"Alex",
""
]
] | TITLE: Radar-Guided Polynomial Fitting for Metric Depth Estimation
ABSTRACT: We propose PolyRad, a novel radar-guided depth estimation method that
introduces polynomial fitting to transform scaleless depth predictions from
pretrained monocular depth estimation (MDE) models into metric depth maps.
Unlike existing approaches that rely on complex architectures or expensive
sensors, our method is grounded in a simple yet fundamental insight: using
polynomial coefficients predicted from cheap, ubiquitous radar data to
adaptively adjust depth predictions non-uniformly across depth ranges. Although
MDE models often infer reasonably accurate local depth structure within each
object or local region, they may misalign these regions relative to one
another, making a linear scale-and-shift transformation insufficient given
three or more of these regions. In contrast, PolyRad generalizes beyond linear
transformations and is able to correct such misalignments by introducing
inflection points. Importantly, our polynomial fitting framework preserves
structural consistency through a novel training objective that enforces
monotonicity via first-derivative regularization. PolyRad achieves
state-of-the-art performance on the nuScenes, ZJU-4DRadarCam, and View-of-Delft
datasets, outperforming existing methods by 30.3% in MAE and 37.2% in RMSE.
|
2503.17184 | Xueqi Qiu | Xueqi Qiu, Xingyu Miao, Fan Wan, Haoran Duan, Tejal Shah, Varun Ojhab,
Yang Longa, Rajiv Ranjan | D2Fusion: Dual-domain Fusion with Feature Superposition for Deepfake
Detection | null | null | 10.1016/j.inffus.2025.103087 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deepfake detection is crucial for curbing the harm it causes to society.
However, current Deepfake detection methods fail to thoroughly explore artifact
information across different domains due to insufficient intrinsic
interactions. These interactions refer to the fusion and coordination after
feature extraction processes across different domains, which are crucial for
recognizing complex forgery clues. Focusing on more generalized Deepfake
detection, in this work, we introduce a novel bi-directional attention module
to capture the local positional information of artifact clues from the spatial
domain. This enables accurate artifact localization, thus addressing the coarse
processing with artifact features. To further address the limitation that the
proposed bi-directional attention module may not well capture global subtle
forgery information in the artifact feature (e.g., textures or edges), we
employ a fine-grained frequency attention module in the frequency domain. By
doing so, we can obtain high-frequency information in the fine-grained
features, which contains the global and subtle forgery information. Although
these features from the diverse domains can be effectively and independently
improved, fusing them directly does not effectively improve the detection
performance. Therefore, we propose a feature superposition strategy that
complements information from spatial and frequency domains. This strategy turns
the feature components into the form of wave-like tokens, which are updated
based on their phase, such that the distinctions between authentic and artifact
features can be amplified. Our method demonstrates significant improvements
over state-of-the-art (SOTA) methods on five public Deepfake datasets in
capturing abnormalities across different manipulated operations and real-life.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:31:33 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Qiu",
"Xueqi",
""
],
[
"Miao",
"Xingyu",
""
],
[
"Wan",
"Fan",
""
],
[
"Duan",
"Haoran",
""
],
[
"Shah",
"Tejal",
""
],
[
"Ojhab",
"Varun",
""
],
[
"Longa",
"Yang",
""
],
[
"Ranjan",
"Rajiv",
""
]
] | TITLE: D2Fusion: Dual-domain Fusion with Feature Superposition for Deepfake
Detection
ABSTRACT: Deepfake detection is crucial for curbing the harm it causes to society.
However, current Deepfake detection methods fail to thoroughly explore artifact
information across different domains due to insufficient intrinsic
interactions. These interactions refer to the fusion and coordination after
feature extraction processes across different domains, which are crucial for
recognizing complex forgery clues. Focusing on more generalized Deepfake
detection, in this work, we introduce a novel bi-directional attention module
to capture the local positional information of artifact clues from the spatial
domain. This enables accurate artifact localization, thus addressing the coarse
processing with artifact features. To further address the limitation that the
proposed bi-directional attention module may not well capture global subtle
forgery information in the artifact feature (e.g., textures or edges), we
employ a fine-grained frequency attention module in the frequency domain. By
doing so, we can obtain high-frequency information in the fine-grained
features, which contains the global and subtle forgery information. Although
these features from the diverse domains can be effectively and independently
improved, fusing them directly does not effectively improve the detection
performance. Therefore, we propose a feature superposition strategy that
complements information from spatial and frequency domains. This strategy turns
the feature components into the form of wave-like tokens, which are updated
based on their phase, such that the distinctions between authentic and artifact
features can be amplified. Our method demonstrates significant improvements
over state-of-the-art (SOTA) methods on five public Deepfake datasets in
capturing abnormalities across different manipulated operations and real-life.
|
2503.17193 | Shibing Chu | Xiaojin Lu, Taoran yue, Jiaxi cai, Shibing Chu | MSCA-Net:Multi-Scale Context Aggregation Network for Infrared Small
Target Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting infrared small targets in complex backgrounds remains a challenging
task because of the low contrast and high noise levels inherent in infrared
images. These factors often lead to the loss of crucial details during feature
extraction. Moreover, existing detection methods have limitations in adequately
integrating global and local information, which constrains the efficiency and
accuracy of infrared small target detection. To address these challenges, this
paper proposes a novel network architecture named MSCA-Net, which integrates
three key components: Multi-Scale Enhanced Detection Attention
mechanism(MSEDA), Positional Convolutional Block Attention Module (PCBAM), and
Channel Aggregation Block (CAB). Specifically, MSEDA employs a multi-scale
feature fusion attention mechanism to adaptively aggregate information across
different scales, enriching feature representation. PCBAM captures the
correlation between global and local features through a correlation
matrix-based strategy, enabling deep feature interaction. Moreover, CAB
redistributes input feature channels, facilitating the efficient transmission
of beneficial features and further enhancing the model detection capability in
complex backgrounds. The experimental results demonstrate that MSCA-Net
achieves outstanding small target detection performance in complex backgrounds.
Specifically, it attains mIoU scores of 78.43\%, 94.56\%, and 67.08\% on the
NUAA-SIRST, NUDT-SIRST, and IRTSD-1K datasets, respectively, underscoring its
effectiveness and strong potential for real-world applications.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:42:31 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Lu",
"Xiaojin",
""
],
[
"yue",
"Taoran",
""
],
[
"cai",
"Jiaxi",
""
],
[
"Chu",
"Shibing",
""
]
] | TITLE: MSCA-Net:Multi-Scale Context Aggregation Network for Infrared Small
Target Detection
ABSTRACT: Detecting infrared small targets in complex backgrounds remains a challenging
task because of the low contrast and high noise levels inherent in infrared
images. These factors often lead to the loss of crucial details during feature
extraction. Moreover, existing detection methods have limitations in adequately
integrating global and local information, which constrains the efficiency and
accuracy of infrared small target detection. To address these challenges, this
paper proposes a novel network architecture named MSCA-Net, which integrates
three key components: Multi-Scale Enhanced Detection Attention
mechanism(MSEDA), Positional Convolutional Block Attention Module (PCBAM), and
Channel Aggregation Block (CAB). Specifically, MSEDA employs a multi-scale
feature fusion attention mechanism to adaptively aggregate information across
different scales, enriching feature representation. PCBAM captures the
correlation between global and local features through a correlation
matrix-based strategy, enabling deep feature interaction. Moreover, CAB
redistributes input feature channels, facilitating the efficient transmission
of beneficial features and further enhancing the model detection capability in
complex backgrounds. The experimental results demonstrate that MSCA-Net
achieves outstanding small target detection performance in complex backgrounds.
Specifically, it attains mIoU scores of 78.43\%, 94.56\%, and 67.08\% on the
NUAA-SIRST, NUDT-SIRST, and IRTSD-1K datasets, respectively, underscoring its
effectiveness and strong potential for real-world applications.
|
2503.17195 | Sheng Wang | Sheng Wang, Pengan Chen, Jingqi Zhou, Qintong Li, Jingwei Dong, Jiahui
Gao, Boyang Xue, Jiyue Jiang, Lingpeng Kong, Chuan Wu | TreeSynth: Synthesizing Diverse Data from Scratch via Tree-Guided
Subspace Partitioning | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model customization requires high-quality and diverse datasets, but acquiring
such data remains challenging and costly. Although large language models (LLMs)
can synthesize training data, current approaches are constrained by limited
seed data, model bias and insufficient control over the generation process,
resulting in limited diversity and biased distribution with the increase of
data scales. To tackle this challenge, we present TreeSynth, a tree-guided
subspace-based data synthesis framework that recursively partitions the entire
data space into hierar-chical subspaces, enabling comprehensive and diverse
scaling of data synthesis. Briefly, given a task-specific description, we
construct a data space partitioning tree by iteratively executing criteria
determination and subspace coverage steps. This hierarchically divides the
whole space (i.e., root node) into mutually exclusive and complementary atomic
subspaces (i.e., leaf nodes). By collecting synthesized data according to the
attributes of each leaf node, we obtain a diverse dataset that fully covers the
data space. Empirically, our extensive experiments demonstrate that TreeSynth
surpasses both human-designed datasets and the state-of-the-art data synthesis
baselines, achieving maximum improvements of 45.2% in data diversity and 17.6%
in downstream task performance across various models and tasks. Hopefully,
TreeSynth provides a scalable solution to synthesize diverse and comprehensive
datasets from scratch without human intervention.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:43:23 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Wang",
"Sheng",
""
],
[
"Chen",
"Pengan",
""
],
[
"Zhou",
"Jingqi",
""
],
[
"Li",
"Qintong",
""
],
[
"Dong",
"Jingwei",
""
],
[
"Gao",
"Jiahui",
""
],
[
"Xue",
"Boyang",
""
],
[
"Jiang",
"Jiyue",
""
],
[
"Kong",
"Lingpeng",
""
],
[
"Wu",
"Chuan",
""
]
] | TITLE: TreeSynth: Synthesizing Diverse Data from Scratch via Tree-Guided
Subspace Partitioning
ABSTRACT: Model customization requires high-quality and diverse datasets, but acquiring
such data remains challenging and costly. Although large language models (LLMs)
can synthesize training data, current approaches are constrained by limited
seed data, model bias and insufficient control over the generation process,
resulting in limited diversity and biased distribution with the increase of
data scales. To tackle this challenge, we present TreeSynth, a tree-guided
subspace-based data synthesis framework that recursively partitions the entire
data space into hierar-chical subspaces, enabling comprehensive and diverse
scaling of data synthesis. Briefly, given a task-specific description, we
construct a data space partitioning tree by iteratively executing criteria
determination and subspace coverage steps. This hierarchically divides the
whole space (i.e., root node) into mutually exclusive and complementary atomic
subspaces (i.e., leaf nodes). By collecting synthesized data according to the
attributes of each leaf node, we obtain a diverse dataset that fully covers the
data space. Empirically, our extensive experiments demonstrate that TreeSynth
surpasses both human-designed datasets and the state-of-the-art data synthesis
baselines, achieving maximum improvements of 45.2% in data diversity and 17.6%
in downstream task performance across various models and tasks. Hopefully,
TreeSynth provides a scalable solution to synthesize diverse and comprehensive
datasets from scratch without human intervention.
|
2503.17201 | Masoud Mansoury | Raoul Kalisvaart, Masoud Mansoury, Alan Hanjalic, Elvin Isufi | Towards Carbon Footprint-Aware Recommender Systems for Greener Item
Recommendation | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | The commodity and widespread use of online shopping are having an
unprecedented impact on climate, with emission figures from key actors that are
easily comparable to those of a large-scale metropolis. Despite online shopping
being fueled by recommender systems (RecSys) algorithms, the role and potential
of the latter in promoting more sustainable choices is little studied. One of
the main reasons for this could be attributed to the lack of a dataset
containing carbon footprint emissions for the items. While building such a
dataset is a rather challenging task, its presence is pivotal for opening the
doors to novel perspectives, evaluations, and methods for RecSys research. In
this paper, we target this bottleneck and study the environmental role of
RecSys algorithms. First, we mine a dataset that includes carbon footprint
emissions for its items. Then, we benchmark conventional RecSys algorithms in
terms of accuracy and sustainability as two faces of the same coin. We find
that RecSys algorithms optimized for accuracy overlook greenness and that
longer recommendation lists are greener but less accurate. Then, we show that a
simple reranking approach that accounts for the item's carbon footprint can
establish a better trade-off between accuracy and greenness. This reranking
approach is modular, ready to use, and can be applied to any RecSys algorithm
without the need to alter the underlying mechanisms or retrain models. Our
results show that a small sacrifice of accuracy can lead to significant
improvements of recommendation greenness across all algorithms and list
lengths. Arguably, this accuracy-greenness trade-off could even be seen as an
enhancement of user satisfaction, particularly for purpose-driven users who
prioritize the environmental impact of their choices. We anticipate this work
will serve as the starting point for studying RecSys for more sustainable
recommendations.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 14:58:47 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kalisvaart",
"Raoul",
""
],
[
"Mansoury",
"Masoud",
""
],
[
"Hanjalic",
"Alan",
""
],
[
"Isufi",
"Elvin",
""
]
] | TITLE: Towards Carbon Footprint-Aware Recommender Systems for Greener Item
Recommendation
ABSTRACT: The commodity and widespread use of online shopping are having an
unprecedented impact on climate, with emission figures from key actors that are
easily comparable to those of a large-scale metropolis. Despite online shopping
being fueled by recommender systems (RecSys) algorithms, the role and potential
of the latter in promoting more sustainable choices is little studied. One of
the main reasons for this could be attributed to the lack of a dataset
containing carbon footprint emissions for the items. While building such a
dataset is a rather challenging task, its presence is pivotal for opening the
doors to novel perspectives, evaluations, and methods for RecSys research. In
this paper, we target this bottleneck and study the environmental role of
RecSys algorithms. First, we mine a dataset that includes carbon footprint
emissions for its items. Then, we benchmark conventional RecSys algorithms in
terms of accuracy and sustainability as two faces of the same coin. We find
that RecSys algorithms optimized for accuracy overlook greenness and that
longer recommendation lists are greener but less accurate. Then, we show that a
simple reranking approach that accounts for the item's carbon footprint can
establish a better trade-off between accuracy and greenness. This reranking
approach is modular, ready to use, and can be applied to any RecSys algorithm
without the need to alter the underlying mechanisms or retrain models. Our
results show that a small sacrifice of accuracy can lead to significant
improvements of recommendation greenness across all algorithms and list
lengths. Arguably, this accuracy-greenness trade-off could even be seen as an
enhancement of user satisfaction, particularly for purpose-driven users who
prioritize the environmental impact of their choices. We anticipate this work
will serve as the starting point for studying RecSys for more sustainable
recommendations.
|
2503.17211 | Zilin Dai | Zilin Dai, Lehong Wang, Fangzhou Lin, Yidong Wang, Zhigang Li,
Kazunori D Yamada, Ziming Zhang, Wang Lu | A Language Anchor-Guided Method for Robust Noisy Domain Generalization | null | null | null | null | cs.CL cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Real-world machine learning applications often struggle with two major
challenges: distribution shift and label noise. Models tend to overfit by
focusing on redundant and uninformative features in the training data, which
makes it hard for them to generalize to the target domain. Noisy data worsens
this problem by causing further overfitting to the noise, meaning that existing
methods often fail to tell the difference between true, invariant features and
misleading, spurious ones. To tackle these issues, we introduce Anchor
Alignment and Adaptive Weighting (A3W). This new algorithm uses sample
reweighting guided by natural language processing (NLP) anchors to extract more
representative features. In simple terms, A3W leverages semantic
representations from natural language models as a source of domain-invariant
prior knowledge. Additionally, it employs a weighted loss function that adjusts
each sample's contribution based on its similarity to the corresponding NLP
anchor. This adjustment makes the model more robust to noisy labels. Extensive
experiments on standard benchmark datasets show that A3W consistently
outperforms state-of-the-art domain generalization methods, offering
significant improvements in both accuracy and robustness across different
datasets and noise levels.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:20:28 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Dai",
"Zilin",
""
],
[
"Wang",
"Lehong",
""
],
[
"Lin",
"Fangzhou",
""
],
[
"Wang",
"Yidong",
""
],
[
"Li",
"Zhigang",
""
],
[
"Yamada",
"Kazunori D",
""
],
[
"Zhang",
"Ziming",
""
],
[
"Lu",
"Wang",
""
]
] | TITLE: A Language Anchor-Guided Method for Robust Noisy Domain Generalization
ABSTRACT: Real-world machine learning applications often struggle with two major
challenges: distribution shift and label noise. Models tend to overfit by
focusing on redundant and uninformative features in the training data, which
makes it hard for them to generalize to the target domain. Noisy data worsens
this problem by causing further overfitting to the noise, meaning that existing
methods often fail to tell the difference between true, invariant features and
misleading, spurious ones. To tackle these issues, we introduce Anchor
Alignment and Adaptive Weighting (A3W). This new algorithm uses sample
reweighting guided by natural language processing (NLP) anchors to extract more
representative features. In simple terms, A3W leverages semantic
representations from natural language models as a source of domain-invariant
prior knowledge. Additionally, it employs a weighted loss function that adjusts
each sample's contribution based on its similarity to the corresponding NLP
anchor. This adjustment makes the model more robust to noisy labels. Extensive
experiments on standard benchmark datasets show that A3W consistently
outperforms state-of-the-art domain generalization methods, offering
significant improvements in both accuracy and robustness across different
datasets and noise levels.
|
2503.17212 | Matthew Kenely | Matthew Kenely, Dylan Seychell, Carl James Debono, Chris Porter | A Deep Learning Framework for Visual Attention Prediction and Analysis
of News Interfaces | This is a preprint submitted to the 2025 IEEE Conference on
Artificial Intelligence (CAI) | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | News outlets' competition for attention in news interfaces has highlighted
the need for demographically-aware saliency prediction models. Despite recent
advancements in saliency detection applied to user interfaces (UI), existing
datasets are limited in size and demographic representation. We present a deep
learning framework that enhances the SaRa (Saliency Ranking) model with
DeepGaze IIE, improving Salient Object Ranking (SOR) performance by 10.7%. Our
framework optimizes three key components: saliency map generation, grid segment
scoring, and map normalization. Through a two-fold experiment using
eye-tracking (30 participants) and mouse-tracking (375 participants aged
13--70), we analyze attention patterns across demographic groups. Statistical
analysis reveals significant age-based variations (p < 0.05, {\epsilon^2} =
0.042), with older users (36--70) engaging more with textual content and
younger users (13--35) interacting more with images. Mouse-tracking data
closely approximates eye-tracking behavior (sAUC = 0.86) and identifies UI
elements that immediately stand out, validating its use in large-scale studies.
We conclude that saliency studies should prioritize gathering data from a
larger, demographically representative sample and report exact demographic
distributions.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:20:29 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kenely",
"Matthew",
""
],
[
"Seychell",
"Dylan",
""
],
[
"Debono",
"Carl James",
""
],
[
"Porter",
"Chris",
""
]
] | TITLE: A Deep Learning Framework for Visual Attention Prediction and Analysis
of News Interfaces
ABSTRACT: News outlets' competition for attention in news interfaces has highlighted
the need for demographically-aware saliency prediction models. Despite recent
advancements in saliency detection applied to user interfaces (UI), existing
datasets are limited in size and demographic representation. We present a deep
learning framework that enhances the SaRa (Saliency Ranking) model with
DeepGaze IIE, improving Salient Object Ranking (SOR) performance by 10.7%. Our
framework optimizes three key components: saliency map generation, grid segment
scoring, and map normalization. Through a two-fold experiment using
eye-tracking (30 participants) and mouse-tracking (375 participants aged
13--70), we analyze attention patterns across demographic groups. Statistical
analysis reveals significant age-based variations (p < 0.05, {\epsilon^2} =
0.042), with older users (36--70) engaging more with textual content and
younger users (13--35) interacting more with images. Mouse-tracking data
closely approximates eye-tracking behavior (sAUC = 0.86) and identifies UI
elements that immediately stand out, validating its use in large-scale studies.
We conclude that saliency studies should prioritize gathering data from a
larger, demographically representative sample and report exact demographic
distributions.
|
2503.17224 | Eugenio Lomurno | Giacomo Savazzi, Eugenio Lomurno, Cristian Sbrolli, Agnese Chiatti,
Matteo Matteucci | Neuro-Symbolic Scene Graph Conditioning for Synthetic Image Dataset
Generation | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | As machine learning models increase in scale and complexity, obtaining
sufficient training data has become a critical bottleneck due to acquisition
costs, privacy constraints, and data scarcity in specialised domains. While
synthetic data generation has emerged as a promising alternative, a notable
performance gap remains compared to models trained on real data, particularly
as task complexity grows. Concurrently, Neuro-Symbolic methods, which combine
neural networks' learning strengths with symbolic reasoning's structured
representations, have demonstrated significant potential across various
cognitive tasks. This paper explores the utility of Neuro-Symbolic conditioning
for synthetic image dataset generation, focusing specifically on improving the
performance of Scene Graph Generation models. The research investigates whether
structured symbolic representations in the form of scene graphs can enhance
synthetic data quality through explicit encoding of relational constraints. The
results demonstrate that Neuro-Symbolic conditioning yields significant
improvements of up to +2.59% in standard Recall metrics and +2.83% in No Graph
Constraint Recall metrics when used for dataset augmentation. These findings
establish that merging Neuro-Symbolic and generative approaches produces
synthetic data with complementary structural information that enhances model
performance when combined with real data, providing a novel approach to
overcome data scarcity limitations even for complex visual reasoning tasks.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:26:16 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Savazzi",
"Giacomo",
""
],
[
"Lomurno",
"Eugenio",
""
],
[
"Sbrolli",
"Cristian",
""
],
[
"Chiatti",
"Agnese",
""
],
[
"Matteucci",
"Matteo",
""
]
] | TITLE: Neuro-Symbolic Scene Graph Conditioning for Synthetic Image Dataset
Generation
ABSTRACT: As machine learning models increase in scale and complexity, obtaining
sufficient training data has become a critical bottleneck due to acquisition
costs, privacy constraints, and data scarcity in specialised domains. While
synthetic data generation has emerged as a promising alternative, a notable
performance gap remains compared to models trained on real data, particularly
as task complexity grows. Concurrently, Neuro-Symbolic methods, which combine
neural networks' learning strengths with symbolic reasoning's structured
representations, have demonstrated significant potential across various
cognitive tasks. This paper explores the utility of Neuro-Symbolic conditioning
for synthetic image dataset generation, focusing specifically on improving the
performance of Scene Graph Generation models. The research investigates whether
structured symbolic representations in the form of scene graphs can enhance
synthetic data quality through explicit encoding of relational constraints. The
results demonstrate that Neuro-Symbolic conditioning yields significant
improvements of up to +2.59% in standard Recall metrics and +2.83% in No Graph
Constraint Recall metrics when used for dataset augmentation. These findings
establish that merging Neuro-Symbolic and generative approaches produces
synthetic data with complementary structural information that enhances model
performance when combined with real data, providing a novel approach to
overcome data scarcity limitations even for complex visual reasoning tasks.
|
2503.17226 | Aryan Yazdan Parast | Aryan Yazdan Parast, Basim Azam, Naveed Akhtar | Leveraging Text-to-Image Generation for Handling Spurious Correlation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks trained with Empirical Risk Minimization (ERM) perform
well when both training and test data come from the same domain, but they often
fail to generalize to out-of-distribution samples. In image classification,
these models may rely on spurious correlations that often exist between labels
and irrelevant features of images, making predictions unreliable when those
features do not exist. We propose a technique to generate training samples with
text-to-image (T2I) diffusion models for addressing the spurious correlation
problem. First, we compute the best describing token for the visual features
pertaining to the causal components of samples by a textual inversion
mechanism. Then, leveraging a language segmentation method and a diffusion
model, we generate new samples by combining the causal component with the
elements from other classes. We also meticulously prune the generated samples
based on the prediction probabilities and attribution scores of the ERM model
to ensure their correct composition for our objective. Finally, we retrain the
ERM model on our augmented dataset. This process reduces the model's reliance
on spurious correlations by learning from carefully crafted samples for in
which this correlation does not exist. Our experiments show that across
different benchmarks, our technique achieves better worst-group accuracy than
the existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:28:22 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Parast",
"Aryan Yazdan",
""
],
[
"Azam",
"Basim",
""
],
[
"Akhtar",
"Naveed",
""
]
] | TITLE: Leveraging Text-to-Image Generation for Handling Spurious Correlation
ABSTRACT: Deep neural networks trained with Empirical Risk Minimization (ERM) perform
well when both training and test data come from the same domain, but they often
fail to generalize to out-of-distribution samples. In image classification,
these models may rely on spurious correlations that often exist between labels
and irrelevant features of images, making predictions unreliable when those
features do not exist. We propose a technique to generate training samples with
text-to-image (T2I) diffusion models for addressing the spurious correlation
problem. First, we compute the best describing token for the visual features
pertaining to the causal components of samples by a textual inversion
mechanism. Then, leveraging a language segmentation method and a diffusion
model, we generate new samples by combining the causal component with the
elements from other classes. We also meticulously prune the generated samples
based on the prediction probabilities and attribution scores of the ERM model
to ensure their correct composition for our objective. Finally, we retrain the
ERM model on our augmented dataset. This process reduces the model's reliance
on spurious correlations by learning from carefully crafted samples for in
which this correlation does not exist. Our experiments show that across
different benchmarks, our technique achieves better worst-group accuracy than
the existing state-of-the-art methods.
|
2503.17231 | Li Zhang | Li Zhang, Chaochao Chen, Zhongxuan Han, Qiyong Zhong, Xiaolin Zheng | LoGoFair: Post-Processing for Local and Global Fairness in Federated
Learning | Accepted by AAAI2025 | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) has garnered considerable interest for its capability
to learn from decentralized data sources. Given the increasing application of
FL in decision-making scenarios, addressing fairness issues across different
sensitive groups (e.g., female, male) in FL is crucial. Current research often
focuses on facilitating fairness at each client's data (local fairness) or
within the entire dataset across all clients (global fairness). However,
existing approaches that focus exclusively on either local or global fairness
fail to address two key challenges: (\textbf{CH1}) Under statistical
heterogeneity, global fairness does not imply local fairness, and vice versa.
(\textbf{CH2}) Achieving fairness under model-agnostic setting. To tackle the
aforementioned challenges, this paper proposes a novel post-processing
framework for achieving both Local and Global Fairness in the FL context,
namely LoGoFair. To address CH1, LoGoFair endeavors to seek the Bayes optimal
classifier under local and global fairness constraints, which strikes the
optimal accuracy-fairness balance in the probabilistic sense. To address CH2,
LoGoFair employs a model-agnostic federated post-processing procedure that
enables clients to collaboratively optimize global fairness while ensuring
local fairness, thereby achieving the optimal fair classifier within FL.
Experimental results on three real-world datasets further illustrate the
effectiveness of the proposed LoGoFair framework.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:33:09 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Li",
""
],
[
"Chen",
"Chaochao",
""
],
[
"Han",
"Zhongxuan",
""
],
[
"Zhong",
"Qiyong",
""
],
[
"Zheng",
"Xiaolin",
""
]
] | TITLE: LoGoFair: Post-Processing for Local and Global Fairness in Federated
Learning
ABSTRACT: Federated learning (FL) has garnered considerable interest for its capability
to learn from decentralized data sources. Given the increasing application of
FL in decision-making scenarios, addressing fairness issues across different
sensitive groups (e.g., female, male) in FL is crucial. Current research often
focuses on facilitating fairness at each client's data (local fairness) or
within the entire dataset across all clients (global fairness). However,
existing approaches that focus exclusively on either local or global fairness
fail to address two key challenges: (\textbf{CH1}) Under statistical
heterogeneity, global fairness does not imply local fairness, and vice versa.
(\textbf{CH2}) Achieving fairness under model-agnostic setting. To tackle the
aforementioned challenges, this paper proposes a novel post-processing
framework for achieving both Local and Global Fairness in the FL context,
namely LoGoFair. To address CH1, LoGoFair endeavors to seek the Bayes optimal
classifier under local and global fairness constraints, which strikes the
optimal accuracy-fairness balance in the probabilistic sense. To address CH2,
LoGoFair employs a model-agnostic federated post-processing procedure that
enables clients to collaboratively optimize global fairness while ensuring
local fairness, thereby achieving the optimal fair classifier within FL.
Experimental results on three real-world datasets further illustrate the
effectiveness of the proposed LoGoFair framework.
|
2503.17238 | Behzad Bozorgtabar | Devavrat Tomar, Guillaume Vray, Dwarikanath Mahapatra, Sudipta Roy,
Jean-Philippe Thiran, Behzad Bozorgtabar | Slide-Level Prompt Learning with Vision Language Models for Few-Shot
Multiple Instance Learning in Histopathology | Accepted to ISBI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we address the challenge of few-shot classification in
histopathology whole slide images (WSIs) by utilizing foundational
vision-language models (VLMs) and slide-level prompt learning. Given the
gigapixel scale of WSIs, conventional multiple instance learning (MIL) methods
rely on aggregation functions to derive slide-level (bag-level) predictions
from patch representations, which require extensive bag-level labels for
training. In contrast, VLM-based approaches excel at aligning visual embeddings
of patches with candidate class text prompts but lack essential pathological
prior knowledge. Our method distinguishes itself by utilizing pathological
prior knowledge from language models to identify crucial local tissue types
(patches) for WSI classification, integrating this within a VLM-based MIL
framework. Our approach effectively aligns patch images with tissue types, and
we fine-tune our model via prompt learning using only a few labeled WSIs per
category. Experimentation on real-world pathological WSI datasets and ablation
studies highlight our method's superior performance over existing MIL- and
VLM-based methods in few-shot WSI classification tasks. Our code is publicly
available at https://github.com/LTS5/SLIP.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:40:37 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Tomar",
"Devavrat",
""
],
[
"Vray",
"Guillaume",
""
],
[
"Mahapatra",
"Dwarikanath",
""
],
[
"Roy",
"Sudipta",
""
],
[
"Thiran",
"Jean-Philippe",
""
],
[
"Bozorgtabar",
"Behzad",
""
]
] | TITLE: Slide-Level Prompt Learning with Vision Language Models for Few-Shot
Multiple Instance Learning in Histopathology
ABSTRACT: In this paper, we address the challenge of few-shot classification in
histopathology whole slide images (WSIs) by utilizing foundational
vision-language models (VLMs) and slide-level prompt learning. Given the
gigapixel scale of WSIs, conventional multiple instance learning (MIL) methods
rely on aggregation functions to derive slide-level (bag-level) predictions
from patch representations, which require extensive bag-level labels for
training. In contrast, VLM-based approaches excel at aligning visual embeddings
of patches with candidate class text prompts but lack essential pathological
prior knowledge. Our method distinguishes itself by utilizing pathological
prior knowledge from language models to identify crucial local tissue types
(patches) for WSI classification, integrating this within a VLM-based MIL
framework. Our approach effectively aligns patch images with tissue types, and
we fine-tune our model via prompt learning using only a few labeled WSIs per
category. Experimentation on real-world pathological WSI datasets and ablation
studies highlight our method's superior performance over existing MIL- and
VLM-based methods in few-shot WSI classification tasks. Our code is publicly
available at https://github.com/LTS5/SLIP.
|
2503.17239 | Aladin Djuhera | Aladin Djuhera, Swanand Ravindra Kadhe, Farhan Ahmed, Syed Zawad,
Holger Boche | SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language
Models via Selective Layer-Wise Model Merging | null | ICLR 2025 Workshop on Building Trust in Language Models and
Applications | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Fine-tuning large language models (LLMs) on downstream tasks can
inadvertently erode their safety alignment, even for benign fine-tuning
datasets. We address this challenge by proposing SafeMERGE, a post-fine-tuning
framework that preserves safety while maintaining task utility. It achieves
this by selectively merging fine-tuned and safety-aligned model layers only
when those deviate from safe behavior, measured by a cosine similarity
criterion. We evaluate SafeMERGE against other fine-tuning- and
post-fine-tuning-stage approaches for Llama-2-7B-Chat and Qwen-2-7B-Instruct
models on GSM8K and PubMedQA tasks while exploring different merging
strategies. We find that SafeMERGE consistently reduces harmful outputs
compared to other baselines without significantly sacrificing performance,
sometimes even enhancing it. The results suggest that our selective,
subspace-guided, and per-layer merging method provides an effective safeguard
against the inadvertent loss of safety in fine-tuned LLMs while outperforming
simpler post-fine-tuning-stage defenses.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 15:44:09 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Djuhera",
"Aladin",
""
],
[
"Kadhe",
"Swanand Ravindra",
""
],
[
"Ahmed",
"Farhan",
""
],
[
"Zawad",
"Syed",
""
],
[
"Boche",
"Holger",
""
]
] | TITLE: SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language
Models via Selective Layer-Wise Model Merging
ABSTRACT: Fine-tuning large language models (LLMs) on downstream tasks can
inadvertently erode their safety alignment, even for benign fine-tuning
datasets. We address this challenge by proposing SafeMERGE, a post-fine-tuning
framework that preserves safety while maintaining task utility. It achieves
this by selectively merging fine-tuned and safety-aligned model layers only
when those deviate from safe behavior, measured by a cosine similarity
criterion. We evaluate SafeMERGE against other fine-tuning- and
post-fine-tuning-stage approaches for Llama-2-7B-Chat and Qwen-2-7B-Instruct
models on GSM8K and PubMedQA tasks while exploring different merging
strategies. We find that SafeMERGE consistently reduces harmful outputs
compared to other baselines without significantly sacrificing performance,
sometimes even enhancing it. The results suggest that our selective,
subspace-guided, and per-layer merging method provides an effective safeguard
against the inadvertent loss of safety in fine-tuned LLMs while outperforming
simpler post-fine-tuning-stage defenses.
|
2503.17261 | Jie Mei | Jie Mei, Chenyu Lin, Yu Qiu, Yaonan Wang, Hui Zhang, Ziyang Wang, Dong
Dai | Cross-Modal Interactive Perception Network with Mamba for Lung Tumor
Segmentation in PET-CT Images | Accepted to CVPR 2025 | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lung cancer is a leading cause of cancer-related deaths globally. PET-CT is
crucial for imaging lung tumors, providing essential metabolic and anatomical
information, while it faces challenges such as poor image quality, motion
artifacts, and complex tumor morphology. Deep learning-based models are
expected to address these problems, however, existing small-scale and private
datasets limit significant performance improvements for these methods. Hence,
we introduce a large-scale PET-CT lung tumor segmentation dataset, termed
PCLT20K, which comprises 21,930 pairs of PET-CT images from 605 patients.
Furthermore, we propose a cross-modal interactive perception network with Mamba
(CIPA) for lung tumor segmentation in PET-CT images. Specifically, we design a
channel-wise rectification module (CRM) that implements a channel state space
block across multi-modal features to learn correlated representations and helps
filter out modality-specific noise. A dynamic cross-modality interaction module
(DCIM) is designed to effectively integrate position and context information,
which employs PET images to learn regional position information and serves as a
bridge to assist in modeling the relationships between local features of CT
images. Extensive experiments on a comprehensive benchmark demonstrate the
effectiveness of our CIPA compared to the current state-of-the-art segmentation
methods. We hope our research can provide more exploration opportunities for
medical image segmentation. The dataset and code are available at
https://github.com/mj129/CIPA.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:04:11 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Mei",
"Jie",
""
],
[
"Lin",
"Chenyu",
""
],
[
"Qiu",
"Yu",
""
],
[
"Wang",
"Yaonan",
""
],
[
"Zhang",
"Hui",
""
],
[
"Wang",
"Ziyang",
""
],
[
"Dai",
"Dong",
""
]
] | TITLE: Cross-Modal Interactive Perception Network with Mamba for Lung Tumor
Segmentation in PET-CT Images
ABSTRACT: Lung cancer is a leading cause of cancer-related deaths globally. PET-CT is
crucial for imaging lung tumors, providing essential metabolic and anatomical
information, while it faces challenges such as poor image quality, motion
artifacts, and complex tumor morphology. Deep learning-based models are
expected to address these problems, however, existing small-scale and private
datasets limit significant performance improvements for these methods. Hence,
we introduce a large-scale PET-CT lung tumor segmentation dataset, termed
PCLT20K, which comprises 21,930 pairs of PET-CT images from 605 patients.
Furthermore, we propose a cross-modal interactive perception network with Mamba
(CIPA) for lung tumor segmentation in PET-CT images. Specifically, we design a
channel-wise rectification module (CRM) that implements a channel state space
block across multi-modal features to learn correlated representations and helps
filter out modality-specific noise. A dynamic cross-modality interaction module
(DCIM) is designed to effectively integrate position and context information,
which employs PET images to learn regional position information and serves as a
bridge to assist in modeling the relationships between local features of CT
images. Extensive experiments on a comprehensive benchmark demonstrate the
effectiveness of our CIPA compared to the current state-of-the-art segmentation
methods. We hope our research can provide more exploration opportunities for
medical image segmentation. The dataset and code are available at
https://github.com/mj129/CIPA.
|
2503.17267 | Hiromu Taketsugu | Hiromu Taketsugu, Takeru Oba, Takahiro Maeda, Shohei Nobuhara,
Norimichi Ukita | Physical Plausibility-aware Trajectory Prediction via Locomotion
Embodiment | CVPR2025. Project page: https://iminthemiddle.github.io/EmLoco-Page/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans can predict future human trajectories even from momentary observations
by using human pose-related cues. However, previous Human Trajectory Prediction
(HTP) methods leverage the pose cues implicitly, resulting in implausible
predictions. To address this, we propose Locomotion Embodiment, a framework
that explicitly evaluates the physical plausibility of the predicted trajectory
by locomotion generation under the laws of physics. While the plausibility of
locomotion is learned with an indifferentiable physics simulator, it is
replaced by our differentiable Locomotion Value function to train an HTP
network in a data-driven manner. In particular, our proposed Embodied
Locomotion loss is beneficial for efficiently training a stochastic HTP network
using multiple heads. Furthermore, the Locomotion Value filter is proposed to
filter out implausible trajectories at inference. Experiments demonstrate that
our method enhances even the state-of-the-art HTP methods across diverse
datasets and problem settings. Our code is available at:
https://github.com/ImIntheMiddle/EmLoco.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:08:25 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Taketsugu",
"Hiromu",
""
],
[
"Oba",
"Takeru",
""
],
[
"Maeda",
"Takahiro",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Ukita",
"Norimichi",
""
]
] | TITLE: Physical Plausibility-aware Trajectory Prediction via Locomotion
Embodiment
ABSTRACT: Humans can predict future human trajectories even from momentary observations
by using human pose-related cues. However, previous Human Trajectory Prediction
(HTP) methods leverage the pose cues implicitly, resulting in implausible
predictions. To address this, we propose Locomotion Embodiment, a framework
that explicitly evaluates the physical plausibility of the predicted trajectory
by locomotion generation under the laws of physics. While the plausibility of
locomotion is learned with an indifferentiable physics simulator, it is
replaced by our differentiable Locomotion Value function to train an HTP
network in a data-driven manner. In particular, our proposed Embodied
Locomotion loss is beneficial for efficiently training a stochastic HTP network
using multiple heads. Furthermore, the Locomotion Value filter is proposed to
filter out implausible trajectories at inference. Experiments demonstrate that
our method enhances even the state-of-the-art HTP methods across diverse
datasets and problem settings. Our code is available at:
https://github.com/ImIntheMiddle/EmLoco.
|
2503.17279 | Gaifan Zhang | Gaifan Zhang, Yi Zhou, Danushka Bollegala | CASE -- Condition-Aware Sentence Embeddings for Conditional Semantic
Textual Similarity Measurement | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The meaning conveyed by a sentence often depends on the context in which it
appears. Despite the progress of sentence embedding methods, it remains unclear
how to best modify a sentence embedding conditioned on its context. To address
this problem, we propose Condition-Aware Sentence Embeddings (CASE), an
efficient and accurate method to create an embedding for a sentence under a
given condition. First, CASE creates an embedding for the condition using a
Large Language Model (LLM), where the sentence influences the attention scores
computed for the tokens in the condition during pooling. Next, a supervised
nonlinear projection is learned to reduce the dimensionality of the LLM-based
text embeddings. We show that CASE significantly outperforms previously
proposed Conditional Semantic Textual Similarity (C-STS) methods on an existing
standard benchmark dataset. We find that subtracting the condition embedding
consistently improves the C-STS performance of LLM-based text embeddings.
Moreover, we propose a supervised dimensionality reduction method that not only
reduces the dimensionality of LLM-based embeddings but also significantly
improves their performance.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:27:12 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Zhang",
"Gaifan",
""
],
[
"Zhou",
"Yi",
""
],
[
"Bollegala",
"Danushka",
""
]
] | TITLE: CASE -- Condition-Aware Sentence Embeddings for Conditional Semantic
Textual Similarity Measurement
ABSTRACT: The meaning conveyed by a sentence often depends on the context in which it
appears. Despite the progress of sentence embedding methods, it remains unclear
how to best modify a sentence embedding conditioned on its context. To address
this problem, we propose Condition-Aware Sentence Embeddings (CASE), an
efficient and accurate method to create an embedding for a sentence under a
given condition. First, CASE creates an embedding for the condition using a
Large Language Model (LLM), where the sentence influences the attention scores
computed for the tokens in the condition during pooling. Next, a supervised
nonlinear projection is learned to reduce the dimensionality of the LLM-based
text embeddings. We show that CASE significantly outperforms previously
proposed Conditional Semantic Textual Similarity (C-STS) methods on an existing
standard benchmark dataset. We find that subtracting the condition embedding
consistently improves the C-STS performance of LLM-based text embeddings.
Moreover, we propose a supervised dimensionality reduction method that not only
reduces the dimensionality of LLM-based embeddings but also significantly
improves their performance.
|
2503.17286 | Can Chen | Minsu Kim, Jiayao Gu, Ye Yuan, Taeyoung Yun, Zixuan Liu, Yoshua
Bengio, Can Chen | Offline Model-Based Optimization: Comprehensive Review | 29 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Offline optimization is a fundamental challenge in science and engineering,
where the goal is to optimize black-box functions using only offline datasets.
This setting is particularly relevant when querying the objective function is
prohibitively expensive or infeasible, with applications spanning protein
engineering, material discovery, neural architecture search, and beyond. The
main difficulty lies in accurately estimating the objective landscape beyond
the available data, where extrapolations are fraught with significant epistemic
uncertainty. This uncertainty can lead to objective hacking(reward hacking),
exploiting model inaccuracies in unseen regions, or other spurious
optimizations that yield misleadingly high performance estimates outside the
training distribution. Recent advances in model-based optimization(MBO) have
harnessed the generalization capabilities of deep neural networks to develop
offline-specific surrogate and generative models. Trained with carefully
designed strategies, these models are more robust against out-of-distribution
issues, facilitating the discovery of improved designs. Despite its growing
impact in accelerating scientific discovery, the field lacks a comprehensive
review. To bridge this gap, we present the first thorough review of offline
MBO. We begin by formalizing the problem for both single-objective and
multi-objective settings and by reviewing recent benchmarks and evaluation
metrics. We then categorize existing approaches into two key areas: surrogate
modeling, which emphasizes accurate function approximation in
out-of-distribution regions, and generative modeling, which explores
high-dimensional design spaces to identify high-performing designs. Finally, we
examine the key challenges and propose promising directions for advancement in
this rapidly evolving field including safe control of superintelligent systems.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:35:02 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Kim",
"Minsu",
""
],
[
"Gu",
"Jiayao",
""
],
[
"Yuan",
"Ye",
""
],
[
"Yun",
"Taeyoung",
""
],
[
"Liu",
"Zixuan",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Chen",
"Can",
""
]
] | TITLE: Offline Model-Based Optimization: Comprehensive Review
ABSTRACT: Offline optimization is a fundamental challenge in science and engineering,
where the goal is to optimize black-box functions using only offline datasets.
This setting is particularly relevant when querying the objective function is
prohibitively expensive or infeasible, with applications spanning protein
engineering, material discovery, neural architecture search, and beyond. The
main difficulty lies in accurately estimating the objective landscape beyond
the available data, where extrapolations are fraught with significant epistemic
uncertainty. This uncertainty can lead to objective hacking(reward hacking),
exploiting model inaccuracies in unseen regions, or other spurious
optimizations that yield misleadingly high performance estimates outside the
training distribution. Recent advances in model-based optimization(MBO) have
harnessed the generalization capabilities of deep neural networks to develop
offline-specific surrogate and generative models. Trained with carefully
designed strategies, these models are more robust against out-of-distribution
issues, facilitating the discovery of improved designs. Despite its growing
impact in accelerating scientific discovery, the field lacks a comprehensive
review. To bridge this gap, we present the first thorough review of offline
MBO. We begin by formalizing the problem for both single-objective and
multi-objective settings and by reviewing recent benchmarks and evaluation
metrics. We then categorize existing approaches into two key areas: surrogate
modeling, which emphasizes accurate function approximation in
out-of-distribution regions, and generative modeling, which explores
high-dimensional design spaces to identify high-performing designs. Finally, we
examine the key challenges and propose promising directions for advancement in
this rapidly evolving field including safe control of superintelligent systems.
|
2503.17287 | Mingyang Song | Mingyang Song, Mao Zheng, Zheng Li, Wenjie Yang, Xuan Luo, Yue Pan,
Feng Zhang | FastCuRL: Curriculum Reinforcement Learning with Progressive Context
Extension for Efficient Training R1-like Reasoning Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose \textbf{\textsc{FastCuRL}}, a simple yet efficient
\textbf{Cu}rriculum \textbf{R}einforcement \textbf{L}earning approach with
context window extending strategy to accelerate the reinforcement learning
training efficiency for R1-like reasoning models while enhancing their
performance in tackling complex reasoning tasks with long chain-of-thought
rationales, particularly with a 1.5B parameter language model.
\textbf{\textsc{FastCuRL}} consists of two main procedures: length-aware
training data segmentation and context window extension training. Specifically,
the former first splits the original training data into three different levels
by the input prompt length, and then the latter leverages segmented training
datasets with a progressively increasing context window length to train the
reasoning model. Experimental results demonstrate that
\textbf{\textsc{FastCuRL}}-1.5B-Preview surpasses DeepScaleR-1.5B-Preview
across all five datasets (including MATH 500, AIME 2024, AMC 2023, Minerva
Math, and OlympiadBench) while only utilizing 50\% of training steps.
Furthermore, all training stages for FastCuRL-1.5B-Preview are completed using
just a single node with 8 GPUs.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:35:31 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Song",
"Mingyang",
""
],
[
"Zheng",
"Mao",
""
],
[
"Li",
"Zheng",
""
],
[
"Yang",
"Wenjie",
""
],
[
"Luo",
"Xuan",
""
],
[
"Pan",
"Yue",
""
],
[
"Zhang",
"Feng",
""
]
] | TITLE: FastCuRL: Curriculum Reinforcement Learning with Progressive Context
Extension for Efficient Training R1-like Reasoning Models
ABSTRACT: In this paper, we propose \textbf{\textsc{FastCuRL}}, a simple yet efficient
\textbf{Cu}rriculum \textbf{R}einforcement \textbf{L}earning approach with
context window extending strategy to accelerate the reinforcement learning
training efficiency for R1-like reasoning models while enhancing their
performance in tackling complex reasoning tasks with long chain-of-thought
rationales, particularly with a 1.5B parameter language model.
\textbf{\textsc{FastCuRL}} consists of two main procedures: length-aware
training data segmentation and context window extension training. Specifically,
the former first splits the original training data into three different levels
by the input prompt length, and then the latter leverages segmented training
datasets with a progressively increasing context window length to train the
reasoning model. Experimental results demonstrate that
\textbf{\textsc{FastCuRL}}-1.5B-Preview surpasses DeepScaleR-1.5B-Preview
across all five datasets (including MATH 500, AIME 2024, AMC 2023, Minerva
Math, and OlympiadBench) while only utilizing 50\% of training steps.
Furthermore, all training stages for FastCuRL-1.5B-Preview are completed using
just a single node with 8 GPUs.
|
2503.17289 | Ali Rabeh | Ali Rabeh, Adarsh Krishnamurthy, Baskar Ganapathysubramanian | 3D Neural Operator-Based Flow Surrogates around 3D geometries: Signed
Distance Functions and Derivative Constraints | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate modeling of fluid dynamics around complex geometries is critical for
applications such as aerodynamic optimization and biomedical device design.
While advancements in numerical methods and high-performance computing have
improved simulation capabilities, the computational cost of high-fidelity 3D
flow simulations remains a significant challenge. Scientific machine learning
(SciML) offers an efficient alternative, enabling rapid and reliable flow
predictions. In this study, we evaluate Deep Operator Networks (DeepONet) and
Geometric-DeepONet, a variant that incorporates geometry information via signed
distance functions (SDFs), on steady-state 3D flow over complex objects. Our
dataset consists of 1,000 high-fidelity simulations spanning Reynolds numbers
from 10 to 1,000, enabling comprehensive training and evaluation across a range
of flow regimes. To assess model generalization, we test our models on a random
and extrapolatory train-test splitting. Additionally, we explore a
derivative-informed training strategy that augments standard loss functions
with velocity gradient penalties and incompressibility constraints, improving
physics consistency in 3D flow prediction. Our results show that
Geometric-DeepONet improves boundary-layer accuracy by up to 32% compared to
standard DeepONet. Moreover, incorporating derivative constraints enhances
gradient accuracy by 25% in interpolation tasks and up to 45% in extrapolatory
test scenarios, suggesting significant improvement in generalization
capabilities to unseen 3D Reynolds numbers.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:40:48 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Rabeh",
"Ali",
""
],
[
"Krishnamurthy",
"Adarsh",
""
],
[
"Ganapathysubramanian",
"Baskar",
""
]
] | TITLE: 3D Neural Operator-Based Flow Surrogates around 3D geometries: Signed
Distance Functions and Derivative Constraints
ABSTRACT: Accurate modeling of fluid dynamics around complex geometries is critical for
applications such as aerodynamic optimization and biomedical device design.
While advancements in numerical methods and high-performance computing have
improved simulation capabilities, the computational cost of high-fidelity 3D
flow simulations remains a significant challenge. Scientific machine learning
(SciML) offers an efficient alternative, enabling rapid and reliable flow
predictions. In this study, we evaluate Deep Operator Networks (DeepONet) and
Geometric-DeepONet, a variant that incorporates geometry information via signed
distance functions (SDFs), on steady-state 3D flow over complex objects. Our
dataset consists of 1,000 high-fidelity simulations spanning Reynolds numbers
from 10 to 1,000, enabling comprehensive training and evaluation across a range
of flow regimes. To assess model generalization, we test our models on a random
and extrapolatory train-test splitting. Additionally, we explore a
derivative-informed training strategy that augments standard loss functions
with velocity gradient penalties and incompressibility constraints, improving
physics consistency in 3D flow prediction. Our results show that
Geometric-DeepONet improves boundary-layer accuracy by up to 32% compared to
standard DeepONet. Moreover, incorporating derivative constraints enhances
gradient accuracy by 25% in interpolation tasks and up to 45% in extrapolatory
test scenarios, suggesting significant improvement in generalization
capabilities to unseen 3D Reynolds numbers.
|
2503.17299 | Syrine Belakaria | Yashas Annadani, Syrine Belakaria, Stefano Ermon, Stefan Bauer,
Barbara E Engelhardt | Preference-Guided Diffusion for Multi-Objective Offline Optimization | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Offline multi-objective optimization aims to identify Pareto-optimal
solutions given a dataset of designs and their objective values. In this work,
we propose a preference-guided diffusion model that generates Pareto-optimal
designs by leveraging a classifier-based guidance mechanism. Our guidance
classifier is a preference model trained to predict the probability that one
design dominates another, directing the diffusion model toward optimal regions
of the design space. Crucially, this preference model generalizes beyond the
training distribution, enabling the discovery of Pareto-optimal solutions
outside the observed dataset. We introduce a novel diversity-aware preference
guidance, augmenting Pareto dominance preference with diversity criteria. This
ensures that generated solutions are optimal and well-distributed across the
objective space, a capability absent in prior generative methods for offline
multi-objective optimization. We evaluate our approach on various continuous
offline multi-objective optimization tasks and find that it consistently
outperforms other inverse/generative approaches while remaining competitive
with forward/surrogate-based optimization methods. Our results highlight the
effectiveness of classifier-guided diffusion models in generating diverse and
high-quality solutions that approximate the Pareto front well.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 16:49:38 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Annadani",
"Yashas",
""
],
[
"Belakaria",
"Syrine",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Bauer",
"Stefan",
""
],
[
"Engelhardt",
"Barbara E",
""
]
] | TITLE: Preference-Guided Diffusion for Multi-Objective Offline Optimization
ABSTRACT: Offline multi-objective optimization aims to identify Pareto-optimal
solutions given a dataset of designs and their objective values. In this work,
we propose a preference-guided diffusion model that generates Pareto-optimal
designs by leveraging a classifier-based guidance mechanism. Our guidance
classifier is a preference model trained to predict the probability that one
design dominates another, directing the diffusion model toward optimal regions
of the design space. Crucially, this preference model generalizes beyond the
training distribution, enabling the discovery of Pareto-optimal solutions
outside the observed dataset. We introduce a novel diversity-aware preference
guidance, augmenting Pareto dominance preference with diversity criteria. This
ensures that generated solutions are optimal and well-distributed across the
objective space, a capability absent in prior generative methods for offline
multi-objective optimization. We evaluate our approach on various continuous
offline multi-objective optimization tasks and find that it consistently
outperforms other inverse/generative approaches while remaining competitive
with forward/surrogate-based optimization methods. Our results highlight the
effectiveness of classifier-guided diffusion models in generating diverse and
high-quality solutions that approximate the Pareto front well.
|
2503.17336 | Reem Gody | Reem Gody, Mohamed Abdelghaffar, Mohammed Jabreel, Ahmed Tawfik | Efficient Intent-Based Filtering for Multi-Party Conversations Using
Knowledge Distillation from LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have showcased remarkable capabilities in
conversational AI, enabling open-domain responses in chat-bots, as well as
advanced processing of conversations like summarization, intent classification,
and insights generation. However, these models are resource-intensive,
demanding substantial memory and computational power. To address this, we
propose a cost-effective solution that filters conversational snippets of
interest for LLM processing, tailored to the target downstream application,
rather than processing every snippet. In this work, we introduce an innovative
approach that leverages knowledge distillation from LLMs to develop an
intent-based filter for multi-party conversations, optimized for compute power
constrained environments. Our method combines different strategies to create a
diverse multi-party conversational dataset, that is annotated with the target
intents and is then used to fine-tune the MobileBERT model for multi-label
intent classification. This model achieves a balance between efficiency and
performance, effectively filtering conversation snippets based on their
intents. By passing only the relevant snippets to the LLM for further
processing, our approach significantly reduces overall operational costs
depending on the intents and the data distribution as demonstrated in our
experiments.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:34:37 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Gody",
"Reem",
""
],
[
"Abdelghaffar",
"Mohamed",
""
],
[
"Jabreel",
"Mohammed",
""
],
[
"Tawfik",
"Ahmed",
""
]
] | TITLE: Efficient Intent-Based Filtering for Multi-Party Conversations Using
Knowledge Distillation from LLMs
ABSTRACT: Large language models (LLMs) have showcased remarkable capabilities in
conversational AI, enabling open-domain responses in chat-bots, as well as
advanced processing of conversations like summarization, intent classification,
and insights generation. However, these models are resource-intensive,
demanding substantial memory and computational power. To address this, we
propose a cost-effective solution that filters conversational snippets of
interest for LLM processing, tailored to the target downstream application,
rather than processing every snippet. In this work, we introduce an innovative
approach that leverages knowledge distillation from LLMs to develop an
intent-based filter for multi-party conversations, optimized for compute power
constrained environments. Our method combines different strategies to create a
diverse multi-party conversational dataset, that is annotated with the target
intents and is then used to fine-tune the MobileBERT model for multi-label
intent classification. This model achieves a balance between efficiency and
performance, effectively filtering conversation snippets based on their
intents. By passing only the relevant snippets to the LLM for further
processing, our approach significantly reduces overall operational costs
depending on the intents and the data distribution as demonstrated in our
experiments.
|
2503.17347 | Jichen Hu | Jichen Hu, Chen Yang, Zanwei Zhou, Jiemin Fang, Xiaokang Yang, Qi
Tian, Wei Shen | Dereflection Any Image with Diffusion Priors and Diversified Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reflection removal of a single image remains a highly challenging task due to
the complex entanglement between target scenes and unwanted reflections.
Despite significant progress, existing methods are hindered by the scarcity of
high-quality, diverse data and insufficient restoration priors, resulting in
limited generalization across various real-world scenarios. In this paper, we
propose Dereflection Any Image, a comprehensive solution with an efficient data
preparation pipeline and a generalizable model for robust reflection removal.
First, we introduce a dataset named Diverse Reflection Removal (DRR) created by
randomly rotating reflective mediums in target scenes, enabling variation of
reflection angles and intensities, and setting a new benchmark in scale,
quality, and diversity. Second, we propose a diffusion-based framework with
one-step diffusion for deterministic outputs and fast inference. To ensure
stable learning, we design a three-stage progressive training strategy,
including reflection-invariant finetuning to encourage consistent outputs
across varying reflection patterns that characterize our dataset. Extensive
experiments show that our method achieves SOTA performance on both common
benchmarks and challenging in-the-wild images, showing superior generalization
across diverse real-world scenes.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:48:14 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Hu",
"Jichen",
""
],
[
"Yang",
"Chen",
""
],
[
"Zhou",
"Zanwei",
""
],
[
"Fang",
"Jiemin",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Tian",
"Qi",
""
],
[
"Shen",
"Wei",
""
]
] | TITLE: Dereflection Any Image with Diffusion Priors and Diversified Data
ABSTRACT: Reflection removal of a single image remains a highly challenging task due to
the complex entanglement between target scenes and unwanted reflections.
Despite significant progress, existing methods are hindered by the scarcity of
high-quality, diverse data and insufficient restoration priors, resulting in
limited generalization across various real-world scenarios. In this paper, we
propose Dereflection Any Image, a comprehensive solution with an efficient data
preparation pipeline and a generalizable model for robust reflection removal.
First, we introduce a dataset named Diverse Reflection Removal (DRR) created by
randomly rotating reflective mediums in target scenes, enabling variation of
reflection angles and intensities, and setting a new benchmark in scale,
quality, and diversity. Second, we propose a diffusion-based framework with
one-step diffusion for deterministic outputs and fast inference. To ensure
stable learning, we design a three-stage progressive training strategy,
including reflection-invariant finetuning to encourage consistent outputs
across varying reflection patterns that characterize our dataset. Extensive
experiments show that our method achieves SOTA performance on both common
benchmarks and challenging in-the-wild images, showing superior generalization
across diverse real-world scenes.
|
2503.17351 | Vineet Shenoy | Vineet R. Shenoy, Shaoju Wu, Armand Comas, Tim K. Marks, Suhas Lohit,
Hassan Mansour | Time-Series U-Net with Recurrence for Noise-Robust Imaging
Photoplethysmography | 14 Pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote estimation of vital signs enables health monitoring for situations in
which contact-based devices are either not available, too intrusive, or too
expensive. In this paper, we present a modular, interpretable pipeline for
pulse signal estimation from video of the face that achieves state-of-the-art
results on publicly available datasets.Our imaging photoplethysmography (iPPG)
system consists of three modules: face and landmark detection, time-series
extraction, and pulse signal/pulse rate estimation. Unlike many deep learning
methods that make use of a single black-box model that maps directly from input
video to output signal or heart rate, our modular approach enables each of the
three parts of the pipeline to be interpreted individually. The pulse signal
estimation module, which we call TURNIP (Time-Series U-Net with Recurrence for
Noise-Robust Imaging Photoplethysmography), allows the system to faithfully
reconstruct the underlying pulse signal waveform and uses it to measure heart
rate and pulse rate variability metrics, even in the presence of motion. When
parts of the face are occluded due to extreme head poses, our system explicitly
detects such "self-occluded" regions and maintains estimation robustness
despite the missing information. Our algorithm provides reliable heart rate
estimates without the need for specialized sensors or contact with the skin,
outperforming previous iPPG methods on both color (RGB) and near-infrared (NIR)
datasets.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:52:33 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Shenoy",
"Vineet R.",
""
],
[
"Wu",
"Shaoju",
""
],
[
"Comas",
"Armand",
""
],
[
"Marks",
"Tim K.",
""
],
[
"Lohit",
"Suhas",
""
],
[
"Mansour",
"Hassan",
""
]
] | TITLE: Time-Series U-Net with Recurrence for Noise-Robust Imaging
Photoplethysmography
ABSTRACT: Remote estimation of vital signs enables health monitoring for situations in
which contact-based devices are either not available, too intrusive, or too
expensive. In this paper, we present a modular, interpretable pipeline for
pulse signal estimation from video of the face that achieves state-of-the-art
results on publicly available datasets.Our imaging photoplethysmography (iPPG)
system consists of three modules: face and landmark detection, time-series
extraction, and pulse signal/pulse rate estimation. Unlike many deep learning
methods that make use of a single black-box model that maps directly from input
video to output signal or heart rate, our modular approach enables each of the
three parts of the pipeline to be interpreted individually. The pulse signal
estimation module, which we call TURNIP (Time-Series U-Net with Recurrence for
Noise-Robust Imaging Photoplethysmography), allows the system to faithfully
reconstruct the underlying pulse signal waveform and uses it to measure heart
rate and pulse rate variability metrics, even in the presence of motion. When
parts of the face are occluded due to extreme head poses, our system explicitly
detects such "self-occluded" regions and maintains estimation robustness
despite the missing information. Our algorithm provides reliable heart rate
estimates without the need for specialized sensors or contact with the skin,
outperforming previous iPPG methods on both color (RGB) and near-infrared (NIR)
datasets.
|
2503.17352 | Yihe Deng | Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, Kai-Wei
Chang | OpenVLThinker: An Early Exploration to Complex Vision-Language Reasoning
via Iterative Self-Improvement | 23 pages, 11 figures, 8 tables | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements demonstrated by DeepSeek-R1 have shown that complex
reasoning abilities in large language models (LLMs), including sophisticated
behaviors such as self-verification and self-correction, can be achieved by RL
with verifiable rewards and significantly improves model performance on
challenging tasks such as AIME. Motivated by these findings, our study
investigates whether similar reasoning capabilities can be successfully
integrated into large vision-language models (LVLMs) and assesses their impact
on challenging multimodal reasoning tasks. We consider an approach that
iteratively leverages supervised fine-tuning (SFT) on lightweight training data
and Reinforcement Learning (RL) to further improve model generalization.
Initially, reasoning capabilities were distilled from pure-text R1 models by
generating reasoning steps using high-quality captions of the images sourced
from diverse visual datasets. Subsequently, iterative RL training further
enhance reasoning skills, with each iteration's RL-improved model generating
refined SFT datasets for the next round. This iterative process yielded
OpenVLThinker, a LVLM exhibiting consistently improved reasoning performance on
challenging benchmarks such as MathVista, MathVerse, and MathVision,
demonstrating the potential of our strategy for robust vision-language
reasoning. The code, model and data are held at
https://github.com/yihedeng9/OpenVLThinker.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 17:52:43 GMT"
}
] | 2025-03-24T00:00:00 | [
[
"Deng",
"Yihe",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Yin",
"Fan",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Wang",
"Wei",
""
],
[
"Chang",
"Kai-Wei",
""
]
] | TITLE: OpenVLThinker: An Early Exploration to Complex Vision-Language Reasoning
via Iterative Self-Improvement
ABSTRACT: Recent advancements demonstrated by DeepSeek-R1 have shown that complex
reasoning abilities in large language models (LLMs), including sophisticated
behaviors such as self-verification and self-correction, can be achieved by RL
with verifiable rewards and significantly improves model performance on
challenging tasks such as AIME. Motivated by these findings, our study
investigates whether similar reasoning capabilities can be successfully
integrated into large vision-language models (LVLMs) and assesses their impact
on challenging multimodal reasoning tasks. We consider an approach that
iteratively leverages supervised fine-tuning (SFT) on lightweight training data
and Reinforcement Learning (RL) to further improve model generalization.
Initially, reasoning capabilities were distilled from pure-text R1 models by
generating reasoning steps using high-quality captions of the images sourced
from diverse visual datasets. Subsequently, iterative RL training further
enhance reasoning skills, with each iteration's RL-improved model generating
refined SFT datasets for the next round. This iterative process yielded
OpenVLThinker, a LVLM exhibiting consistently improved reasoning performance on
challenging benchmarks such as MathVista, MathVerse, and MathVision,
demonstrating the potential of our strategy for robust vision-language
reasoning. The code, model and data are held at
https://github.com/yihedeng9/OpenVLThinker.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.