id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.14922 | Haoyu Sun | Jiazhu Dai and Haoyu Sun | A Semantic and Clean-label Backdoor Attack against Graph Convolutional
Networks | null | null | null | null | cs.LG cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Convolutional Networks (GCNs) have shown excellent performance in
graph-structured tasks such as node classification and graph classification.
However, recent research has shown that GCNs are vulnerable to a new type of
threat called the backdoor attack, where the adversary can inject a hidden
backdoor into the GCNs so that the backdoored model performs well on benign
samples, whereas its prediction will be maliciously changed to the
attacker-specified target label if the hidden backdoor is activated by the
attacker-defined trigger. Clean-label backdoor attacks and semantic backdoor
attacks are two new types of backdoor attack on Deep Neural Networks (DNNs);
they are more imperceptible and pose new, serious threats. However, semantic
and clean-label backdoor attacks have not been fully explored in GCNs. In this
paper, we propose SCLBA, a semantic and clean-label backdoor attack against
GCNs in the context of graph classification, to reveal the existence of this
security vulnerability. Specifically, SCLBA conducts an importance analysis on
graph samples to select one type of node as semantic trigger, which is then
inserted into the graph samples to create poisoning samples without changing
the labels of the poisoning samples to the attacker-specified target label. We
evaluate SCLBA on multiple datasets and the results show that SCLBA can achieve
attack success rates close to 99% with poisoning rates of less than 3%, and
with almost no impact on the performance of the model on benign samples.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:04:55 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Dai",
"Jiazhu",
""
],
[
"Sun",
"Haoyu",
""
]
] | TITLE: A Semantic and Clean-label Backdoor Attack against Graph Convolutional
Networks
ABSTRACT: Graph Convolutional Networks (GCNs) have shown excellent performance in
graph-structured tasks such as node classification and graph classification.
However, recent research has shown that GCNs are vulnerable to a new type of
threat called the backdoor attack, where the adversary can inject a hidden
backdoor into the GCNs so that the backdoored model performs well on benign
samples, whereas its prediction will be maliciously changed to the
attacker-specified target label if the hidden backdoor is activated by the
attacker-defined trigger. Clean-label backdoor attacks and semantic backdoor
attacks are two new types of backdoor attack on Deep Neural Networks (DNNs);
they are more imperceptible and pose new, serious threats. However, semantic
and clean-label backdoor attacks have not been fully explored in GCNs. In this
paper, we propose SCLBA, a semantic and clean-label backdoor attack against
GCNs in the context of graph classification, to reveal the existence of this
security vulnerability. Specifically, SCLBA conducts an importance analysis on
graph samples to select one type of node as semantic trigger, which is then
inserted into the graph samples to create poisoning samples without changing
the labels of the poisoning samples to the attacker-specified target label. We
evaluate SCLBA on multiple datasets and the results show that SCLBA can achieve
attack success rates close to 99% with poisoning rates of less than 3%, and
with almost no impact on the performance of the model on benign samples.
|
2503.14925 | Haoyu Lei | Haoyu Lei, Shizhan Gong, Qi Dou, Farzan Farnia | pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in
Heterogeneous Federated Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) algorithms commonly aim to maximize clients' accuracy
by training a model on their collective data. However, in several FL
applications, the model's decisions should meet a group fairness constraint to
be independent of sensitive attributes such as gender or race. While such group
fairness constraints can be incorporated into the objective function of the FL
optimization problem, in this work, we show that such an approach would lead to
suboptimal classification accuracy in an FL setting with heterogeneous client
distributions. To achieve an optimal accuracy-group fairness trade-off, we
propose the Personalized Federated Learning for Client-Level Group Fairness
(pFedFair) framework, where clients locally impose their fairness constraints
over the distributed training process. Leveraging image embedding models,
we extend the application of pFedFair to computer vision settings, where we
numerically show that pFedFair achieves an optimal group fairness-accuracy
trade-off in heterogeneous FL settings. We present the results of several
numerical experiments on benchmark and synthetic datasets, which highlight the
suboptimality of non-personalized FL algorithms and the improvements made by
the pFedFair method.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:15:31 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lei",
"Haoyu",
""
],
[
"Gong",
"Shizhan",
""
],
[
"Dou",
"Qi",
""
],
[
"Farnia",
"Farzan",
""
]
] | TITLE: pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in
Heterogeneous Federated Learning
ABSTRACT: Federated learning (FL) algorithms commonly aim to maximize clients' accuracy
by training a model on their collective data. However, in several FL
applications, the model's decisions should meet a group fairness constraint to
be independent of sensitive attributes such as gender or race. While such group
fairness constraints can be incorporated into the objective function of the FL
optimization problem, in this work, we show that such an approach would lead to
suboptimal classification accuracy in an FL setting with heterogeneous client
distributions. To achieve an optimal accuracy-group fairness trade-off, we
propose the Personalized Federated Learning for Client-Level Group Fairness
(pFedFair) framework, where clients locally impose their fairness constraints
over the distributed training process. Leveraging image embedding models,
we extend the application of pFedFair to computer vision settings, where we
numerically show that pFedFair achieves an optimal group fairness-accuracy
trade-off in heterogeneous FL settings. We present the results of several
numerical experiments on benchmark and synthetic datasets, which highlight the
suboptimality of non-personalized FL algorithms and the improvements made by
the pFedFair method.
|
2503.14926 | Minkyoo Song | Minkyoo Song, Eugene Jang, Jaehan Kim, Seungwon Shin | Covering Cracks in Content Moderation: Delexicalized Distant Supervision
for Illicit Drug Jargon Detection | Accepted for publication in the KDD 2025 Research Track | null | 10.1145/3690624.3709183 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In light of rising drug-related concerns and the increasing role of social
media, sales and discussions of illicit drugs have become commonplace online.
Social media platforms hosting user-generated content must therefore perform
content moderation, which is a difficult task due to the vast amount of jargon
used in drug discussions. Previous works on drug jargon detection were limited
to extracting a list of terms, but these approaches have fundamental problems
in practical application. First, they are trivially evaded using word
substitutions. Second, they cannot distinguish whether euphemistic terms such
as "pot" or "crack" are being used as drugs or in their benign meanings. We
argue that drug content moderation should be done using contexts rather than
relying on a banlist. However, manually annotated datasets for training such a
task are not only expensive but also prone to becoming obsolete. We present
JEDIS, a framework for detecting illicit drug jargon terms by analyzing their
contexts. JEDIS utilizes a novel approach that combines distant supervision and
delexicalization, which allows JEDIS to be trained without human-labeled data
while being robust to new terms and euphemisms. Experiments on two manually
annotated datasets show JEDIS significantly outperforms state-of-the-art
word-based baselines in terms of F1-score and detection coverage in drug jargon
detection. We also conduct qualitative analysis that demonstrates JEDIS is
robust against pitfalls faced by existing approaches.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:26:25 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Song",
"Minkyoo",
""
],
[
"Jang",
"Eugene",
""
],
[
"Kim",
"Jaehan",
""
],
[
"Shin",
"Seungwon",
""
]
] | TITLE: Covering Cracks in Content Moderation: Delexicalized Distant Supervision
for Illicit Drug Jargon Detection
ABSTRACT: In light of rising drug-related concerns and the increasing role of social
media, sales and discussions of illicit drugs have become commonplace online.
Social media platforms hosting user-generated content must therefore perform
content moderation, which is a difficult task due to the vast amount of jargon
used in drug discussions. Previous works on drug jargon detection were limited
to extracting a list of terms, but these approaches have fundamental problems
in practical application. First, they are trivially evaded using word
substitutions. Second, they cannot distinguish whether euphemistic terms such
as "pot" or "crack" are being used as drugs or in their benign meanings. We
argue that drug content moderation should be done using contexts rather than
relying on a banlist. However, manually annotated datasets for training such a
task are not only expensive but also prone to becoming obsolete. We present
JEDIS, a framework for detecting illicit drug jargon terms by analyzing their
contexts. JEDIS utilizes a novel approach that combines distant supervision and
delexicalization, which allows JEDIS to be trained without human-labeled data
while being robust to new terms and euphemisms. Experiments on two manually
annotated datasets show JEDIS significantly outperforms state-of-the-art
word-based baselines in terms of F1-score and detection coverage in drug jargon
detection. We also conduct qualitative analysis that demonstrates JEDIS is
robust against pitfalls faced by existing approaches.
|
2503.14929 | Yufan Sheng | Yufan Sheng, Xin Cao, Kaiqi Zhao, Yixiang Fang, Jianzhong Qi, Wenjie
Zhang, Christian S. Jensen | ACE: A Cardinality Estimator for Set-Valued Queries | This paper has been accepted by PVLDB Vol 18 | null | null | null | cs.DB cs.LG | http://creativecommons.org/licenses/by/4.0/ | Cardinality estimation is a fundamental functionality in database systems.
Most existing cardinality estimators focus on handling predicates over numeric
or categorical data. They have largely omitted an important data type,
set-valued data, which frequently occur in contemporary applications such as
information retrieval and recommender systems. The few existing estimators for
such data either favor high-frequency elements or rely on a partial
independence assumption, which limits their practical applicability. We propose
ACE, an Attention-based Cardinality Estimator for estimating the cardinality of
queries over set-valued data. We first design a distillation-based data encoder
to condense the dataset into a compact matrix. We then design an
attention-based query analyzer to capture correlations among query elements. To
handle variable-sized queries, a pooling module is introduced, followed by a
regression model (MLP) to generate final cardinality estimates. We evaluate ACE
on three datasets with varying query element distributions, demonstrating that
ACE outperforms the state-of-the-art competitors in terms of both accuracy and
efficiency.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:29:15 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sheng",
"Yufan",
""
],
[
"Cao",
"Xin",
""
],
[
"Zhao",
"Kaiqi",
""
],
[
"Fang",
"Yixiang",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Jensen",
"Christian S.",
""
]
] | TITLE: ACE: A Cardinality Estimator for Set-Valued Queries
ABSTRACT: Cardinality estimation is a fundamental functionality in database systems.
Most existing cardinality estimators focus on handling predicates over numeric
or categorical data. They have largely omitted an important data type,
set-valued data, which frequently occur in contemporary applications such as
information retrieval and recommender systems. The few existing estimators for
such data either favor high-frequency elements or rely on a partial
independence assumption, which limits their practical applicability. We propose
ACE, an Attention-based Cardinality Estimator for estimating the cardinality of
queries over set-valued data. We first design a distillation-based data encoder
to condense the dataset into a compact matrix. We then design an
attention-based query analyzer to capture correlations among query elements. To
handle variable-sized queries, a pooling module is introduced, followed by a
regression model (MLP) to generate final cardinality estimates. We evaluate ACE
on three datasets with varying query element distributions, demonstrating that
ACE outperforms the state-of-the-art competitors in terms of both accuracy and
efficiency.
|
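The ACE abstract above outlines a concrete pipeline: an attention-based query analyzer over set-valued query elements, a pooling module for variable-sized queries, and an MLP regressor that outputs the cardinality estimate. The sketch below is a minimal, hypothetical PyTorch illustration of that general attention-pool-regress pattern only; the class name, layer sizes, and masking scheme are assumptions, and it omits the paper's distillation-based data encoder.

```python
import torch
import torch.nn as nn

class SetQueryCardinalityRegressor(nn.Module):
    """Illustrative sketch: self-attention over query-element embeddings,
    masked mean pooling to a fixed-size vector, then an MLP estimate."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, elems: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # elems: (batch, max_set_size, embed_dim); pad_mask: (batch, max_set_size), True = padding
        attended, _ = self.attn(elems, elems, elems, key_padding_mask=pad_mask)
        keep = (~pad_mask).unsqueeze(-1).float()
        # Pool only over real elements so queries of different sizes are handled uniformly.
        pooled = (attended * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.mlp(pooled).squeeze(-1)  # one (log-)cardinality estimate per query

# Toy batch: two queries with up to five elements each; the first has two padded slots.
model = SetQueryCardinalityRegressor()
x = torch.randn(2, 5, 64)
mask = torch.tensor([[False, False, False, True, True],
                     [False, False, False, False, False]])
print(model(x, mask).shape)  # torch.Size([2])
```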
2503.14932 | Ziyao Wang | Ziyao Wang, Yexiao He, Zheyu Shen, Yu Li, Guoheng Sun, Myungjin Lee,
Ang Li | Prada: Black-Box LLM Adaptation with Private Data on
Resource-Constrained Devices | null | null | null | null | cs.CR cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, Large Language Models (LLMs) have demonstrated remarkable
abilities in various natural language processing tasks. However, adapting these
models to specialized domains using private datasets stored on
resource-constrained edge devices, such as smartphones and personal computers,
remains challenging due to significant privacy concerns and limited
computational resources. Existing model adaptation methods either compromise
data privacy by requiring data transmission or jeopardize model privacy by
exposing proprietary LLM parameters. To address these challenges, we propose
Prada, a novel privacy-preserving and efficient black-box LLM adaptation system
using private on-device datasets. Prada employs a lightweight proxy model
fine-tuned with Low-Rank Adaptation (LoRA) locally on user devices. During
inference, Prada leverages the logits offset, i.e., difference in outputs
between the base and adapted proxy models, to iteratively refine outputs from a
remote black-box LLM. This offset-based adaptation approach preserves both data
privacy and model privacy, as there is no need to share sensitive data or
proprietary model parameters. Furthermore, we incorporate speculative decoding
to further speed up Prada's inference, making the system practically deployable
on bandwidth-constrained edge devices. Extensive experiments on various downstream
tasks demonstrate that Prada achieves performance comparable to centralized
fine-tuning methods while significantly reducing computational overhead by up
to 60% and communication costs by up to 80%.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:38:51 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Ziyao",
""
],
[
"He",
"Yexiao",
""
],
[
"Shen",
"Zheyu",
""
],
[
"Li",
"Yu",
""
],
[
"Sun",
"Guoheng",
""
],
[
"Lee",
"Myungjin",
""
],
[
"Li",
"Ang",
""
]
] | TITLE: Prada: Black-Box LLM Adaptation with Private Data on
Resource-Constrained Devices
ABSTRACT: In recent years, Large Language Models (LLMs) have demonstrated remarkable
abilities in various natural language processing tasks. However, adapting these
models to specialized domains using private datasets stored on
resource-constrained edge devices, such as smartphones and personal computers,
remains challenging due to significant privacy concerns and limited
computational resources. Existing model adaptation methods either compromise
data privacy by requiring data transmission or jeopardize model privacy by
exposing proprietary LLM parameters. To address these challenges, we propose
Prada, a novel privacy-preserving and efficient black-box LLM adaptation system
using private on-device datasets. Prada employs a lightweight proxy model
fine-tuned with Low-Rank Adaptation (LoRA) locally on user devices. During
inference, Prada leverages the logits offset, i.e., difference in outputs
between the base and adapted proxy models, to iteratively refine outputs from a
remote black-box LLM. This offset-based adaptation approach preserves both data
privacy and model privacy, as there is no need to share sensitive data or
proprietary model parameters. Furthermore, we incorporate speculative decoding
to further speed up Prada's inference, making the system practically deployable
on bandwidth-constrained edge devices. Extensive experiments on various downstream
tasks demonstrate that Prada achieves performance comparable to centralized
fine-tuning methods while significantly reducing computational overhead by up
to 60% and communication costs by up to 80%.
|
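The Prada abstract above hinges on a "logits offset": the difference between a locally tuned proxy model's outputs and its base version, added to the remote black-box LLM's outputs at inference time. Below is a minimal sketch of that offset idea under stated assumptions; all function and variable names are hypothetical, and the real system additionally involves LoRA fine-tuning and speculative decoding, neither of which is shown.

```python
import numpy as np

def offset_adjusted_distribution(remote_logits: np.ndarray,
                                 proxy_tuned_logits: np.ndarray,
                                 proxy_base_logits: np.ndarray,
                                 alpha: float = 1.0) -> np.ndarray:
    """Steer a black-box model's next-token distribution with a local proxy offset.

    All inputs are logit vectors over the same vocabulary. `alpha` is an
    illustrative scaling knob, not a parameter taken from the abstract.
    """
    offset = proxy_tuned_logits - proxy_base_logits   # what local adaptation changed
    adjusted = remote_logits + alpha * offset         # shift the remote model's logits
    adjusted = adjusted - adjusted.max()              # numerical stability for softmax
    probs = np.exp(adjusted)
    return probs / probs.sum()

# Toy usage over a 5-token vocabulary.
rng = np.random.default_rng(0)
p = offset_adjusted_distribution(rng.normal(size=5), rng.normal(size=5), rng.normal(size=5))
print(p.round(3), p.sum())  # a valid probability vector summing to 1.0
```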
2503.14933 | Yi Luo | Yi Luo, Hamed Hooshangnejad, Xue Feng, Gaofeng Huang, Xiaojian Chen,
Rui Zhang, Quan Chen, Wil Ngwa, and Kai Ding | A Language Vision Model Approach for Automated Tumor Contouring in
Radiation Oncology | 19 pages, 4 figures | null | null | null | eess.IV cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Background: Lung cancer ranks as the leading cause of cancer-related
mortality worldwide. The complexity of tumor delineation, crucial for radiation
therapy, requires expertise often unavailable in resource-limited settings.
Artificial Intelligence (AI), particularly with advancements in deep learning
(DL) and natural language processing (NLP), offers potential solutions yet is
challenged by high false positive rates. Purpose: The Oncology Contouring
Copilot (OCC) system is developed to leverage oncologist expertise for precise
tumor contouring using textual descriptions, aiming to increase the efficiency
of oncological workflows by combining the strengths of AI with human oversight.
Methods: Our OCC system initially identifies nodule candidates from CT scans.
Employing Language Vision Models (LVMs) like GPT-4V, OCC then effectively
reduces false positives with clinical descriptive texts, merging textual and
visual data to automate tumor delineation, designed to elevate the quality of
oncology care by incorporating knowledge from experienced domain experts.
Results: Deployments of the OCC system resulted in a significant reduction in
the false discovery rate by 35.0%, a 72.4% decrease in false positives per
scan, and an F1-score of 0.652 across our dataset for unbiased evaluation.
Conclusions: OCC represents a significant advance in oncology care,
particularly through the use of the latest LVMs to improve contouring results
by (1) streamlining oncology treatment workflows by optimizing tumor
delineation, reducing manual processes; (2) offering a scalable and intuitive
framework to reduce false positives in radiotherapy planning using LVMs; (3)
introducing novel medical language vision prompt techniques to minimize LVM
hallucinations, supported by an ablation study; and (4) conducting a
comparative analysis
of LVMs, highlighting their potential in addressing medical language vision
challenges.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:41:37 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Luo",
"Yi",
""
],
[
"Hooshangnejad",
"Hamed",
""
],
[
"Feng",
"Xue",
""
],
[
"Huang",
"Gaofeng",
""
],
[
"Chen",
"Xiaojian",
""
],
[
"Zhang",
"Rui",
""
],
[
"Chen",
"Quan",
""
],
[
"Ngwa",
"Wil",
""
],
[
"Ding",
"Kai",
""
]
] | TITLE: A Language Vision Model Approach for Automated Tumor Contouring in
Radiation Oncology
ABSTRACT: Background: Lung cancer ranks as the leading cause of cancer-related
mortality worldwide. The complexity of tumor delineation, crucial for radiation
therapy, requires expertise often unavailable in resource-limited settings.
Artificial Intelligence (AI), particularly with advancements in deep learning
(DL) and natural language processing (NLP), offers potential solutions yet is
challenged by high false positive rates. Purpose: The Oncology Contouring
Copilot (OCC) system is developed to leverage oncologist expertise for precise
tumor contouring using textual descriptions, aiming to increase the efficiency
of oncological workflows by combining the strengths of AI with human oversight.
Methods: Our OCC system initially identifies nodule candidates from CT scans.
Employing Language Vision Models (LVMs) like GPT-4V, OCC then effectively
reduces false positives with clinical descriptive texts, merging textual and
visual data to automate tumor delineation, designed to elevate the quality of
oncology care by incorporating knowledge from experienced domain experts.
Results: Deployments of the OCC system resulted in a significant reduction in
the false discovery rate by 35.0%, a 72.4% decrease in false positives per
scan, and an F1-score of 0.652 across our dataset for unbiased evaluation.
Conclusions: OCC represents a significant advance in oncology care,
particularly through the use of the latest LVMs to improve contouring results
by (1) streamlining oncology treatment workflows by optimizing tumor
delineation, reducing manual processes; (2) offering a scalable and intuitive
framework to reduce false positives in radiotherapy planning using LVMs; (3)
introducing novel medical language vision prompt techniques to minimize LVM
hallucinations, supported by an ablation study; and (4) conducting a
comparative analysis
of LVMs, highlighting their potential in addressing medical language vision
challenges.
|
2503.14935 | Chongjun Tu | Chongjun Tu, Lin Zhang, Pengtao Chen, Peng Ye, Xianfang Zeng, Wei
Cheng, Gang Yu, Tao Chen | FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion
Understanding | FAVOR-Bench project page: https://favor-bench.github.io/ | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have shown remarkable capabilities
in video content understanding but still struggle with fine-grained motion
comprehension. To comprehensively assess the motion understanding ability of
existing MLLMs, we introduce FAVOR-Bench, comprising 1,776 videos with
structured manual annotations of various motions. Our benchmark includes both
close-ended and open-ended tasks. For close-ended evaluation, we carefully
design 8,184 multiple-choice question-answer pairs spanning six distinct
sub-tasks. For open-ended evaluation, we develop both a novel cost-efficient
LLM-free and a GPT-assisted caption assessment method, where the former can
enhance benchmarking interpretability and reproducibility. Comprehensive
experiments with 21 state-of-the-art MLLMs reveal significant limitations in
their ability to comprehend and describe detailed temporal dynamics in video
motions. To alleviate this limitation, we further build FAVOR-Train, a dataset
consisting of 17,152 videos with fine-grained motion annotations. The results
of finetuning Qwen2.5-VL on FAVOR-Train yield consistent improvements on
motion-related tasks of TVBench, MotionBench and our FAVOR-Bench. Comprehensive
assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train
provide valuable tools to the community for developing more powerful video
understanding models. Project page:
\href{https://favor-bench.github.io/}{https://favor-bench.github.io/}.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:42:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tu",
"Chongjun",
""
],
[
"Zhang",
"Lin",
""
],
[
"Chen",
"Pengtao",
""
],
[
"Ye",
"Peng",
""
],
[
"Zeng",
"Xianfang",
""
],
[
"Cheng",
"Wei",
""
],
[
"Yu",
"Gang",
""
],
[
"Chen",
"Tao",
""
]
] | TITLE: FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion
Understanding
ABSTRACT: Multimodal Large Language Models (MLLMs) have shown remarkable capabilities
in video content understanding but still struggle with fine-grained motion
comprehension. To comprehensively assess the motion understanding ability of
existing MLLMs, we introduce FAVOR-Bench, comprising 1,776 videos with
structured manual annotations of various motions. Our benchmark includes both
close-ended and open-ended tasks. For close-ended evaluation, we carefully
design 8,184 multiple-choice question-answer pairs spanning six distinct
sub-tasks. For open-ended evaluation, we develop both a novel cost-efficient
LLM-free and a GPT-assisted caption assessment method, where the former can
enhance benchmarking interpretability and reproducibility. Comprehensive
experiments with 21 state-of-the-art MLLMs reveal significant limitations in
their ability to comprehend and describe detailed temporal dynamics in video
motions. To alleviate this limitation, we further build FAVOR-Train, a dataset
consisting of 17,152 videos with fine-grained motion annotations. The results
of finetuning Qwen2.5-VL on FAVOR-Train yield consistent improvements on
motion-related tasks of TVBench, MotionBench and our FAVOR-Bench. Comprehensive
assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train
provide valuable tools to the community for developing more powerful video
understanding models. Project page:
\href{https://favor-bench.github.io/}{https://favor-bench.github.io/}.
|
2503.14936 | Yifan Zhang | Yifan Zhang, Chen Huang, Zachary Karas, Dung Thuy Nguyen, Kevin Leach,
Yu Huang | Enhancing Code LLM Training with Programmer Attention | null | null | null | null | cs.SE cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human attention provides valuable yet underexploited signals for code LLM
training, offering a perspective beyond purely machine-driven attention.
Despite the complexity and cost of collecting eye-tracking data, there has also
been limited progress in systematically using these signals for code LLM
training. To address both issues, we propose a cohesive pipeline spanning
augmentation and reward-based fine-tuning. Specifically, we introduce (1) an
eye-tracking path augmentation method to expand programmer attention datasets,
(2) a pattern abstraction step that refines raw fixations into learnable
attention motifs, and (3) a reward-guided strategy for integrating these
insights directly into a CodeT5 supervised fine-tuning process. Our experiments
yield +7.16 in CodeBLEU on the CodeXGlue benchmark for code summarization,
underscoring how uniting human and machine attention can boost code
intelligence. We hope this work encourages broader exploration of human-centric
methods in next-generation AI4SE.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 06:44:29 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Yifan",
""
],
[
"Huang",
"Chen",
""
],
[
"Karas",
"Zachary",
""
],
[
"Nguyen",
"Dung Thuy",
""
],
[
"Leach",
"Kevin",
""
],
[
"Huang",
"Yu",
""
]
] | TITLE: Enhancing Code LLM Training with Programmer Attention
ABSTRACT: Human attention provides valuable yet underexploited signals for code LLM
training, offering a perspective beyond purely machine-driven attention.
Despite the complexity and cost of collecting eye-tracking data, there has also
been limited progress in systematically using these signals for code LLM
training. To address both issues, we propose a cohesive pipeline spanning
augmentation and reward-based fine-tuning. Specifically, we introduce (1) an
eye-tracking path augmentation method to expand programmer attention datasets,
(2) a pattern abstraction step that refines raw fixations into learnable
attention motifs, and (3) a reward-guided strategy for integrating these
insights directly into a CodeT5 supervised fine-tuning process. Our experiments
yield +7.16 in CodeBLEU on the CodeXGlue benchmark for code summarization,
underscoring how uniting human and machine attention can boost code
intelligence. We hope this work encourages broader exploration of human-centric
methods in next-generation AI4SE.
|
2503.14938 | Ci Liu | Zhong Ji, Ci Liu, Jingren Liu, Chen Tang, Yanwei Pang, Xuelong Li | Optimal Transport Adapter Tuning for Bridging Modality Gaps in Few-Shot
Remote Sensing Scene Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Few-Shot Remote Sensing Scene Classification (FS-RSSC) presents the challenge
of classifying remote sensing images with limited labeled samples. Existing
methods typically emphasize single-modal feature learning, neglecting the
potential benefits of optimizing multi-modal representations. To address this
limitation, we propose a novel Optimal Transport Adapter Tuning (OTAT)
framework aimed at constructing an ideal Platonic representational space
through optimal transport (OT) theory. This framework seeks to harmonize rich
visual information with less dense textual cues, enabling effective cross-modal
information transfer and complementarity. Central to this approach is the
Optimal Transport Adapter (OTA), which employs a cross-modal attention
mechanism to enrich textual representations and facilitate better subsequent
information interaction. By transforming the network optimization into an OT
optimization problem, OTA establishes efficient pathways for balanced
information exchange between modalities. Moreover, we introduce a sample-level
Entropy-Aware Weighted (EAW) loss, which combines difficulty-weighted
similarity scores with entropy-based regularization. This loss function
provides finer control over the OT optimization process, enhancing its
solvability and stability. Our framework offers a scalable and efficient
solution for advancing multimodal learning in remote sensing applications.
Extensive experiments on benchmark datasets demonstrate that OTAT achieves
state-of-the-art performance in FS-RSSC, significantly improving the model
performance and generalization.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:04:24 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ji",
"Zhong",
""
],
[
"Liu",
"Ci",
""
],
[
"Liu",
"Jingren",
""
],
[
"Tang",
"Chen",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Li",
"Xuelong",
""
]
] | TITLE: Optimal Transport Adapter Tuning for Bridging Modality Gaps in Few-Shot
Remote Sensing Scene Classification
ABSTRACT: Few-Shot Remote Sensing Scene Classification (FS-RSSC) presents the challenge
of classifying remote sensing images with limited labeled samples. Existing
methods typically emphasize single-modal feature learning, neglecting the
potential benefits of optimizing multi-modal representations. To address this
limitation, we propose a novel Optimal Transport Adapter Tuning (OTAT)
framework aimed at constructing an ideal Platonic representational space
through optimal transport (OT) theory. This framework seeks to harmonize rich
visual information with less dense textual cues, enabling effective cross-modal
information transfer and complementarity. Central to this approach is the
Optimal Transport Adapter (OTA), which employs a cross-modal attention
mechanism to enrich textual representations and facilitate better subsequent
information interaction. By transforming the network optimization into an OT
optimization problem, OTA establishes efficient pathways for balanced
information exchange between modalities. Moreover, we introduce a sample-level
Entropy-Aware Weighted (EAW) loss, which combines difficulty-weighted
similarity scores with entropy-based regularization. This loss function
provides finer control over the OT optimization process, enhancing its
solvability and stability. Our framework offers a scalable and efficient
solution for advancing multimodal learning in remote sensing applications.
Extensive experiments on benchmark datasets demonstrate that OTAT achieves
state-of-the-art performance in FS-RSSC, significantly improving the model
performance and generalization.
|
2503.14939 | Tengjin Weng | Tengjin Weng, Jingyi Wang, Wenhao Jiang and Zhong Ming | VisNumBench: Evaluating Number Sense of Multimodal Large Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Can Multimodal Large Language Models (MLLMs) develop an intuitive number
sense similar to humans? Targeting this problem, we introduce Visual Number
Benchmark (VisNumBench) to evaluate the number sense abilities of MLLMs across
a wide range of visual numerical tasks. VisNumBench consists of about 1,900
multiple-choice question-answer pairs derived from both synthetic and
real-world visual data, covering seven visual numerical attributes and four
types of visual numerical estimation tasks. Our experiments on VisNumBench led
to the following key findings: (i) The 17 MLLMs we tested, including
open-source models such as Qwen2.5-VL and InternVL2.5, as well as proprietary
models like GPT-4o and Gemini 2.0 Flash, perform significantly below human
levels in number sense-related tasks. (ii) Multimodal mathematical models and
multimodal chain-of-thought (CoT) models did not exhibit significant
improvements in number sense abilities. (iii) Stronger MLLMs with larger
parameter sizes and broader general abilities demonstrate modest gains in
number sense abilities. We believe VisNumBench will serve as a valuable
resource for the research community, encouraging further advancements in
enhancing MLLMs' number sense abilities. All benchmark resources, including
code and datasets, will be publicly available at
https://wwwtttjjj.github.io/VisNumBench/.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:07:43 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Weng",
"Tengjin",
""
],
[
"Wang",
"Jingyi",
""
],
[
"Jiang",
"Wenhao",
""
],
[
"Ming",
"Zhong",
""
]
] | TITLE: VisNumBench: Evaluating Number Sense of Multimodal Large Language Models
ABSTRACT: Can Multimodal Large Language Models (MLLMs) develop an intuitive number
sense similar to humans? Targeting this problem, we introduce Visual Number
Benchmark (VisNumBench) to evaluate the number sense abilities of MLLMs across
a wide range of visual numerical tasks. VisNumBench consists of about 1,900
multiple-choice question-answer pairs derived from both synthetic and
real-world visual data, covering seven visual numerical attributes and four
types of visual numerical estimation tasks. Our experiments on VisNumBench led
to the following key findings: (i) The 17 MLLMs we tested, including
open-source models such as Qwen2.5-VL and InternVL2.5, as well as proprietary
models like GPT-4o and Gemini 2.0 Flash, perform significantly below human
levels in number sense-related tasks. (ii) Multimodal mathematical models and
multimodal chain-of-thought (CoT) models did not exhibit significant
improvements in number sense abilities. (iii) Stronger MLLMs with larger
parameter sizes and broader general abilities demonstrate modest gains in
number sense abilities. We believe VisNumBench will serve as a valuable
resource for the research community, encouraging further advancements in
enhancing MLLMs' number sense abilities. All benchmark resources, including
code and datasets, will be publicly available at
https://wwwtttjjj.github.io/VisNumBench/.
|
2503.14941 | Qihui Zhang | Qihui Zhang, Munan Ning, Zheyuan Liu, Yanbo Wang, Jiayi Ye, Yue Huang,
Shuo Yang, Xiao Chen, Yibing Song, Li Yuan | UPME: An Unsupervised Peer Review Framework for Multimodal Large
Language Model Evaluation | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal Large Language Models (MLLMs) have emerged to tackle the
challenges of Visual Question Answering (VQA), sparking a new research focus on
conducting objective evaluations of these models. Existing evaluation methods
face limitations due to the significant human workload required to design Q&A
pairs for visual images, which inherently restricts the scale and scope of
evaluations. Although automated MLLM-as-judge approaches attempt to reduce the
human workload through automatic evaluations, they often introduce biases. To
address these problems, we propose an Unsupervised Peer review MLLM Evaluation
framework. It utilizes only image data, allowing models to automatically
generate questions and conduct peer review assessments of answers from other
models, effectively alleviating the reliance on human workload. Additionally,
we introduce the vision-language scoring system to mitigate the bias issues,
which focuses on three aspects: (i) response correctness; (ii) visual
understanding and reasoning; and (iii) image-text correlation. Experimental
results demonstrate that UPME achieves a Pearson correlation of 0.944 with
human evaluations on the MMstar dataset and 0.814 on the ScienceQA dataset,
indicating that our framework closely aligns with human-designed benchmarks and
inherent human preferences.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:15:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Qihui",
""
],
[
"Ning",
"Munan",
""
],
[
"Liu",
"Zheyuan",
""
],
[
"Wang",
"Yanbo",
""
],
[
"Ye",
"Jiayi",
""
],
[
"Huang",
"Yue",
""
],
[
"Yang",
"Shuo",
""
],
[
"Chen",
"Xiao",
""
],
[
"Song",
"Yibing",
""
],
[
"Yuan",
"Li",
""
]
] | TITLE: UPME: An Unsupervised Peer Review Framework for Multimodal Large
Language Model Evaluation
ABSTRACT: Multimodal Large Language Models (MLLMs) have emerged to tackle the
challenges of Visual Question Answering (VQA), sparking a new research focus on
conducting objective evaluations of these models. Existing evaluation methods
face limitations due to the significant human workload required to design Q&A
pairs for visual images, which inherently restricts the scale and scope of
evaluations. Although automated MLLM-as-judge approaches attempt to reduce the
human workload through automatic evaluations, they often introduce biases. To
address these problems, we propose an Unsupervised Peer review MLLM Evaluation
framework. It utilizes only image data, allowing models to automatically
generate questions and conduct peer review assessments of answers from other
models, effectively alleviating the reliance on human workload. Additionally,
we introduce the vision-language scoring system to mitigate the bias issues,
which focuses on three aspects: (i) response correctness; (ii) visual
understanding and reasoning; and (iii) image-text correlation. Experimental
results demonstrate that UPME achieves a Pearson correlation of 0.944 with
human evaluations on the MMstar dataset and 0.814 on the ScienceQA dataset,
indicating that our framework closely aligns with human-designed benchmarks and
inherent human preferences.
|
2503.14944 | Zihan Cao | Zihan Cao, Yu Zhong, Ziqi Wang, Liang-Jian Deng | MMAIF: Multi-task and Multi-degradation All-in-One for Image Fusion with
Language Guidance | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Image fusion, a fundamental low-level vision task, aims to integrate multiple
image sequences into a single output while preserving as much information as
possible from the input. However, existing methods face several significant
limitations: 1) requiring task- or dataset-specific models; 2) neglecting
real-world image degradations (\textit{e.g.}, noise), which causes failure when
processing degraded inputs; 3) operating in pixel space, where attention
mechanisms are computationally expensive; and 4) lacking user interaction
capabilities. To address these challenges, we propose a unified framework for
multi-task, multi-degradation, and language-guided image fusion. Our framework
includes two key components: 1) a practical degradation pipeline that simulates
real-world image degradations and generates interactive prompts to guide the
model; 2) an all-in-one Diffusion Transformer (DiT) operating in latent space,
which fuses a clean image conditioned on both the degraded inputs and the
generated prompts. Furthermore, we introduce principled modifications to the
original DiT architecture to better suit the fusion task. Based on this
framework, we develop two versions of the model: Regression-based and Flow
Matching-based variants. Extensive qualitative and quantitative experiments
demonstrate that our approach effectively addresses the aforementioned
limitations and outperforms previous restoration+fusion and all-in-one
pipelines. Codes are available at https://github.com/294coder/MMAIF.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:20:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Cao",
"Zihan",
""
],
[
"Zhong",
"Yu",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Deng",
"Liang-Jian",
""
]
] | TITLE: MMAIF: Multi-task and Multi-degradation All-in-One for Image Fusion with
Language Guidance
ABSTRACT: Image fusion, a fundamental low-level vision task, aims to integrate multiple
image sequences into a single output while preserving as much information as
possible from the input. However, existing methods face several significant
limitations: 1) requiring task- or dataset-specific models; 2) neglecting
real-world image degradations (\textit{e.g.}, noise), which causes failure when
processing degraded inputs; 3) operating in pixel space, where attention
mechanisms are computationally expensive; and 4) lacking user interaction
capabilities. To address these challenges, we propose a unified framework for
multi-task, multi-degradation, and language-guided image fusion. Our framework
includes two key components: 1) a practical degradation pipeline that simulates
real-world image degradations and generates interactive prompts to guide the
model; 2) an all-in-one Diffusion Transformer (DiT) operating in latent space,
which fuses a clean image conditioned on both the degraded inputs and the
generated prompts. Furthermore, we introduce principled modifications to the
original DiT architecture to better suit the fusion task. Based on this
framework, we develop two versions of the model: Regression-based and Flow
Matching-based variants. Extensive qualitative and quantitative experiments
demonstrate that our approach effectively addresses the aforementioned
limitations and outperforms previous restoration+fusion and all-in-one
pipelines. Codes are available at https://github.com/294coder/MMAIF.
|
2503.14948 | Hao Liang | Hao Liang, Zhipeng Dong, Yi Yang, Mengyin Fu | ChatStitch: Visualizing Through Structures via Surround-View
Unsupervised Deep Image Stitching with Collaborative LLM-Agents | null | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative perception has garnered significant attention for its ability
to enhance the perception capabilities of individual vehicles through the
exchange of information with surrounding vehicle-agents. However, existing
collaborative perception systems are limited by inefficiencies in user
interaction and the challenge of multi-camera photorealistic visualization. To
address these challenges, this paper introduces ChatStitch, the first
collaborative perception system capable of unveiling obscured blind spot
information through natural language commands integrated with external digital
assets. To adeptly handle complex or abstract commands, ChatStitch employs a
multi-agent collaborative framework based on Large Language Models. To achieve
the most intuitive perception for humans, ChatStitch proposes
SV-UDIS, the first surround-view unsupervised deep image stitching method under
the non-global-overlapping condition. We conducted extensive experiments on the
UDIS-D, MCOV-SLAM open datasets, and our real-world dataset. Specifically, our
SV-UDIS method achieves state-of-the-art performance on the UDIS-D dataset for
3, 4, and 5 image stitching tasks, with PSNR improvements of 9%, 17%, and 21%,
and SSIM improvements of 8%, 18%, and 26%, respectively.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:25:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liang",
"Hao",
""
],
[
"Dong",
"Zhipeng",
""
],
[
"Yang",
"Yi",
""
],
[
"Fu",
"Mengyin",
""
]
] | TITLE: ChatStitch: Visualizing Through Structures via Surround-View
Unsupervised Deep Image Stitching with Collaborative LLM-Agents
ABSTRACT: Collaborative perception has garnered significant attention for its ability
to enhance the perception capabilities of individual vehicles through the
exchange of information with surrounding vehicle-agents. However, existing
collaborative perception systems are limited by inefficiencies in user
interaction and the challenge of multi-camera photorealistic visualization. To
address these challenges, this paper introduces ChatStitch, the first
collaborative perception system capable of unveiling obscured blind spot
information through natural language commands integrated with external digital
assets. To adeptly handle complex or abstract commands, ChatStitch employs a
multi-agent collaborative framework based on Large Language Models. To achieve
the most intuitive perception for humans, ChatStitch proposes
SV-UDIS, the first surround-view unsupervised deep image stitching method under
the non-global-overlapping condition. We conducted extensive experiments on the
UDIS-D, MCOV-SLAM open datasets, and our real-world dataset. Specifically, our
SV-UDIS method achieves state-of-the-art performance on the UDIS-D dataset for
3, 4, and 5 image stitching tasks, with PSNR improvements of 9%, 17%, and 21%,
and SSIM improvements of 8%, 18%, and 26%, respectively.
|
2503.14950 | Joseph Emmanuel Dayo | Joseph Emmanuel DL Dayo and Prospero C. Naval Jr | USAM-Net: A U-Net-based Network for Improved Stereo Correspondence and
Scene Depth Estimation using Features from a Pre-trained Image Segmentation
network | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The increasing demand for high-accuracy depth estimation in autonomous
driving and augmented reality applications necessitates advanced neural
architectures capable of effectively leveraging multiple data modalities. In
this context, we introduce the Unified Segmentation Attention Mechanism Network
(USAM-Net), a novel convolutional neural network that integrates stereo image
inputs with semantic segmentation maps and attention to enhance depth
estimation performance. USAM-Net employs a dual-pathway architecture, which
combines a pre-trained segmentation model (SAM) and a depth estimation model.
The segmentation pathway preprocesses the stereo images to generate semantic
masks, which are then concatenated with the stereo images as inputs to the
depth estimation pathway. This integration allows the model to focus on
important features such as object boundaries and surface textures which are
crucial for accurate depth perception. Empirical evaluation on the
DrivingStereo dataset demonstrates that USAM-Net achieves superior performance
metrics, including a Global Difference (GD) of 3.61\% and an End-Point Error
(EPE) of 0.88, outperforming traditional models such as CFNet, SegStereo, and
iResNet. These results underscore the effectiveness of integrating segmentation
information into stereo depth estimation tasks, highlighting the potential of
USAM-Net in applications demanding high-precision depth data.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:29:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Dayo",
"Joseph Emmanuel DL",
""
],
[
"Naval",
"Prospero C.",
"Jr"
]
] | TITLE: USAM-Net: A U-Net-based Network for Improved Stereo Correspondence and
Scene Depth Estimation using Features from a Pre-trained Image Segmentation
network
ABSTRACT: The increasing demand for high-accuracy depth estimation in autonomous
driving and augmented reality applications necessitates advanced neural
architectures capable of effectively leveraging multiple data modalities. In
this context, we introduce the Unified Segmentation Attention Mechanism Network
(USAM-Net), a novel convolutional neural network that integrates stereo image
inputs with semantic segmentation maps and attention to enhance depth
estimation performance. USAM-Net employs a dual-pathway architecture, which
combines a pre-trained segmentation model (SAM) and a depth estimation model.
The segmentation pathway preprocesses the stereo images to generate semantic
masks, which are then concatenated with the stereo images as inputs to the
depth estimation pathway. This integration allows the model to focus on
important features such as object boundaries and surface textures which are
crucial for accurate depth perception. Empirical evaluation on the
DrivingStereo dataset demonstrates that USAM-Net achieves superior performance
metrics, including a Global Difference (GD) of 3.61\% and an End-Point Error
(EPE) of 0.88, outperforming traditional models such as CFNet, SegStereo, and
iResNet. These results underscore the effectiveness of integrating segmentation
information into stereo depth estimation tasks, highlighting the potential of
USAM-Net in applications demanding high-precision depth data.
|
2503.14953 | Yang Liu | Yang Liu, Wentao Feng, Zhuoyao Liu, Shudong Huang, Jiancheng Lv | Aligning Information Capacity Between Vision and Language via
Dense-to-Sparse Feature Distillation for Image-Text Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enabling Visual Semantic Models to effectively handle multi-view description
matching has been a longstanding challenge. Existing methods typically learn a
set of embeddings to find the optimal match for each view's text and compute
similarity. However, the visual and text embeddings learned through these
approaches have limited information capacity and are prone to interference from
locally similar negative samples. To address this issue, we argue that the
information capacity of embeddings is crucial and propose Dense-to-Sparse
Feature Distilled Visual Semantic Embedding (D2S-VSE), which enhances the
information capacity of sparse text by leveraging dense text distillation.
Specifically, D2S-VSE is a two-stage framework. In the pre-training stage, we
align images with dense text to enhance the information capacity of visual
semantic embeddings. In the fine-tuning stage, we optimize two tasks
simultaneously, distilling dense text embeddings to sparse text embeddings
while aligning images and sparse texts, enhancing the information capacity of
sparse text embeddings. Our proposed D2S-VSE model is extensively evaluated on
the large-scale MS-COCO and Flickr30K datasets, demonstrating its superiority
over recent state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:42:24 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Feng",
"Wentao",
""
],
[
"Liu",
"Zhuoyao",
""
],
[
"Huang",
"Shudong",
""
],
[
"Lv",
"Jiancheng",
""
]
] | TITLE: Aligning Information Capacity Between Vision and Language via
Dense-to-Sparse Feature Distillation for Image-Text Matching
ABSTRACT: Enabling Visual Semantic Models to effectively handle multi-view description
matching has been a longstanding challenge. Existing methods typically learn a
set of embeddings to find the optimal match for each view's text and compute
similarity. However, the visual and text embeddings learned through these
approaches have limited information capacity and are prone to interference from
locally similar negative samples. To address this issue, we argue that the
information capacity of embeddings is crucial and propose Dense-to-Sparse
Feature Distilled Visual Semantic Embedding (D2S-VSE), which enhances the
information capacity of sparse text by leveraging dense text distillation.
Specifically, D2S-VSE is a two-stage framework. In the pre-training stage, we
align images with dense text to enhance the information capacity of visual
semantic embeddings. In the fine-tuning stage, we optimize two tasks
simultaneously, distilling dense text embeddings to sparse text embeddings
while aligning images and sparse texts, enhancing the information capacity of
sparse text embeddings. Our proposed D2S-VSE model is extensively evaluated on
the large-scale MS-COCO and Flickr30K datasets, demonstrating its superiority
over recent state-of-the-art methods.
|
2503.14957 | Basura Fernando | Thanh-Son Nguyen, Hong Yang, Tzeh Yuan Neoh, Hao Zhang, Ee Yeo Keat,
Basura Fernando | Neuro Symbolic Knowledge Reasoning for Procedural Video Question
Answering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces a new video question-answering (VQA) dataset that
challenges models to leverage procedural knowledge for complex reasoning. It
requires recognizing visual entities, generating hypotheses, and performing
contextual, causal, and counterfactual reasoning. To address this, we propose a
neuro-symbolic reasoning module that integrates neural networks and LLM-driven
constrained reasoning over variables for interpretable answer generation.
Results show that combining LLMs with structured knowledge reasoning with logic
enhances procedural reasoning on the STAR benchmark and our dataset. Code and
dataset will be available soon at https://github.com/LUNAProject22/KML.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:49:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Nguyen",
"Thanh-Son",
""
],
[
"Yang",
"Hong",
""
],
[
"Neoh",
"Tzeh Yuan",
""
],
[
"Zhang",
"Hao",
""
],
[
"Keat",
"Ee Yeo",
""
],
[
"Fernando",
"Basura",
""
]
] | TITLE: Neuro Symbolic Knowledge Reasoning for Procedural Video Question
Answering
ABSTRACT: This paper introduces a new video question-answering (VQA) dataset that
challenges models to leverage procedural knowledge for complex reasoning. It
requires recognizing visual entities, generating hypotheses, and performing
contextual, causal, and counterfactual reasoning. To address this, we propose a
neuro-symbolic reasoning module that integrates neural networks and LLM-driven
constrained reasoning over variables for interpretable answer generation.
Results show that combining LLMs with structured knowledge reasoning with logic
enhances procedural reasoning on the STAR benchmark and our dataset. Code and
dataset will be available soon at https://github.com/LUNAProject22/KML.
|
2503.14963 | Xiaobo Xia | Xiaohao Liu, Xiaobo Xia, See-Kiong Ng, Tat-Seng Chua | Continual Multimodal Contrastive Learning | 36 pages, 9 figures, 4 tables | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal contrastive learning (MCL) advances in aligning different
modalities and generating multimodal representations in a joint space. By
leveraging contrastive learning across diverse modalities, large-scale
multimodal data enhances representational quality. However, a critical yet
often overlooked challenge remains: multimodal data is rarely collected in a
single process, and training from scratch is computationally expensive.
Instead, emergent multimodal data can be used to optimize existing models
gradually, \textit{i.e.}, models are trained on a sequence of modality pair
data. We define this problem as Continual Multimodal Contrastive Learning
(CMCL), an underexplored yet crucial research direction at the intersection of
multimodal and continual learning. In this paper, we formulate CMCL through two
specialized principles of stability and plasticity. We theoretically derive a
novel optimization-based method, which projects updated gradients from dual
sides onto subspaces where any gradient is prevented from interfering with the
previously learned knowledge. Two upper bounds provide theoretical insights on
both stability and plasticity in our solution. Beyond our theoretical
contributions, we conduct experiments on multiple datasets by comparing our
method against advanced continual learning baselines. The empirical results
further support our claims and demonstrate the efficacy of our method. The code
will be publicly available.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:57:08 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Xiaohao",
""
],
[
"Xia",
"Xiaobo",
""
],
[
"Ng",
"See-Kiong",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Continual Multimodal Contrastive Learning
ABSTRACT: Multimodal contrastive learning (MCL) advances in aligning different
modalities and generating multimodal representations in a joint space. By
leveraging contrastive learning across diverse modalities, large-scale
multimodal data enhances representational quality. However, a critical yet
often overlooked challenge remains: multimodal data is rarely collected in a
single process, and training from scratch is computationally expensive.
Instead, emergent multimodal data can be used to optimize existing models
gradually, \textit{i.e.}, models are trained on a sequence of modality pair
data. We define this problem as Continual Multimodal Contrastive Learning
(CMCL), an underexplored yet crucial research direction at the intersection of
multimodal and continual learning. In this paper, we formulate CMCL through two
specialized principles of stability and plasticity. We theoretically derive a
novel optimization-based method, which projects updated gradients from dual
sides onto subspaces where any gradient is prevented from interfering with the
previously learned knowledge. Two upper bounds provide theoretical insights on
both stability and plasticity in our solution. Beyond our theoretical
contributions, we conduct experiments on multiple datasets by comparing our
method against advanced continual learning baselines. The empirical results
further support our claims and demonstrate the efficacy of our method. The code
will be publicly available.
|
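The CMCL record above describes projecting updated gradients onto subspaces so that they cannot interfere with previously learned knowledge. The paper's dual-sided projection and its bounds are not reproduced here; the following NumPy snippet is only a minimal sketch of the generic idea of orthogonal gradient projection against a stored, orthonormal basis of past-task gradient directions (all names and shapes are illustrative assumptions).

```python
import numpy as np

def project_orthogonal(grad: np.ndarray, past_basis: np.ndarray) -> np.ndarray:
    """Remove from `grad` its components along previously stored gradient
    directions, so the update does not disturb past knowledge.

    grad:       current gradient, shape (d,)
    past_basis: orthonormal basis of the past-task gradient subspace, shape (k, d)
    """
    # Subtract the projection of grad onto the span of the stored directions.
    projection = past_basis.T @ (past_basis @ grad)
    return grad - projection

# Toy usage: one stored direction, one new gradient.
rng = np.random.default_rng(0)
past = rng.normal(size=(1, 8))
past /= np.linalg.norm(past, axis=1, keepdims=True)   # orthonormalize the basis row
g = rng.normal(size=8)
g_proj = project_orthogonal(g, past)
print(np.allclose(past @ g_proj, 0.0))  # projected gradient is orthogonal -> True
```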
2503.14966 | Lichao Mou | Tingxiu Chen, Yilei Shi, Zixuan Zheng, Bingcong Yan, Jingliang Hu,
Xiao Xiang Zhu, Lichao Mou | Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models | MICCAI 2024 | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasound video classification enables automated diagnosis and has emerged
as an important research area. However, publicly available ultrasound video
datasets remain scarce, hindering progress in developing effective video
classification models. We propose addressing this shortage by synthesizing
plausible ultrasound videos from readily available, abundant ultrasound images.
To this end, we introduce a latent dynamic diffusion model (LDDM) to
efficiently translate static images to dynamic sequences with realistic video
characteristics. We demonstrate strong quantitative results and visually
appealing synthesized videos on the BUSV benchmark. Notably, training video
classification models on combinations of real and LDDM-synthesized videos
substantially improves performance over using real data alone, indicating our
method successfully emulates dynamics critical for discrimination. Our
image-to-video approach provides an effective data augmentation solution to
advance ultrasound video analysis. Code is available at
https://github.com/MedAITech/U_I2V.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 07:58:43 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Tingxiu",
""
],
[
"Shi",
"Yilei",
""
],
[
"Zheng",
"Zixuan",
""
],
[
"Yan",
"Bingcong",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: Ultrasound Image-to-Video Synthesis via Latent Dynamic Diffusion Models
ABSTRACT: Ultrasound video classification enables automated diagnosis and has emerged
as an important research area. However, publicly available ultrasound video
datasets remain scarce, hindering progress in developing effective video
classification models. We propose addressing this shortage by synthesizing
plausible ultrasound videos from readily available, abundant ultrasound images.
To this end, we introduce a latent dynamic diffusion model (LDDM) to
efficiently translate static images to dynamic sequences with realistic video
characteristics. We demonstrate strong quantitative results and visually
appealing synthesized videos on the BUSV benchmark. Notably, training video
classification models on combinations of real and LDDM-synthesized videos
substantially improves performance over using real data alone, indicating our
method successfully emulates dynamics critical for discrimination. Our
image-to-video approach provides an effective data augmentation solution to
advance ultrasound video analysis. Code is available at
https://github.com/MedAITech/U_I2V.
|
2503.14973 | Rishav Rishav | Rishav Rishav, Somjit Nath, Vincent Michalski, Samira Ebrahimi Kahou | Behaviour Discovery and Attribution for Explainable Reinforcement
Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Explaining the decisions made by reinforcement learning (RL) agents is
critical for building trust and ensuring reliability in real-world
applications. Traditional approaches to explainability often rely on saliency
analysis, which can be limited in providing actionable insights. Recently,
there has been growing interest in attributing RL decisions to specific
trajectories within a dataset. However, these methods often generalize
explanations to long trajectories, potentially involving multiple distinct
behaviors. Providing multiple, more fine-grained explanations would often
improve clarity. In this work, we propose a framework for behavior discovery
and action attribution to behaviors in offline RL trajectories. Our method
identifies meaningful behavioral segments, enabling more precise and granular
explanations associated with high-level agent behaviors. This approach is
adaptable across diverse environments with minimal modifications, offering a
scalable and versatile solution for behavior discovery and attribution for
explainable RL.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:06:00 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Rishav",
"Rishav",
""
],
[
"Nath",
"Somjit",
""
],
[
"Michalski",
"Vincent",
""
],
[
"Kahou",
"Samira Ebrahimi",
""
]
] | TITLE: Behaviour Discovery and Attribution for Explainable Reinforcement
Learning
ABSTRACT: Explaining the decisions made by reinforcement learning (RL) agents is
critical for building trust and ensuring reliability in real-world
applications. Traditional approaches to explainability often rely on saliency
analysis, which can be limited in providing actionable insights. Recently,
there has been growing interest in attributing RL decisions to specific
trajectories within a dataset. However, these methods often generalize
explanations to long trajectories, potentially involving multiple distinct
behaviors. Providing multiple, more fine-grained explanations would often
improve clarity. In this work, we propose a framework for behavior discovery
and action attribution to behaviors in offline RL trajectories. Our method
identifies meaningful behavioral segments, enabling more precise and granular
explanations associated with high-level agent behaviors. This approach is
adaptable across diverse environments with minimal modifications, offering a
scalable and versatile solution for behavior discovery and attribution for
explainable RL.
|
2503.14974 | Yifan Li | Yifan Li, Shuai Yang, Jiaying Liu | Language-based Image Colorization: A Benchmark and Beyond | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image colorization aims to bring colors back to grayscale images. Automatic
image colorization methods, which require no additional guidance, struggle to
generate high-quality images due to color ambiguity and provide limited user
controllability. Thanks to the emergence of cross-modality datasets and models,
language-based colorization methods have been proposed to fully utilize the
efficiency and flexibility of text descriptions to guide colorization. In view of
the lack of a comprehensive review of language-based colorization literature,
we conduct a thorough analysis and benchmarking. We first briefly summarize
existing automatic colorization methods. Then, we focus on language-based
methods and point out their core challenge on cross-modal alignment. We further
divide these methods into two categories: one attempts to train a
cross-modality network from scratch, while the other utilizes the pre-trained
cross-modality model to establish the textual-visual correspondence. Based on
the analyzed limitations of existing language-based methods, we propose a
simple yet effective method based on a distilled diffusion model. Extensive
experiments demonstrate that our simple baseline can produce better results
than previous complex methods with a 14-times speed-up. To the best of our
knowledge, this is the first comprehensive review and benchmark in the
language-based image colorization field, providing meaningful insights for the
community. The code is available at https://github.com/lyf1212/Color-Turbo.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:09:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Yifan",
""
],
[
"Yang",
"Shuai",
""
],
[
"Liu",
"Jiaying",
""
]
] | TITLE: Language-based Image Colorization: A Benchmark and Beyond
ABSTRACT: Image colorization aims to bring colors back to grayscale images. Automatic
image colorization methods, which require no additional guidance, struggle to
generate high-quality images due to color ambiguity and provide limited user
controllability. Thanks to the emergence of cross-modality datasets and models,
language-based colorization methods have been proposed to fully utilize the
efficiency and flexibility of text descriptions to guide colorization. In view of
the lack of a comprehensive review of language-based colorization literature,
we conduct a thorough analysis and benchmarking. We first briefly summarize
existing automatic colorization methods. Then, we focus on language-based
methods and point out their core challenge on cross-modal alignment. We further
divide these methods into two categories: one attempts to train a
cross-modality network from scratch, while the other utilizes the pre-trained
cross-modality model to establish the textual-visual correspondence. Based on
the analyzed limitations of existing language-based methods, we propose a
simple yet effective method based on a distilled diffusion model. Extensive
experiments demonstrate that our simple baseline can produce better results
than previous complex methods with a 14-times speed-up. To the best of our
knowledge, this is the first comprehensive review and benchmark in the
language-based image colorization field, providing meaningful insights for the
community. The code is available at https://github.com/lyf1212/Color-Turbo.
|
2503.14975 | Zihan Cao | Zihan Cao, Yu Zhong, Liang-Jian Deng | Taming Flow Matching with Unbalanced Optimal Transport into Fast
Pansharpening | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pansharpening, a pivotal task in remote sensing for fusing high-resolution
panchromatic and multispectral imagery, has garnered significant research
interest. Recent advancements employing diffusion models based on stochastic
differential equations (SDEs) have demonstrated state-of-the-art performance.
However, the inherent multi-step sampling process of SDEs imposes substantial
computational overhead, hindering practical deployment. While existing methods
adopt efficient samplers, knowledge distillation, or retraining to reduce
sampling steps (e.g., from 1,000 to fewer steps), such approaches often
compromise fusion quality. In this work, we propose the Optimal Transport Flow
Matching (OTFM) framework, which integrates the dual formulation of unbalanced
optimal transport (UOT) to achieve one-step, high-quality pansharpening. Unlike
conventional OT formulations that enforce rigid distribution alignment, UOT
relaxes marginal constraints to enhance modeling flexibility, accommodating the
intrinsic spectral and spatial disparities in remote sensing data. Furthermore,
we incorporate task-specific regularization into the UOT objective, enhancing
the robustness of the flow model. The OTFM framework enables simulation-free
training and single-step inference while maintaining strict adherence to
pansharpening constraints. Experimental evaluations across multiple datasets
demonstrate that OTFM matches or exceeds the performance of previous
regression-based models and leading diffusion-based methods while only needing
one sampling step. Codes are available at https://github.com/294coder/PAN-OTFM.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:10:49 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Cao",
"Zihan",
""
],
[
"Zhong",
"Yu",
""
],
[
"Deng",
"Liang-Jian",
""
]
] | TITLE: Taming Flow Matching with Unbalanced Optimal Transport into Fast
Pansharpening
ABSTRACT: Pansharpening, a pivotal task in remote sensing for fusing high-resolution
panchromatic and multispectral imagery, has garnered significant research
interest. Recent advancements employing diffusion models based on stochastic
differential equations (SDEs) have demonstrated state-of-the-art performance.
However, the inherent multi-step sampling process of SDEs imposes substantial
computational overhead, hindering practical deployment. While existing methods
adopt efficient samplers, knowledge distillation, or retraining to reduce
sampling steps (e.g., from 1,000 to fewer steps), such approaches often
compromise fusion quality. In this work, we propose the Optimal Transport Flow
Matching (OTFM) framework, which integrates the dual formulation of unbalanced
optimal transport (UOT) to achieve one-step, high-quality pansharpening. Unlike
conventional OT formulations that enforce rigid distribution alignment, UOT
relaxes marginal constraints to enhance modeling flexibility, accommodating the
intrinsic spectral and spatial disparities in remote sensing data. Furthermore,
we incorporate task-specific regularization into the UOT objective, enhancing
the robustness of the flow model. The OTFM framework enables simulation-free
training and single-step inference while maintaining strict adherence to
pansharpening constraints. Experimental evaluations across multiple datasets
demonstrate that OTFM matches or exceeds the performance of previous
regression-based models and leading diffusion-based methods while only needing
one sampling step. Codes are available at https://github.com/294coder/PAN-OTFM.
|
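The OTFM record above builds on flow matching; its unbalanced-optimal-transport dual formulation and task-specific regularization are not reproduced here. For background only, the sketch below shows a standard (balanced) conditional flow matching training step in PyTorch. The `velocity_net` is a hypothetical stand-in, time conditioning is omitted for brevity, and none of this should be read as the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a velocity field; a real pansharpening model would
# also condition on the panchromatic / multispectral inputs and on time t.
velocity_net = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)

def flow_matching_step(x0: torch.Tensor, x1: torch.Tensor) -> float:
    """One training step of standard conditional flow matching:
    learn v(x_t) to match the straight-line velocity x1 - x0."""
    t = torch.rand(x0.shape[0], 1)              # random time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1               # linear interpolation path
    target_velocity = x1 - x0                   # velocity of that path
    pred = velocity_net(x_t)
    loss = ((pred - target_velocity) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: x0 ~ source/noise samples, x1 ~ target (fused) samples.
loss = flow_matching_step(torch.randn(32, 16), torch.randn(32, 16))
print(f"flow matching loss: {loss:.4f}")
```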
2503.14979 | Lichao Mou | Yaxiong Chen, Junjian Hu, Chunlei Li, Zixuan Zheng, Jingliang Hu,
Yilei Shi, Shengwu Xiong, Xiao Xiang Zhu, Lichao Mou | One-Shot Medical Video Object Segmentation via Temporal Contrastive
Memory Networks | MICCAI 2024 Workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video object segmentation is crucial for the efficient analysis of complex
medical video data, yet it faces significant challenges in data availability
and annotation. We introduce the task of one-shot medical video object
segmentation, which requires separating foreground and background pixels
throughout a video given only the mask annotation of the first frame. To
address this problem, we propose a temporal contrastive memory network
comprising image and mask encoders to learn feature representations, a temporal
contrastive memory bank that aligns embeddings from adjacent frames while
pushing apart distant ones to explicitly model inter-frame relationships and
stores these features, and a decoder that fuses encoded image features and
memory readouts for segmentation. We also collect a diverse, multi-source
medical video dataset spanning various modalities and anatomies to benchmark
this task. Extensive experiments demonstrate state-of-the-art performance in
segmenting both seen and unseen structures from a single exemplar, showing
the ability to generalize from scarce labels. This highlights the potential to
alleviate annotation burdens for medical video analysis. Code is available at
https://github.com/MedAITech/TCMN.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:17:48 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chen",
"Yaxiong",
""
],
[
"Hu",
"Junjian",
""
],
[
"Li",
"Chunlei",
""
],
[
"Zheng",
"Zixuan",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Shi",
"Yilei",
""
],
[
"Xiong",
"Shengwu",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: One-Shot Medical Video Object Segmentation via Temporal Contrastive
Memory Networks
ABSTRACT: Video object segmentation is crucial for the efficient analysis of complex
medical video data, yet it faces significant challenges in data availability
and annotation. We introduce the task of one-shot medical video object
segmentation, which requires separating foreground and background pixels
throughout a video given only the mask annotation of the first frame. To
address this problem, we propose a temporal contrastive memory network
comprising image and mask encoders to learn feature representations, a temporal
contrastive memory bank that aligns embeddings from adjacent frames while
pushing apart distant ones to explicitly model inter-frame relationships and
stores these features, and a decoder that fuses encoded image features and
memory readouts for segmentation. We also collect a diverse, multi-source
medical video dataset spanning various modalities and anatomies to benchmark
this task. Extensive experiments demonstrate state-of-the-art performance in
segmenting both seen and unseen structures from a single exemplar, showing
the ability to generalize from scarce labels. This highlights the potential to
alleviate annotation burdens for medical video analysis. Code is available at
https://github.com/MedAITech/TCMN.
|
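The record above trains a temporal contrastive memory bank that aligns embeddings of adjacent frames while pushing apart distant ones. The exact objective is not given in the abstract, so the snippet below sketches one common way to express such a constraint: an InfoNCE-style loss in which the next frame is the positive and all other frames act as negatives, assuming L2-normalized frame embeddings.

```python
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(frames: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over a sequence of frame embeddings.

    frames: (T, D) L2-normalized embeddings, one per video frame.
    For each frame t, frame t+1 is the positive; all other frames act as
    negatives, which pushes temporally distant embeddings apart.
    """
    T = frames.shape[0]
    sim = frames @ frames.t() / temperature                                # (T, T) similarities
    sim = sim.masked_fill(torch.eye(T, dtype=torch.bool), float("-inf"))   # drop self-pairs
    log_probs = F.log_softmax(sim[:-1], dim=1)                             # last frame has no "next" positive
    targets = torch.arange(1, T)                                           # positive of frame t is frame t+1
    return -log_probs[torch.arange(T - 1), targets].mean()

# Toy usage with 8 frames of 32-dim embeddings.
emb = F.normalize(torch.randn(8, 32), dim=1)
print(temporal_contrastive_loss(emb))
```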
2503.14983 | Zanting Ye | Zanting Ye, Xiaolong Niu, Xuanbin Wu, Wenxiang Yi, Yuan Chang, Lijun
Lu | Semi-KAN: KAN Provides an Effective Representation for Semi-Supervised
Learning in Medical Image Segmentation | 18 pages, 7 figures, 6 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning-based medical image segmentation has shown remarkable success;
however, it typically requires extensive pixel-level annotations, which are
both expensive and time-intensive. Semi-supervised medical image segmentation
(SSMIS) offers a viable alternative, driven by advancements in CNNs and ViTs.
However, these networks often rely on single fixed activation functions and
linear modeling patterns, limiting their ability to effectively learn robust
representations. Given the limited availability of labeled data, achieving
robust representation learning becomes crucial. Inspired by Kolmogorov-Arnold
Networks (KANs), we propose Semi-KAN, which leverages the untapped potential of
KANs to enhance backbone architectures for representation learning in SSMIS.
Our findings indicate that: (1) compared to networks with fixed activation
functions, KANs exhibit superior representation learning capabilities with
fewer parameters, and (2) KANs excel in high-semantic feature spaces. Building
on these insights, we integrate KANs into tokenized intermediate
representations, applying them selectively at the encoder's bottleneck and the
decoder's top layers within a U-Net pipeline to extract high-level semantic
features. Although learnable activation functions improve feature expansion,
they introduce significant computational overhead with only marginal
performance gains. To mitigate this, we reduce the feature dimensions and
employ horizontal scaling to capture multiple pattern representations.
Furthermore, we design a multi-branch U-Net architecture with uncertainty
estimation to effectively learn diverse pattern representations. Extensive
experiments on four public datasets demonstrate that Semi-KAN surpasses
baseline networks, utilizing fewer KAN layers and lower computational cost,
thereby underscoring the potential of KANs as a promising approach for SSMIS.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:27:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ye",
"Zanting",
""
],
[
"Niu",
"Xiaolong",
""
],
[
"Wu",
"Xuanbin",
""
],
[
"Yi",
"Wenxiang",
""
],
[
"Chang",
"Yuan",
""
],
[
"Lu",
"Lijun",
""
]
] | TITLE: Semi-KAN: KAN Provides an Effective Representation for Semi-Supervised
Learning in Medical Image Segmentation
ABSTRACT: Deep learning-based medical image segmentation has shown remarkable success;
however, it typically requires extensive pixel-level annotations, which are
both expensive and time-intensive. Semi-supervised medical image segmentation
(SSMIS) offers a viable alternative, driven by advancements in CNNs and ViTs.
However, these networks often rely on single fixed activation functions and
linear modeling patterns, limiting their ability to effectively learn robust
representations. Given the limited availability of labeled data, achieving
robust representation learning becomes crucial. Inspired by Kolmogorov-Arnold
Networks (KANs), we propose Semi-KAN, which leverages the untapped potential of
KANs to enhance backbone architectures for representation learning in SSMIS.
Our findings indicate that: (1) compared to networks with fixed activation
functions, KANs exhibit superior representation learning capabilities with
fewer parameters, and (2) KANs excel in high-semantic feature spaces. Building
on these insights, we integrate KANs into tokenized intermediate
representations, applying them selectively at the encoder's bottleneck and the
decoder's top layers within a U-Net pipeline to extract high-level semantic
features. Although learnable activation functions improve feature expansion,
they introduce significant computational overhead with only marginal
performance gains. To mitigate this, we reduce the feature dimensions and
employ horizontal scaling to capture multiple pattern representations.
Furthermore, we design a multi-branch U-Net architecture with uncertainty
estimation to effectively learn diverse pattern representations. Extensive
experiments on four public datasets demonstrate that Semi-KAN surpasses
baseline networks, utilizing fewer KAN layers and lower computational cost,
thereby underscoring the potential of KANs as a promising approach for SSMIS.
|
2503.14990 | Kevin Polisano | K\'evin Polisano (SVH), Sylvain Meignen (DAO), Nils Laurent
(Phys-ENS), Hubert Leterme (ENSICAEN) | Disentangling Modes and Interference in the Spectrogram of
Multicomponent Signals | null | null | null | null | cs.CV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate how the spectrogram of multicomponent signals
can be decomposed into a mode part and an interference part. We explore two
approaches: (i) a variational method inspired by texture-geometry decomposition
in image processing, and (ii) a supervised learning approach using a U-Net
architecture, trained on a dataset encompassing diverse interference patterns
and noise conditions. Once the interference component is identified, we explain
how it enables us to define a criterion to locally adapt the window length used
in the definition of the spectrogram, for the sake of improving ridge detection
in the presence of close modes. Numerical experiments illustrate the advantages
and limitations of both approaches for spectrogram decomposition, highlighting
their potential for enhancing time-frequency analysis in the presence of strong
interference.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:36:20 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Polisano",
"Kévin",
"",
"SVH"
],
[
"Meignen",
"Sylvain",
"",
"DAO"
],
[
"Laurent",
"Nils",
"",
"Phys-ENS"
],
[
"Leterme",
"Hubert",
"",
"ENSICAEN"
]
] | TITLE: Disentangling Modes and Interference in the Spectrogram of
Multicomponent Signals
ABSTRACT: In this paper, we investigate how the spectrogram of multicomponent signals
can be decomposed into a mode part and an interference part. We explore two
approaches: (i) a variational method inspired by texture-geometry decomposition
in image processing, and (ii) a supervised learning approach using a U-Net
architecture, trained on a dataset encompassing diverse interference patterns
and noise conditions. Once the interference component is identified, we explain
how it enables us to define a criterion to locally adapt the window length used
in the definition of the spectrogram, for the sake of improving ridge detection
in the presence of close modes. Numerical experiments illustrate the advantages
and limitations of both approaches for spectrogram decomposition, highlighting
their potential for enhancing time-frequency analysis in the presence of strong
interference.
|
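The record above proposes locally adapting the spectrogram window length once interference between close modes is identified. As background for that time-frequency trade-off (not the paper's variational or U-Net decomposition), the SciPy sketch below computes spectrograms of a two-component test signal with a short and a long analysis window; the longer window separates the close modes in frequency at the cost of time resolution. The signal and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

fs = 1024                                   # sampling frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Two close-frequency components: a simple multicomponent test signal.
x = np.cos(2 * np.pi * 100 * t) + np.cos(2 * np.pi * 112 * t)

for nperseg in (64, 512):                   # short vs long analysis window
    f, tau, Zxx = stft(x, fs=fs, nperseg=nperseg)
    spectrogram = np.abs(Zxx) ** 2
    df = f[1] - f[0]                        # frequency-bin spacing
    print(f"window={nperseg:4d}  freq resolution={df:6.2f} Hz  "
          f"spectrogram shape={spectrogram.shape}")
```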
2503.15001 | Michael Neri | Michael Neri and Federica Battisti | Low-Complexity Patch-based No-Reference Point Cloud Quality Metric
exploiting Weighted Structure and Texture Features | Accepted for publication in IEEE Transactions on Broadcasting. Code
at https://github.com/michaelneri/PST-PCQA | null | 10.1109/TBC.2025.3553305 | null | cs.CV cs.MM eess.IV | http://creativecommons.org/licenses/by/4.0/ | During the compression, transmission, and rendering of point clouds, various
artifacts are introduced, affecting the quality perceived by the end user.
However, evaluating the impact of these distortions on the overall quality is a
challenging task. This study introduces PST-PCQA, a no-reference point cloud
quality metric based on a low-complexity, learning-based framework. It
evaluates point cloud quality by analyzing individual patches, integrating
local and global features to predict the Mean Opinion Score. In summary, the
process involves extracting features from patches, combining them, and using
correlation weights to predict the overall quality. This approach allows us to
assess point cloud quality without relying on a reference point cloud, making
it particularly useful in scenarios where reference data is unavailable.
Experimental tests on three state-of-the-art datasets show good prediction
capabilities of PST-PCQA, through the analysis of different feature pooling
strategies and its ability to generalize across different datasets. The
ablation study confirms the benefits of evaluating quality on a patch-by-patch
basis. Additionally, PST-PCQA's light-weight structure, with a small number of
parameters to learn, makes it well-suited for real-time applications and
devices with limited computational capacity. For reproducibility purposes, we
made code, model, and pretrained weights available at
https://github.com/michaelneri/PST-PCQA.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:52:04 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Neri",
"Michael",
""
],
[
"Battisti",
"Federica",
""
]
] | TITLE: Low-Complexity Patch-based No-Reference Point Cloud Quality Metric
exploiting Weighted Structure and Texture Features
ABSTRACT: During the compression, transmission, and rendering of point clouds, various
artifacts are introduced, affecting the quality perceived by the end user.
However, evaluating the impact of these distortions on the overall quality is a
challenging task. This study introduces PST-PCQA, a no-reference point cloud
quality metric based on a low-complexity, learning-based framework. It
evaluates point cloud quality by analyzing individual patches, integrating
local and global features to predict the Mean Opinion Score. In summary, the
process involves extracting features from patches, combining them, and using
correlation weights to predict the overall quality. This approach allows us to
assess point cloud quality without relying on a reference point cloud, making
it particularly useful in scenarios where reference data is unavailable.
Experimental tests on three state-of-the-art datasets show good prediction
capabilities of PST-PCQA, through the analysis of different feature pooling
strategies and its ability to generalize across different datasets. The
ablation study confirms the benefits of evaluating quality on a patch-by-patch
basis. Additionally, PST-PCQA's light-weight structure, with a small number of
parameters to learn, makes it well-suited for real-time applications and
devices with limited computational capacity. For reproducibility purposes, we
made code, model, and pretrained weights available at
https://github.com/michaelneri/PST-PCQA.
|
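PST-PCQA, summarized above, predicts quality per patch and combines local scores with learned weights into a global Mean Opinion Score; the actual model lives in the linked repository. The NumPy snippet below is only a schematic illustration of the final weighted pooling stage, with patch scores and weights assumed to come from upstream feature extractors.

```python
import numpy as np

def pooled_mos(patch_scores: np.ndarray, patch_weights: np.ndarray) -> float:
    """Combine per-patch quality scores into one global score.

    patch_scores:  (N,) predicted quality of each patch
    patch_weights: (N,) unnormalized importance of each patch
    """
    w = np.exp(patch_weights - patch_weights.max())   # softmax normalization
    w /= w.sum()
    return float(np.dot(w, patch_scores))

# Toy usage: 5 patches, with one clearly degraded patch given a higher weight.
scores = np.array([4.5, 4.2, 2.0, 4.4, 4.3])
weights = np.array([0.1, 0.1, 2.0, 0.1, 0.1])
print(f"pooled quality: {pooled_mos(scores, weights):.2f}")
```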
2503.15002 | Hao Zhang | Hao Zhang, Wei Chen, Xingyu Zhao, Jianpeng Qi, Guiyuan Jiang, Yanwei
Yu | Scalable Trajectory-User Linking with Dual-Stream Representation
Networks | The paper has been accepted by AAAI 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory-user linking (TUL) aims to match anonymous trajectories to the
most likely users who generated them, offering benefits for a wide range of
real-world spatio-temporal applications. However, existing TUL methods are
limited by high model complexity and poor learning of the effective
representations of trajectories, rendering them ineffective in handling
large-scale user trajectory data. In this work, we propose a novel
$\underline{Scal}$abl$\underline{e}$ Trajectory-User Linking with dual-stream
representation networks for the large-scale $\underline{TUL}$ problem, named
ScaleTUL. Specifically, ScaleTUL generates two views using temporal and spatial
augmentations to exploit a supervised contrastive learning framework to
effectively capture the irregularities of trajectories. In each view, a
dual-stream trajectory encoder, consisting of a long-term encoder and a
short-term encoder, is designed to learn unified trajectory representations
that fuse different temporal-spatial dependencies. Then, a TUL layer is used to
associate the trajectories with the corresponding users in the representation
space using a two-stage training model. Experimental results on check-in
mobility datasets from three real-world cities and the nationwide U.S.
demonstrate the superiority of ScaleTUL over state-of-the-art baselines for
large-scale TUL tasks.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:52:23 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Hao",
""
],
[
"Chen",
"Wei",
""
],
[
"Zhao",
"Xingyu",
""
],
[
"Qi",
"Jianpeng",
""
],
[
"Jiang",
"Guiyuan",
""
],
[
"Yu",
"Yanwei",
""
]
] | TITLE: Scalable Trajectory-User Linking with Dual-Stream Representation
Networks
ABSTRACT: Trajectory-user linking (TUL) aims to match anonymous trajectories to the
most likely users who generated them, offering benefits for a wide range of
real-world spatio-temporal applications. However, existing TUL methods are
limited by high model complexity and poor learning of the effective
representations of trajectories, rendering them ineffective in handling
large-scale user trajectory data. In this work, we propose a novel
$\underline{Scal}$abl$\underline{e}$ Trajectory-User Linking with dual-stream
representation networks for the large-scale $\underline{TUL}$ problem, named
ScaleTUL. Specifically, ScaleTUL generates two views using temporal and spatial
augmentations to exploit a supervised contrastive learning framework to
effectively capture the irregularities of trajectories. In each view, a
dual-stream trajectory encoder, consisting of a long-term encoder and a
short-term encoder, is designed to learn unified trajectory representations
that fuse different temporal-spatial dependencies. Then, a TUL layer is used to
associate the trajectories with the corresponding users in the representation
space using a two-stage training model. Experimental results on check-in
mobility datasets from three real-world cities and the nationwide U.S.
demonstrate the superiority of ScaleTUL over state-of-the-art baselines for
large-scale TUL tasks.
|
2503.15004 | Tristan Wirth | Annalena Bl\"ansdorf, Tristan Wirth, Arne Rak, Thomas P\"ollabauer,
Volker Knauthe, Arjan Kuijper | Semantic Segmentation of Transparent and Opaque Drinking Glasses with
the Help of Zero-shot Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Segmenting transparent structures in images is challenging since they are
difficult to distinguish from the background. Common examples are drinking
glasses, which are a ubiquitous part of our lives and appear in many different
shapes and sizes. In this work we propose TransCaGNet, a modified version of
the zero-shot model CaGNet. We exchange the segmentation backbone with the
architecture of Trans4Trans to be capable of segmenting transparent objects.
Since some glasses are rarely captured, we use zero-shot learning to be able to
create semantic segmentations of glass categories not given during training. We
propose a novel synthetic dataset covering a diverse set of different
environmental conditions. Additionally we capture a real-world evaluation
dataset since most applications take place in the real world. Comparing our
model with ZegClip, we show that TransCaGNet produces better mean IoU and
accuracy values, while ZegClip outperforms it mostly for unseen classes.
To improve the segmentation results, we combine the semantic segmentation of
the models with the segmentation results of SAM 2. Our evaluation emphasizes
that distinguishing between different classes is challenging for the models due
to similarity, points of view, or coverings. Taking this behavior into account,
we assign glasses multiple possible categories. The modification leads to an
improvement of up to 13.68% for the mean IoU and up to 17.88% for the mean
accuracy values on the synthetic dataset. Using our difficult synthetic dataset
for training, the models produce even better results on the real-world dataset.
The mean IoU is improved by up to 5.55% and the mean accuracy by up to 5.72% on the
real-world dataset.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:54:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Blänsdorf",
"Annalena",
""
],
[
"Wirth",
"Tristan",
""
],
[
"Rak",
"Arne",
""
],
[
"Pöllabauer",
"Thomas",
""
],
[
"Knauthe",
"Volker",
""
],
[
"Kuijper",
"Arjan",
""
]
] | TITLE: Semantic Segmentation of Transparent and Opaque Drinking Glasses with
the Help of Zero-shot Learning
ABSTRACT: Segmenting transparent structures in images is challenging since they are
difficult to distinguish from the background. Common examples are drinking
glasses, which are a ubiquitous part of our lives and appear in many different
shapes and sizes. In this work we propose TransCaGNet, a modified version of
the zero-shot model CaGNet. We exchange the segmentation backbone with the
architecture of Trans4Trans to be capable of segmenting transparent objects.
Since some glasses are rarely captured, we use zero-shot learning to be able to
create semantic segmentations of glass categories not given during training. We
propose a novel synthetic dataset covering a diverse set of different
environmental conditions. Additionally we capture a real-world evaluation
dataset since most applications take place in the real world. Comparing our
model with ZegClip, we show that TransCaGNet produces better mean IoU and
accuracy values, while ZegClip outperforms it mostly for unseen classes.
To improve the segmentation results, we combine the semantic segmentation of
the models with the segmentation results of SAM 2. Our evaluation emphasizes
that distinguishing between different classes is challenging for the models due
to similarity, points of view, or coverings. Taking this behavior into account,
we assign glasses multiple possible categories. The modification leads to an
improvement of up to 13.68% for the mean IoU and up to 17.88% for the mean
accuracy values on the synthetic dataset. Using our difficult synthetic dataset
for training, the models produce even better results on the real-world dataset.
The mean IoU is improved by up to 5.55% and the mean accuracy by up to 5.72% on the
real-world dataset.
|
2503.15008 | Saddam Hussain Khan | Aamir Mehmood, Yue Hu, Saddam Hussain Khan (Artificial Intelligence
Lab, Department of Computer Systems Engineering, University of Engineering
and Applied Sciences (UEAS), Swat, Pakistan) | A Novel Channel Boosted Residual CNN-Transformer with Regional-Boundary
Learning for Breast Cancer Detection | 12 pages, 10 Figures, 2 Tables. arXiv admin note: substantial text
overlap with arXiv:2405.12986 | null | null | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in detecting tumors using deep learning on breast
ultrasound images (BUSI) have demonstrated significant success. Deep CNNs and
vision-transformers (ViTs) have demonstrated individually promising initial
performance. However, challenges related to model complexity and contrast,
texture, and tumor morphology variations introduce uncertainties that hinder
the effectiveness of current methods. This study introduces a novel hybrid
framework, CB-Res-RBCMT, combining customized residual CNNs and new ViT
components for detailed BUSI cancer analysis. The proposed RBCMT uses stem
convolution blocks with CNN Meet Transformer (CMT) blocks, followed by new
Regional and boundary (RB) feature extraction operations for capturing contrast
and morphological variations. Moreover, the CMT block incorporates global
contextual interactions through multi-head attention, enhancing computational
efficiency with a lightweight design. Additionally, the customized inverse
residual and stem CNNs within the CMT effectively extract local texture
information and handle vanishing gradients. Finally, the new channel-boosted
(CB) strategy enriches the feature diversity of the limited dataset by
combining the original RBCMT channels with transfer learning-based residual
CNN-generated maps. These diverse channels are processed through a spatial
attention block for optimal pixel selection, reducing redundancy and improving
the discrimination of minor contrast and texture variations. The proposed
CB-Res-RBCMT achieves an F1-score of 95.57%, accuracy of 95.63%, sensitivity of
96.42%, and precision of 94.79% on the standard harmonized stringent BUSI
dataset, outperforming existing ViT and CNN methods. These results demonstrate
the versatility of our integrated CNN-Transformer framework in capturing
diverse features and delivering superior performance in BUSI cancer diagnosis.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:59:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Mehmood",
"Aamir",
"",
"Artificial Intelligence\n Lab, Department of Computer Systems Engineering, University of Engineering\n and Applied Sciences"
],
[
"Hu",
"Yue",
"",
"Artificial Intelligence\n Lab, Department of Computer Systems Engineering, University of Engineering\n and Applied Sciences"
],
[
"Khan",
"Saddam Hussain",
"",
"Artificial Intelligence\n Lab, Department of Computer Systems Engineering, University of Engineering\n and Applied Sciences"
]
] | TITLE: A Novel Channel Boosted Residual CNN-Transformer with Regional-Boundary
Learning for Breast Cancer Detection
ABSTRACT: Recent advancements in detecting tumors using deep learning on breast
ultrasound images (BUSI) have demonstrated significant success. Deep CNNs and
vision-transformers (ViTs) have demonstrated individually promising initial
performance. However, challenges related to model complexity and contrast,
texture, and tumor morphology variations introduce uncertainties that hinder
the effectiveness of current methods. This study introduces a novel hybrid
framework, CB-Res-RBCMT, combining customized residual CNNs and new ViT
components for detailed BUSI cancer analysis. The proposed RBCMT uses stem
convolution blocks with CNN Meet Transformer (CMT) blocks, followed by new
Regional and boundary (RB) feature extraction operations for capturing contrast
and morphological variations. Moreover, the CMT block incorporates global
contextual interactions through multi-head attention, enhancing computational
efficiency with a lightweight design. Additionally, the customized inverse
residual and stem CNNs within the CMT effectively extract local texture
information and handle vanishing gradients. Finally, the new channel-boosted
(CB) strategy enriches the feature diversity of the limited dataset by
combining the original RBCMT channels with transfer learning-based residual
CNN-generated maps. These diverse channels are processed through a spatial
attention block for optimal pixel selection, reducing redundancy and improving
the discrimination of minor contrast and texture variations. The proposed
CB-Res-RBCMT achieves an F1-score of 95.57%, accuracy of 95.63%, sensitivity of
96.42%, and precision of 94.79% on the standard harmonized stringent BUSI
dataset, outperforming existing ViT and CNN methods. These results demonstrate
the versatility of our integrated CNN-Transformer framework in capturing
diverse features and delivering superior performance in BUSI cancer diagnosis.
|
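The channel-boosted strategy in the record above concatenates the original feature channels with transfer-learning-based residual CNN maps and passes them through a spatial attention block for pixel selection. The authors' exact block is not specified in the abstract; the PyTorch sketch below only illustrates the generic pattern with channel concatenation followed by a standard CBAM-style spatial attention layer.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Standard spatial attention: pool over channels, predict a per-pixel gate."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        max_pool = x.amax(dim=1, keepdim=True)        # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * gate                               # re-weight every spatial location

# Toy usage: "boost" original channels with auxiliary feature maps, then attend.
original = torch.randn(2, 32, 56, 56)                 # features from the main branch
auxiliary = torch.randn(2, 32, 56, 56)                # e.g. transfer-learned residual maps
boosted = torch.cat([original, auxiliary], dim=1)     # channel boosting by concatenation
attended = SpatialAttention()(boosted)
print(attended.shape)                                 # torch.Size([2, 64, 56, 56])
```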
2503.15016 | Kevin Polisano | Fethi Harkat (EDP, DT), Tiphaine Deuberet (DT), Guillaume Gey (DT),
Val\'erie Perrier (EDP), K\'evin Polisano (SVH) | Manifold Learning for Hyperspectral Images | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional feature extraction and projection techniques, such as Principal
Component Analysis, struggle to adequately represent X-Ray Transmission (XRT)
Multi-Energy (ME) images, limiting the performance of neural networks in
decision-making processes. To address this issue, we propose a method that
approximates the dataset topology by constructing adjacency graphs using the
Uniform Manifold Approximation and Projection. This approach captures nonlinear
correlations within the data, significantly improving the performance of
machine learning algorithms, particularly in processing Hyperspectral Images
(HSI) from X-ray transmission spectroscopy. This technique not only preserves
the global structure of the data but also enhances feature separability,
leading to more accurate and robust classification results.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:12:56 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Harkat",
"Fethi",
"",
"EDP, DT"
],
[
"Deuberet",
"Tiphaine",
"",
"DT"
],
[
"Gey",
"Guillaume",
"",
"DT"
],
[
"Perrier",
"Valérie",
"",
"EDP"
],
[
"Polisano",
"Kévin",
"",
"SVH"
]
] | TITLE: Manifold Learning for Hyperspectral Images
ABSTRACT: Traditional feature extraction and projection techniques, such as Principal
Component Analysis, struggle to adequately represent X-Ray Transmission (XRT)
Multi-Energy (ME) images, limiting the performance of neural networks in
decision-making processes. To address this issue, we propose a method that
approximates the dataset topology by constructing adjacency graphs using the
Uniform Manifold Approximation and Projection. This approach captures nonlinear
correlations within the data, significantly improving the performance of
machine learning algorithms, particularly in processing Hyperspectral Images
(HSI) from X-ray transmission spectroscopy. This technique not only preserves
the global structure of the data but also enhances feature separability,
leading to more accurate and robust classification results.
|
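The record above replaces linear projections such as PCA with UMAP-built neighborhood graphs before classification. A minimal sketch of that kind of pipeline with the umap-learn and scikit-learn packages is shown below; the synthetic data, embedding dimension, and classifier are placeholders rather than the paper's setup.

```python
import numpy as np
import umap                                   # pip install umap-learn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder hyperspectral-like data: 500 pixels x 64 spectral channels, 3 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
X = rng.normal(size=(500, 64)) + labels[:, None]      # class-dependent spectral offset

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Nonlinear embedding that approximates the data manifold via a k-NN graph.
reducer = umap.UMAP(n_neighbors=15, n_components=8, random_state=0)
Z_train = reducer.fit_transform(X_train)
Z_test = reducer.transform(X_test)

clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
print(f"test accuracy on UMAP features: {clf.score(Z_test, y_test):.3f}")
```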
2503.15017 | Yunwei Lan | Yunwei Lan, Zhigao Cui, Chang Liu, Jialun Peng, Nian Wang, Xin Luo,
Dong Liu | Exploiting Diffusion Prior for Real-World Image Dehazing with Unpaired
Training | Accepted by AAAI2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Unpaired training has been verified as one of the most effective paradigms
for real scene dehazing by learning from unpaired real-world hazy and clear
images. Although numerous studies have been proposed, current methods
demonstrate limited generalization for various real scenes due to limited
feature representation and insufficient use of real-world prior. Inspired by
the strong generative capabilities of diffusion models in producing both hazy
and clear images, we exploit diffusion prior for real-world image dehazing, and
propose an unpaired framework named Diff-Dehazer. Specifically, we leverage
diffusion prior as bijective mapping learners within the CycleGAN, a classic
unpaired learning framework. Considering that physical priors contain pivotal
statistics information of real-world data, we further excavate real-world
knowledge by integrating physical priors into our framework. Furthermore, we
introduce a new perspective for adequately leveraging the representation
ability of diffusion models by removing degradation in image and text
modalities, so as to improve the dehazing effect. Extensive experiments on
multiple real-world datasets demonstrate the superior performance of our
method. Our code is available at https://github.com/ywxjm/Diff-Dehazer.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:13:06 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lan",
"Yunwei",
""
],
[
"Cui",
"Zhigao",
""
],
[
"Liu",
"Chang",
""
],
[
"Peng",
"Jialun",
""
],
[
"Wang",
"Nian",
""
],
[
"Luo",
"Xin",
""
],
[
"Liu",
"Dong",
""
]
] | TITLE: Exploiting Diffusion Prior for Real-World Image Dehazing with Unpaired
Training
ABSTRACT: Unpaired training has been verified as one of the most effective paradigms
for real scene dehazing by learning from unpaired real-world hazy and clear
images. Although numerous studies have been proposed, current methods
demonstrate limited generalization for various real scenes due to limited
feature representation and insufficient use of real-world prior. Inspired by
the strong generative capabilities of diffusion models in producing both hazy
and clear images, we exploit diffusion prior for real-world image dehazing, and
propose an unpaired framework named Diff-Dehazer. Specifically, we leverage
diffusion prior as bijective mapping learners within the CycleGAN, a classic
unpaired learning framework. Considering that physical priors contain pivotal
statistics information of real-world data, we further excavate real-world
knowledge by integrating physical priors into our framework. Furthermore, we
introduce a new perspective for adequately leveraging the representation
ability of diffusion models by removing degradation in image and text
modalities, so as to improve the dehazing effect. Extensive experiments on
multiple real-world datasets demonstrate the superior performance of our
method. Our code is available at https://github.com/ywxjm/Diff-Dehazer.
|
2503.15021 | Stefano Zacchiroli | Lu{\i}s Soeiro (IP Paris, LTCI, ACES, INFRES), Thomas Robert (IP
Paris, LTCI, ACES, INFRES), Stefano Zacchiroli (IP Paris, LTCI, ACES, INFRES) | Wild SBOMs: a Large-scale Dataset of Software Bills of Materials from
Public Code | null | Mining Software Repositories 2025 (MSR 2025), Apr 2025, Ottawa
(Canada), Canada | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developers gain productivity by reusing readily available Free and Open
Source Software (FOSS) components. Such practices also bring some difficulties,
such as managing licensing, components and related security. One approach to
handle those difficulties is to use Software Bill of Materials (SBOMs). While
there have been studies on the readiness of practitioners to embrace SBOMs and
on the SBOM tools ecosystem, a large-scale study on SBOM practices based on
SBOM files produced in the wild is still lacking. A starting point for such a
study is a large dataset of SBOM files found in the wild. We introduce such a
dataset, consisting of over 78 thousand unique SBOM files, deduplicated from
those found in over 94 million repositories. We include metadata that contains
the standard and format used, quality score generated by the tool sbomqs,
number of revisions, filenames and provenance information. Finally, we give
suggestions and examples of research that could bring new insights on assessing
and improving real-world SBOM practices.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:20:28 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Soeiro",
"Luıs",
"",
"IP Paris, LTCI, ACES, INFRES"
],
[
"Robert",
"Thomas",
"",
"IP\n Paris, LTCI, ACES, INFRES"
],
[
"Zacchiroli",
"Stefano",
"",
"IP Paris, LTCI, ACES, INFRES"
]
] | TITLE: Wild SBOMs: a Large-scale Dataset of Software Bills of Materials from
Public Code
ABSTRACT: Developers gain productivity by reusing readily available Free and Open
Source Software (FOSS) components. Such practices also bring some difficulties,
such as managing licensing, components and related security. One approach to
handle those difficulties is to use Software Bill of Materials (SBOMs). While
there have been studies on the readiness of practitioners to embrace SBOMs and
on the SBOM tools ecosystem, a large-scale study on SBOM practices based on
SBOM files produced in the wild is still lacking. A starting point for such a
study is a large dataset of SBOM files found in the wild. We introduce such a
dataset, consisting of over 78 thousand unique SBOM files, deduplicated from
those found in over 94 million repositories. We include metadata that contains
the standard and format used, quality score generated by the tool sbomqs,
number of revisions, filenames and provenance information. Finally, we give
suggestions and examples of research that could bring new insights on assessing
and improving real-world SBOM practices.
|
2503.15022 | Saad Lahlali | Saad Lahlali, Sandra Kara, Hejer Ammar, Florian Chabot, Nicolas
Granger, Herv\'e Le Borgne, Quoc-Cuong Pham | xMOD: Cross-Modal Distillation for 2D/3D Multi-Object Discovery from 2D
motion | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Object discovery, which refers to the task of localizing objects without
human annotations, has gained significant attention in 2D image analysis.
However, despite this growing interest, it remains under-explored in 3D data,
where approaches rely exclusively on 3D motion, despite its several challenges.
In this paper, we present a novel framework that leverages advances in 2D
object discovery that are based on 2D motion, exploiting the greater flexibility
and generalizability of such motion cues and bridging the gap between 2D and 3D
modalities. Our primary contributions are twofold: (i) we introduce
DIOD-3D, the first baseline for multi-object discovery in 3D data using 2D
motion, incorporating scene completion as an auxiliary task to enable dense
object localization from sparse input data; (ii) we develop xMOD, a cross-modal
training framework that integrates 2D and 3D data while always using 2D motion
cues. xMOD employs a teacher-student training paradigm across the two
modalities to mitigate confirmation bias by leveraging the domain gap. During
inference, the model supports both RGB-only and point cloud-only inputs.
Additionally, we propose a late-fusion technique tailored to our pipeline that
further enhances performance when both modalities are available at inference.
We evaluate our approach extensively on synthetic (TRIP-PD) and challenging
real-world datasets (KITTI and Waymo). Notably, our approach yields a
substantial performance improvement compared with the 2D object discovery
state-of-the-art on all datasets with gains ranging from +8.7 to +15.1 in F1@50
score. The code is available at https://github.com/CEA-LIST/xMOD
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:20:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lahlali",
"Saad",
""
],
[
"Kara",
"Sandra",
""
],
[
"Ammar",
"Hejer",
""
],
[
"Chabot",
"Florian",
""
],
[
"Granger",
"Nicolas",
""
],
[
"Borgne",
"Hervé Le",
""
],
[
"Pham",
"Quoc-Cuong",
""
]
] | TITLE: xMOD: Cross-Modal Distillation for 2D/3D Multi-Object Discovery from 2D
motion
ABSTRACT: Object discovery, which refers to the task of localizing objects without
human annotations, has gained significant attention in 2D image analysis.
However, despite this growing interest, it remains under-explored in 3D data,
where approaches rely exclusively on 3D motion, despite its several challenges.
In this paper, we present a novel framework that leverages advances in 2D
object discovery that are based on 2D motion, exploiting the greater flexibility
and generalizability of such motion cues and bridging the gap between 2D and 3D
modalities. Our primary contributions are twofold: (i) we introduce
DIOD-3D, the first baseline for multi-object discovery in 3D data using 2D
motion, incorporating scene completion as an auxiliary task to enable dense
object localization from sparse input data; (ii) we develop xMOD, a cross-modal
training framework that integrates 2D and 3D data while always using 2D motion
cues. xMOD employs a teacher-student training paradigm across the two
modalities to mitigate confirmation bias by leveraging the domain gap. During
inference, the model supports both RGB-only and point cloud-only inputs.
Additionally, we propose a late-fusion technique tailored to our pipeline that
further enhances performance when both modalities are available at inference.
We evaluate our approach extensively on synthetic (TRIP-PD) and challenging
real-world datasets (KITTI and Waymo). Notably, our approach yields a
substantial performance improvement compared with the 2D object discovery
state-of-the-art on all datasets with gains ranging from +8.7 to +15.1 in F1@50
score. The code is available at https://github.com/CEA-LIST/xMOD
|
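xMOD, described above, uses a cross-modal teacher-student paradigm to mitigate confirmation bias. The abstract does not state how the teacher is maintained; one common mechanism in such distillation setups is an exponential moving average (EMA) of the student's weights, sketched below in PyTorch purely as an illustration that may not match the authors' design.

```python
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
teacher = copy.deepcopy(student)               # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)                    # teacher is never updated by gradients

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.99) -> None:
    """Move each teacher parameter slightly toward the student parameter."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

# Toy distillation step: the student fits the teacher's pseudo-targets.
x = torch.randn(4, 16)
pseudo = teacher(x)                            # e.g. targets produced by the other branch
loss = ((student(x) - pseudo) ** 2).mean()
loss.backward()
with torch.no_grad():
    for p in student.parameters():             # plain SGD step for brevity
        p -= 1e-2 * p.grad
ema_update(teacher, student)
print(f"distillation loss: {loss.item():.4f}")
```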
2503.15023 | Mehdi Ayoub Rabiai | Chaouki Boufenar, Mehdi Ayoub Rabiai, Boualem Nadjib Zahaf and Khelil
Rafik Ouaras | Bridging the Gap: Fusing CNNs and Transformers to Decode the Elegance of
Handwritten Arabic Script | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Handwritten Arabic script recognition is a challenging task due to the
script's dynamic letter forms and contextual variations. This paper proposes a
hybrid approach combining convolutional neural networks (CNNs) and
Transformer-based architectures to address these complexities. We evaluated
custom and fine-tuned models, including EfficientNet-B7 and Vision Transformer
(ViT-B16), and introduced an ensemble model that leverages confidence-based
fusion to integrate their strengths. Our ensemble achieves remarkable
performance on the IFN/ENIT dataset, with 96.38% accuracy for letter
classification and 97.22% for positional classification. The results highlight
the complementary nature of CNNs and Transformers, demonstrating their combined
potential for robust Arabic handwriting recognition. This work advances OCR
systems, offering a scalable solution for real-world applications.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:20:42 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Boufenar",
"Chaouki",
""
],
[
"Rabiai",
"Mehdi Ayoub",
""
],
[
"Zahaf",
"Boualem Nadjib",
""
],
[
"Ouaras",
"Khelil Rafik",
""
]
] | TITLE: Bridging the Gap: Fusing CNNs and Transformers to Decode the Elegance of
Handwritten Arabic Script
ABSTRACT: Handwritten Arabic script recognition is a challenging task due to the
script's dynamic letter forms and contextual variations. This paper proposes a
hybrid approach combining convolutional neural networks (CNNs) and
Transformer-based architectures to address these complexities. We evaluated
custom and fine-tuned models, including EfficientNet-B7 and Vision Transformer
(ViT-B16), and introduced an ensemble model that leverages confidence-based
fusion to integrate their strengths. Our ensemble achieves remarkable
performance on the IFN/ENIT dataset, with 96.38% accuracy for letter
classification and 97.22% for positional classification. The results highlight
the complementary nature of CNNs and Transformers, demonstrating their combined
potential for robust Arabic handwriting recognition. This work advances OCR
systems, offering a scalable solution for real-world applications.
|
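The ensemble in the record above integrates EfficientNet-B7 and ViT-B16 outputs through confidence-based fusion. Since the abstract does not specify the rule, the snippet below sketches a simple, commonly used variant: for each sample, keep the prediction of whichever model's softmax output is more confident. The probability arrays are placeholders.

```python
import numpy as np

def confidence_fusion(probs_a: np.ndarray, probs_b: np.ndarray) -> np.ndarray:
    """Per-sample fusion of two classifiers' softmax outputs.

    probs_a, probs_b: (N, C) class-probability matrices from the two models.
    For each sample, keep the prediction of whichever model is more confident.
    """
    conf_a = probs_a.max(axis=1)
    conf_b = probs_b.max(axis=1)
    use_a = conf_a >= conf_b
    fused = np.where(use_a[:, None], probs_a, probs_b)
    return fused.argmax(axis=1)

# Toy usage: 3 samples, 4 classes (e.g. letter forms).
cnn_probs = np.array([[0.70, 0.10, 0.10, 0.10],
                      [0.30, 0.40, 0.20, 0.10],
                      [0.25, 0.25, 0.25, 0.25]])
vit_probs = np.array([[0.40, 0.30, 0.20, 0.10],
                      [0.10, 0.80, 0.05, 0.05],
                      [0.10, 0.10, 0.70, 0.10]])
print(confidence_fusion(cnn_probs, vit_probs))   # -> [0 1 2]
```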
2503.15035 | Yeonjoo Hong | Sungjae Lee, Yeonjoo Hong, Kwang In Kim | GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided
Feedback | null | null | null | null | cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite significant advancements in robotic manipulation, achieving
consistent and stable grasping remains a fundamental challenge, often limiting
the successful execution of complex tasks. Our analysis reveals that even
state-of-the-art policy models frequently exhibit unstable grasping behaviors,
leading to failure cases that create bottlenecks in real-world robotic
applications. To address these challenges, we introduce GraspCorrect, a
plug-and-play module designed to enhance grasp performance through
vision-language model-guided feedback. GraspCorrect employs an iterative visual
question-answering framework with two key components: grasp-guided prompting,
which incorporates task-specific constraints, and object-aware sampling, which
ensures the selection of physically feasible grasp candidates. By iteratively
generating intermediate visual goals and translating them into joint-level
actions, GraspCorrect significantly improves grasp stability and consistently
enhances task success rates across existing policy models in the RLBench and
CALVIN datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:25:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lee",
"Sungjae",
""
],
[
"Hong",
"Yeonjoo",
""
],
[
"Kim",
"Kwang In",
""
]
] | TITLE: GraspCorrect: Robotic Grasp Correction via Vision-Language Model-Guided
Feedback
ABSTRACT: Despite significant advancements in robotic manipulation, achieving
consistent and stable grasping remains a fundamental challenge, often limiting
the successful execution of complex tasks. Our analysis reveals that even
state-of-the-art policy models frequently exhibit unstable grasping behaviors,
leading to failure cases that create bottlenecks in real-world robotic
applications. To address these challenges, we introduce GraspCorrect, a
plug-and-play module designed to enhance grasp performance through
vision-language model-guided feedback. GraspCorrect employs an iterative visual
question-answering framework with two key components: grasp-guided prompting,
which incorporates task-specific constraints, and object-aware sampling, which
ensures the selection of physically feasible grasp candidates. By iteratively
generating intermediate visual goals and translating them into joint-level
actions, GraspCorrect significantly improves grasp stability and consistently
enhances task success rates across existing policy models in the RLBench and
CALVIN datasets.
|
2503.15036 | Satyajeet Sahoo Mr | Satyajeet Sahoo, Jhareswar Maiti and Virendra Kumar Tewari | Multivariate Gaussian Topic Modelling: A novel approach to discover
topics with greater semantic coherence | 12 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An important aspect of text mining involves information retrieval in the form of
discovery of semantic themes (topics) from documents using topic modelling.
While generative topic models like Latent Dirichlet Allocation (LDA) elegantly
model topics as probability distributions and are useful in identifying latent
topics from large document corpora with minimal supervision, they suffer from
difficulty in topic interpretability and reduced performance in shorter texts.
Here we propose a novel Multivariate Gaussian Topic modelling (MGD) approach.
In this approach, topics are represented as Multivariate Gaussian Distributions
and documents as Gaussian Mixture Models. Using the EM algorithm, the various
constituent Multivariate Gaussian Distributions and their corresponding
parameters are identified. Analysis of the parameters helps identify the
keywords having the highest variance and mean contributions to the topic, and
from these keywords, topic annotations are carried out. This approach is first
applied on a synthetic dataset to demonstrate the interpretability benefits
vis-\`a-vis LDA. A real-world application of this topic model is demonstrated
in the analysis of risks and hazards at a petrochemical plant by applying the model
on safety incident reports to identify the major latent hazards plaguing the
plant. This model achieves a higher mean topic coherence of 0.436 vis-\`a-vis
0.294 for LDA.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:25:54 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Sahoo",
"Satyajeet",
""
],
[
"Maiti",
"Jhareswar",
""
],
[
"Tewari",
"Virendra Kumar",
""
]
] | TITLE: Multivariate Gaussian Topic Modelling: A novel approach to discover
topics with greater semantic coherence
ABSTRACT: An important aspect of text mining involves information retrieval in the form of
discovery of semantic themes (topics) from documents using topic modelling.
While generative topic models like Latent Dirichlet Allocation (LDA) elegantly
model topics as probability distributions and are useful in identifying latent
topics from large document corpora with minimal supervision, they suffer from
difficulty in topic interpretability and reduced performance in shorter texts.
Here we propose a novel Multivariate Gaussian Topic modelling (MGD) approach.
In this approach, topics are represented as Multivariate Gaussian Distributions
and documents as Gaussian Mixture Models. Using the EM algorithm, the various
constituent Multivariate Gaussian Distributions and their corresponding
parameters are identified. Analysis of the parameters helps identify the
keywords having the highest variance and mean contributions to the topic, and
from these keywords, topic annotations are carried out. This approach is first
applied on a synthetic dataset to demonstrate the interpretability benefits
vis-\`a-vis LDA. A real-world application of this topic model is demonstrated
in the analysis of risks and hazards at a petrochemical plant by applying the model
on safety incident reports to identify the major latent hazards plaguing the
plant. This model achieves a higher mean topic coherence of 0.436 vis-\`a-vis
0.294 for LDA.
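As an illustration of the fitting step described above, the sketch below uses scikit-learn's GaussianMixture (an EM implementation) to fit multivariate Gaussian "topics" over a toy word-embedding matrix and to read off the words most associated with each component. The embedding dimensionality, number of topics, and random data are placeholders; this is not the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for word embeddings of a corpus vocabulary (e.g. from word2vec).
rng = np.random.default_rng(0)
vocab = [f"word_{i}" for i in range(300)]
embeddings = rng.normal(size=(300, 16))          # (n_words, embedding_dim)

# Fit K multivariate Gaussian "topics" with EM via scikit-learn's GaussianMixture.
K = 5
gmm = GaussianMixture(n_components=K, covariance_type="full", random_state=0)
gmm.fit(embeddings)

# Posterior responsibilities indicate which words belong most strongly to a topic;
# the component means and covariances play the role of the topic parameters.
resp = gmm.predict_proba(embeddings)             # (n_words, K)
for k in range(K):
    top = np.argsort(resp[:, k])[::-1][:5]
    print(f"topic {k}: {[vocab[i] for i in top]}")
```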
|
2503.15044 | Haoyi Li | Haoyi Li, Angela Yifei Yuan, Soyeon Caren Han, Christopher Leckie | SPADE: Systematic Prompt Framework for Automated Dialogue Expansion in
Machine-Generated Text Detection | 9 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing capability of large language models (LLMs) to generate
synthetic content has heightened concerns about their misuse, driving the
development of Machine-Generated Text (MGT) detection models. However, these
detectors face significant challenges due to the lack of systematically
generated, high-quality datasets for training. To address this issue, we
propose five novel data augmentation frameworks for synthetic user dialogue
generation through a structured prompting approach, reducing the costs
associated with traditional data collection methods. Our proposed method yields
14 new dialogue datasets, which we benchmark against seven MGT detection
models. The results demonstrate improved generalization performance when
utilizing a mixed dataset produced by our proposed augmentation framework.
Furthermore, considering that real-world agents lack knowledge of future
opponent utterances, we simulate online dialogue detection and examine the
relationship between chat history length and detection accuracy. We also
benchmark online detection performance with limited chat history on our
frameworks. Our open-source datasets can be downloaded from
https://github.com/AngieYYF/SPADE-customer-service-dialogue.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:32:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Haoyi",
""
],
[
"Yuan",
"Angela Yifei",
""
],
[
"Han",
"Soyeon Caren",
""
],
[
"Leckie",
"Christopher",
""
]
] | TITLE: SPADE: Systematic Prompt Framework for Automated Dialogue Expansion in
Machine-Generated Text Detection
ABSTRACT: The increasing capability of large language models (LLMs) to generate
synthetic content has heightened concerns about their misuse, driving the
development of Machine-Generated Text (MGT) detection models. However, these
detectors face significant challenges due to the lack of systematically
generated, high-quality datasets for training. To address this issue, we
propose five novel data augmentation frameworks for synthetic user dialogue
generation through a structured prompting approach, reducing the costs
associated with traditional data collection methods. Our proposed method yields
14 new dialogue datasets, which we benchmark against seven MGT detection
models. The results demonstrate improved generalization performance when
utilizing a mixed dataset produced by our proposed augmentation framework.
Furthermore, considering that real-world agents lack knowledge of future
opponent utterances, we simulate online dialogue detection and examine the
relationship between chat history length and detection accuracy. We also
benchmark online detection performance with limited chat history on our
frameworks. Our open-source datasets can be downloaded from
https://github.com/AngieYYF/SPADE-customer-service-dialogue.
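The exact prompt templates are not given in the abstract; the snippet below is a hypothetical example of a structured prompt builder for synthetic customer-service dialogue, with all field names and wording invented for illustration.

```python
# Illustrative structured prompt builder; the fields and wording are assumptions,
# not SPADE's actual templates.
def build_dialogue_prompt(persona: str, goal: str, num_turns: int) -> str:
    return (
        "You are simulating a customer-service chat.\n"
        f"Customer persona: {persona}\n"
        f"Customer goal: {goal}\n"
        f"Generate exactly {num_turns} alternating customer/agent turns.\n"
        "Format each turn as 'Customer: ...' or 'Agent: ...'."
    )

print(build_dialogue_prompt("frequent flyer, frustrated", "rebook a cancelled flight", 6))
```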
|
2503.15055 | Arina Razmyslovich | Arina Razmyslovich, Kseniia Murasheva, Sofia Sedlova, Julien
Capitaine, Eugene Dmitriev | ELTEX: A Framework for Domain-Driven Synthetic Data Generation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present ELTEX (Efficient LLM Token Extraction), a domain-driven framework
for generating high-quality synthetic training data in specialized domains.
While Large Language Models (LLMs) have shown impressive general capabilities,
their performance in specialized domains like cybersecurity remains limited by
the scarcity of domain-specific training data. ELTEX addresses this challenge
by systematically integrating explicit domain indicator extraction with dynamic
prompting to preserve critical domain knowledge throughout the generation
process. We demonstrate ELTEX's effectiveness in the context of
blockchain-related cyberattack detection, where we fine-tune Gemma-2B using
various combinations of real and ELTEX-generated data. Our results show that
the ELTEX-enhanced model achieves performance competitive with GPT-4 across
both standard classification metrics and uncertainty calibration, while
requiring significantly fewer computational resources. We release a curated
synthetic dataset of social media texts for cyberattack detection in
blockchain. Our work demonstrates that domain-driven synthetic data generation
can effectively bridge the performance gap between resource-efficient models
and larger architectures in specialized domains.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:46:54 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Razmyslovich",
"Arina",
""
],
[
"Murasheva",
"Kseniia",
""
],
[
"Sedlova",
"Sofia",
""
],
[
"Capitaine",
"Julien",
""
],
[
"Dmitriev",
"Eugene",
""
]
] | TITLE: ELTEX: A Framework for Domain-Driven Synthetic Data Generation
ABSTRACT: We present ELTEX (Efficient LLM Token Extraction), a domain-driven framework
for generating high-quality synthetic training data in specialized domains.
While Large Language Models (LLMs) have shown impressive general capabilities,
their performance in specialized domains like cybersecurity remains limited by
the scarcity of domain-specific training data. ELTEX addresses this challenge
by systematically integrating explicit domain indicator extraction with dynamic
prompting to preserve critical domain knowledge throughout the generation
process. We demonstrate ELTEX's effectiveness in the context of
blockchain-related cyberattack detection, where we fine-tune Gemma-2B using
various combinations of real and ELTEX-generated data. Our results show that
the ELTEX-enhanced model achieves performance competitive with GPT-4 across
both standard classification metrics and uncertainty calibration, while
requiring significantly fewer computational resources. We release a curated
synthetic dataset of social media texts for cyberattack detection in
blockchain. Our work demonstrates that domain-driven synthetic data generation
can effectively bridge the performance gap between resource-efficient models
and larger architectures in specialized domains.
|
2503.15056 | Jong Chul Ye | Suhyeon Lee, Kwanyoung Kim, Jong Chul Ye | Single-Step Bidirectional Unpaired Image Translation Using Implicit
Bridge Consistency Distillation | 25 pages, 16 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Unpaired image-to-image translation has seen significant progress since the
introduction of CycleGAN. However, methods based on diffusion models or
Schr\"odinger bridges have yet to be widely adopted in real-world applications
due to their iterative sampling nature. To address this challenge, we propose a
novel framework, Implicit Bridge Consistency Distillation (IBCD), which enables
single-step bidirectional unpaired translation without using adversarial loss.
IBCD extends consistency distillation by using a diffusion implicit bridge
model that connects PF-ODE trajectories between distributions. Additionally, we
introduce two key improvements: 1) distribution matching for consistency
distillation and 2) an adaptive weighting method based on distillation difficulty.
Experimental results demonstrate that IBCD achieves state-of-the-art
performance on benchmark datasets in a single generation step. Project page
available at https://hyn2028.github.io/project_page/IBCD/index.html
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:48:04 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lee",
"Suhyeon",
""
],
[
"Kim",
"Kwanyoung",
""
],
[
"Ye",
"Jong Chul",
""
]
] | TITLE: Single-Step Bidirectional Unpaired Image Translation Using Implicit
Bridge Consistency Distillation
ABSTRACT: Unpaired image-to-image translation has seen significant progress since the
introduction of CycleGAN. However, methods based on diffusion models or
Schr\"odinger bridges have yet to be widely adopted in real-world applications
due to their iterative sampling nature. To address this challenge, we propose a
novel framework, Implicit Bridge Consistency Distillation (IBCD), which enables
single-step bidirectional unpaired translation without using adversarial loss.
IBCD extends consistency distillation by using a diffusion implicit bridge
model that connects PF-ODE trajectories between distributions. Additionally, we
introduce two key improvements: 1) distribution matching for consistency
distillation and 2) an adaptive weighting method based on distillation difficulty.
Experimental results demonstrate that IBCD achieves state-of-the-art
performance on benchmark datasets in a single generation step. Project page
available at https://hyn2028.github.io/project_page/IBCD/index.html
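Consistency distillation in general trains a student to give the same output at adjacent points of the same PF-ODE trajectory, with the earlier point produced by one solver step of a frozen teacher. The sketch below shows that generic training step (not the IBCD bridge formulation); the networks, Euler step, and time schedule are toy placeholders.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy stand-in for a time-conditioned denoiser/velocity network."""
    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t[:, None]], dim=1))

student = TinyNet()
ema_student = TinyNet()                      # EMA copy used as the distillation target
ema_student.load_state_dict(student.state_dict())
teacher = TinyNet()                          # frozen pretrained model (untrained here)

def one_solver_step(x_t, t, t_prev):
    # One Euler step of the probability-flow ODE using the frozen teacher (illustrative).
    with torch.no_grad():
        return x_t + (t_prev - t)[:, None] * teacher(x_t, t)

x_t = torch.randn(16, 8)
t = torch.rand(16) * 0.9 + 0.1
t_prev = t - 0.05

x_prev = one_solver_step(x_t, t, t_prev)
loss = nn.functional.mse_loss(student(x_t, t), ema_student(x_prev, t_prev).detach())
loss.backward()                              # followed by an optimizer step and an EMA update
print(float(loss))
```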
|
2503.15058 | Francesco Di Feola | Francesco Di Feola, Ludovica Pompilio, Cecilia Assolito, Valerio
Guarrasi, Paolo Soda | Texture-Aware StarGAN for CT data harmonisation | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computed Tomography (CT) plays a pivotal role in medical diagnosis; however,
variability across reconstruction kernels hinders data-driven approaches, such
as deep learning models, from achieving reliable and generalized performance.
To this end, CT data harmonization has emerged as a promising solution to
minimize such non-biological variances by standardizing data across different
sources or conditions. In this context, Generative Adversarial Networks (GANs)
have proved to be a powerful framework for harmonization, framing it as a
style-transfer problem. However, GAN-based approaches still face limitations in
capturing complex relationships within the images, which are essential for
effective harmonization. In this work, we propose a novel texture-aware StarGAN
for CT data harmonization, enabling one-to-many translations across different
reconstruction kernels. Although the StarGAN model has been successfully
applied in other domains, its potential for CT data harmonization remains
unexplored. Furthermore, our approach introduces a multi-scale texture loss
function that embeds texture information across different spatial and angular
scales into the harmonization process, effectively addressing kernel-induced
texture variations. We conducted extensive experimentation on a publicly
available dataset, utilizing a total of 48667 chest CT slices from 197 patients
distributed over three different reconstruction kernels, demonstrating the
superiority of our method over the baseline StarGAN.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:50:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Di Feola",
"Francesco",
""
],
[
"Pompilio",
"Ludovica",
""
],
[
"Assolito",
"Cecilia",
""
],
[
"Guarrasi",
"Valerio",
""
],
[
"Soda",
"Paolo",
""
]
] | TITLE: Texture-Aware StarGAN for CT data harmonisation
ABSTRACT: Computed Tomography (CT) plays a pivotal role in medical diagnosis; however,
variability across reconstruction kernels hinders data-driven approaches, such
as deep learning models, from achieving reliable and generalized performance.
To this end, CT data harmonization has emerged as a promising solution to
minimize such non-biological variances by standardizing data across different
sources or conditions. In this context, Generative Adversarial Networks (GANs)
have proved to be a powerful framework for harmonization, framing it as a
style-transfer problem. However, GAN-based approaches still face limitations in
capturing complex relationships within the images, which are essential for
effective harmonization. In this work, we propose a novel texture-aware StarGAN
for CT data harmonization, enabling one-to-many translations across different
reconstruction kernels. Although the StarGAN model has been successfully
applied in other domains, its potential for CT data harmonization remains
unexplored. Furthermore, our approach introduces a multi-scale texture loss
function that embeds texture information across different spatial and angular
scales into the harmonization process, effectively addressing kernel-induced
texture variations. We conducted extensive experimentation on a publicly
available dataset, utilizing a total of 48667 chest CT slices from 197 patients
distributed over three different reconstruction kernels, demonstrating the
superiority of our method over the baseline StarGAN.
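The abstract names a multi-scale texture loss without defining it; the sketch below compares Gram-matrix texture statistics at several spatial scales, a common way to encode texture. It is an assumption-laden stand-in: the paper's loss also involves angular scales and may use different statistics.

```python
import torch
import torch.nn.functional as F

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Channel-correlation (Gram) matrix, a standard texture descriptor."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def multiscale_texture_loss(x: torch.Tensor, y: torch.Tensor, scales=(1, 2, 4)) -> torch.Tensor:
    """Compare texture statistics of two CT slices at several spatial scales (illustrative)."""
    loss = x.new_zeros(())
    for s in scales:
        xs = F.avg_pool2d(x, s) if s > 1 else x
        ys = F.avg_pool2d(y, s) if s > 1 else y
        loss = loss + F.l1_loss(gram(xs), gram(ys))
    return loss / len(scales)

a, b = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(float(multiscale_texture_loss(a, b)))
```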
|
2503.15074 | Marius Fai{\ss} | Marius Fai{\ss}, Burooj Ghani, Dan Stowell | InsectSet459: an open dataset of insect sounds for bioacoustic machine
learning | null | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Automatic recognition of insect sound could help us understand changing
biodiversity trends around the world -- but insect sounds are challenging to
recognize even for deep learning. We present a new dataset comprised of 26399
audio files, from 459 species of Orthoptera and Cicadidae. It is the first
large-scale dataset of insect sound that is easily applicable for developing
novel deep-learning methods. Its recordings were made with a variety of audio
recorders using varying sample rates to capture the extremely broad range of
frequencies that insects produce. We benchmark performance with two
state-of-the-art deep learning classifiers, demonstrating good performance but
also significant room for improvement in acoustic insect classification. This
dataset can serve as a realistic test case for implementing insect monitoring
workflows, and as a challenging basis for the development of audio
representation methods that can handle highly variable frequencies and/or
sample rates.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 10:13:29 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Faiß",
"Marius",
""
],
[
"Ghani",
"Burooj",
""
],
[
"Stowell",
"Dan",
""
]
] | TITLE: InsectSet459: an open dataset of insect sounds for bioacoustic machine
learning
ABSTRACT: Automatic recognition of insect sound could help us understand changing
biodiversity trends around the world -- but insect sounds are challenging to
recognize even for deep learning. We present a new dataset comprised of 26399
audio files, from 459 species of Orthoptera and Cicadidae. It is the first
large-scale dataset of insect sound that is easily applicable for developing
novel deep-learning methods. Its recordings were made with a variety of audio
recorders using varying sample rates to capture the extremely broad range of
frequencies that insects produce. We benchmark performance with two
state-of-the-art deep learning classifiers, demonstrating good performance but
also significant room for improvement in acoustic insect classification. This
dataset can serve as a realistic test case for implementing insect monitoring
workflows, and as a challenging basis for the development of audio
representation methods that can handle highly variable frequencies and/or
sample rates.
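Because recordings in the dataset come with heterogeneous sample rates, a typical preprocessing step is to resample everything to a common rate before feature extraction. A small sketch with torchaudio is shown below; the target rate and file path are placeholders.

```python
import torch
import torchaudio

TARGET_SR = 44100  # common rate chosen for illustration

def load_resampled(path: str, target_sr: int = TARGET_SR) -> torch.Tensor:
    """Load an audio file, mix down to mono, and resample to a fixed rate."""
    waveform, sr = torchaudio.load(path)           # (channels, samples), native sample rate
    waveform = waveform.mean(dim=0, keepdim=True)  # mono mix-down
    if sr != target_sr:
        waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=target_sr)
    return waveform

# "recording.wav" is a placeholder path, not a file from the dataset:
# mel = torchaudio.transforms.MelSpectrogram(sample_rate=TARGET_SR)(load_resampled("recording.wav"))
```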
|
2503.15082 | Ziyu Meng | Le Ma, Ziyu Meng, Tengyu Liu, Yuhan Li, Ran Song, Wei Zhang, Siyuan
Huang | StyleLoco: Generative Adversarial Distillation for Natural Humanoid
Robot Locomotion | 9 pages, 4 figures | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humanoid robots are anticipated to acquire a wide range of locomotion
capabilities while ensuring natural movement across varying speeds and
terrains. Existing methods encounter a fundamental dilemma in learning humanoid
locomotion: reinforcement learning with handcrafted rewards can achieve agile
locomotion but produces unnatural gaits, while Generative Adversarial Imitation
Learning (GAIL) with motion capture data yields natural movements but suffers
from unstable training processes and restricted agility. Integrating these
approaches proves challenging due to the inherent heterogeneity between expert
policies and human motion datasets. To address this, we introduce StyleLoco, a
novel two-stage framework that bridges this gap through a Generative
Adversarial Distillation (GAD) process. Our framework begins by training a
teacher policy using reinforcement learning to achieve agile and dynamic
locomotion. It then employs a multi-discriminator architecture, where distinct
discriminators concurrently extract skills from both the teacher policy and
motion capture data. This approach effectively combines the agility of
reinforcement learning with the natural fluidity of human-like movements while
mitigating the instability issues commonly associated with adversarial
training. Through extensive simulation and real-world experiments, we
demonstrate that StyleLoco enables humanoid robots to perform diverse
locomotion tasks with the precision of expertly trained policies and the
natural aesthetics of human motion, successfully transferring styles across
different movement types while maintaining stable locomotion across a broad
spectrum of command inputs.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 10:27:44 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ma",
"Le",
""
],
[
"Meng",
"Ziyu",
""
],
[
"Liu",
"Tengyu",
""
],
[
"Li",
"Yuhan",
""
],
[
"Song",
"Ran",
""
],
[
"Zhang",
"Wei",
""
],
[
"Huang",
"Siyuan",
""
]
] | TITLE: StyleLoco: Generative Adversarial Distillation for Natural Humanoid
Robot Locomotion
ABSTRACT: Humanoid robots are anticipated to acquire a wide range of locomotion
capabilities while ensuring natural movement across varying speeds and
terrains. Existing methods encounter a fundamental dilemma in learning humanoid
locomotion: reinforcement learning with handcrafted rewards can achieve agile
locomotion but produces unnatural gaits, while Generative Adversarial Imitation
Learning (GAIL) with motion capture data yields natural movements but suffers
from unstable training processes and restricted agility. Integrating these
approaches proves challenging due to the inherent heterogeneity between expert
policies and human motion datasets. To address this, we introduce StyleLoco, a
novel two-stage framework that bridges this gap through a Generative
Adversarial Distillation (GAD) process. Our framework begins by training a
teacher policy using reinforcement learning to achieve agile and dynamic
locomotion. It then employs a multi-discriminator architecture, where distinct
discriminators concurrently extract skills from both the teacher policy and
motion capture data. This approach effectively combines the agility of
reinforcement learning with the natural fluidity of human-like movements while
mitigating the instability issues commonly associated with adversarial
training. Through extensive simulation and real-world experiments, we
demonstrate that StyleLoco enables humanoid robots to perform diverse
locomotion tasks with the precision of expertly trained policies and the
natural aesthetics of human motion, successfully transferring styles across
different movement types while maintaining stable locomotion across a broad
spectrum of command inputs.
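One way to read the multi-discriminator idea is that two discriminators, one trained against teacher-policy rollouts and one against motion-capture clips, are blended into a single style reward, in the spirit of GAIL/AMP-style rewards. The sketch below is that interpretation only; the architecture, weighting, and reward form are assumptions.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores state-transition snippets; higher means 'looks like the reference data'."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * obs_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def style_reward(d_teacher, d_mocap, s, s_next, w_teacher=0.5, w_mocap=0.5):
    """Blend two discriminator scores into one reward (illustrative GAIL-style log D terms)."""
    r_t = -torch.nn.functional.softplus(-d_teacher(s, s_next))
    r_m = -torch.nn.functional.softplus(-d_mocap(s, s_next))
    return w_teacher * r_t + w_mocap * r_m

obs_dim = 32
d1, d2 = Discriminator(obs_dim), Discriminator(obs_dim)
s, s_next = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
print(style_reward(d1, d2, s, s_next).shape)  # torch.Size([8])
```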
|
2503.15089 | Achmad Ginanjar Mr | Achmad Ginanjar, Xue Li, Priyanka Singh, and Wen Hua | Continual Contrastive Learning on Tabular Data with Out of Distribution | accepted at ESANN 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Out-of-distribution (OOD) prediction remains a significant challenge in
machine learning, particularly for tabular data where traditional methods often
fail to generalize beyond their training distribution. This paper introduces
Tabular Continual Contrastive Learning (TCCL), a novel framework designed to
address OOD challenges in tabular data processing. TCCL integrates contrastive
learning principles with continual learning mechanisms, featuring a
three-component architecture: an Encoder for data transformation, a Decoder for
representation learning, and a Learner Head. We evaluate TCCL against 14
baseline models, including state-of-the-art deep learning approaches and
gradient-boosted decision trees (GBDT), across eight diverse tabular datasets.
Our experimental results demonstrate that TCCL consistently outperforms
existing methods in both classification and regression tasks on OOD data, with
particular strength in handling distribution shifts. These findings suggest
that TCCL represents a significant advancement in handling OOD scenarios for
tabular data.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 10:40:07 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ginanjar",
"Achmad",
""
],
[
"Li",
"Xue",
""
],
[
"Singh",
"Priyanka",
""
],
[
"Hua",
"Wen",
""
]
] | TITLE: Continual Contrastive Learning on Tabular Data with Out of Distribution
ABSTRACT: Out-of-distribution (OOD) prediction remains a significant challenge in
machine learning, particularly for tabular data where traditional methods often
fail to generalize beyond their training distribution. This paper introduces
Tabular Continual Contrastive Learning (TCCL), a novel framework designed to
address OOD challenges in tabular data processing. TCCL integrates contrastive
learning principles with continual learning mechanisms, featuring a
three-component architecture: an Encoder for data transformation, a Decoder for
representation learning, and a Learner Head. We evaluate TCCL against 14
baseline models, including state-of-the-art deep learning approaches and
gradient-boosted decision trees (GBDT), across eight diverse tabular datasets.
Our experimental results demonstrate that TCCL consistently outperforms
existing methods in both classification and regression tasks on OOD data, with
particular strength in handling distribution shifts. These findings suggest
that TCCL represents a significant advancement in handling OOD scenarios for
tabular data.
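The three components are only named in the abstract; a minimal skeleton of an encoder / decoder / learner-head split for tabular inputs might look as follows. Layer sizes and the role assigned to each part are assumptions, not TCCL's actual design.

```python
import torch
import torch.nn as nn

class TabularThreePartSkeleton(nn.Module):
    """Illustrative three-part skeleton: encoder -> decoder (representation) -> learner head."""
    def __init__(self, n_features: int, repr_dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, repr_dim), nn.ReLU())
        self.learner_head = nn.Linear(repr_dim, n_classes)

    def forward(self, x):
        z = self.decoder(self.encoder(x))   # representation, e.g. used by contrastive terms
        return z, self.learner_head(z)      # representation + task prediction

model = TabularThreePartSkeleton(n_features=10)
z, logits = model(torch.randn(4, 10))
print(z.shape, logits.shape)
```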
|
2503.15092 | Zonghao Ying | Zonghao Ying, Guangyi Zheng, Yongxin Huang, Deyue Zhang, Wenxin Zhang,
Quanchen Zou, Aishan Liu, Xianglong Liu, Dacheng Tao | Towards Understanding the Safety Boundaries of DeepSeek Models:
Evaluation and Findings | null | null | null | null | cs.CR cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study presents the first comprehensive safety evaluation of the DeepSeek
models, focusing on evaluating the safety risks associated with their generated
content. Our evaluation encompasses DeepSeek's latest generation of large
language models, multimodal large language models, and text-to-image models,
systematically examining their performance regarding unsafe content generation.
Notably, we developed a bilingual (Chinese-English) safety evaluation dataset
tailored to Chinese sociocultural contexts, enabling a more thorough evaluation
of the safety capabilities of Chinese-developed models. Experimental results
indicate that despite their strong general capabilities, DeepSeek models
exhibit significant safety vulnerabilities across multiple risk dimensions,
including algorithmic discrimination and sexual content. These findings provide
crucial insights for understanding and improving the safety of large foundation
models. Our code is available at
https://github.com/NY1024/DeepSeek-Safety-Eval.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 10:44:37 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ying",
"Zonghao",
""
],
[
"Zheng",
"Guangyi",
""
],
[
"Huang",
"Yongxin",
""
],
[
"Zhang",
"Deyue",
""
],
[
"Zhang",
"Wenxin",
""
],
[
"Zou",
"Quanchen",
""
],
[
"Liu",
"Aishan",
""
],
[
"Liu",
"Xianglong",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: Towards Understanding the Safety Boundaries of DeepSeek Models:
Evaluation and Findings
ABSTRACT: This study presents the first comprehensive safety evaluation of the DeepSeek
models, focusing on evaluating the safety risks associated with their generated
content. Our evaluation encompasses DeepSeek's latest generation of large
language models, multimodal large language models, and text-to-image models,
systematically examining their performance regarding unsafe content generation.
Notably, we developed a bilingual (Chinese-English) safety evaluation dataset
tailored to Chinese sociocultural contexts, enabling a more thorough evaluation
of the safety capabilities of Chinese-developed models. Experimental results
indicate that despite their strong general capabilities, DeepSeek models
exhibit significant safety vulnerabilities across multiple risk dimensions,
including algorithmic discrimination and sexual content. These findings provide
crucial insights for understanding and improving the safety of large foundation
models. Our code is available at
https://github.com/NY1024/DeepSeek-Safety-Eval.
|
2503.15112 | Wenji Fang | Shang Liu, Yao Lu, Wenji Fang, Mengming Li, Zhiyao Xie | OpenLLM-RTL: Open Dataset and Benchmark for LLM-Aided Design RTL
Generation | ICCAD'24 | null | null | null | cs.AR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The automated generation of design RTL based on large language model (LLM)
and natural language instructions has demonstrated great potential in agile
circuit design. However, the lack of datasets and benchmarks in the public
domain prevents the development and fair evaluation of LLM solutions. This
paper highlights our latest advances in open datasets and benchmarks from three
perspectives: (1) RTLLM 2.0, an updated benchmark assessing LLM's capability in
design RTL generation. The benchmark is augmented to 50 hand-crafted designs.
Each design provides the design description, test cases, and a correct RTL
code. (2) AssertEval, an open-source benchmark assessing the LLM's assertion
generation capabilities for RTL verification. The benchmark includes 18
designs, each providing specification, signal definition, and correct RTL code.
(3) RTLCoder-Data, an extended open-source dataset with 80K instruction-code
data samples. Moreover, we propose a new verification-based method to verify
the functionality correctness of training data samples. Based on this
technique, we further release a dataset with 7K verified high-quality samples.
These three studies are integrated into one framework, providing off-the-shelf
support for the development and evaluation of LLMs for RTL code generation and
verification. Finally, extensive experiments indicate that LLM performance can
be boosted by enlarging the training dataset, improving data quality, and
improving the training scheme.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:12:53 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liu",
"Shang",
""
],
[
"Lu",
"Yao",
""
],
[
"Fang",
"Wenji",
""
],
[
"Li",
"Mengming",
""
],
[
"Xie",
"Zhiyao",
""
]
] | TITLE: OpenLLM-RTL: Open Dataset and Benchmark for LLM-Aided Design RTL
Generation
ABSTRACT: The automated generation of design RTL based on large language model (LLM)
and natural language instructions has demonstrated great potential in agile
circuit design. However, the lack of datasets and benchmarks in the public
domain prevents the development and fair evaluation of LLM solutions. This
paper highlights our latest advances in open datasets and benchmarks from three
perspectives: (1) RTLLM 2.0, an updated benchmark assessing LLM's capability in
design RTL generation. The benchmark is augmented to 50 hand-crafted designs.
Each design provides the design description, test cases, and a correct RTL
code. (2) AssertEval, an open-source benchmark assessing the LLM's assertion
generation capabilities for RTL verification. The benchmark includes 18
designs, each providing specification, signal definition, and correct RTL code.
(3) RTLCoder-Data, an extended open-source dataset with 80K instruction-code
data samples. Moreover, we propose a new verification-based method to verify
the functionality correctness of training data samples. Based on this
technique, we further release a dataset with 7K verified high-quality samples.
These three studies are integrated into one framework, providing off-the-shelf
support for the development and evaluation of LLMs for RTL code generation and
verification. Finally, extensive experiments indicate that LLM performance can
be boosted by enlarging the training dataset, improving data quality, and
improving the training scheme.
|
2503.15114 | Adri\'an Javaloy | Alejandro Almod\'ovar, Adri\'an Javaloy, Juan Parras, Santiago Zazo,
Isabel Valera | DeCaFlow: A Deconfounding Causal Generative Model | 32 pages, 22 figures. Under submission | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Causal generative models (CGMs) have recently emerged as capable approaches
to simulate the causal mechanisms generating our observations, enabling causal
inference. Unfortunately, existing approaches either are overly restrictive,
assuming the absence of hidden confounders, or lack generality, being tailored
to a particular query and graph. In this work, we introduce DeCaFlow, a CGM
that accounts for hidden confounders in a single amortized training process
using only observational data and the causal graph. Importantly, DeCaFlow can
provably identify all causal queries with a valid adjustment set or
sufficiently informative proxy variables. Remarkably, for the first time to our
knowledge, we show that a confounded counterfactual query is identifiable, and
thus solvable by DeCaFlow, as long as its interventional counterpart is as
well. Our empirical results on diverse settings (including the Ecoli70 dataset,
with 3 independent hidden confounders, tens of observed variables and hundreds
of causal queries) show that DeCaFlow outperforms existing approaches, while
demonstrating its out-of-the-box flexibility.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:14:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Almodóvar",
"Alejandro",
""
],
[
"Javaloy",
"Adrián",
""
],
[
"Parras",
"Juan",
""
],
[
"Zazo",
"Santiago",
""
],
[
"Valera",
"Isabel",
""
]
] | TITLE: DeCaFlow: A Deconfounding Causal Generative Model
ABSTRACT: Causal generative models (CGMs) have recently emerged as capable approaches
to simulate the causal mechanisms generating our observations, enabling causal
inference. Unfortunately, existing approaches either are overly restrictive,
assuming the absence of hidden confounders, or lack generality, being tailored
to a particular query and graph. In this work, we introduce DeCaFlow, a CGM
that accounts for hidden confounders in a single amortized training process
using only observational data and the causal graph. Importantly, DeCaFlow can
provably identify all causal queries with a valid adjustment set or
sufficiently informative proxy variables. Remarkably, for the first time to our
knowledge, we show that a confounded counterfactual query is identifiable, and
thus solvable by DeCaFlow, as long as its interventional counterpart is as
well. Our empirical results on diverse settings (including the Ecoli70 dataset,
with 3 independent hidden confounders, tens of observed variables and hundreds
of causal queries) show that DeCaFlow outperforms existing approaches, while
demonstrating its out-of-the-box flexibility.
|
2503.15126 | Haoyu Ji | Haoyu Ji, Bowen Chen, Weihong Ren, Wenze Huang, Zhihao Yang, Zhiyong
Wang, and Honghai Liu | Text-Derived Relational Graph-Enhanced Network for Skeleton-Based Action
Segmentation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Skeleton-based Temporal Action Segmentation (STAS) aims to segment and
recognize various actions from long, untrimmed sequences of human skeletal
movements. Current STAS methods typically employ spatio-temporal modeling to
establish dependencies among joints as well as frames, and utilize one-hot
encoding with cross-entropy loss for frame-wise classification supervision.
However, these methods overlook the intrinsic correlations among joints and
actions within skeletal features, leading to a limited understanding of human
movements. To address this, we propose a Text-Derived Relational Graph-Enhanced
Network (TRG-Net) that leverages prior graphs generated by Large Language
Models (LLM) to enhance both modeling and supervision. For modeling, the
Dynamic Spatio-Temporal Fusion Modeling (DSFM) method incorporates Text-Derived
Joint Graphs (TJG) with channel- and frame-level dynamic adaptation to
effectively model spatial relations, while integrating spatio-temporal core
features during temporal modeling. For supervision, the Absolute-Relative
Inter-Class Supervision (ARIS) method employs contrastive learning between
action features and text embeddings to regularize the absolute class
distributions, and utilizes Text-Derived Action Graphs (TAG) to capture the
relative inter-class relationships among action features. Additionally, we
propose a Spatial-Aware Enhancement Processing (SAEP) method, which
incorporates random joint occlusion and axial rotation to enhance spatial
generalization. Performance evaluations on four public datasets demonstrate
that TRG-Net achieves state-of-the-art results.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:38:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ji",
"Haoyu",
""
],
[
"Chen",
"Bowen",
""
],
[
"Ren",
"Weihong",
""
],
[
"Huang",
"Wenze",
""
],
[
"Yang",
"Zhihao",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Liu",
"Honghai",
""
]
] | TITLE: Text-Derived Relational Graph-Enhanced Network for Skeleton-Based Action
Segmentation
ABSTRACT: Skeleton-based Temporal Action Segmentation (STAS) aims to segment and
recognize various actions from long, untrimmed sequences of human skeletal
movements. Current STAS methods typically employ spatio-temporal modeling to
establish dependencies among joints as well as frames, and utilize one-hot
encoding with cross-entropy loss for frame-wise classification supervision.
However, these methods overlook the intrinsic correlations among joints and
actions within skeletal features, leading to a limited understanding of human
movements. To address this, we propose a Text-Derived Relational Graph-Enhanced
Network (TRG-Net) that leverages prior graphs generated by Large Language
Models (LLM) to enhance both modeling and supervision. For modeling, the
Dynamic Spatio-Temporal Fusion Modeling (DSFM) method incorporates Text-Derived
Joint Graphs (TJG) with channel- and frame-level dynamic adaptation to
effectively model spatial relations, while integrating spatio-temporal core
features during temporal modeling. For supervision, the Absolute-Relative
Inter-Class Supervision (ARIS) method employs contrastive learning between
action features and text embeddings to regularize the absolute class
distributions, and utilizes Text-Derived Action Graphs (TAG) to capture the
relative inter-class relationships among action features. Additionally, we
propose a Spatial-Aware Enhancement Processing (SAEP) method, which
incorporates random joint occlusion and axial rotation to enhance spatial
generalization. Performance evaluations on four public datasets demonstrate
that TRG-Net achieves state-of-the-art results.
|
2503.15133 | Sebastian Schmidt | Christina Zorenb\"ohmer and Sebastian Schmidt and Bernd Resch | EmoGRACE: Aspect-based emotion analysis for social media data | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While sentiment analysis has advanced from sentence to aspect-level, i.e.,
the identification of concrete terms related to a sentiment, the equivalent
field of Aspect-based Emotion Analysis (ABEA) is faced with dataset bottlenecks
and the increased complexity of emotion classes in contrast to binary
sentiments. This paper addresses these gaps by generating a first ABEA
training dataset, consisting of 2,621 English Tweets, and fine-tuning a
BERT-based model for the ABEA sub-tasks of Aspect Term Extraction (ATE) and
Aspect Emotion Classification (AEC).
The dataset annotation process was based on the hierarchical emotion theory
by Shaver et al. [1] and made use of group annotation and majority voting
strategies to facilitate label consistency. The resulting dataset contained
aspect-level emotion labels for Anger, Sadness, Happiness, Fear, and a None
class. Using the new ABEA training dataset, the state-of-the-art ABSA model
GRACE by Luo et al. [2] was fine-tuned for ABEA. The results reflected a
performance plateau at an F1-score of 70.1% for ATE and 46.9% for joint ATE and
AEC extraction. The limiting factors for model performance were broadly
identified as the small training dataset size coupled with the increased task
complexity, causing model overfitting and limited abilities to generalize well
on new data.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:48:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zorenböhmer",
"Christina",
""
],
[
"Schmidt",
"Sebastian",
""
],
[
"Resch",
"Bernd",
""
]
] | TITLE: EmoGRACE: Aspect-based emotion analysis for social media data
ABSTRACT: While sentiment analysis has advanced from sentence to aspect-level, i.e.,
the identification of concrete terms related to a sentiment, the equivalent
field of Aspect-based Emotion Analysis (ABEA) is faced with dataset bottlenecks
and the increased complexity of emotion classes in contrast to binary
sentiments. This paper addresses these gaps by generating a first ABEA
training dataset, consisting of 2,621 English Tweets, and fine-tuning a
BERT-based model for the ABEA sub-tasks of Aspect Term Extraction (ATE) and
Aspect Emotion Classification (AEC).
The dataset annotation process was based on the hierarchical emotion theory
by Shaver et al. [1] and made use of group annotation and majority voting
strategies to facilitate label consistency. The resulting dataset contained
aspect-level emotion labels for Anger, Sadness, Happiness, Fear, and a None
class. Using the new ABEA training dataset, the state-of-the-art ABSA model
GRACE by Luo et al. [2] was fine-tuned for ABEA. The results reflected a
performance plateau at an F1-score of 70.1% for ATE and 46.9% for joint ATE and
AEC extraction. The limiting factors for model performance were broadly
identified as the small training dataset size coupled with the increased task
complexity, causing model overfitting and limited abilities to generalize well
on new data.
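Aspect Term Extraction is commonly framed as BIO token classification; the sketch below shows that setup with Hugging Face transformers on a vanilla BERT backbone. The label scheme and backbone are assumptions, and the snippet does not reproduce GRACE's joint architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BIO tags for aspect terms; label names are illustrative, not the paper's exact scheme.
labels = ["O", "B-ASPECT", "I-ASPECT"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(labels))

text = "The battery life is terrible but the screen is gorgeous"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1)[0]    # per-token label ids (untrained head here)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
print(list(zip(tokens, [labels[int(i)] for i in pred])))
```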
|
2503.15141 | Nikola {\DJ}uki\'c | Nikola {\DJ}uki\'c, Tim Lebailly, Tinne Tuytelaars | Object-Centric Pretraining via Target Encoder Bootstrapping | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object-centric representation learning has recently been successfully applied
to real-world datasets. This success can be attributed to pretrained
non-object-centric foundation models, whose features serve as reconstruction
targets for slot attention. However, targets must remain frozen throughout the
training, which sets an upper bound on the performance object-centric models
can attain. Attempts to update the target encoder by bootstrapping result in
large performance drops, which can be attributed to its lack of object-centric
inductive biases, causing the object-centric model's encoder to drift away from
representations useful as reconstruction targets. To address these limitations,
we propose Object-CEntric Pretraining by Target Encoder BOotstrapping, a
self-distillation setup for training object-centric models from scratch, on
real-world data, for the first time ever. In OCEBO, the target encoder is
updated as an exponential moving average of the object-centric model, thus
explicitly being enriched with object-centric inductive biases introduced by
slot attention while removing the upper bound on performance present in other
models. We mitigate the slot collapse caused by random initialization of the
target encoder by introducing a novel cross-view patch filtering approach that
limits the supervision to sufficiently informative patches. When pretrained on
241k images from COCO, OCEBO achieves unsupervised object discovery performance
comparable to that of object-centric models with frozen non-object-centric
target encoders pretrained on hundreds of millions of images. The code and
pretrained models are publicly available at https://github.com/djukicn/ocebo.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:06:50 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Đukić",
"Nikola",
""
],
[
"Lebailly",
"Tim",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Object-Centric Pretraining via Target Encoder Bootstrapping
ABSTRACT: Object-centric representation learning has recently been successfully applied
to real-world datasets. This success can be attributed to pretrained
non-object-centric foundation models, whose features serve as reconstruction
targets for slot attention. However, targets must remain frozen throughout the
training, which sets an upper bound on the performance object-centric models
can attain. Attempts to update the target encoder by bootstrapping result in
large performance drops, which can be attributed to its lack of object-centric
inductive biases, causing the object-centric model's encoder to drift away from
representations useful as reconstruction targets. To address these limitations,
we propose Object-CEntric Pretraining by Target Encoder BOotstrapping, a
self-distillation setup for training object-centric models from scratch, on
real-world data, for the first time ever. In OCEBO, the target encoder is
updated as an exponential moving average of the object-centric model, thus
explicitly being enriched with object-centric inductive biases introduced by
slot attention while removing the upper bound on performance present in other
models. We mitigate the slot collapse caused by random initialization of the
target encoder by introducing a novel cross-view patch filtering approach that
limits the supervision to sufficiently informative patches. When pretrained on
241k images from COCO, OCEBO achieves unsupervised object discovery performance
comparable to that of object-centric models with frozen non-object-centric
target encoders pretrained on hundreds of millions of images. The code and
pretrained models are publicly available at https://github.com/djukicn/ocebo.
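The exponential-moving-average update of the target encoder is a small, standard operation; a sketch of it (with an assumed momentum value and a toy encoder) is given below.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(target: nn.Module, online: nn.Module, momentum: float = 0.996) -> None:
    """target <- momentum * target + (1 - momentum) * online, parameter by parameter."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)

online_encoder = nn.Linear(16, 16)               # stand-in for the object-centric encoder
target_encoder = copy.deepcopy(online_encoder)   # bootstrapped target, never back-propagated
for p in target_encoder.parameters():
    p.requires_grad_(False)

ema_update(target_encoder, online_encoder)       # called once per training step
```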
|
2503.15144 | Zhe Zhu | Xing He, Zhe Zhu, Liangliang Nan, Honghua Chen, Jing Qin, Mingqiang
Wei | PointSFDA: Source-free Domain Adaptation for Point Cloud Completion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional methods for point cloud completion, typically trained on
synthetic datasets, face significant challenges when applied to
out-of-distribution real-world scans. In this paper, we propose an effective
yet simple source-free domain adaptation framework for point cloud completion,
termed \textbf{PointSFDA}. Unlike unsupervised domain adaptation that reduces
the domain gap by directly leveraging labeled source data, PointSFDA uses only
a pretrained source model and unlabeled target data for adaptation, avoiding
the need for inaccessible source data in practical scenarios. Being the first
source-free domain adaptation architecture for point cloud completion, our
method offers two core contributions. First, we introduce a coarse-to-fine
distillation solution to explicitly transfer the global geometry knowledge
learned from the source dataset. Second, as noise may be introduced due to
domain gaps, we propose a self-supervised partial-mask consistency training
strategy to learn local geometry information in the target domain. Extensive
experiments have validated that our method significantly improves the
performance of state-of-the-art networks in cross-domain shape completion. Our
code is available at
\emph{\textcolor{magenta}{https://github.com/Starak-x/PointSFDA}}.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:09:45 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"He",
"Xing",
""
],
[
"Zhu",
"Zhe",
""
],
[
"Nan",
"Liangliang",
""
],
[
"Chen",
"Honghua",
""
],
[
"Qin",
"Jing",
""
],
[
"Wei",
"Mingqiang",
""
]
] | TITLE: PointSFDA: Source-free Domain Adaptation for Point Cloud Completion
ABSTRACT: Conventional methods for point cloud completion, typically trained on
synthetic datasets, face significant challenges when applied to
out-of-distribution real-world scans. In this paper, we propose an effective
yet simple source-free domain adaptation framework for point cloud completion,
termed \textbf{PointSFDA}. Unlike unsupervised domain adaptation that reduces
the domain gap by directly leveraging labeled source data, PointSFDA uses only
a pretrained source model and unlabeled target data for adaptation, avoiding
the need for inaccessible source data in practical scenarios. Being the first
source-free domain adaptation architecture for point cloud completion, our
method offers two core contributions. First, we introduce a coarse-to-fine
distillation solution to explicitly transfer the global geometry knowledge
learned from the source dataset. Second, as noise may be introduced due to
domain gaps, we propose a self-supervised partial-mask consistency training
strategy to learn local geometry information in the target domain. Extensive
experiments have validated that our method significantly improves the
performance of state-of-the-art networks in cross-domain shape completion. Our
code is available at
\emph{\textcolor{magenta}{https://github.com/Starak-x/PointSFDA}}.
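A partial-mask consistency objective can be read as: complete a partial cloud and a further-masked copy of it, then penalize disagreement between the two completions. The sketch below shows that reading with a crude seed-point mask and a Chamfer distance; the masking rule, loss, and toy "network" are assumptions, not PointSFDA's implementation.

```python
import torch

def random_patch_mask(points: torch.Tensor, drop_ratio: float = 0.25) -> torch.Tensor:
    """Drop the points closest to a random seed point (a crude stand-in for a partial mask)."""
    seed = points[torch.randint(points.shape[0], (1,))]
    dist = (points - seed).norm(dim=1)
    keep = dist.argsort(descending=True)[: int(points.shape[0] * (1 - drop_ratio))]
    return points[keep]

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def partial_mask_consistency(completion_net, partial: torch.Tensor) -> torch.Tensor:
    """Completions of a partial cloud and of its further-masked copy should agree."""
    full_pred = completion_net(partial)
    masked_pred = completion_net(random_patch_mask(partial))
    return chamfer(masked_pred, full_pred.detach())

# toy "completion network": returns a fixed-size cloud around the input centroid
toy_net = lambda pts: pts.mean(dim=0, keepdim=True) + 0.1 * torch.randn(128, 3)
print(float(partial_mask_consistency(toy_net, torch.randn(1024, 3))))
```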
|
2503.15149 | Zhaoxiang Shen | Zhaoxiang Shen and Ra\'ul I. Sosa and Jakub Lengiewicz and Alexandre
Tkatchenko and St\'ephane P.A. Bordas | Machine learning surrogate models of many-body dispersion interactions
in polymer melts | null | null | null | null | cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Accurate prediction of many-body dispersion (MBD) interactions is essential
for understanding the van der Waals forces that govern the behavior of many
complex molecular systems. However, the high computational cost of MBD
calculations limits their direct application in large-scale simulations. In
this work, we introduce a machine learning surrogate model specifically
designed to predict MBD forces in polymer melts, a system that demands accurate
MBD description and offers structural advantages for machine learning
approaches. Our model is based on a trimmed SchNet architecture that
selectively retains the most relevant atomic connections and incorporates
trainable radial basis functions for geometric encoding. We validate our
surrogate model on datasets from polyethylene, polypropylene, and polyvinyl
chloride melts, demonstrating high predictive accuracy and robust
generalization across diverse polymer systems. In addition, the model captures
key physical features, such as the characteristic decay behavior of MBD
interactions, providing valuable insights for optimizing cutoff strategies.
Characterized by high computational efficiency, our surrogate model enables
practical incorporation of MBD effects into large-scale molecular simulations.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:15:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Shen",
"Zhaoxiang",
""
],
[
"Sosa",
"Raúl I.",
""
],
[
"Lengiewicz",
"Jakub",
""
],
[
"Tkatchenko",
"Alexandre",
""
],
[
"Bordas",
"Stéphane P. A.",
""
]
] | TITLE: Machine learning surrogate models of many-body dispersion interactions
in polymer melts
ABSTRACT: Accurate prediction of many-body dispersion (MBD) interactions is essential
for understanding the van der Waals forces that govern the behavior of many
complex molecular systems. However, the high computational cost of MBD
calculations limits their direct application in large-scale simulations. In
this work, we introduce a machine learning surrogate model specifically
designed to predict MBD forces in polymer melts, a system that demands accurate
MBD description and offers structural advantages for machine learning
approaches. Our model is based on a trimmed SchNet architecture that
selectively retains the most relevant atomic connections and incorporates
trainable radial basis functions for geometric encoding. We validate our
surrogate model on datasets from polyethylene, polypropylene, and polyvinyl
chloride melts, demonstrating high predictive accuracy and robust
generalization across diverse polymer systems. In addition, the model captures
key physical features, such as the characteristic decay behavior of MBD
interactions, providing valuable insights for optimizing cutoff strategies.
Characterized by high computational efficiency, our surrogate model enables
practical incorporation of MBD effects into large-scale molecular simulations.
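Trainable radial basis functions for geometric encoding typically expand interatomic distances into Gaussian features with learnable centers and widths; the sketch below shows that idea in isolation. The number of basis functions, cutoff, and parametrisation are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class TrainableRBF(nn.Module):
    """Expand interatomic distances into Gaussian radial basis features with
    learnable centers and widths (illustrative, not the paper's parametrisation)."""
    def __init__(self, n_rbf: int = 32, cutoff: float = 8.0):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0.0, cutoff, n_rbf))
        self.log_widths = nn.Parameter(torch.zeros(n_rbf))

    def forward(self, distances: torch.Tensor) -> torch.Tensor:
        d = distances.unsqueeze(-1)                  # (..., 1)
        width = torch.exp(self.log_widths)           # positive widths
        return torch.exp(-((d - self.centers) ** 2) / (2 * width ** 2))

rbf = TrainableRBF()
print(rbf(torch.tensor([1.2, 3.4, 6.7])).shape)      # torch.Size([3, 32])
```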
|
2503.15150 | Yan Wang | Yan Wang, Jiapeng Liu, Milosz Kadzi\'nski, Xiuwu Liao | Preference Construction: A Bayesian Interactive Preference Elicitation
Framework Based on Monte Carlo Tree Search | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel preference learning framework to capture participant
preferences efficiently within limited interaction rounds. It involves three
main contributions. First, we develop a variational Bayesian approach to infer
the participant's preference model by estimating posterior distributions and
managing uncertainty from limited information. Second, we propose an adaptive
questioning policy that maximizes cumulative uncertainty reduction, formulating
questioning as a finite Markov decision process and using Monte Carlo Tree
Search to prioritize promising question trajectories. By considering long-term
effects and leveraging the efficiency of the Bayesian approach, the policy
avoids shortsightedness. Third, we apply the framework to Multiple Criteria
Decision Aiding, with pairwise comparison as the preference information and an
additive value function as the preference model. We integrate the
reparameterization trick to address high-variance issues, enhancing robustness
and efficiency. Computational studies on real-world and synthetic datasets
demonstrate the framework's practical usability, outperforming baselines in
capturing preferences and achieving superior uncertainty reduction within
limited interactions.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:16:54 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wang",
"Yan",
""
],
[
"Liu",
"Jiapeng",
""
],
[
"Kadziński",
"Milosz",
""
],
[
"Liao",
"Xiuwu",
""
]
] | TITLE: Preference Construction: A Bayesian Interactive Preference Elicitation
Framework Based on Monte Carlo Tree Search
ABSTRACT: We present a novel preference learning framework to capture participant
preferences efficiently within limited interaction rounds. It involves three
main contributions. First, we develop a variational Bayesian approach to infer
the participant's preference model by estimating posterior distributions and
managing uncertainty from limited information. Second, we propose an adaptive
questioning policy that maximizes cumulative uncertainty reduction, formulating
questioning as a finite Markov decision process and using Monte Carlo Tree
Search to prioritize promising question trajectories. By considering long-term
effects and leveraging the efficiency of the Bayesian approach, the policy
avoids shortsightedness. Third, we apply the framework to Multiple Criteria
Decision Aiding, with pairwise comparison as the preference information and an
additive value function as the preference model. We integrate the
reparameterization trick to address high-variance issues, enhancing robustness
and efficiency. Computational studies on real-world and synthetic datasets
demonstrate the framework's practical usability, outperforming baselines in
capturing preferences and achieving superior uncertainty reduction within
limited interactions.
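The selection rule at the heart of Monte Carlo Tree Search is UCB over candidate actions; the sketch below applies it to choosing the next question, flattened to a single depth for brevity. The uncertainty-reduction estimate is a random placeholder standing in for the Bayesian computation described above.

```python
import math
import random

class Node:
    def __init__(self, question=None, parent=None):
        self.question, self.parent = question, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")                      # always try unvisited questions first
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def simulate_uncertainty_reduction(question):
    # Placeholder for the Bayesian expected-uncertainty-reduction estimate.
    return random.random()

def select_question(candidate_questions, iterations=200):
    root = Node()
    root.children = [Node(q, root) for q in candidate_questions]
    for _ in range(iterations):
        node = max(root.children, key=lambda ch: ucb(ch, root.visits + 1))
        reward = simulate_uncertainty_reduction(node.question)
        node.visits += 1
        node.value += reward
        root.visits += 1
    return max(root.children, key=lambda ch: ch.visits).question

print(select_question([f"compare alternatives {i} and {i + 1}" for i in range(5)]))
```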
|
2503.15161 | Yang Li | Yang Li, Soumya Snigdha Kundu, Maxence Boels, Toktam Mahmoodi,
Sebastien Ourselin, Tom Vercauteren, Prokar Dasgupta, Jonathan Shapey,
Alejandro Granados | UltraFlwr -- An Efficient Federated Medical and Surgical Object
Detection Framework | 10 pages, 2 figures, under review @ MICCAI | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object detection shows promise for medical and surgical applications such as
cell counting and tool tracking. However, it faces multiple real-world edge
deployment challenges including limited high-quality annotated data, data
sharing restrictions, and computational constraints. In this work, we introduce
UltraFlwr, a framework for federated medical and surgical object detection. By
leveraging Federated Learning (FL), UltraFlwr enables decentralized model
training across multiple sites without sharing raw data. To further enhance
UltraFlwr's efficiency, we propose YOLO-PA, a set of novel Partial Aggregation
(PA) strategies specifically designed for YOLO models in FL. YOLO-PA
significantly reduces communication overhead by up to 83% per round while
maintaining performance comparable to Full Aggregation (FA) strategies. Our
extensive experiments on BCCD and m2cai16-tool-locations datasets demonstrate
that YOLO-PA not only provides better client models compared to client-wise
centralized training and FA strategies, but also facilitates efficient training
and deployment across resource-constrained edge devices. Further, we also
establish one of the first benchmarks in federated medical and surgical object
detection. This paper advances the feasibility of training and deploying
detection models on the edge, making federated object detection more practical
for time-critical and resource-constrained medical and surgical applications.
UltraFlwr is publicly available at https://github.com/KCL-BMEIS/UltraFlwr.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:38:04 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Yang",
""
],
[
"Kundu",
"Soumya Snigdha",
""
],
[
"Boels",
"Maxence",
""
],
[
"Mahmoodi",
"Toktam",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Dasgupta",
"Prokar",
""
],
[
"Shapey",
"Jonathan",
""
],
[
"Granados",
"Alejandro",
""
]
] | TITLE: UltraFlwr -- An Efficient Federated Medical and Surgical Object
Detection Framework
ABSTRACT: Object detection shows promise for medical and surgical applications such as
cell counting and tool tracking. However, it faces multiple real-world edge
deployment challenges including limited high-quality annotated data, data
sharing restrictions, and computational constraints. In this work, we introduce
UltraFlwr, a framework for federated medical and surgical object detection. By
leveraging Federated Learning (FL), UltraFlwr enables decentralized model
training across multiple sites without sharing raw data. To further enhance
UltraFlwr's efficiency, we propose YOLO-PA, a set of novel Partial Aggregation
(PA) strategies specifically designed for YOLO models in FL. YOLO-PA
significantly reduces communication overhead by up to 83% per round while
maintaining performance comparable to Full Aggregation (FA) strategies. Our
extensive experiments on BCCD and m2cai16-tool-locations datasets demonstrate
that YOLO-PA not only provides better client models compared to client-wise
centralized training and FA strategies, but also facilitates efficient training
and deployment across resource-constrained edge devices. Further, we also
establish one of the first benchmarks in federated medical and surgical object
detection. This paper advances the feasibility of training and deploying
detection models on the edge, making federated object detection more practical
for time-critical and resource-constrained medical and surgical applications.
UltraFlwr is publicly available at https://github.com/KCL-BMEIS/UltraFlwr.
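The partial-aggregation idea behind YOLO-PA (communicate and average only a chosen subset of parameters, for example the detection head, keeping the rest client-local) can be illustrated with a framework-agnostic sketch; the key-selection rule below is a placeholder, and the actual strategies are in the linked repository.

import numpy as np

def partial_aggregate(client_states, client_sizes, shared_key_filter):
    """FedAvg restricted to parameters whose names pass shared_key_filter.

    client_states: list of dicts {param_name: np.ndarray}
    client_sizes:  number of local samples per client (aggregation weights)
    Returns one dict per client: shared keys are averaged across clients,
    the remaining keys stay client-specific, which reduces communication."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    shared = [k for k in client_states[0] if shared_key_filter(k)]
    averaged = {k: sum(w * s[k] for w, s in zip(weights, client_states)) for k in shared}
    return [{**state, **averaged} for state in client_states]

# toy example: only "head." parameters are communicated and averaged
clients = [
    {"backbone.conv": np.ones((2, 2)) * i, "head.cls": np.ones(3) * i} for i in (1.0, 3.0)
]
new_states = partial_aggregate(clients, client_sizes=[100, 300],
                               shared_key_filter=lambda k: k.startswith("head."))
print(new_states[0]["head.cls"])        # averaged across clients
print(new_states[0]["backbone.conv"])   # unchanged, stays local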
|
2503.15167 | Hongsheng He | Fujian Yan, Hui Li, and Hongsheng He | Volumetric Reconstruction From Partial Views for Task-Oriented Grasping | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Object affordance and volumetric information are essential in devising
effective grasping strategies under task-specific constraints. This paper
presents an approach for inferring suitable grasping strategies from limited
partial views of an object. To achieve this, a recurrent generative adversarial
network (R-GAN) was proposed by incorporating a recurrent generator with long
short-term memory (LSTM) units, enabling it to process a variable number of depth
scans. To determine object affordances, the AffordPose knowledge dataset is
utilized as prior knowledge. Affordance retrieval is defined by the volume
similarity measured via Chamfer Distance and action similarities. A Proximal
Policy Optimization (PPO) reinforcement learning model is further implemented
to refine the retrieved grasp strategies for task-oriented grasping. The
retrieved grasp strategies were evaluated on a dual-arm mobile manipulation
robot with an overall grasping accuracy of 89% for four tasks: lift, handle
grasp, wrap grasp, and press.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:47:50 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yan",
"Fujian",
""
],
[
"Li",
"Hui",
""
],
[
"He",
"Hongsheng",
""
]
] | TITLE: Volumetric Reconstruction From Partial Views for Task-Oriented Grasping
ABSTRACT: Object affordance and volumetric information are essential in devising
effective grasping strategies under task-specific constraints. This paper
presents an approach for inferring suitable grasping strategies from limited
partial views of an object. To achieve this, a recurrent generative adversarial
network (R-GAN) was proposed by incorporating a recurrent generator with long
short-term memory (LSTM) units for it to process a variable number of depth
scans. To determine object affordances, the AffordPose knowledge dataset is
utilized as prior knowledge. Affordance retrieving is defined by the volume
similarity measured via Chamfer Distance and action similarities. A Proximal
Policy Optimization (PPO) reinforcement learning model is further implemented
to refine the retrieved grasp strategies for task-oriented grasping. The
retrieved grasp strategies were evaluated on a dual-arm mobile manipulation
robot with an overall grasping accuracy of 89% for four tasks: lift, handle
grasp, wrap grasp, and press.
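The volume similarity above relies on the Chamfer Distance; a standard NumPy implementation of the symmetric (squared) Chamfer Distance between two point sets, included here only for reference and not taken from the paper, is:

import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between point clouds p (N,3) and q (M,3),
    using squared Euclidean nearest-neighbour distances."""
    diff = p[:, None, :] - q[None, :, :]          # (N, M, 3) pairwise differences
    d2 = np.sum(diff ** 2, axis=-1)               # (N, M) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
a = rng.random((128, 3))
b = a + 0.01 * rng.standard_normal((128, 3))      # slightly perturbed copy
print(f"Chamfer distance: {chamfer_distance(a, b):.6f}")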
|
2503.15177 | Swara Parekh | Ananya Garg, Mohmmad Ayaan, Swara Parekh, Vikranth Udandarao | Food Delivery Time Prediction in Indian Cities Using Machine Learning
Models | for code implementation, check
https://github.com/Vikranth3140/Food-Delivery-Time-Prediction | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate prediction of food delivery times significantly impacts customer
satisfaction, operational efficiency, and profitability in food delivery
services. However, existing studies primarily utilize static historical data
and often overlook dynamic, real-time contextual factors crucial for precise
prediction, particularly in densely populated Indian cities. This research
addresses these gaps by integrating real-time contextual variables such as
traffic density, weather conditions, local events, and geospatial data
(restaurant and delivery location coordinates) into predictive models. We
systematically compare various machine learning algorithms, including Linear
Regression, Decision Trees, Bagging, Random Forest, XGBoost, and LightGBM, on a
comprehensive food delivery dataset specific to Indian urban contexts. Rigorous
data preprocessing and feature selection significantly enhanced model
performance. Experimental results demonstrate that the LightGBM model achieves
superior predictive accuracy, with an R2 score of 0.76 and Mean Squared Error
(MSE) of 20.59, outperforming traditional baseline approaches. Our study thus
provides actionable insights for improving logistics strategies in complex
urban environments. The complete methodology and code are publicly available
for reproducibility and further research.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:02:23 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Garg",
"Ananya",
""
],
[
"Ayaan",
"Mohmmad",
""
],
[
"Parekh",
"Swara",
""
],
[
"Udandarao",
"Vikranth",
""
]
] | TITLE: Food Delivery Time Prediction in Indian Cities Using Machine Learning
Models
ABSTRACT: Accurate prediction of food delivery times significantly impacts customer
satisfaction, operational efficiency, and profitability in food delivery
services. However, existing studies primarily utilize static historical data
and often overlook dynamic, real-time contextual factors crucial for precise
prediction, particularly in densely populated Indian cities. This research
addresses these gaps by integrating real-time contextual variables such as
traffic density, weather conditions, local events, and geospatial data
(restaurant and delivery location coordinates) into predictive models. We
systematically compare various machine learning algorithms, including Linear
Regression, Decision Trees, Bagging, Random Forest, XGBoost, and LightGBM, on a
comprehensive food delivery dataset specific to Indian urban contexts. Rigorous
data preprocessing and feature selection significantly enhanced model
performance. Experimental results demonstrate that the LightGBM model achieves
superior predictive accuracy, with an R2 score of 0.76 and Mean Squared Error
(MSE) of 20.59, outperforming traditional baseline approaches. Our study thus
provides actionable insights for improving logistics strategies in complex
urban environments. The complete methodology and code are publicly available
for reproducibility and further research.
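A minimal LightGBM regression pipeline of the kind evaluated above might look as follows; the feature columns and synthetic data are placeholders, and the authors' actual implementation is in the linked repository.

import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# placeholder data standing in for the real delivery dataset
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "distance_km": rng.uniform(0.5, 20, n),
    "traffic_density": rng.integers(0, 4, n),   # 0 = low ... 3 = jam
    "weather": rng.integers(0, 3, n),           # 0 = clear, 1 = rain, 2 = storm
    "prep_time_min": rng.uniform(5, 30, n),
})
df["delivery_time_min"] = (
    5 + 2.5 * df.distance_km + 4 * df.traffic_density
    + 3 * df.weather + 0.8 * df.prep_time_min + rng.normal(0, 4, n)
)

X, y = df.drop(columns="delivery_time_min"), df["delivery_time_min"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2 :", round(r2_score(y_te, pred), 3))
print("MSE:", round(mean_squared_error(y_te, pred), 2))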
|
2503.15191 | Hyunjun Kim He | Sejong Kim, Hyunseo Song, Hyunwoo Seo, Hyunjun Kim | Optimizing Retrieval Strategies for Financial Question Answering
Documents in Retrieval-Augmented Generation Systems | 15 pages, 3 figures, 11 tables. Accepted at ICLR 2025 Workshop on
Advances in Financial AI. Code available at
https://github.com/seohyunwoo-0407/GAR | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Retrieval-Augmented Generation (RAG) has emerged as a promising framework to
mitigate hallucinations in Large Language Models (LLMs), yet its overall
performance is dependent on the underlying retrieval system. In the finance
domain, documents such as 10-K reports pose distinct challenges due to
domain-specific vocabulary and multi-hierarchical tabular data. In this work,
we introduce an efficient, end-to-end RAG pipeline that enhances retrieval for
financial documents through a three-phase approach: pre-retrieval, retrieval,
and post-retrieval. In the pre-retrieval phase, various query and corpus
preprocessing techniques are employed to enrich input data. During the
retrieval phase, we fine-tuned state-of-the-art (SOTA) embedding models with
domain-specific knowledge and implemented a hybrid retrieval strategy that
combines dense and sparse representations. Finally, the post-retrieval phase
leverages Direct Preference Optimization (DPO) training and document selection
methods to further refine the results. Evaluations on seven financial question
answering datasets-FinDER, FinQABench, FinanceBench, TATQA, FinQA, ConvFinQA,
and MultiHiertt-demonstrate substantial improvements in retrieval performance,
leading to more accurate and contextually appropriate generation. These
findings highlight the critical role of tailored retrieval techniques in
advancing the effectiveness of RAG systems for financial applications. A fully
replicable pipeline is available on GitHub:
https://github.com/seohyunwoo-0407/GAR.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:21:49 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Kim",
"Sejong",
""
],
[
"Song",
"Hyunseo",
""
],
[
"Seo",
"Hyunwoo",
""
],
[
"Kim",
"Hyunjun",
""
]
] | TITLE: Optimizing Retrieval Strategies for Financial Question Answering
Documents in Retrieval-Augmented Generation Systems
ABSTRACT: Retrieval-Augmented Generation (RAG) has emerged as a promising framework to
mitigate hallucinations in Large Language Models (LLMs), yet its overall
performance is dependent on the underlying retrieval system. In the finance
domain, documents such as 10-K reports pose distinct challenges due to
domain-specific vocabulary and multi-hierarchical tabular data. In this work,
we introduce an efficient, end-to-end RAG pipeline that enhances retrieval for
financial documents through a three-phase approach: pre-retrieval, retrieval,
and post-retrieval. In the pre-retrieval phase, various query and corpus
preprocessing techniques are employed to enrich input data. During the
retrieval phase, we fine-tuned state-of-the-art (SOTA) embedding models with
domain-specific knowledge and implemented a hybrid retrieval strategy that
combines dense and sparse representations. Finally, the post-retrieval phase
leverages Direct Preference Optimization (DPO) training and document selection
methods to further refine the results. Evaluations on seven financial question
answering datasets-FinDER, FinQABench, FinanceBench, TATQA, FinQA, ConvFinQA,
and MultiHiertt-demonstrate substantial improvements in retrieval performance,
leading to more accurate and contextually appropriate generation. These
findings highlight the critical role of tailored retrieval techniques in
advancing the effectiveness of RAG systems for financial applications. A fully
replicable pipeline is available on GitHub:
https://github.com/seohyunwoo-0407/GAR.
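As a rough sketch of the hybrid retrieval step (a weighted combination of dense and sparse relevance scores), the snippet below uses TF-IDF for the sparse side and a stand-in hashing encoder for the dense side; the fine-tuned embedding models, DPO training, and document selection of the paper are not reproduced, and all names are assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def embed(texts, dim=64, seed=0):
    """Stand-in dense encoder: hashing-based pseudo-embeddings (replace with a real model)."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((2 ** 16, dim))
    vecs = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            vecs[i] += proj[hash(tok) % (2 ** 16)]
    norms = np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9
    return vecs / norms

def hybrid_search(query, docs, alpha=0.5, top_k=3):
    """Combine sparse (TF-IDF cosine) and dense scores: alpha*dense + (1-alpha)*sparse."""
    tfidf = TfidfVectorizer().fit(docs)
    sparse = (tfidf.transform([query]) @ tfidf.transform(docs).T).toarray()[0]
    dense = embed(docs) @ embed([query])[0]
    score = alpha * dense + (1 - alpha) * sparse
    return sorted(zip(score, docs), reverse=True)[:top_k]

docs = ["Total revenue increased 12% year over year.",
        "The 10-K discusses liquidity and capital resources.",
        "Employee headcount grew in the reporting period."]
for s, d in hybrid_search("What does the 10-K say about liquidity?", docs):
    print(round(float(s), 3), d)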
|
2503.15197 | Feifei Li | Feifei Li, Mi Zhang, Yiming Sun and Min Yang | Detect-and-Guide: Self-regulation of Diffusion Models for Safe
Text-to-Image Generation via Guideline Token Optimization | CVPR25 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-image diffusion models have achieved state-of-the-art results in
synthesis tasks; however, there is a growing concern about their potential
misuse in creating harmful content. To mitigate these risks, post-hoc model
intervention techniques, such as concept unlearning and safety guidance, have
been developed. However, fine-tuning model weights or adapting the hidden
states of the diffusion model operates in an uninterpretable way, making it
unclear which part of the intermediate variables is responsible for unsafe
generation. These interventions severely affect the sampling trajectory when
erasing harmful concepts from complex, multi-concept prompts, thus hindering
their practical use in real-world settings. In this work, we propose the safe
generation framework Detect-and-Guide (DAG), leveraging the internal knowledge
of diffusion models to perform self-diagnosis and fine-grained self-regulation
during the sampling process. DAG first detects harmful concepts from noisy
latents using refined cross-attention maps of optimized tokens, then applies
safety guidance with adaptive strength and editing regions to negate unsafe
generation. The optimization only requires a small annotated dataset and can
provide precise detection maps with generalizability and concept specificity.
Moreover, DAG does not require fine-tuning of diffusion models, and therefore
introduces no loss to their generation diversity. Experiments on erasing sexual
content show that DAG achieves state-of-the-art safe generation performance,
balancing harmfulness mitigation and text-following performance on
multi-concept real-world prompts.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:37:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Feifei",
""
],
[
"Zhang",
"Mi",
""
],
[
"Sun",
"Yiming",
""
],
[
"Yang",
"Min",
""
]
] | TITLE: Detect-and-Guide: Self-regulation of Diffusion Models for Safe
Text-to-Image Generation via Guideline Token Optimization
ABSTRACT: Text-to-image diffusion models have achieved state-of-the-art results in
synthesis tasks; however, there is a growing concern about their potential
misuse in creating harmful content. To mitigate these risks, post-hoc model
intervention techniques, such as concept unlearning and safety guidance, have
been developed. However, fine-tuning model weights or adapting the hidden
states of the diffusion model operates in an uninterpretable way, making it
unclear which part of the intermediate variables is responsible for unsafe
generation. These interventions severely affect the sampling trajectory when
erasing harmful concepts from complex, multi-concept prompts, thus hindering
their practical use in real-world settings. In this work, we propose the safe
generation framework Detect-and-Guide (DAG), leveraging the internal knowledge
of diffusion models to perform self-diagnosis and fine-grained self-regulation
during the sampling process. DAG first detects harmful concepts from noisy
latents using refined cross-attention maps of optimized tokens, then applies
safety guidance with adaptive strength and editing regions to negate unsafe
generation. The optimization only requires a small annotated dataset and can
provide precise detection maps with generalizability and concept specificity.
Moreover, DAG does not require fine-tuning of diffusion models, and therefore
introduces no loss to their generation diversity. Experiments on erasing sexual
content show that DAG achieves state-of-the-art safe generation performance,
balancing harmfulness mitigation and text-following performance on
multi-concept real-world prompts.
|
2503.15210 | Wenxing Guo | Wenxing Guo, Jinhan Xie, Jianya Lu, Bei jiang, Hongsheng Dai, Linglong
Kong | Online federated learning framework for classification | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we develop a novel online federated learning framework for
classification, designed to handle streaming data from multiple clients while
ensuring data privacy and computational efficiency. Our method leverages the
generalized distance-weighted discriminant technique, making it robust to both
homogeneous and heterogeneous data distributions across clients. In particular,
we develop a new optimization algorithm based on the Majorization-Minimization
principle, integrated with a renewable estimation procedure, enabling efficient
model updates without full retraining. We provide a theoretical guarantee for
the convergence of our estimator, proving its consistency and asymptotic
normality under standard regularity conditions. In addition, we establish that
our method achieves Bayesian risk consistency, ensuring its reliability for
classification tasks in federated environments. We further incorporate
differential privacy mechanisms to enhance data security, protecting client
information while maintaining model performance. Extensive numerical
experiments on both simulated and real-world datasets demonstrate that our
approach delivers high classification accuracy, significant computational
efficiency gains, and substantial savings in data storage requirements compared
to existing methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:50:19 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Guo",
"Wenxing",
""
],
[
"Xie",
"Jinhan",
""
],
[
"Lu",
"Jianya",
""
],
[
"jiang",
"Bei",
""
],
[
"Dai",
"Hongsheng",
""
],
[
"Kong",
"Linglong",
""
]
] | TITLE: Online federated learning framework for classification
ABSTRACT: In this paper, we develop a novel online federated learning framework for
classification, designed to handle streaming data from multiple clients while
ensuring data privacy and computational efficiency. Our method leverages the
generalized distance-weighted discriminant technique, making it robust to both
homogeneous and heterogeneous data distributions across clients. In particular,
we develop a new optimization algorithm based on the Majorization-Minimization
principle, integrated with a renewable estimation procedure, enabling efficient
model updates without full retraining. We provide a theoretical guarantee for
the convergence of our estimator, proving its consistency and asymptotic
normality under standard regularity conditions. In addition, we establish that
our method achieves Bayesian risk consistency, ensuring its reliability for
classification tasks in federated environments. We further incorporate
differential privacy mechanisms to enhance data security, protecting client
information while maintaining model performance. Extensive numerical
experiments on both simulated and real-world datasets demonstrate that our
approach delivers high classification accuracy, significant computational
efficiency gains, and substantial savings in data storage requirements compared
to existing methods.
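The MM-based estimator itself is beyond a short snippet, but the renewable-estimation idea, updating a model from streaming or per-client summary statistics without revisiting raw data, can be illustrated with a ridge-style linear classifier; this is a conceptual sketch rather than the paper's method, and the added Gaussian noise only gestures at the differential-privacy component.

import numpy as np

class RenewableLinearClassifier:
    """Keeps only running sufficient statistics (X'X, X'y); raw data is never stored."""
    def __init__(self, n_features, ridge=1.0):
        self.xtx = ridge * np.eye(n_features)
        self.xty = np.zeros(n_features)

    def update_from_client(self, X, y, noise_scale=0.0, rng=None):
        # client computes local summaries (optionally noised) and discards its raw batch
        xtx, xty = X.T @ X, X.T @ y
        if noise_scale > 0:
            rng = rng or np.random.default_rng()
            xtx = xtx + rng.normal(0, noise_scale, xtx.shape)
            xty = xty + rng.normal(0, noise_scale, xty.shape)
        self.xtx += xtx
        self.xty += xty

    def coef(self):
        return np.linalg.solve(self.xtx, self.xty)

    def predict(self, X):
        return (X @ self.coef() > 0).astype(int)

rng = np.random.default_rng(1)
clf = RenewableLinearClassifier(n_features=5)
w_true = rng.standard_normal(5)
for _ in range(10):                        # ten clients / streaming batches
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ w_true)                # labels in {-1, +1}
    clf.update_from_client(X, y, noise_scale=0.01, rng=rng)
X_test = rng.standard_normal((500, 5))
acc = np.mean(clf.predict(X_test) == (X_test @ w_true > 0).astype(int))
print("test accuracy:", round(float(acc), 3))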
|
2503.15221 | Josu\'e P\'erez Sabater | Rodrigo Oliver, Josu\'e P\'erez-Sabater, Leire Paz-Arbaizar, Alejandro
Lancho, Antonio Art\'es, Pablo M. Olmos | A Foundation Model for Patient Behavior Monitoring and Suicide Detection | 10 pages (31 with appendices), 6 figures (13 with appendices);
submitted to UAI 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Foundation models (FMs) have achieved remarkable success across various
domains, yet their adoption in healthcare remains limited. While significant
advances have been made in medical imaging, genetic biomarkers, and time series
from electronic health records, the potential of FMs for patient behavior
monitoring through wearable devices remains underexplored. These datasets are
inherently heterogeneous, multisource, and often exhibit high rates of missing
data, posing unique challenges. This paper introduces a novel FM based on a
modified vector quantized variational autoencoder (VQ-VAE), specifically
designed to process real-world data from wearable devices. We demonstrate that
our pretrained FM, trained on a broad cohort of psychiatric patients, performs
downstream tasks via its latent representation without fine-tuning on a
held-out cohort of suicidal patients. To illustrate this, we develop a
probabilistic change-point detection algorithm for suicide detection and
demonstrate the FM's effectiveness in predicting emotional states. Our results
show that the discrete latent structure of the VQ-VAE outperforms a
state-of-the-art Informer architecture in unsupervised suicide detection, while
matching its performance in supervised emotion prediction when the latent
dimensionality is increased, though at the cost of reduced unsupervised
accuracy. This trade-off highlights the need for future FMs to integrate hybrid
discrete-continuous structures for balanced performance across tasks.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:01:16 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Oliver",
"Rodrigo",
""
],
[
"Pérez-Sabater",
"Josué",
""
],
[
"Paz-Arbaizar",
"Leire",
""
],
[
"Lancho",
"Alejandro",
""
],
[
"Artés",
"Antonio",
""
],
[
"Olmos",
"Pablo M.",
""
]
] | TITLE: A Foundation Model for Patient Behavior Monitoring and Suicide Detection
ABSTRACT: Foundation models (FMs) have achieved remarkable success across various
domains, yet their adoption in healthcare remains limited. While significant
advances have been made in medical imaging, genetic biomarkers, and time series
from electronic health records, the potential of FMs for patient behavior
monitoring through wearable devices remains underexplored. These datasets are
inherently heterogeneous, multisource, and often exhibit high rates of missing
data, posing unique challenges. This paper introduces a novel FM based on a
modified vector quantized variational autoencoder (VQ-VAE), specifically
designed to process real-world data from wearable devices. We demonstrate that
our pretrained FM, trained on a broad cohort of psychiatric patients, performs
downstream tasks via its latent representation without fine-tuning on a
held-out cohort of suicidal patients. To illustrate this, we develop a
probabilistic change-point detection algorithm for suicide detection and
demonstrate the FM's effectiveness in predicting emotional states. Our results
show that the discrete latent structure of the VQ-VAE outperforms a
state-of-the-art Informer architecture in unsupervised suicide detection, while
matching its performance in supervised emotion prediction when the latent
dimensionality is increased, though at the cost of reduced unsupervised
accuracy. This trade-off highlights the need for future FMs to integrate hybrid
discrete-continuous structures for balanced performance across tasks.
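As a toy stand-in for the probabilistic change-point detector (not the paper's algorithm), one can score each time step of the discrete latent-code stream by the divergence between code-usage histograms in two adjacent windows:

import numpy as np

def histogram(codes, n_codes):
    h = np.bincount(codes, minlength=n_codes).astype(float)
    return (h + 1e-6) / (h.sum() + 1e-6 * n_codes)   # smoothed, normalized

def change_scores(codes, n_codes, window=50):
    """Jensen-Shannon-style score between the histograms before and after each time step."""
    scores = np.zeros(len(codes))
    for t in range(window, len(codes) - window):
        p = histogram(codes[t - window:t], n_codes)
        q = histogram(codes[t:t + window], n_codes)
        m = 0.5 * (p + q)
        scores[t] = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
    return scores

rng = np.random.default_rng(0)
n_codes = 16
# synthetic code stream whose distribution shifts at t=300 (stand-in for a behavioural change)
codes = np.concatenate([rng.integers(0, 8, 300), rng.integers(8, 16, 300)])
scores = change_scores(codes, n_codes)
print("most likely change point:", int(np.argmax(scores)))   # close to 300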
|
2503.15234 | Wenlong Yu | Wenlong Yu, Qilong Wang, Chuang Liu, Dong Li, Qinghua Hu | CoE: Chain-of-Explanation via Automatic Visual Concept Circuit
Description and Polysemanticity Quantification | Accepted by CVPR2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explainability is a critical factor influencing the wide deployment of deep
vision models (DVMs). Concept-based post-hoc explanation methods can provide
both global and local insights into model decisions. However, current methods
in this field face challenges in that they are inflexible to automatically
construct accurate and sufficient linguistic explanations for global concepts
and local circuits. Particularly, the intrinsic polysemanticity in semantic
Visual Concepts (VCs) impedes the interpretability of concepts and DVMs, which
is severely underestimated. In this paper, we propose a Chain-of-Explanation
(CoE) approach to address these issues. Specifically, CoE automates the
decoding and description of VCs to construct global concept explanation
datasets. Further, to alleviate the effect of polysemanticity on model
explainability, we design a concept polysemanticity disentanglement and
filtering mechanism to distinguish the most contextually relevant concept
atoms. Besides, a Concept Polysemanticity Entropy (CPE), as a measure of model
interpretability, is formulated to quantify the degree of concept uncertainty.
The modeling of deterministic concepts is upgraded to uncertain concept atom
distributions. Finally, CoE automatically enables linguistic local explanations
of the decision-making process of DVMs by tracing the concept circuit. GPT-4o
and human-based experiments demonstrate the effectiveness of CPE and the
superiority of CoE, achieving an average absolute improvement of 36% in terms
of explainability scores.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:13:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yu",
"Wenlong",
""
],
[
"Wang",
"Qilong",
""
],
[
"Liu",
"Chuang",
""
],
[
"Li",
"Dong",
""
],
[
"Hu",
"Qinghua",
""
]
] | TITLE: CoE: Chain-of-Explanation via Automatic Visual Concept Circuit
Description and Polysemanticity Quantification
ABSTRACT: Explainability is a critical factor influencing the wide deployment of deep
vision models (DVMs). Concept-based post-hoc explanation methods can provide
both global and local insights into model decisions. However, current methods
in this field face challenges in that they are inflexible to automatically
construct accurate and sufficient linguistic explanations for global concepts
and local circuits. Particularly, the intrinsic polysemanticity in semantic
Visual Concepts (VCs) impedes the interpretability of concepts and DVMs, which
is severely underestimated. In this paper, we propose a Chain-of-Explanation
(CoE) approach to address these issues. Specifically, CoE automates the
decoding and description of VCs to construct global concept explanation
datasets. Further, to alleviate the effect of polysemanticity on model
explainability, we design a concept polysemanticity disentanglement and
filtering mechanism to distinguish the most contextually relevant concept
atoms. Besides, a Concept Polysemanticity Entropy (CPE), as a measure of model
interpretability, is formulated to quantify the degree of concept uncertainty.
The modeling of deterministic concepts is upgraded to uncertain concept atom
distributions. Finally, CoE automatically enables linguistic local explanations
of the decision-making process of DVMs by tracing the concept circuit. GPT-4o
and human-based experiments demonstrate the effectiveness of CPE and the
superiority of CoE, achieving an average absolute improvement of 36% in terms
of explainability scores.
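The paper gives the precise CPE definition; one natural formalization, used here purely as a hedged illustration and not claimed to match the paper's formula, is the Shannon entropy of a concept's distribution over concept atoms:

import numpy as np

def concept_polysemanticity_entropy(atom_probs):
    """Shannon entropy (in bits) of a concept's distribution over concept atoms.
    A one-hot distribution (monosemantic concept) gives 0; a uniform one is maximal."""
    p = np.asarray(atom_probs, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(concept_polysemanticity_entropy([1.0, 0.0, 0.0]))          # 0.0 (monosemantic)
print(concept_polysemanticity_entropy([0.25, 0.25, 0.25, 0.25])) # 2.0 (highly polysemantic)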
|
2503.15235 | Chentian Wei | Chentian Wei, Jiewei Chen, Jinzhu Xu | Exploring Large Language Models for Word Games: Who is the Spy? | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Word games hold significant research value for natural language processing
(NLP), game theory, and related fields due to their rule-based and situational
nature. This study explores how large language models (LLMs) can be effectively
involved in word games and proposes a training-free framework. "Shei Shi Wo Di",
or "Who is the Spy" in English, is a classic word game. Using this game as an
example, we introduce a Chain-of-Thought (CoT)-based scheduling framework to
enable LLMs to achieve excellent performance in tasks such as inferring role
words and disguising their identities. We evaluate the framework's performance
based on game success rates and the accuracy of the LLM agents' analytical
results. Experimental results affirm the framework's effectiveness,
demonstrating notable improvements in LLM performance across multiple datasets.
This work highlights the potential of LLMs in mastering situational reasoning
and social interactions within structured game environments. Our code is
publicly available at https://github.com/ct-wei/Who-is-The-Spy.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:13:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wei",
"Chentian",
""
],
[
"Chen",
"Jiewei",
""
],
[
"Xu",
"Jinzhu",
""
]
] | TITLE: Exploring Large Language Models for Word Games: Who is the Spy?
ABSTRACT: Word games hold significant research value for natural language processing
(NLP), game theory, and related fields due to their rule-based and situational
nature. This study explores how large language models (LLMs) can be effectively
involved in word games and proposes a training-free framework. "Shei Shi Wo Di",
or "Who is the Spy" in English, is a classic word game. Using this game as an
example, we introduce a Chain-of-Thought (CoT)-based scheduling framework to
enable LLMs to achieve excellent performance in tasks such as inferring role
words and disguising their identities. We evaluate the framework's performance
based on game success rates and the accuracy of the LLM agents' analytical
results. Experimental results affirm the framework's effectiveness,
demonstrating notable improvements in LLM performance across multiple datasets.
This work highlights the potential of LLMs in mastering situational reasoning
and social interactions within structured game environments. Our code is
publicly available at https://github.com/ct-wei/Who-is-The-Spy.
|
2503.15237 | Liyun Zhang | Liyun Zhang, Zheng Lian, Hong Liu, Takanori Takebe, Yuta Nakashima | QuMATL: Query-based Multi-annotator Tendency Learning | 12 pages | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Different annotators often assign different labels to the same sample due to
backgrounds or preferences, and such labeling patterns are referred to as
tendency. In multi-annotator scenarios, we introduce a novel task called
Multi-annotator Tendency Learning (MATL), which aims to capture each
annotator's tendency. Unlike traditional tasks that prioritize
consensus-oriented learning, which averages out annotator differences and leads
to a loss of tendency information, MATL emphasizes learning each annotator's
tendency and better preserves tendency information. To this end, we propose an
efficient baseline method, Query-based Multi-annotator Tendency Learning
(QuMATL), which uses a lightweight query to represent each annotator for
tendency modeling. It saves the cost of building separate conventional models
for each annotator and leverages shared learnable queries to capture
inter-annotator correlations as an additional hidden supervisory signal that
enhances modeling performance. Meanwhile, we provide a new metric, Difference
of Inter-annotator Consistency (DIC), to evaluate how effectively models
preserve annotators' tendency information.
Additionally, we contribute two large-scale datasets, STREET and AMER,
providing averages of 4300 and 3118 per-annotator labels, respectively.
Extensive experiments verified the effectiveness of our QuMATL.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:14:57 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Liyun",
""
],
[
"Lian",
"Zheng",
""
],
[
"Liu",
"Hong",
""
],
[
"Takebe",
"Takanori",
""
],
[
"Nakashima",
"Yuta",
""
]
] | TITLE: QuMATL: Query-based Multi-annotator Tendency Learning
ABSTRACT: Different annotators often assign different labels to the same sample due to
backgrounds or preferences, and such labeling patterns are referred to as
tendency. In multi-annotator scenarios, we introduce a novel task called
Multi-annotator Tendency Learning (MATL), which aims to capture each
annotator's tendency. Unlike traditional tasks that prioritize
consensus-oriented learning, which averages out annotator differences and leads
to a loss of tendency information, MATL emphasizes learning each annotator's
tendency and better preserves tendency information. To this end, we propose an
efficient baseline method, Query-based Multi-annotator Tendency Learning
(QuMATL), which uses a lightweight query to represent each annotator for
tendency modeling. It saves the cost of building separate conventional models
for each annotator and leverages shared learnable queries to capture
inter-annotator correlations as an additional hidden supervisory signal that
enhances modeling performance. Meanwhile, we provide a new metric, Difference
of Inter-annotator Consistency (DIC), to evaluate how effectively models
preserve annotators' tendency information.
Additionally, we contribute two large-scale datasets, STREET and AMER,
providing averages of 4300 and 3118 per-annotator labels, respectively.
Extensive experiments verified the effectiveness of our QuMATL.
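A minimal PyTorch sketch of the query-based design, one learnable query per annotator attending over shared backbone features, is shown below; the dimensions, the shared classifier, and all names are assumptions for illustration rather than the released QuMATL code.

import torch
import torch.nn as nn

class AnnotatorQueryHead(nn.Module):
    """Shared features, one learnable query per annotator; each query pools the
    feature tokens by dot-product attention and predicts that annotator's label."""
    def __init__(self, n_annotators, dim, n_classes):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_annotators, dim) * 0.02)
        self.classifier = nn.Linear(dim, n_classes)   # shared classifier over pooled features

    def forward(self, feats):                  # feats: (batch, tokens, dim)
        attn = torch.einsum("ad,btd->bat", self.queries, feats) / feats.shape[-1] ** 0.5
        attn = attn.softmax(dim=-1)            # (batch, annotators, tokens)
        pooled = torch.einsum("bat,btd->bad", attn, feats)
        return self.classifier(pooled)         # (batch, annotators, classes)

feats = torch.randn(4, 49, 256)                # e.g. a 7x7 backbone feature map, flattened
head = AnnotatorQueryHead(n_annotators=10, dim=256, n_classes=5)
logits = head(feats)
print(logits.shape)                            # torch.Size([4, 10, 5])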
|
2503.15250 | Quentin Nater | Quentin Nater, Mourad Khayati, Jacques Pasquier | ImputeGAP: A Comprehensive Library for Time Series Imputation | null | null | null | null | cs.LG cs.DB | http://creativecommons.org/licenses/by/4.0/ | With the prevalence of sensor failures, imputation--the process of estimating
missing values--has emerged as the cornerstone of time series data preparation.
While numerous imputation algorithms have been developed to address these data
gaps, existing libraries provide limited support. Furthermore, they often lack
the ability to simulate realistic patterns of time series missing data and fail
to account for the impact of imputation on subsequent downstream analysis.
This paper introduces ImputeGAP, a comprehensive library for time series
imputation that supports a diverse range of imputation methods and modular
missing data simulation catering to datasets with varying characteristics. The
library includes extensive customization options, such as automated
hyperparameter tuning, benchmarking, explainability, downstream evaluation, and
compatibility with popular time series frameworks.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:24:20 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Nater",
"Quentin",
""
],
[
"Khayati",
"Mourad",
""
],
[
"Pasquier",
"Jacques",
""
]
] | TITLE: ImputeGAP: A Comprehensive Library for Time Series Imputation
ABSTRACT: With the prevalence of sensor failures, imputation--the process of estimating
missing values--has emerged as the cornerstone of time series data preparation.
While numerous imputation algorithms have been developed to address these data
gaps, existing libraries provide limited support. Furthermore, they often lack
the ability to simulate realistic patterns of time series missing data and fail
to account for the impact of imputation on subsequent downstream analysis.
This paper introduces ImputeGAP, a comprehensive library for time series
imputation that supports a diverse range of imputation methods and modular
missing data simulation catering to datasets with varying characteristics. The
library includes extensive customization options, such as automated
hyperparameter tuning, benchmarking, explainability, downstream evaluation, and
compatibility with popular time series frameworks.
|
2503.15252 | Marcin Lawenda | Marcin Lawenda, Krzesimir Samborski, Kyrylo Khloponin, {\L}ukasz
Szustak | Efficient allocation of image recognition and LLM tasks on multi-GPU
system | null | null | null | null | cs.DC cs.PF | http://creativecommons.org/licenses/by/4.0/ | This work is concerned with the evaluation of the performance of
parallelization of learning and tuning processes for image classification and
large language models. For the machine learning model in image recognition, various
parallelization methods are developed based on different hardware and software
scenarios: simple data parallelism, distributed data parallelism, and
distributed processing. A detailed description of presented strategies is
given, highlighting the challenges and benefits of their application.
Furthermore, the impact of different dataset types on the tuning process of
large language models is investigated. Experiments show to what extent the task
type affects the iteration time in a multi-GPU environment, offering valuable
insights into the optimal data utilization strategies to improve model
performance. Furthermore, this study leverages the built-in parallelization
mechanisms of PyTorch that can facilitate these tasks. Furthermore, performance
profiling is incorporated into the study to thoroughly evaluate the impact of
memory and communication operations during the training/tuning procedure. Test
scenarios are developed and tested with numerous benchmarks on the NVIDIA H100
architecture showing efficiency through selected metrics.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:26:09 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lawenda",
"Marcin",
""
],
[
"Samborski",
"Krzesimir",
""
],
[
"Khloponin",
"Kyrylo",
""
],
[
"Szustak",
"Łukasz",
""
]
] | TITLE: Efficient allocation of image recognition and LLM tasks on multi-GPU
system
ABSTRACT: This work is concerned with the evaluation of the performance of
parallelization of learning and tuning processes for image classification and
large language models. For the machine learning model in image recognition, various
parallelization methods are developed based on different hardware and software
scenarios: simple data parallelism, distributed data parallelism, and
distributed processing. A detailed description of presented strategies is
given, highlighting the challenges and benefits of their application.
Furthermore, the impact of different dataset types on the tuning process of
large language models is investigated. Experiments show to what extent the task
type affects the iteration time in a multi-GPU environment, offering valuable
insights into the optimal data utilization strategies to improve model
performance. Furthermore, this study leverages the built-in parallelization
mechanisms of PyTorch that can facilitate these tasks. Furthermore, performance
profiling is incorporated into the study to thoroughly evaluate the impact of
memory and communication operations during the training/tuning procedure. Test
scenarios are developed and tested with numerous benchmarks on the NVIDIA H100
architecture showing efficiency through selected metrics.
|
2503.15260 | Lei Shi | Lei Shi, Xi Fang, Naiyu Wang, Junxing Zhang | DEPT: Deep Extreme Point Tracing for Ultrasound Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic medical image segmentation plays a crucial role in computer aided
diagnosis. However, fully supervised learning approaches often require
extensive and labor-intensive annotation efforts. To address this challenge,
weakly supervised learning methods, particularly those using extreme points as
supervisory signals, have the potential to offer an effective solution. In this
paper, we introduce Deep Extreme Point Tracing (DEPT) integrated with
Feature-Guided Extreme Point Masking (FGEPM) algorithm for ultrasound image
segmentation. Notably, our method generates pseudo labels by identifying the
lowest-cost path that connects all extreme points on the feature map-based cost
matrix. Additionally, an iterative training strategy is proposed to refine
pseudo labels progressively, enabling continuous network improvement.
Experimental results on two public datasets demonstrate the effectiveness of
our proposed method. The performance of our method approaches that of the fully
supervised method and outperforms several existing weakly supervised methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:32:14 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Shi",
"Lei",
""
],
[
"Fang",
"Xi",
""
],
[
"Wang",
"Naiyu",
""
],
[
"Zhang",
"Junxing",
""
]
] | TITLE: DEPT: Deep Extreme Point Tracing for Ultrasound Image Segmentation
ABSTRACT: Automatic medical image segmentation plays a crucial role in computer aided
diagnosis. However, fully supervised learning approaches often require
extensive and labor-intensive annotation efforts. To address this challenge,
weakly supervised learning methods, particularly those using extreme points as
supervisory signals, have the potential to offer an effective solution. In this
paper, we introduce Deep Extreme Point Tracing (DEPT) integrated with
Feature-Guided Extreme Point Masking (FGEPM) algorithm for ultrasound image
segmentation. Notably, our method generates pseudo labels by identifying the
lowest-cost path that connects all extreme points on the feature map-based cost
matrix. Additionally, an iterative training strategy is proposed to refine
pseudo labels progressively, enabling continuous network improvement.
Experimental results on two public datasets demonstrate the effectiveness of
our proposed method. The performance of our method approaches that of the fully
supervised method and outperforms several existing weakly supervised methods.
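The pseudo-label step, connecting the extreme points by the lowest-cost path over a feature-based cost map, can be sketched with a plain Dijkstra search on a 2D grid; the cost map below is synthetic and the full FGEPM pipeline is not reproduced.

import heapq
import numpy as np

def lowest_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost grid (4-connected); returns the pixel path from start to goal."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# synthetic cost map: cheap along a bright blob boundary region, expensive elsewhere
yy, xx = np.mgrid[0:64, 0:64]
cost = 1.0 + 5.0 * (np.hypot(yy - 32, xx - 32) > 20)
extreme_points = [(32, 12), (12, 32), (32, 52), (52, 32)]   # left, top, right, bottom
contour = []
for a, b in zip(extreme_points, extreme_points[1:] + extreme_points[:1]):
    contour += lowest_cost_path(cost, a, b)
print("pseudo-label contour length:", len(contour))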
|
2503.15264 | Hengrui Kang | Hengrui Kang, Siwei Wen, Zichen Wen, Junyan Ye, Weijia Li, Peilin
Feng, Baichuan Zhou, Bin Wang, Dahua Lin, Linfeng Zhang, Conghui He | LEGION: Learning to Ground and Explain for Synthetic Image Detection | Project Page: https://opendatalab.github.io/LEGION | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancements in generative technology have emerged as a
double-edged sword. While offering powerful tools that enhance convenience,
they also pose significant social concerns. As defenders, current synthetic
image detection methods often lack artifact-level textual interpretability and
are overly focused on image manipulation detection, and current datasets
usually suffer from outdated generators and a lack of fine-grained annotations.
In this paper, we introduce SynthScars, a high-quality and diverse dataset
consisting of 12,236 fully synthetic images with human-expert annotations. It
features 4 distinct image content types, 3 categories of artifacts, and
fine-grained annotations covering pixel-level segmentation, detailed textual
explanations, and artifact category labels. Furthermore, we propose LEGION
(LEarning to Ground and explain for Synthetic Image detectiON), a multimodal
large language model (MLLM)-based image forgery analysis framework that
integrates artifact detection, segmentation, and explanation. Building upon
this capability, we further explore LEGION as a controller, integrating it into
image refinement pipelines to guide the generation of higher-quality and more
realistic images. Extensive experiments show that LEGION outperforms existing
methods across multiple benchmarks, particularly surpassing the second-best
traditional expert on SynthScars by 3.31% in mIoU and 7.75% in F1 score.
Moreover, the refined images generated under its guidance exhibit stronger
alignment with human preferences. The code, model, and dataset will be
released.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:37:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Kang",
"Hengrui",
""
],
[
"Wen",
"Siwei",
""
],
[
"Wen",
"Zichen",
""
],
[
"Ye",
"Junyan",
""
],
[
"Li",
"Weijia",
""
],
[
"Feng",
"Peilin",
""
],
[
"Zhou",
"Baichuan",
""
],
[
"Wang",
"Bin",
""
],
[
"Lin",
"Dahua",
""
],
[
"Zhang",
"Linfeng",
""
],
[
"He",
"Conghui",
""
]
] | TITLE: LEGION: Learning to Ground and Explain for Synthetic Image Detection
ABSTRACT: The rapid advancements in generative technology have emerged as a
double-edged sword. While offering powerful tools that enhance convenience,
they also pose significant social concerns. As defenders, current synthetic
image detection methods often lack artifact-level textual interpretability and
are overly focused on image manipulation detection, and current datasets
usually suffer from outdated generators and a lack of fine-grained annotations.
In this paper, we introduce SynthScars, a high-quality and diverse dataset
consisting of 12,236 fully synthetic images with human-expert annotations. It
features 4 distinct image content types, 3 categories of artifacts, and
fine-grained annotations covering pixel-level segmentation, detailed textual
explanations, and artifact category labels. Furthermore, we propose LEGION
(LEarning to Ground and explain for Synthetic Image detectiON), a multimodal
large language model (MLLM)-based image forgery analysis framework that
integrates artifact detection, segmentation, and explanation. Building upon
this capability, we further explore LEGION as a controller, integrating it into
image refinement pipelines to guide the generation of higher-quality and more
realistic images. Extensive experiments show that LEGION outperforms existing
methods across multiple benchmarks, particularly surpassing the second-best
traditional expert on SynthScars by 3.31% in mIoU and 7.75% in F1 score.
Moreover, the refined images generated under its guidance exhibit stronger
alignment with human preferences. The code, model, and dataset will be
released.
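For reference, the two reported segmentation metrics (IoU averaged over masks, and F1) can be computed for binary artifact masks as follows; this is a generic implementation, unrelated to the authors' evaluation code.

import numpy as np

def iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else np.logical_and(pred, gt).sum() / union

def f1(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    return 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(0)
gt_masks = [rng.random((64, 64)) > 0.7 for _ in range(8)]
pred_masks = [np.logical_and(m, rng.random((64, 64)) > 0.2) for m in gt_masks]  # noisy predictions
print("mIoU:", round(float(np.mean([iou(p, g) for p, g in zip(pred_masks, gt_masks)])), 3))
print("F1  :", round(float(np.mean([f1(p, g) for p, g in zip(pred_masks, gt_masks)])), 3))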
|
2503.15272 | David Wan | David Wan, Justin Chih-Yao Chen, Elias Stengel-Eskin, Mohit Bansal | MAMM-Refine: A Recipe for Improving Faithfulness in Generation with
Multi-Agent Collaboration | NAACL 2025, 18 pages. Code:
https://github.com/meetdavidwan/mammrefine | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent collaboration among models has shown promise in reasoning tasks
but is underexplored in long-form generation tasks like summarization and
question-answering. We extend multi-agent multi-model reasoning to generation,
specifically to improving faithfulness through refinement, i.e., revising
model-generated outputs to remove factual inconsistencies. We investigate how
iterative collaboration among multiple instances and types of large language
models (LLMs) enhances subtasks in the refinement process, such as error
detection, critiquing unfaithful sentences, and making corrections based on
critiques. We design intrinsic evaluations for each subtask, with our findings
indicating that both multi-agent (multiple instances) and multi-model (diverse
LLM types) approaches benefit error detection and critiquing. Additionally,
reframing critiquing and refinement as reranking rather than generation tasks
improves multi-agent performance. We consolidate these insights into a final
"recipe" called Multi-Agent Multi-Model Refinement (MAMM-Refine), where
multi-agent and multi-model collaboration significantly boosts performance on
three summarization datasets as well as on long-form question answering,
demonstrating the effectiveness and generalizability of our recipe.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:46:53 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Wan",
"David",
""
],
[
"Chen",
"Justin Chih-Yao",
""
],
[
"Stengel-Eskin",
"Elias",
""
],
[
"Bansal",
"Mohit",
""
]
] | TITLE: MAMM-Refine: A Recipe for Improving Faithfulness in Generation with
Multi-Agent Collaboration
ABSTRACT: Multi-agent collaboration among models has shown promise in reasoning tasks
but is underexplored in long-form generation tasks like summarization and
question-answering. We extend multi-agent multi-model reasoning to generation,
specifically to improving faithfulness through refinement, i.e., revising
model-generated outputs to remove factual inconsistencies. We investigate how
iterative collaboration among multiple instances and types of large language
models (LLMs) enhances subtasks in the refinement process, such as error
detection, critiquing unfaithful sentences, and making corrections based on
critiques. We design intrinsic evaluations for each subtask, with our findings
indicating that both multi-agent (multiple instances) and multi-model (diverse
LLM types) approaches benefit error detection and critiquing. Additionally,
reframing critiquing and refinement as reranking rather than generation tasks
improves multi-agent performance. We consolidate these insights into a final
"recipe" called Multi-Agent Multi-Model Refinement (MAMM-Refine), where
multi-agent and multi-model collaboration significantly boosts performance on
three summarization datasets as well as on long-form question answering,
demonstrating the effectiveness and generalizability of our recipe.
|
2503.15284 | Hui Yuan | Yuanchao Yue, Hui Yuan, Qinglong Miao, Xiaolong Mao, Raouf Hamzaoui,
Peter Eisert | EdgeRegNet: Edge Feature-based Multimodal Registration Network between
Images and LiDAR Point Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cross-modal data registration has long been a critical task in computer
vision, with extensive applications in autonomous driving and robotics.
Accurate and robust registration methods are essential for aligning data from
different modalities, forming the foundation for multimodal sensor data fusion
and enhancing perception systems' accuracy and reliability. The registration
task between 2D images captured by cameras and 3D point clouds captured by
Light Detection and Ranging (LiDAR) sensors is usually treated as a visual pose
estimation problem. High-dimensional feature similarities from different
modalities are leveraged to identify pixel-point correspondences, followed by
pose estimation techniques using least squares methods. However, existing
approaches often resort to downsampling the original point cloud and image data
due to computational constraints, inevitably leading to a loss in precision.
Additionally, high-dimensional features extracted using different feature
extractors from various modalities require specific techniques to mitigate
cross-modal differences for effective matching. To address these challenges, we
propose a method that uses edge information from the original point clouds and
images for cross-modal registration. We retain crucial information from the
original data by extracting edge points and pixels, enhancing registration
accuracy while maintaining computational efficiency. The use of edge points and
edge pixels allows us to introduce an attention-based feature exchange block to
eliminate cross-modal disparities. Furthermore, we incorporate an optimal
matching layer to improve correspondence identification. We validate the
accuracy of our method on the KITTI and nuScenes datasets, demonstrating its
state-of-the-art performance.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:03:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yue",
"Yuanchao",
""
],
[
"Yuan",
"Hui",
""
],
[
"Miao",
"Qinglong",
""
],
[
"Mao",
"Xiaolong",
""
],
[
"Hamzaoui",
"Raouf",
""
],
[
"Eisert",
"Peter",
""
]
] | TITLE: EdgeRegNet: Edge Feature-based Multimodal Registration Network between
Images and LiDAR Point Clouds
ABSTRACT: Cross-modal data registration has long been a critical task in computer
vision, with extensive applications in autonomous driving and robotics.
Accurate and robust registration methods are essential for aligning data from
different modalities, forming the foundation for multimodal sensor data fusion
and enhancing perception systems' accuracy and reliability. The registration
task between 2D images captured by cameras and 3D point clouds captured by
Light Detection and Ranging (LiDAR) sensors is usually treated as a visual pose
estimation problem. High-dimensional feature similarities from different
modalities are leveraged to identify pixel-point correspondences, followed by
pose estimation techniques using least squares methods. However, existing
approaches often resort to downsampling the original point cloud and image data
due to computational constraints, inevitably leading to a loss in precision.
Additionally, high-dimensional features extracted using different feature
extractors from various modalities require specific techniques to mitigate
cross-modal differences for effective matching. To address these challenges, we
propose a method that uses edge information from the original point clouds and
images for cross-modal registration. We retain crucial information from the
original data by extracting edge points and pixels, enhancing registration
accuracy while maintaining computational efficiency. The use of edge points and
edge pixels allows us to introduce an attention-based feature exchange block to
eliminate cross-modal disparities. Furthermore, we incorporate an optimal
matching layer to improve correspondence identification. We validate the
accuracy of our method on the KITTI and nuScenes datasets, demonstrating its
state-of-the-art performance.
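Downstream of the matching network, recovering the camera pose from pixel-point correspondences is a standard 2D-3D estimation problem; the sketch below shows that final step with OpenCV's RANSAC PnP solver on synthetic correspondences, plus a Canny call as the kind of image edge extractor one might use. It is a generic illustration under assumed intrinsics, not the paper's pipeline.

import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed camera intrinsics
dist = np.zeros(5)

# synthetic "edge points": 3D LiDAR edge points and their true camera pose
pts3d = rng.uniform([-2, -1, 4], [2, 1, 10], size=(60, 3)).astype(np.float32)
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.20, -0.05, 0.30])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, dist)
pts2d = pts2d.reshape(-1, 2) + rng.normal(0, 0.5, (60, 2))    # noisy "edge pixel" matches

# RANSAC PnP recovers the extrinsic calibration from the correspondences
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d.astype(np.float32), K, dist)
print("recovered rvec:", rvec.ravel().round(3))
print("recovered tvec:", tvec.ravel().round(3))

# image edges (for building such correspondences) can come from e.g. cv2.Canny
gray = (rng.random((480, 640)) * 255).astype(np.uint8)
edges = cv2.Canny(gray, 100, 200)
print("edge pixels detected:", int((edges > 0).sum()))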
|
2503.15285 | Hui Yuan | Yuanchao Yue, Zhengxin Li, Wei Zhang, Hui Yuan | PAPI-Reg: Patch-to-Pixel Solution for Efficient Cross-Modal Registration
between LiDAR Point Cloud and Camera Image | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The primary requirement for cross-modal data fusion is the precise alignment
of data from different sensors. However, the calibration between LiDAR point
clouds and camera images is typically time-consuming and needs an external
calibration board or specific environmental features. Cross-modal registration
effectively solves this problem by aligning the data directly without requiring
external calibration. However, due to the domain gap between the point cloud
and the image, existing methods rarely achieve satisfactory registration
accuracy while maintaining real-time performance. To address this issue, we
propose a framework that projects point clouds into several 2D representations
for matching with camera images, which not only leverages the geometric
characteristic of LiDAR point clouds more effectively but also bridges the
domain gap between the point cloud and the image. Moreover, to tackle the
challenges of cross-modal differences and the limited overlap between LiDAR
point clouds and images in the image matching task, we introduce a multi-scale
feature extraction network to effectively extract features from both camera
images and the projection maps of the LiDAR point cloud. Additionally, we propose a
patch-to-pixel matching network to provide more effective supervision and
achieve higher accuracy. We validate the performance of our model through
experiments on the KITTI and nuScenes datasets. Our network achieves real-time
performance and extremely high registration accuracy. On the KITTI dataset, our
model achieves a registration accuracy rate of over 99\%.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:04:01 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Yue",
"Yuanchao",
""
],
[
"Li",
"Zhengxin",
""
],
[
"Zhang",
"Wei",
""
],
[
"Yuan",
"Hui",
""
]
] | TITLE: PAPI-Reg: Patch-to-Pixel Solution for Efficient Cross-Modal Registration
between LiDAR Point Cloud and Camera Image
ABSTRACT: The primary requirement for cross-modal data fusion is the precise alignment
of data from different sensors. However, the calibration between LiDAR point
clouds and camera images is typically time-consuming and needs external
calibration board or specific environmental features. Cross-modal registration
effectively solves this problem by aligning the data directly without requiring
external calibration. However, due to the domain gap between the point cloud
and the image, existing methods rarely achieve satisfactory registration
accuracy while maintaining real-time performance. To address this issue, we
propose a framework that projects point clouds into several 2D representations
for matching with camera images, which not only leverages the geometric
characteristic of LiDAR point clouds more effectively but also bridges the
domain gap between the point cloud and the image. Moreover, to tackle the
challenges of cross-modal differences and the limited overlap between LiDAR
point clouds and images in the image matching task, we introduce a multi-scale
feature extraction network to effectively extract features from both camera
images and the projection maps of the LiDAR point cloud. Additionally, we propose a
patch-to-pixel matching network to provide more effective supervision and
achieve higher accuracy. We validate the performance of our model through
experiments on the KITTI and nuScenes datasets. Our network achieves real-time
performance and extremely high registration accuracy. On the KITTI dataset, our
model achieves a registration accuracy rate of over 99\%.
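One common choice for the 2D representations mentioned above is a range image obtained by spherical projection of the LiDAR scan; a generic NumPy sketch of such a projection (not necessarily the projections used in the paper, and with assumed field-of-view parameters) is:

import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N,3) LiDAR point cloud to an HxW range image (meters, 0 = empty)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1) + 1e-9
    yaw = np.arctan2(y, x)                          # [-pi, pi]
    pitch = np.arcsin(z / depth)
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).clip(0, h - 1).astype(int)
    image = np.zeros((h, w))
    order = np.argsort(-depth)                      # draw far points first, near points overwrite
    image[v[order], u[order]] = depth[order]
    return image

rng = np.random.default_rng(0)
cloud = rng.uniform([-40, -40, -2], [40, 40, 1], size=(20000, 3))
range_image = spherical_projection(cloud)
print(range_image.shape, "non-empty pixels:", int((range_image > 0).sum()))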
|
2503.15287 | Daniel Tinoco | Daniel Tinoco, Raquel Menezes, Carlos Baquero | Distributed Generalized Linear Models: A Privacy-Preserving Approach | Total PDF pages: 23 Figures: 7 | null | null | null | stat.CO cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a novel approach to classical linear regression, enabling
model computation from data streams or in a distributed setting while
preserving data privacy in federated environments. We extend this framework to
generalized linear models (GLMs), ensuring scalability and adaptability to
diverse data distributions while maintaining privacy-preserving properties. To
assess the effectiveness of our approach, we conduct numerical studies on both
simulated and real datasets, comparing our method with conventional maximum
likelihood estimation for GLMs using iteratively reweighted least squares. Our
results demonstrate the advantages of the proposed method in distributed and
federated settings.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:07:41 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tinoco",
"Daniel",
""
],
[
"Menezes",
"Raquel",
""
],
[
"Baquero",
"Carlos",
""
]
] | TITLE: Distributed Generalized Linear Models: A Privacy-Preserving Approach
ABSTRACT: This paper presents a novel approach to classical linear regression, enabling
model computation from data streams or in a distributed setting while
preserving data privacy in federated environments. We extend this framework to
generalized linear models (GLMs), ensuring scalability and adaptability to
diverse data distributions while maintaining privacy-preserving properties. To
assess the effectiveness of our approach, we conduct numerical studies on both
simulated and real datasets, comparing our method with conventional maximum
likelihood estimation for GLMs using iteratively reweighted least squares. Our
results demonstrate the advantages of the proposed method in distributed and
federated settings.
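One way to see how a linear model can be computed from data streams or across sites without pooling raw records is to aggregate the sufficient statistics X^T X and X^T y, which simply add up over chunks of data. The sketch below illustrates only that basic building block under an ordinary-least-squares assumption; it is not the paper's estimator, and the GLM/IRLS extension and its privacy-preserving properties are not reproduced here.

```python
import numpy as np

def local_stats(X, y):
    """Per-site (or per-chunk) sufficient statistics for least squares."""
    return X.T @ X, X.T @ y

def aggregate_and_solve(stats, ridge=1e-8):
    """Combine per-site statistics and solve for the coefficients.

    Only the (d, d) and (d,) summaries leave each site, not the raw data.
    """
    XtX = sum(s[0] for s in stats)
    Xty = sum(s[1] for s in stats)
    d = XtX.shape[0]
    return np.linalg.solve(XtX + ridge * np.eye(d), Xty)

# Simulate three sites drawing from the same linear model.
rng = np.random.default_rng(0)
true_beta = np.array([2.0, -1.0, 0.5])
stats = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_beta + 0.1 * rng.normal(size=200)
    stats.append(local_stats(X, y))

print(aggregate_and_solve(stats))  # close to [2.0, -1.0, 0.5]
```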
|
2503.15301 | Huanyu Liu | Jia Li, Hao Zhu, Huanyu Liu, Xianjie Shi, He Zong, Yihong Dong, Kechi
Zhang, Siyuan Jiang, Zhi Jin, Ge Li | aiXcoder-7B-v2: Training LLMs to Fully Utilize the Long Context in
Repository-level Code Completion | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Repository-level code completion aims to complete code based on the long
contexts of the repository. Existing studies extract long contexts from the
repository as inputs and leverage Large Language Models (LLMs) to generate
code. However, we reveal a severe limitation of LLMs, i.e., LLMs may ignore the
information within long contexts in code completion. In other words, even if the
contexts contain useful information (e.g., relevant APIs or similar code), LLMs
may fail to utilize this information. We think this limitation is caused by an
inherent bias in LLMs, i.e., relying on nearby contexts and ignoring long-range
contexts. To address this, we propose a novel fine-tuning approach named CoLT.
The core idea of CoLT is to provide explicit supervision signals, which
emphasize that long-range contexts may hold relevant information. Specifically,
CoLT proposes a reinforcement learning-based training, which explicitly
encourages models to utilize the information within long contexts and punishes
models for ignoring long contexts. To support CoLT, we release CoLT-132K, a
large-scale dataset with 132k samples across four languages, each containing
long-context inputs. We apply CoLT to a popular LLM - aiXcoder-7B and release
aiXcoder-7B-v2. We conduct extensive experiments on CoLT-132K and a public
benchmark - CrossCodeEval. Our experiments yield the following results: 1. Effectiveness.
CoLT substantially improves aiXcoder-7B. aiXcoder-7B-v2 outperforms aiXcoder-7B
by up to 44% in exact match. aiXcoder-7B-v2 becomes the state-of-the-art 7B
model in code completion and even surpasses larger models. 2. Generalizability.
The capability learned by CoLT can generalize to new languages. Besides, CoLT
is model-agnostic and effectively improves multiple LLMs. 3. Enhanced Context
Utilization Capability. CoLT significantly improves the capability of LLMs in
utilizing the relevant information within long contexts.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:22:58 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Jia",
""
],
[
"Zhu",
"Hao",
""
],
[
"Liu",
"Huanyu",
""
],
[
"Shi",
"Xianjie",
""
],
[
"Zong",
"He",
""
],
[
"Dong",
"Yihong",
""
],
[
"Zhang",
"Kechi",
""
],
[
"Jiang",
"Siyuan",
""
],
[
"Jin",
"Zhi",
""
],
[
"Li",
"Ge",
""
]
] | TITLE: aiXcoder-7B-v2: Training LLMs to Fully Utilize the Long Context in
Repository-level Code Completion
ABSTRACT: Repository-level code completion aims to complete code based on the long
contexts of the repository. Existing studies extract long contexts from the
repository as inputs and leverage Large Language Models (LLMs) to generate
code. However, we reveal a severe limitation of LLMs, i.e., LLMs may ignore the
information within long contexts in code completion. In other words, even if the
contexts contain useful information (e.g., relevant APIs or similar code), LLMs
may fail to utilize this information. We think this limitation is caused by an
inherent bias in LLMs, i.e., relying on nearby contexts and ignoring long-range
contexts. To address this, we propose a novel fine-tuning approach named CoLT.
The core idea of CoLT is to provide explicit supervision signals, which
emphasize that long-range contexts may hold relevant information. Specifically,
CoLT proposes a reinforcement learning-based training, which explicitly
encourages models to utilize the information within long contexts and punishes
models for ignoring long contexts. To support CoLT, we release CoLT-132K, a
large-scale dataset with 132k samples across four languages, each containing
long-context inputs. We apply CoLT to a popular LLM - aiXcoder-7B and release
aiXcoder-7B-v2. We conduct extensive experiments on CoLT-132K and a public
benchmark - CrossCodeEval. Our experiments yield the following results: 1. Effectiveness.
CoLT substantially improves aiXcoder-7B. aiXcoder-7B-v2 outperforms aiXcoder-7B
by up to 44% in exact match. aiXcoder-7B-v2 becomes the state-of-the-art 7B
model in code completion and even surpasses larger models. 2. Generalizability.
The capability learned by CoLT can generalize to new languages. Besides, CoLT
is model-agnostic and effectively improves multiple LLMs. 3. Enhanced Context
Utilization Capability. CoLT significantly improves the capability of LLMs in
utilizing the relevant information within long contexts.
|
2503.15337 | Hao Tan | Hao Tan, Zichang Tan, Jun Li, Ajian Liu, Jun Wan, Zhen Lei | Recover and Match: Open-Vocabulary Multi-Label Recognition through
Knowledge-Constrained Optimal Transport | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying multiple novel classes in an image, known as open-vocabulary
multi-label recognition, is a challenging task in computer vision. Recent
studies explore the transfer of powerful vision-language models such as CLIP.
However, these approaches face two critical challenges: (1) The local semantics
of CLIP are disrupted due to its global pre-training objectives, resulting in
unreliable regional predictions. (2) The matching property between image
regions and candidate labels has been neglected, relying instead on naive
feature aggregation such as average pooling, which leads to spurious
predictions from irrelevant regions. In this paper, we present RAM (Recover And
Match), a novel framework that effectively addresses the above issues. To
tackle the first problem, we propose Ladder Local Adapter (LLA) to enforce
refocusing on local regions, recovering local semantics in a memory-friendly
way. For the second issue, we propose Knowledge-Constrained Optimal Transport
(KCOT) to suppress meaningless matching to non-GT labels by formulating the
task as an optimal transport problem. As a result, RAM achieves
state-of-the-art performance on various datasets from three distinct domains,
and shows great potential to boost the existing methods. Code:
https://github.com/EricTan7/RAM.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:33:44 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tan",
"Hao",
""
],
[
"Tan",
"Zichang",
""
],
[
"Li",
"Jun",
""
],
[
"Liu",
"Ajian",
""
],
[
"Wan",
"Jun",
""
],
[
"Lei",
"Zhen",
""
]
] | TITLE: Recover and Match: Open-Vocabulary Multi-Label Recognition through
Knowledge-Constrained Optimal Transport
ABSTRACT: Identifying multiple novel classes in an image, known as open-vocabulary
multi-label recognition, is a challenging task in computer vision. Recent
studies explore the transfer of powerful vision-language models such as CLIP.
However, these approaches face two critical challenges: (1) The local semantics
of CLIP are disrupted due to its global pre-training objectives, resulting in
unreliable regional predictions. (2) The matching property between image
regions and candidate labels has been neglected, relying instead on naive
feature aggregation such as average pooling, which leads to spurious
predictions from irrelevant regions. In this paper, we present RAM (Recover And
Match), a novel framework that effectively addresses the above issues. To
tackle the first problem, we propose Ladder Local Adapter (LLA) to enforce
refocusing on local regions, recovering local semantics in a memory-friendly
way. For the second issue, we propose Knowledge-Constrained Optimal Transport
(KCOT) to suppress meaningless matching to non-GT labels by formulating the
task as an optimal transport problem. As a result, RAM achieves
state-of-the-art performance on various datasets from three distinct domains,
and shows great potential to boost the existing methods. Code:
https://github.com/EricTan7/RAM.
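To make the optimal-transport view of region-to-label matching concrete, the sketch below runs entropy-regularised Sinkhorn iterations on a cosine-distance cost between region and label features. It is a generic OT example, not the paper's KCOT formulation (which adds knowledge constraints); the uniform marginals, regularisation strength, and iteration count are assumptions.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Entropic OT: returns a transport plan with row sums a and column sums b."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: 5 image regions, 3 candidate labels.
rng = np.random.default_rng(0)
region_feats = rng.normal(size=(5, 16))
label_feats = rng.normal(size=(3, 16))

# Cosine-similarity cost (lower cost means a better match).
rn = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
ln = label_feats / np.linalg.norm(label_feats, axis=1, keepdims=True)
cost = 1.0 - rn @ ln.T

a = np.full(5, 1.0 / 5)              # uniform mass over regions
b = np.full(3, 1.0 / 3)              # uniform mass over labels
plan = sinkhorn(cost, a, b)
print(plan.round(3), plan.sum())     # the plan sums to ~1
```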
|
2503.15338 | Junyi Ao | Junyi Ao, Dekun Chen, Xiaohai Tian, Wenjie Feng, Jun Zhang, Lu Lu,
Yuxuan Wang, Haizhou Li, Zhizheng Wu | Solla: Towards a Speech-Oriented LLM That Hears Acoustic Context | null | null | null | null | eess.AS cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have recently shown remarkable ability to
process not only text but also multimodal inputs such as speech and audio.
However, most existing models primarily focus on analyzing input signals using
text instructions, overlooking scenarios in which speech instructions and audio
are mixed and serve as inputs to the model. To address these challenges, we
introduce Solla, a novel framework designed to understand speech-based
questions and hear the acoustic context concurrently. Solla incorporates an
audio tagging module to effectively identify and represent audio events, as
well as an ASR-assisted prediction method to improve comprehension of spoken
content. To rigorously evaluate Solla and other publicly available models, we
propose a new benchmark dataset called SA-Eval, which includes three tasks:
audio event classification, audio captioning, and audio question answering.
SA-Eval has diverse speech instructions with various speaking styles,
encompassing two difficulty levels, easy and hard, to capture the range of
real-world acoustic conditions. Experimental results show that Solla performs
on par with or outperforms baseline models on both the easy and hard test sets,
underscoring its effectiveness in jointly understanding speech and audio.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:34:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Ao",
"Junyi",
""
],
[
"Chen",
"Dekun",
""
],
[
"Tian",
"Xiaohai",
""
],
[
"Feng",
"Wenjie",
""
],
[
"Zhang",
"Jun",
""
],
[
"Lu",
"Lu",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Li",
"Haizhou",
""
],
[
"Wu",
"Zhizheng",
""
]
] | TITLE: Solla: Towards a Speech-Oriented LLM That Hears Acoustic Context
ABSTRACT: Large Language Models (LLMs) have recently shown remarkable ability to
process not only text but also multimodal inputs such as speech and audio.
However, most existing models primarily focus on analyzing input signals using
text instructions, overlooking scenarios in which speech instructions and audio
are mixed and serve as inputs to the model. To address these challenges, we
introduce Solla, a novel framework designed to understand speech-based
questions and hear the acoustic context concurrently. Solla incorporates an
audio tagging module to effectively identify and represent audio events, as
well as an ASR-assisted prediction method to improve comprehension of spoken
content. To rigorously evaluate Solla and other publicly available models, we
propose a new benchmark dataset called SA-Eval, which includes three tasks:
audio event classification, audio captioning, and audio question answering.
SA-Eval has diverse speech instructions with various speaking styles,
encompassing two difficulty levels, easy and hard, to capture the range of
real-world acoustic conditions. Experimental results show that Solla performs
on par with or outperforms baseline models on both the easy and hard test sets,
underscoring its effectiveness in jointly understanding speech and audio.
|
2503.15342 | Ritabrata Chakraborty | Ritabrata Chakraborty, Rajatsubhra Chakraborty, Ali Khaleghi Rahimian
and Thomas MacDougall | TruthLens: A Training-Free Paradigm for DeepFake Detection | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The proliferation of synthetic images generated by advanced AI models poses
significant challenges in identifying and understanding manipulated visual
content. Current fake image detection methods predominantly rely on binary
classification models that focus on accuracy while often neglecting
interpretability, leaving users without clear insights into why an image is
deemed real or fake. To bridge this gap, we introduce TruthLens, a novel
training-free framework that reimagines deepfake detection as a visual
question-answering (VQA) task. TruthLens utilizes state-of-the-art large
vision-language models (LVLMs) to observe and describe visual artifacts and
combines this with the reasoning capabilities of large language models (LLMs)
like GPT-4 to analyze and aggregate evidence into informed decisions. By
adopting a multimodal approach, TruthLens seamlessly integrates visual and
semantic reasoning to not only classify images as real or fake but also provide
interpretable explanations for its decisions. This transparency enhances trust
and provides valuable insights into the artifacts that signal synthetic
content. Extensive evaluations demonstrate that TruthLens outperforms
conventional methods, achieving high accuracy on challenging datasets while
maintaining a strong emphasis on explainability. By reframing deepfake
detection as a reasoning-driven process, TruthLens establishes a new paradigm
in combating synthetic media, combining cutting-edge performance with
interpretability to address the growing threats of visual disinformation.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:41:32 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Chakraborty",
"Ritabrata",
""
],
[
"Chakraborty",
"Rajatsubhra",
""
],
[
"Rahimian",
"Ali Khaleghi",
""
],
[
"MacDougall",
"Thomas",
""
]
] | TITLE: TruthLens: A Training-Free Paradigm for DeepFake Detection
ABSTRACT: The proliferation of synthetic images generated by advanced AI models poses
significant challenges in identifying and understanding manipulated visual
content. Current fake image detection methods predominantly rely on binary
classification models that focus on accuracy while often neglecting
interpretability, leaving users without clear insights into why an image is
deemed real or fake. To bridge this gap, we introduce TruthLens, a novel
training-free framework that reimagines deepfake detection as a visual
question-answering (VQA) task. TruthLens utilizes state-of-the-art large
vision-language models (LVLMs) to observe and describe visual artifacts and
combines this with the reasoning capabilities of large language models (LLMs)
like GPT-4 to analyze and aggregate evidence into informed decisions. By
adopting a multimodal approach, TruthLens seamlessly integrates visual and
semantic reasoning to not only classify images as real or fake but also provide
interpretable explanations for its decisions. This transparency enhances trust
and provides valuable insights into the artifacts that signal synthetic
content. Extensive evaluations demonstrate that TruthLens outperforms
conventional methods, achieving high accuracy on challenging datasets while
maintaining a strong emphasis on explainability. By reframing deepfake
detection as a reasoning-driven process, TruthLens establishes a new paradigm
in combating synthetic media, combining cutting-edge performance with
interpretability to address the growing threats of visual disinformation.
|
2503.15351 | I-Fan Lin | I-Fan Lin, Faegheh Hasibi, Suzan Verberne | SPILL: Domain-Adaptive Intent Clustering based on Selection and Pooling
with Large Language Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose Selection and Pooling with Large Language Models
(SPILL), an intuitive and domain-adaptive method for intent clustering without
fine-tuning. Existing embeddings-based clustering methods rely on a few labeled
examples or unsupervised fine-tuning to optimize results for each new dataset,
which makes them less generalizable to multiple datasets. Our goal is to make
these existing embedders more generalizable to new domain datasets without
further fine-tuning. Inspired by our theoretical derivation and simulation
results on the effectiveness of sampling and pooling techniques, we view the
clustering task as a small-scale selection problem. A good solution to this
problem is associated with better clustering performance. Accordingly, we
propose a two-stage approach: First, for each utterance (referred to as the
seed), we derive its embedding using an existing embedder. Then, we apply a
distance metric to select a pool of candidates close to the seed. Because the
embedder is not optimized for new datasets, in the second stage, we use an LLM
to further select utterances from these candidates that share the same intent
as the seed. Finally, we pool these selected candidates with the seed to derive
a refined embedding for the seed. We found that our method generally
outperforms directly using an embedder, and it achieves comparable results to
other state-of-the-art studies, even those that use much larger models and
require fine-tuning, showing its strength and efficiency. Our results indicate
that our method enables existing embedders to be further improved without
additional fine-tuning, making them more adaptable to new domain datasets.
Additionally, viewing the clustering task as a small-scale selection problem
opens up the possibility of using LLMs to customize clustering tasks according to
the user's goals.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:48:57 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lin",
"I-Fan",
""
],
[
"Hasibi",
"Faegheh",
""
],
[
"Verberne",
"Suzan",
""
]
] | TITLE: SPILL: Domain-Adaptive Intent Clustering based on Selection and Pooling
with Large Language Models
ABSTRACT: In this paper, we propose Selection and Pooling with Large Language Models
(SPILL), an intuitive and domain-adaptive method for intent clustering without
fine-tuning. Existing embeddings-based clustering methods rely on a few labeled
examples or unsupervised fine-tuning to optimize results for each new dataset,
which makes them less generalizable to multiple datasets. Our goal is to make
these existing embedders more generalizable to new domain datasets without
further fine-tuning. Inspired by our theoretical derivation and simulation
results on the effectiveness of sampling and pooling techniques, we view the
clustering task as a small-scale selection problem. A good solution to this
problem is associated with better clustering performance. Accordingly, we
propose a two-stage approach: First, for each utterance (referred to as the
seed), we derive its embedding using an existing embedder. Then, we apply a
distance metric to select a pool of candidates close to the seed. Because the
embedder is not optimized for new datasets, in the second stage, we use an LLM
to further select utterances from these candidates that share the same intent
as the seed. Finally, we pool these selected candidates with the seed to derive
a refined embedding for the seed. We found that our method generally
outperforms directly using an embedder, and it achieves comparable results to
other state-of-the-art studies, even those that use much larger models and
require fine-tuning, showing its strength and efficiency. Our results indicate
that our method enables existing embedders to be further improved without
additional fine-tuning, making them more adaptable to new domain datasets.
Additionally, viewing the clustering task as a small-scale selection problem
opens up the possibility of using LLMs to customize clustering tasks according to
the user's goals.
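A minimal sketch of the two-stage procedure described above, assuming a generic embedder: embed every utterance, take each seed's nearest candidates under cosine similarity, keep those accepted by a selector (a stand-in `accept` callback here, where the paper uses an LLM same-intent check), and mean-pool the kept candidates with the seed. The pool size and the default accept-everything behaviour are illustrative assumptions.

```python
import numpy as np

def refine_seed_embeddings(embeddings, pool_size=8, accept=lambda i, j: True):
    """Stage 1: pick nearest candidates per seed; Stage 2: filter and pool.

    `accept(seed_idx, cand_idx)` stands in for the LLM same-intent check;
    by default it accepts every candidate.
    """
    # Normalise so that dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    refined = np.empty_like(embeddings)
    for i in range(len(embeddings)):
        # Nearest candidates, excluding the seed itself.
        order = np.argsort(-sims[i])
        candidates = [j for j in order if j != i][:pool_size]
        kept = [j for j in candidates if accept(i, j)]
        pool = np.vstack([embeddings[i]] + [embeddings[j] for j in kept])
        refined[i] = pool.mean(axis=0)
    return refined

# Toy usage with random "utterance" embeddings.
emb = np.random.default_rng(0).normal(size=(20, 32))
refined = refine_seed_embeddings(emb, pool_size=5)
print(refined.shape)  # (20, 32); feed these to any clustering algorithm
```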
|
2503.15354 | Yining Lu | Yining Lu, Noah Ziems, Hy Dang, Meng Jiang | Optimizing Decomposition for Optimal Claim Verification | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on the \textit{Decompose-Then-Verify} paradigm for
evaluating the factuality of long-form text typically treats decomposition and
verification in isolation, overlooking their interactions and potential
misalignment. We find that existing decomposition policies, typically
hand-crafted demonstrations, do not align well with downstream verifiers in
terms of atomicity -- a novel metric quantifying information density -- leading
to suboptimal verification results. We formulate finding the optimal
decomposition policy for optimal verification as a bilevel optimization
problem. To approximate a solution for this strongly NP-hard problem, we
propose dynamic decomposition, a reinforcement learning framework that
leverages verifier feedback to learn a policy for dynamically decomposing
claims to verifier-preferred atomicity. Experimental results show that dynamic
decomposition outperforms existing decomposition policies, improving
verification confidence by 0.07 and accuracy by 0.12 (on a 0-1 scale) on
average across varying verifiers, datasets, and atomicities of input claims.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:56:21 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Lu",
"Yining",
""
],
[
"Ziems",
"Noah",
""
],
[
"Dang",
"Hy",
""
],
[
"Jiang",
"Meng",
""
]
] | TITLE: Optimizing Decomposition for Optimal Claim Verification
ABSTRACT: Current research on the \textit{Decompose-Then-Verify} paradigm for
evaluating the factuality of long-form text typically treats decomposition and
verification in isolation, overlooking their interactions and potential
misalignment. We find that existing decomposition policies, typically
hand-crafted demonstrations, do not align well with downstream verifiers in
terms of atomicity -- a novel metric quantifying information density -- leading
to suboptimal verification results. We formulate finding the optimal
decomposition policy for optimal verification as a bilevel optimization
problem. To approximate a solution for this strongly NP-hard problem, we
propose dynamic decomposition, a reinforcement learning framework that
leverages verifier feedback to learn a policy for dynamically decomposing
claims to verifier-preferred atomicity. Experimental results show that dynamic
decomposition outperforms existing decomposition policies, improving
verification confidence by 0.07 and accuracy by 0.12 (on a 0-1 scale) on
average across varying verifiers, datasets, and atomicities of input claims.
|
2503.15358 | Thomas Pickard | Thomas Pickard, Aline Villavicencio, Maggie Mi, Wei He, Dylan Phelps,
Carolina Scarton, Marco Idiart | SemEval-2025 Task 1: AdMIRe -- Advancing Multimodal Idiomaticity
Representation | Preprint; SemEval-2025 proceedings to appear at ACL 2025 | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Idiomatic expressions present a unique challenge in NLP, as their meanings
are often not directly inferable from their constituent words. Despite recent
advancements in Large Language Models (LLMs), idiomaticity remains a
significant obstacle to robust semantic representation. We present datasets and
tasks for SemEval-2025 Task 1: AdMIRe (Advancing Multimodal Idiomaticity
Representation), which challenges the community to assess and improve models'
ability to interpret idiomatic expressions in multimodal contexts and in
multiple languages. Participants competed in two subtasks: ranking images based
on their alignment with idiomatic or literal meanings, and predicting the next
image in a sequence. The most effective methods achieved human-level
performance by leveraging pretrained LLMs and vision-language models in
mixture-of-experts settings, with multiple queries used to smooth over the
weaknesses in these models' representations of idiomaticity.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:58:46 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Pickard",
"Thomas",
""
],
[
"Villavicencio",
"Aline",
""
],
[
"Mi",
"Maggie",
""
],
[
"He",
"Wei",
""
],
[
"Phelps",
"Dylan",
""
],
[
"Scarton",
"Carolina",
""
],
[
"Idiart",
"Marco",
""
]
] | TITLE: SemEval-2025 Task 1: AdMIRe -- Advancing Multimodal Idiomaticity
Representation
ABSTRACT: Idiomatic expressions present a unique challenge in NLP, as their meanings
are often not directly inferable from their constituent words. Despite recent
advancements in Large Language Models (LLMs), idiomaticity remains a
significant obstacle to robust semantic representation. We present datasets and
tasks for SemEval-2025 Task 1: AdMIRe (Advancing Multimodal Idiomaticity
Representation), which challenges the community to assess and improve models'
ability to interpret idiomatic expressions in multimodal contexts and in
multiple languages. Participants competed in two subtasks: ranking images based
on their alignment with idiomatic or literal meanings, and predicting the next
image in a sequence. The most effective methods achieved human-level
performance by leveraging pretrained LLMs and vision-language models in
mixture-of-experts settings, with multiple queries used to smooth over the
weaknesses in these models' representations of idiomaticity.
|
2503.15367 | Jacopo Talpini | Jacopo Talpini and Marco Savi and Giovanni Neglia | FedBEns: One-Shot Federated Learning based on Bayesian Ensemble | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | One-Shot Federated Learning (FL) is a recent paradigm that enables multiple
clients to cooperatively learn a global model in a single round of
communication with a central server. In this paper, we analyze the One-Shot FL
problem through the lens of Bayesian inference and propose FedBEns, an
algorithm that leverages the inherent multimodality of local loss functions to
find better global models. Our algorithm leverages a mixture of Laplace
approximations for the clients' local posteriors, which the server then
aggregates to infer the global model. We conduct extensive experiments on
various datasets, demonstrating that the proposed method outperforms competing
baselines that typically rely on unimodal approximations of the local losses.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:05:52 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Talpini",
"Jacopo",
""
],
[
"Savi",
"Marco",
""
],
[
"Neglia",
"Giovanni",
""
]
] | TITLE: FedBEns: One-Shot Federated Learning based on Bayesian Ensemble
ABSTRACT: One-Shot Federated Learning (FL) is a recent paradigm that enables multiple
clients to cooperatively learn a global model in a single round of
communication with a central server. In this paper, we analyze the One-Shot FL
problem through the lens of Bayesian inference and propose FedBEns, an
algorithm that leverages the inherent multimodality of local loss functions to
find better global models. Our algorithm leverages a mixture of Laplace
approximations for the clients' local posteriors, which the server then
aggregates to infer the global model. We conduct extensive experiments on
various datasets, demonstrating that the proposed method outperforms competing
baselines that typically rely on unimodal approximations of the local losses.
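The algorithm itself aggregates a mixture of Laplace approximations per client, which is more than a short snippet can show; the sketch below illustrates only the simplest related building block, combining a single diagonal Gaussian (Laplace) approximation from each client by precision weighting in one round of communication. It is not FedBEns, and all shapes and values are synthetic.

```python
import numpy as np

def combine_gaussian_posteriors(means, precisions):
    """Product of diagonal Gaussian approximations N(mean_k, 1/prec_k).

    Note: FedBEns aggregates *mixtures* of Laplace approximations per client;
    this single-mode combination is only the simplest special case.
    """
    precisions = np.asarray(precisions)
    means = np.asarray(means)
    prec_global = precisions.sum(axis=0)
    mean_global = (precisions * means).sum(axis=0) / prec_global
    return mean_global, prec_global

# Three clients, each sending a mean vector and a diagonal precision.
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 10))
precisions = rng.uniform(0.5, 2.0, size=(3, 10))
mu, prec = combine_gaussian_posteriors(means, precisions)
print(mu.shape, prec.shape)  # (10,) (10,)
```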
|
2503.15369 | Yinan Liang | Yinan Liang, Ziwei Wang, Xiuwei Xu, Jie Zhou, Jiwen Lu | EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language
Models | Accepted by CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While multimodal large language models demonstrate strong performance in
complex reasoning tasks, they pose significant challenges related to model
complexity during deployment, especially for resource-limited devices. In this
paper, we propose an automatic pruning method for large vision-language models
to enhance the efficiency of multimodal reasoning. Conventional methods rely on
the training data of the original model to select the proper pruning ratio for
different network components. However, these methods are impractical for large
vision-language models due to the unaffordable search costs caused by web-scale
training corpora. In contrast, our approach leverages only a small number of
samples to search for the desired pruning policy by maximizing its
generalization ability on unknown training data while maintaining the model
accuracy, which enables the achievement of an optimal trade-off between
accuracy and efficiency for large visual language models. Specifically, we
formulate the generalization gap of the pruning strategy using the structural
risk minimization principle. Based on both task performance and generalization
capability, we iteratively search for the optimal pruning policy within a given
search space and optimize the vision projector to evolve the search space with
higher upper bound of performance. We conduct extensive experiments on the
ScienceQA, Vizwiz, MM-vet, and LLaVA-Bench datasets for the task of visual
question answering. Using only 64 samples for pruning policy search,
EfficientLLaVA achieves an accuracy of 83.05% on ScienceQA, along with a
$\times$ 1.8 speedup compared to the dense LLaVA-v1.5-7B model.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:07:04 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Liang",
"Yinan",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Xu",
"Xiuwei",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] | TITLE: EfficientLLaVA: Generalizable Auto-Pruning for Large Vision-language
Models
ABSTRACT: While multimodal large language models demonstrate strong performance in
complex reasoning tasks, they pose significant challenges related to model
complexity during deployment, especially for resource-limited devices. In this
paper, we propose an automatic pruning method for large vision-language models
to enhance the efficiency of multimodal reasoning. Conventional methods rely on
the training data of the original model to select the proper pruning ratio for
different network components. However, these methods are impractical for large
vision-language models due to the unaffordable search costs caused by web-scale
training corpora. In contrast, our approach leverages only a small number of
samples to search for the desired pruning policy by maximizing its
generalization ability on unknown training data while maintaining the model
accuracy, which enables the achievement of an optimal trade-off between
accuracy and efficiency for large visual language models. Specifically, we
formulate the generalization gap of the pruning strategy using the structural
risk minimization principle. Based on both task performance and generalization
capability, we iteratively search for the optimal pruning policy within a given
search space and optimize the vision projector to evolve the search space with
higher upper bound of performance. We conduct extensive experiments on the
ScienceQA, Vizwiz, MM-vet, and LLaVA-Bench datasets for the task of visual
question answering. Using only 64 samples for pruning policy search,
EfficientLLaVA achieves an accuracy of 83.05% on ScienceQA, along with a
$\times$ 1.8 speedup compared to the dense LLaVA-v1.5-7B model.
|
2503.15374 | Anatole Callies | Anatole Callies (Inato), Quentin Bodinier (Inato), Philippe Ravaud
(Inato, Universit\'e Paris Cit\'e and Universit\'e Sorbonne Paris Nord,
INSERM, INRAE, Paris, France, Centre d'epid\'emiologie clinique, AP-HP,
H\^opital H\^otel Dieu, Paris, France) and Kourosh Davarpanah (Inato) | Real-world validation of a multimodal LLM-powered pipeline for
High-Accuracy Clinical Trial Patient Matching leveraging EHR data | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Background: Patient recruitment in clinical trials is hindered by complex
eligibility criteria and labor-intensive chart reviews. Prior research using
text-only models have struggled to address this problem in a reliable and
scalable way due to (1) limited reasoning capabilities, (2) information loss
from converting visual records to text, and (3) lack of a generic EHR
integration to extract patient data.
Methods: We introduce a broadly applicable, integration-free, LLM-powered
pipeline that automates patient-trial matching using unprocessed documents
extracted from EHRs. Our approach leverages (1) the new reasoning-LLM paradigm,
enabling the assessment of even the most complex criteria, (2) the visual
capabilities of the latest LLMs to interpret medical records without lossy
image-to-text conversions, and (3) multimodal embeddings for efficient medical
record search. The pipeline was validated on the n2c2 2018 cohort selection
dataset (288 diabetic patients) and a real-world dataset composed of 485
patients from 30 different sites matched against 36 diverse trials.
Results: On the n2c2 dataset, our method achieved a new state-of-the-art
criterion-level accuracy of 93\%. In real-world trials, the pipeline yielded an
accuracy of 87\%, undermined by the difficulty of replicating human
decision-making when medical records lack sufficient information. Nevertheless,
users were able to review overall eligibility in under 9 minutes per patient on
average, representing an 80\% improvement over traditional manual chart
reviews.
Conclusion: This pipeline demonstrates robust performance in clinical trial
patient matching without requiring custom integration with site systems or
trial-specific tailoring, thereby enabling scalable deployment across sites
seeking to leverage AI for patient matching.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:12:11 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Callies",
"Anatole",
"",
"Inato"
],
[
"Bodinier",
"Quentin",
"",
"Inato"
],
[
"Ravaud",
"Philippe",
"",
"Inato, Université Paris Cité and Université Sorbonne Paris Nord,\n INSERM, INRAE, Paris, France, Centre d'epidémiologie clinique, AP-HP,\n Hôpital Hôtel Dieu, Paris, France"
],
[
"Davarpanah",
"Kourosh",
"",
"Inato"
]
] | TITLE: Real-world validation of a multimodal LLM-powered pipeline for
High-Accuracy Clinical Trial Patient Matching leveraging EHR data
ABSTRACT: Background: Patient recruitment in clinical trials is hindered by complex
eligibility criteria and labor-intensive chart reviews. Prior research using
text-only models have struggled to address this problem in a reliable and
scalable way due to (1) limited reasoning capabilities, (2) information loss
from converting visual records to text, and (3) lack of a generic EHR
integration to extract patient data.
Methods: We introduce a broadly applicable, integration-free, LLM-powered
pipeline that automates patient-trial matching using unprocessed documents
extracted from EHRs. Our approach leverages (1) the new reasoning-LLM paradigm,
enabling the assessment of even the most complex criteria, (2) the visual
capabilities of the latest LLMs to interpret medical records without lossy
image-to-text conversions, and (3) multimodal embeddings for efficient medical
record search. The pipeline was validated on the n2c2 2018 cohort selection
dataset (288 diabetic patients) and a real-world dataset composed of 485
patients from 30 different sites matched against 36 diverse trials.
Results: On the n2c2 dataset, our method achieved a new state-of-the-art
criterion-level accuracy of 93\%. In real-world trials, the pipeline yielded an
accuracy of 87\%, undermined by the difficulty of replicating human
decision-making when medical records lack sufficient information. Nevertheless,
users were able to review overall eligibility in under 9 minutes per patient on
average, representing an 80\% improvement over traditional manual chart
reviews.
Conclusion: This pipeline demonstrates robust performance in clinical trial
patient matching without requiring custom integration with site systems or
trial-specific tailoring, thereby enabling scalable deployment across sites
seeking to leverage AI for patient matching.
|
2503.15390 | Yumin Zhang | Yumin Zhang, Yan Gao, Haoran Duan, Hanqing Guo, Tejal Shah, Rajiv
Ranjan, and Bo Wei | FedSCA: Federated Tuning with Similarity-guided Collaborative
Aggregation for Heterogeneous Medical Image Segmentation | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer-based foundation models (FMs) have recently demonstrated
remarkable performance in medical image segmentation. However, scaling these
models is challenging due to the limited size of medical image datasets within
isolated hospitals, where data centralization is restricted due to privacy
concerns. These constraints, combined with the data-intensive nature of FMs,
hinder their broader application. Integrating federated learning (FL) with
foundation models (FLFM) fine-tuning offers a potential solution to these
challenges by enabling collaborative model training without data sharing, thus
allowing FMs to take advantage of a diverse pool of sensitive medical image
data across hospitals/clients. However, non-independent and identically
distributed (non-IID) data among clients, paired with computational and
communication constraints in federated environments, presents an additional
challenge that limits further performance improvements and remains inadequately
addressed in existing studies. In this work, we propose a novel FLFM
fine-tuning framework, \underline{\textbf{Fed}}erated tuning with
\underline{\textbf{S}}imilarity-guided \underline{\textbf{C}}ollaborative
\underline{\textbf{A}}ggregation (FedSCA), encompassing all phases of the FL
process. This includes (1) specially designed parameter-efficient fine-tuning
(PEFT) for local client training to enhance computational efficiency; (2)
partial low-level adapter transmission for communication efficiency; and (3)
similarity-guided collaborative aggregation (SGCA) on the server side to
address non-IID issues. Extensive experiments on three FL benchmarks for
medical image segmentation demonstrate the effectiveness of our proposed
FedSCA, establishing new SOTA performance.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:27:29 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Zhang",
"Yumin",
""
],
[
"Gao",
"Yan",
""
],
[
"Duan",
"Haoran",
""
],
[
"Guo",
"Hanqing",
""
],
[
"Shah",
"Tejal",
""
],
[
"Ranjan",
"Rajiv",
""
],
[
"Wei",
"Bo",
""
]
] | TITLE: FedSCA: Federated Tuning with Similarity-guided Collaborative
Aggregation for Heterogeneous Medical Image Segmentation
ABSTRACT: Transformer-based foundation models (FMs) have recently demonstrated
remarkable performance in medical image segmentation. However, scaling these
models is challenging due to the limited size of medical image datasets within
isolated hospitals, where data centralization is restricted due to privacy
concerns. These constraints, combined with the data-intensive nature of FMs,
hinder their broader application. Integrating federated learning (FL) with
foundation model (FLFM) fine-tuning offers a potential solution to these
challenges by enabling collaborative model training without data sharing, thus
allowing FMs to take advantage of a diverse pool of sensitive medical image
data across hospitals/clients. However, non-independent and identically
distributed (non-IID) data among clients, paired with computational and
communication constraints in federated environments, presents an additional
challenge that limits further performance improvements and remains inadequately
addressed in existing studies. In this work, we propose a novel FLFM
fine-tuning framework, \underline{\textbf{Fed}}erated tuning with
\underline{\textbf{S}}imilarity-guided \underline{\textbf{C}}ollaborative
\underline{\textbf{A}}ggregation (FedSCA), encompassing all phases of the FL
process. This includes (1) specially designed parameter-efficient fine-tuning
(PEFT) for local client training to enhance computational efficiency; (2)
partial low-level adapter transmission for communication efficiency; and (3)
similarity-guided collaborative aggregation (SGCA) on the server side to
address non-IID issues. Extensive experiments on three FL benchmarks for
medical image segmentation demonstrate the effectiveness of our proposed
FedSCA, establishing new SOTA performance.
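As a rough illustration of similarity-guided aggregation, the sketch below weights each client's contribution to every other client's aggregate by the pairwise cosine similarity of their (flattened) adapter updates, so that clients with similar data borrow more from each other. The abstract does not specify the exact SGCA rule, so the softmax weighting, the temperature, and the use of flattened adapter vectors are assumptions.

```python
import numpy as np

def similarity_guided_aggregation(client_params, temperature=0.5):
    """Return one personalized parameter vector per client.

    client_params: (n_clients, n_params) flattened adapter weights.
    Clients with similar updates are weighted more heavily for each other,
    which is one simple way to soften the effect of non-IID data.
    """
    P = np.asarray(client_params, dtype=np.float64)
    normed = P / np.linalg.norm(P, axis=1, keepdims=True)
    sim = normed @ normed.T                           # pairwise cosine similarity
    weights = np.exp(sim / temperature)
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over clients
    return weights @ P                                # (n_clients, n_params)

# Toy usage: 4 clients, 1k adapter parameters each.
params = np.random.default_rng(0).normal(size=(4, 1000))
personalized = similarity_guided_aggregation(params)
print(personalized.shape)  # (4, 1000)
```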
|
2503.15402 | Alejandro Pequeno Zurro | Alejandro Peque\~no-Zurro, Lyes Khacef, Stefano Panzeri, and
Elisabetta Chicca | Towards efficient keyword spotting using spike-based time difference
encoders | 26 pages, 9 figures | null | null | null | cs.NE cs.AI cs.CV cs.ET | http://creativecommons.org/licenses/by/4.0/ | Keyword spotting in edge devices is becoming increasingly important as
voice-activated assistants are widely used. However, its deployment is often
limited by the extreme low-power constraints of the target embedded systems.
Here, we explore the Temporal Difference Encoder (TDE) performance in keyword
spotting. This recent neuron model encodes the time difference in instantaneous
frequency and spike count to perform efficient keyword spotting with
neuromorphic processors. We use the TIdigits dataset of spoken digits with a
formant decomposition and rate-based encoding into spikes. We compare three
Spiking Neural Network (SNN) architectures to learn and classify
spatio-temporal signals. The proposed SNN architectures are made of three
layers with variation in their hidden layer, composed of either (1) feedforward
TDE, (2) feedforward Current-Based Leaky Integrate-and-Fire (CuBa-LIF), or (3)
recurrent CuBa-LIF neurons. We first show that the spike trains of the
frequency-converted spoken digits have a large amount of information in the
temporal domain, reinforcing the importance of better exploiting temporal
encoding for such a task. We then train the three SNNs with the same number of
synaptic weights to quantify and compare their performance based on the
accuracy and synaptic operations. The resulting accuracy of the feedforward TDE
network (89%) is higher than the feedforward CuBa-LIF network (71%) and close
to the recurrent CuBa-LIF network (91%). However, the feedforward TDE-based
network performs 92% fewer synaptic operations than the recurrent CuBa-LIF
network with the same number of synapses. In addition, the results of the TDE
network are highly interpretable and correlated with the frequency and
timescale features of the spoken keywords in the dataset. Our findings suggest
that the TDE is a promising neuron model for scalable event-driven processing
of spatio-temporal patterns.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:43:35 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Pequeño-Zurro",
"Alejandro",
""
],
[
"Khacef",
"Lyes",
""
],
[
"Panzeri",
"Stefano",
""
],
[
"Chicca",
"Elisabetta",
""
]
] | TITLE: Towards efficient keyword spotting using spike-based time difference
encoders
ABSTRACT: Keyword spotting in edge devices is becoming increasingly important as
voice-activated assistants are widely used. However, its deployment is often
limited by the extreme low-power constraints of the target embedded systems.
Here, we explore the Temporal Difference Encoder (TDE) performance in keyword
spotting. This recent neuron model encodes the time difference in instantaneous
frequency and spike count to perform efficient keyword spotting with
neuromorphic processors. We use the TIdigits dataset of spoken digits with a
formant decomposition and rate-based encoding into spikes. We compare three
Spiking Neural Network (SNN) architectures to learn and classify
spatio-temporal signals. The proposed SNN architectures are made of three
layers with variation in their hidden layer, composed of either (1) feedforward
TDE, (2) feedforward Current-Based Leaky Integrate-and-Fire (CuBa-LIF), or (3)
recurrent CuBa-LIF neurons. We first show that the spike trains of the
frequency-converted spoken digits have a large amount of information in the
temporal domain, reinforcing the importance of better exploiting temporal
encoding for such a task. We then train the three SNNs with the same number of
synaptic weights to quantify and compare their performance based on the
accuracy and synaptic operations. The resulting accuracy of the feedforward TDE
network (89%) is higher than the feedforward CuBa-LIF network (71%) and close
to the recurrent CuBa-LIF network (91%). However, the feedforward TDE-based
network performs 92% fewer synaptic operations than the recurrent CuBa-LIF
network with the same number of synapses. In addition, the results of the TDE
network are highly interpretable and correlated with the frequency and
timescale features of the spoken keywords in the dataset. Our findings suggest
that the TDE is a promising neuron model for scalable event-driven processing
of spatio-temporal patterns.
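The pipeline above converts formant features into spike trains via rate-based encoding before they reach the SNN. The sketch below shows one standard way to do that step (Poisson rate coding); it does not implement the TDE neuron itself, and the duration, time step, and maximum firing rate are assumptions.

```python
import numpy as np

def poisson_rate_encode(features, duration_ms=100, dt_ms=1.0, max_rate_hz=200.0, seed=0):
    """Encode features in [0, 1] as Poisson spike trains.

    features: (n_channels,) normalised feature values (e.g. formant energies).
    Returns a boolean array of shape (n_channels, n_steps).
    """
    rng = np.random.default_rng(seed)
    n_steps = int(duration_ms / dt_ms)
    rates = np.clip(features, 0.0, 1.0) * max_rate_hz      # firing rate per channel (Hz)
    p_spike = rates[:, None] * (dt_ms / 1000.0)             # spike probability per time step
    return rng.random((len(features), n_steps)) < p_spike

# Example: 8 formant channels with increasing energy.
feats = np.linspace(0.1, 0.9, 8)
spikes = poisson_rate_encode(feats)
print(spikes.shape, spikes.sum(axis=1))  # higher-energy channels spike more often
```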
|
2503.15412 | Fereshteh Forghani | Fereshteh Forghani, Jason J. Yu, Tristan Aumentado-Armstrong,
Konstantinos G. Derpanis, Marcus A. Brubaker | Learn Your Scales: Towards Scale-Consistent Generative Novel View
Synthesis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional depth-free multi-view datasets are captured using a moving
monocular camera without metric calibration. The scales of camera positions in
this monocular setting are ambiguous. Previous methods have acknowledged scale
ambiguity in multi-view data via various ad-hoc normalization pre-processing
steps, but have not directly analyzed the effect of incorrect scene scales on
their application. In this paper, we seek to understand and address the effect
of scale ambiguity when used to train generative novel view synthesis methods
(GNVS). In GNVS, new views of a scene or object can be minimally synthesized
given a single image and are, thus, unconstrained, necessitating the use of
generative methods. The generative nature of these models captures all aspects
of uncertainty, including any uncertainty of scene scales, which act as
nuisance variables for the task. We study the effect of scene scale ambiguity
in GNVS when sampled from a single image by isolating its effect on the
resulting models and, based on these intuitions, define new metrics that
measure the scale inconsistency of generated views. We then propose a framework
to estimate scene scales jointly with the GNVS model in an end-to-end fashion.
Empirically, we show that our method reduces the scale inconsistency of
generated views without the complexity or downsides of previous scale
normalization methods. Further, we show that removing this ambiguity improves
generated image quality of the resulting GNVS model.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 16:56:03 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Forghani",
"Fereshteh",
""
],
[
"Yu",
"Jason J.",
""
],
[
"Aumentado-Armstrong",
"Tristan",
""
],
[
"Derpanis",
"Konstantinos G.",
""
],
[
"Brubaker",
"Marcus A.",
""
]
] | TITLE: Learn Your Scales: Towards Scale-Consistent Generative Novel View
Synthesis
ABSTRACT: Conventional depth-free multi-view datasets are captured using a moving
monocular camera without metric calibration. The scales of camera positions in
this monocular setting are ambiguous. Previous methods have acknowledged scale
ambiguity in multi-view data via various ad-hoc normalization pre-processing
steps, but have not directly analyzed the effect of incorrect scene scales on
their application. In this paper, we seek to understand and address the effect
of scale ambiguity when used to train generative novel view synthesis methods
(GNVS). In GNVS, new views of a scene or object can be minimally synthesized
given a single image and are, thus, unconstrained, necessitating the use of
generative methods. The generative nature of these models captures all aspects
of uncertainty, including any uncertainty of scene scales, which act as
nuisance variables for the task. We study the effect of scene scale ambiguity
in GNVS when sampled from a single image by isolating its effect on the
resulting models and, based on these intuitions, define new metrics that
measure the scale inconsistency of generated views. We then propose a framework
to estimate scene scales jointly with the GNVS model in an end-to-end fashion.
Empirically, we show that our method reduces the scale inconsistency of
generated views without the complexity or downsides of previous scale
normalization methods. Further, we show that removing this ambiguity improves
generated image quality of the resulting GNVS model.
|
2503.15432 | Johnathan Dimitrios Georgaras | Johnathan D. Georgaras, Akash Ramdas, Chung Hsuan Shan, Elena Halsted,
Berwyn, Tianshu Li, Felipe H. da Jornada | Accurate, transferable, and verifiable machine-learned interatomic
potentials for layered materials | 10 pages, 5 figures | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Twisted layered van-der-Waals materials often exhibit unique electronic and
optical properties absent in their non-twisted counterparts. Unfortunately,
predicting such properties is hindered by the difficulty in determining the
atomic structure in materials displaying large moir\'e domains. Here, we
introduce a split machine-learned interatomic potential and dataset curation
approach that separates intralayer and interlayer interactions and
significantly improves model accuracy -- with a tenfold increase in energy and
force prediction accuracy relative to conventional models. We further
demonstrate that traditional MLIP validation metrics -- force and energy errors
-- are inadequate for moir\'e structures and develop a more holistic,
physically-motivated metric based on the distribution of stacking
configurations. This metric effectively compares the entirety of large-scale
moir\'e domains between two structures instead of relying on conventional
measures evaluated on smaller commensurate cells. Finally, we establish that
one-dimensional instead of two-dimensional moir\'e structures can serve as
efficient surrogate systems for validating MLIPs, allowing for a practical
model validation protocol against explicit DFT calculations. Applying our
framework to HfS2/GaS bilayers reveals that accurate structural predictions
directly translate into reliable electronic properties. Our model-agnostic
approach integrates seamlessly with various intralayer and interlayer
interaction models, enabling computationally tractable relaxation of moir\'e
materials, from bilayer to complex multilayers, with rigorously validated
accuracy.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:14:02 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Georgaras",
"Johnathan D.",
""
],
[
"Ramdas",
"Akash",
""
],
[
"Shan",
"Chung Hsuan",
""
],
[
"Halsted",
"Elena",
""
],
[
"Berwyn",
"",
""
],
[
"Li",
"Tianshu",
""
],
[
"da Jornada",
"Felipe H.",
""
]
] | TITLE: Accurate, transferable, and verifiable machine-learned interatomic
potentials for layered materials
ABSTRACT: Twisted layered van-der-Waals materials often exhibit unique electronic and
optical properties absent in their non-twisted counterparts. Unfortunately,
predicting such properties is hindered by the difficulty in determining the
atomic structure in materials displaying large moir\'e domains. Here, we
introduce a split machine-learned interatomic potential and dataset curation
approach that separates intralayer and interlayer interactions and
significantly improves model accuracy -- with a tenfold increase in energy and
force prediction accuracy relative to conventional models. We further
demonstrate that traditional MLIP validation metrics -- force and energy errors
-- are inadequate for moir\'e structures and develop a more holistic,
physically-motivated metric based on the distribution of stacking
configurations. This metric effectively compares the entirety of large-scale
moir\'e domains between two structures instead of relying on conventional
measures evaluated on smaller commensurate cells. Finally, we establish that
one-dimensional instead of two-dimensional moir\'e structures can serve as
efficient surrogate systems for validating MLIPs, allowing for a practical
model validation protocol against explicit DFT calculations. Applying our
framework to HfS2/GaS bilayers reveals that accurate structural predictions
directly translate into reliable electronic properties. Our model-agnostic
approach integrates seamlessly with various intralayer and interlayer
interaction models, enabling computationally tractable relaxation of moir\'e
materials, from bilayer to complex multilayers, with rigorously validated
accuracy.
|
2503.15435 | Baolu Li | Baolu Li and Zongzhe Xu and Jinlong Li and Xinyu Liu and Jianwu Fang
and Xiaopeng Li and Hongkai Yu | V2X-DG: Domain Generalization for Vehicle-to-Everything Cooperative
Perception | accepted by ICRA 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | LiDAR-based Vehicle-to-Everything (V2X) cooperative perception has
demonstrated its impact on the safety and effectiveness of autonomous driving.
Since current cooperative perception algorithms are trained and tested on the
same dataset, the generalization ability of cooperative perception systems
remains underexplored. This paper is the first work to study the Domain
Generalization problem of LiDAR-based V2X cooperative perception (V2X-DG) for
3D detection based on four widely-used open source datasets: OPV2V, V2XSet,
V2V4Real and DAIR-V2X. Our research seeks to sustain high performance not only
within the source domain but also across other unseen domains, achieved solely
through training on the source domain. To this end, we propose Cooperative Mixup
Augmentation based Generalization (CMAG) to improve the model generalization
capability by simulating the unseen cooperation, which is designed compactly
for the domain gaps in cooperative perception. Furthermore, we propose a
constraint for the regularization of the robust generalized feature
representation learning: Cooperation Feature Consistency (CFC), which aligns
the intermediately fused features of the generalized cooperation by CMAG and
the early fused features of the original cooperation in source domain.
Extensive experiments demonstrate that our approach achieves significant
performance gains when generalizing to other unseen datasets while it also
maintains strong performance on the source dataset.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:17:44 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Li",
"Baolu",
""
],
[
"Xu",
"Zongzhe",
""
],
[
"Li",
"Jinlong",
""
],
[
"Liu",
"Xinyu",
""
],
[
"Fang",
"Jianwu",
""
],
[
"Li",
"Xiaopeng",
""
],
[
"Yu",
"Hongkai",
""
]
] | TITLE: V2X-DG: Domain Generalization for Vehicle-to-Everything Cooperative
Perception
ABSTRACT: LiDAR-based Vehicle-to-Everything (V2X) cooperative perception has
demonstrated its impact on the safety and effectiveness of autonomous driving.
Since current cooperative perception algorithms are trained and tested on the
same dataset, the generalization ability of cooperative perception systems
remains underexplored. This paper is the first work to study the Domain
Generalization problem of LiDAR-based V2X cooperative perception (V2X-DG) for
3D detection based on four widely-used open source datasets: OPV2V, V2XSet,
V2V4Real and DAIR-V2X. Our research seeks to sustain high performance not only
within the source domain but also across other unseen domains, achieved solely
through training on the source domain. To this end, we propose Cooperative Mixup
Augmentation based Generalization (CMAG) to improve the model generalization
capability by simulating the unseen cooperation, which is designed compactly
for the domain gaps in cooperative perception. Furthermore, we propose a
constraint for the regularization of the robust generalized feature
representation learning: Cooperation Feature Consistency (CFC), which aligns
the intermediately fused features of the generalized cooperation by CMAG and
the early fused features of the original cooperation in source domain.
Extensive experiments demonstrate that our approach achieves significant
performance gains when generalizing to other unseen datasets while it also
maintains strong performance on the source dataset.
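The consistency constraint described above aligns two feature representations of the same cooperative scene. The abstract does not give the exact form of CFC, so the sketch below uses a generic cosine-based feature-consistency loss between two batches of features purely as an illustration of such an alignment term.

```python
import numpy as np

def feature_consistency_loss(feat_a, feat_b, eps=1e-8):
    """1 - mean cosine similarity between two batches of feature vectors.

    feat_a, feat_b: (batch, dim) features of the same scenes, e.g. the
    intermediately fused features of an augmented cooperation and the early
    fused features of the original cooperation (an illustrative pairing only).
    """
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + eps)
    return 1.0 - np.mean(np.sum(a * b, axis=1))

rng = np.random.default_rng(0)
f1 = rng.normal(size=(16, 256))
f2 = f1 + 0.05 * rng.normal(size=(16, 256))   # nearly aligned features
print(feature_consistency_loss(f1, f2))       # close to 0
```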
|
2503.15438 | Yang Tan | Yang Tan, Chen Liu, Jingyuan Gao, Banghao Wu, Mingchen Li, Ruilin
Wang, Lingrong Zhang, Huiqun Yu, Guisheng Fan, Liang Hong, Bingxin Zhou | VenusFactory: A Unified Platform for Protein Engineering Data Retrieval
and Language Model Fine-Tuning | 12 pages, 1 figure, 8 tables | null | null | null | cs.CL cs.AI q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Natural language processing (NLP) has significantly influenced scientific
domains beyond human language, including protein engineering, where pre-trained
protein language models (PLMs) have demonstrated remarkable success. However,
interdisciplinary adoption remains limited due to challenges in data
collection, task benchmarking, and application. This work presents
VenusFactory, a versatile engine that integrates biological data retrieval,
standardized task benchmarking, and modular fine-tuning of PLMs. VenusFactory
supports both computer science and biology communities with choices of both a
command-line execution and a Gradio-based no-code interface, integrating $40+$
protein-related datasets and $40+$ popular PLMs. All implementations are
open-sourced on https://github.com/tyang816/VenusFactory.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:19:07 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Tan",
"Yang",
""
],
[
"Liu",
"Chen",
""
],
[
"Gao",
"Jingyuan",
""
],
[
"Wu",
"Banghao",
""
],
[
"Li",
"Mingchen",
""
],
[
"Wang",
"Ruilin",
""
],
[
"Zhang",
"Lingrong",
""
],
[
"Yu",
"Huiqun",
""
],
[
"Fan",
"Guisheng",
""
],
[
"Hong",
"Liang",
""
],
[
"Zhou",
"Bingxin",
""
]
] | TITLE: VenusFactory: A Unified Platform for Protein Engineering Data Retrieval
and Language Model Fine-Tuning
ABSTRACT: Natural language processing (NLP) has significantly influenced scientific
domains beyond human language, including protein engineering, where pre-trained
protein language models (PLMs) have demonstrated remarkable success. However,
interdisciplinary adoption remains limited due to challenges in data
collection, task benchmarking, and application. This work presents
VenusFactory, a versatile engine that integrates biological data retrieval,
standardized task benchmarking, and modular fine-tuning of PLMs. VenusFactory
supports both computer science and biology communities with choices of both a
command-line execution and a Gradio-based no-code interface, integrating $40+$
protein-related datasets and $40+$ popular PLMs. All implementations are
open-sourced on https://github.com/tyang816/VenusFactory.
|
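For readers unfamiliar with PLM fine-tuning, the sketch below shows a generic sequence-classification fine-tuning step on a small public ESM-2 checkpoint using the Hugging Face `transformers` API. This is not VenusFactory's own command-line or Gradio interface; the checkpoint name, toy sequences, and labels are illustrative assumptions only.

```python
# Generic sketch of fine-tuning a pre-trained protein language model (PLM)
# for a binary sequence-level prediction task (NOT VenusFactory's API).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "facebook/esm2_t6_8M_UR50D"  # small public ESM-2 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",   # made-up toy sequences
             "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"]
labels = torch.tensor([0, 1])

inputs = tokenizer(sequences, return_tensors="pt", padding=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
outputs = model(**inputs, labels=labels)   # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```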
2503.15456 | Keertan Balaji | Aayam Bansal, Keertan Balaji, Zeus Lalani | Temporal Encoding Strategies for Energy Time Series Prediction | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In contemporary power systems, energy consumption prediction plays a crucial
role in maintaining grid stability and resource allocation, enabling power
companies to minimize energy waste and avoid overloading the grid. While there
are several research works on energy optimization, they often fail to address
the complexities of real-time fluctuations and the cyclic pattern of energy
consumption. This work proposes a novel approach to enhance the accuracy of
predictive models by employing sinusoidal encoding on periodic features of
time-series data. To demonstrate the increase in performance, several
statistical and ensemble machine learning models were trained on an energy
demand dataset, using the proposed sinusoidal encoding. The performance of
these models was then benchmarked against identical models trained on
traditional encoding methods. The results demonstrated a 12.6% improvement of
Root Mean Squared Error (from 0.5497 to 0.4802) and a 7.8% increase in the R^2
score (from 0.7530 to 0.8118), indicating that the proposed encoding better
captures the cyclic nature of temporal patterns than traditional methods. The
proposed methodology significantly improves prediction accuracy while
maintaining computational efficiency, making it suitable for real-time
applications in smart grid systems.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:36:53 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Bansal",
"Aayam",
""
],
[
"Balaji",
"Keertan",
""
],
[
"Lalani",
"Zeus",
""
]
] | TITLE: Temporal Encoding Strategies for Energy Time Series Prediction
ABSTRACT: In contemporary power systems, energy consumption prediction plays a crucial
role in maintaining grid stability and resource allocation, enabling power
companies to minimize energy waste and avoid overloading the grid. While there
are several research works on energy optimization, they often fail to address
the complexities of real-time fluctuations and the cyclic pattern of energy
consumption. This work proposes a novel approach to enhance the accuracy of
predictive models by employing sinusoidal encoding on periodic features of
time-series data. To demonstrate the increase in performance, several
statistical and ensemble machine learning models were trained on an energy
demand dataset, using the proposed sinusoidal encoding. The performance of
these models was then benchmarked against identical models trained on
traditional encoding methods. The results demonstrated a 12.6% improvement of
Root Mean Squared Error (from 0.5497 to 0.4802) and a 7.8% increase in the R^2
score (from 0.7530 to 0.8118), indicating that the proposed encoding better
captures the cyclic nature of temporal patterns than traditional methods. The
proposed methodology significantly improves prediction accuracy while
maintaining computational efficiency, making it suitable for real-time
applications in smart grid systems.
|
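A minimal sketch of the sinusoidal encoding of periodic time features described in the record above: each cyclic feature is mapped onto the unit circle so that adjacent time points across a period boundary (e.g., hour 23 and hour 0) stay close. The specific features and periods below (hour-of-day, day-of-week) are illustrative assumptions.

```python
# Sinusoidal (cyclic) encoding of periodic time features.
import numpy as np
import pandas as pd

def encode_cyclic(values: pd.Series, period: float) -> pd.DataFrame:
    """Map a periodic feature onto the unit circle via sin/cos components."""
    radians = 2.0 * np.pi * values / period
    return pd.DataFrame({f"{values.name}_sin": np.sin(radians),
                         f"{values.name}_cos": np.cos(radians)})

hours = np.arange(48)
df = pd.DataFrame({"hour": hours % 24, "dayofweek": (hours // 24) % 7})
encoded = pd.concat([encode_cyclic(df["hour"], 24),
                     encode_cyclic(df["dayofweek"], 7)], axis=1)
print(encoded.head())
```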
2503.15466 | Brice Coffer | Brice Coffer, Matthew Parker, Michael Coniglio, Cameron Homeyer | Supercell environments using GridRad-Severe and the HRRR: Addressing
discrepancies between prior tornado datasets | null | null | null | null | physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | Storm-relative helicity (SRH) is an important ingredient in supercell
development, as well as mesocyclone intensity, and is linked to tornadogenesis
and tornado potential. Derived from the storm-relative wind profile, SRH is
composed of both the vertical wind shear and storm-relative flow. Recent
studies have come to conflicting findings regarding whether shallower or deeper
layers of SRH have more skill in tornado forecasting. Possible causes of this
discrepancy include the use of observed versus model-based proximity soundings,
as well as whether the storm-relative wind profile is determined via observed
versus estimated storm motions. This study uses a new dataset of objectively
identified supercells, with observed storm motions, paired with high-resolution
model analyses to address the discrepancies among prior studies. Unlike in
previous model-based tornado environmental datasets, the present approach
reveals substantive differences in storm-relative flow, vertical wind shear,
and SRH within the low-to-mid-levels between nontornadic and tornadic
supercells. Using observed storm motions for storm-relative variables further
magnifies differences in the low-to-mid-level storm-relative winds between
nontornadic and tornadic supercells, ultimately leading to deeper layers of SRH
having more forecast skill than near-ground SRH. Thus, the combination of
higher-resolution model analyses, which better represent the near-storm
environment, with observed storm motions appears to explain why many past
tornado climatologies using model-based environmental analyses have failed to
find significant differences in the storm-relative wind profile. These results
help bridge the gap between previous studies that employed coarser model-based
analyses with those that aggregated observed soundings from field projects.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:44:36 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Coffer",
"Brice",
""
],
[
"Parker",
"Matthew",
""
],
[
"Coniglio",
"Michael",
""
],
[
"Homeyer",
"Cameron",
""
]
] | TITLE: Supercell environments using GridRad-Severe and the HRRR: Addressing
discrepancies between prior tornado datasets
ABSTRACT: Storm-relative helicity (SRH) is an important ingredient in supercell
development, as well as mesocyclone intensity, and is linked to tornadogenesis
and tornado potential. Derived from the storm-relative wind profile, SRH is
composed of both the vertical wind shear and storm-relative flow. Recent
studies have come to conflicting findings regarding whether shallower or deeper
layers of SRH have more skill in tornado forecasting. Possible causes of this
discrepancy include the use of observed versus model-based proximity soundings,
as well as whether the storm-relative wind profile is determined via observed
versus estimated storm motions. This study uses a new dataset of objectively
identified supercells, with observed storm motions, paired with high-resolution
model analyses to address the discrepancies among prior studies. Unlike in
previous model-based tornado environmental datasets, the present approach
reveals substantive differences in storm-relative flow, vertical wind shear,
and SRH within the low-to-mid-levels between nontornadic and tornadic
supercells. Using observed storm motions for storm-relative variables further
magnifies differences in the low-to-mid-level storm-relative winds between
nontornadic and tornadic supercells, ultimately leading to deeper layers of SRH
having more forecast skill than near-ground SRH. Thus, the combination of
higher-resolution model analyses, which better represent the near-storm
environment, with observed storm motions appears to explain why many past
tornado climatologies using model-based environmental analyses have failed to
find significant differences in the storm-relative wind profile. These results
help bridge the gap between previous studies that employed coarser model-based
analyses with those that aggregated observed soundings from field projects.
|
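Since storm-relative helicity (SRH) is central to the record above, here is a sketch of the standard discrete hodograph-segment formula for SRH over a layer. The wind profile and storm motion below are made-up illustrative numbers, and the layer depth/interpolation used in the paper is not reproduced.

```python
# Storm-relative helicity from a discrete wind profile and a storm motion.
import numpy as np

def srh(u, v, storm_u, storm_v):
    """SRH ~ sum over hodograph segments of
    (u[k+1]-cx)*(v[k]-cy) - (u[k]-cx)*(v[k+1]-cy), in m^2/s^2."""
    ur = np.asarray(u, dtype=float) - storm_u   # storm-relative u at each level
    vr = np.asarray(v, dtype=float) - storm_v   # storm-relative v at each level
    return float(np.sum(ur[1:] * vr[:-1] - ur[:-1] * vr[1:]))

# Toy 0-1 km profile (m/s) at a few levels, plus an observed storm motion.
u = [5.0, 8.0, 12.0, 15.0]
v = [0.0, 4.0, 6.0, 7.0]
print("SRH (m^2/s^2):", round(srh(u, v, storm_u=9.0, storm_v=2.0), 1))
```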
2503.15482 | Richard Barney | Richard Barney, Djamil Lakhdar-Hamina, Victor Galitski | Natural Quantization of Neural Networks | 7 pages, 8 figures, 1 table | null | null | null | quant-ph cond-mat.dis-nn cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a natural quantization of a standard neural network, where the
neurons correspond to qubits and the activation functions are implemented via
quantum gates and measurements. The simplest quantized neural network
corresponds to applying single-qubit rotations, with the rotation angles being
dependent on the weights and measurement outcomes of the previous layer. This
realization has the advantage of being smoothly tunable from the purely
classical limit with no quantum uncertainty (thereby reproducing the classical
neural network exactly) to a quantum case, where superpositions introduce an
intrinsic uncertainty in the network. We benchmark this architecture on a
subset of the standard MNIST dataset and find a regime of "quantum advantage,"
where the validation error rate in the quantum realization is smaller than that
in the classical model. We also consider another approach where quantumness is
introduced via weak measurements of ancilla qubits entangled with the neuron
qubits. This quantum neural network also allows for smooth tuning of the degree
of quantumness by controlling an entanglement angle, $g$, with $g=\frac\pi 2$
replicating the classical regime. We find that validation error is also
minimized within the quantum regime in this approach. We also observe a quantum
transition, with sharp loss of the quantum network's ability to learn at a
critical point $g_c$. The proposed quantum neural networks are readily
realizable in present-day quantum computers on commercial datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:57:11 GMT"
}
] | 2025-03-20T00:00:00 | [
[
"Barney",
"Richard",
""
],
[
"Lakhdar-Hamina",
"Djamil",
""
],
[
"Galitski",
"Victor",
""
]
] | TITLE: Natural Quantization of Neural Networks
ABSTRACT: We propose a natural quantization of a standard neural network, where the
neurons correspond to qubits and the activation functions are implemented via
quantum gates and measurements. The simplest quantized neural network
corresponds to applying single-qubit rotations, with the rotation angles being
dependent on the weights and measurement outcomes of the previous layer. This
realization has the advantage of being smoothly tunable from the purely
classical limit with no quantum uncertainty (thereby reproducing the classical
neural network exactly) to a quantum case, where superpositions introduce an
intrinsic uncertainty in the network. We benchmark this architecture on a
subset of the standard MNIST dataset and find a regime of "quantum advantage,"
where the validation error rate in the quantum realization is smaller than that
in the classical model. We also consider another approach where quantumness is
introduced via weak measurements of ancilla qubits entangled with the neuron
qubits. This quantum neural network also allows for smooth tuning of the degree
of quantumness by controlling an entanglement angle, $g$, with $g=\frac\pi 2$
replicating the classical regime. We find that validation error is also
minimized within the quantum regime in this approach. We also observe a quantum
transition, with sharp loss of the quantum network's ability to learn at a
critical point $g_c$. The proposed quantum neural networks are readily
realizable in present-day quantum computers on commercial datasets.
|
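A toy simulation of the qualitative idea in the record above: each "neuron" is a qubit rotated by an angle that depends on the weighted measurement outcomes of the previous layer and is then measured, with P(|1>) after RY(theta)|0> equal to sin^2(theta/2). The angle parameterization, weights, and layer sizes here are illustrative assumptions; the paper's tunable classical-quantum interpolation and ancilla-based variant are not reproduced.

```python
# Toy classical simulation of a "quantized" feed-forward layer.
import numpy as np

rng = np.random.default_rng(0)

def quantum_layer(inputs, weights, biases):
    """inputs: binary measurement outcomes of the previous layer, shape (n_in,).
    Returns sampled binary outcomes of this layer, shape (n_out,)."""
    theta = weights @ inputs + biases        # rotation angle per output qubit
    p_one = np.sin(theta / 2.0) ** 2         # Born-rule probability of measuring |1>
    return (rng.random(theta.shape) < p_one).astype(float)

x = np.array([1.0, 0.0, 1.0])
w = rng.normal(size=(4, 3))
b = rng.normal(size=4)
print(quantum_layer(x, w, b))
```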
2208.06648 | Vincent Jeanselme | Vincent Jeanselme, Maria De-Arteaga, Zhe Zhang, Jessica Barrett and
Brian Tom | Imputation Strategies Under Clinical Presence: Impact on Algorithmic
Fairness | Full Journal Version under review; Presented at the conference
Machine Learning for Health (ML4H) 2022 Published in the Proceedings of
Machine Learning Research (193) | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning risks reinforcing biases present in data and, as we argue in
this work, in what is absent from data. In healthcare, societal and decision
biases shape patterns in missing data, yet the algorithmic fairness
implications of group-specific missingness are poorly understood. The way we
address missingness in healthcare can have detrimental impacts on downstream
algorithmic fairness. Our work questions current recommendations and practices
aimed at handling missing data with a focus on their effect on algorithmic
fairness, and offers a path forward. Specifically, we consider the theoretical
underpinnings of existing recommendations as well as their empirical predictive
performance and corresponding algorithmic fairness measured through subgroup
performances. Our results show that current practices for handling missingness
lack principled foundations, are disconnected from the realities of missingness
mechanisms in healthcare, and can be counterproductive. For example, we show
that favouring a group-specific imputation strategy can be misguided and
exacerbate prediction disparities. We then build on our findings to propose a
framework for empirically guiding imputation choices, and an accompanying
reporting framework. Our work constitutes an important contribution to recent
efforts by regulators and practitioners to grapple with the realities of
real-world data, and to foster the responsible and transparent deployment of
machine learning systems. We demonstrate the practical utility of the proposed
framework through experimentation on widely used datasets, where we show how
the proposed framework can guide the selection of imputation strategies,
allowing us to choose among strategies that yield equal overall predictive
performance but present different algorithmic fairness properties.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2022 13:34:05 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2022 18:08:04 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jun 2023 21:42:26 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Mar 2025 23:15:24 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Jeanselme",
"Vincent",
""
],
[
"De-Arteaga",
"Maria",
""
],
[
"Zhang",
"Zhe",
""
],
[
"Barrett",
"Jessica",
""
],
[
"Tom",
"Brian",
""
]
] | TITLE: Imputation Strategies Under Clinical Presence: Impact on Algorithmic
Fairness
ABSTRACT: Machine learning risks reinforcing biases present in data and, as we argue in
this work, in what is absent from data. In healthcare, societal and decision
biases shape patterns in missing data, yet the algorithmic fairness
implications of group-specific missingness are poorly understood. The way we
address missingness in healthcare can have detrimental impacts on downstream
algorithmic fairness. Our work questions current recommendations and practices
aimed at handling missing data with a focus on their effect on algorithmic
fairness, and offers a path forward. Specifically, we consider the theoretical
underpinnings of existing recommendations as well as their empirical predictive
performance and corresponding algorithmic fairness measured through subgroup
performances. Our results show that current practices for handling missingness
lack principled foundations, are disconnected from the realities of missingness
mechanisms in healthcare, and can be counterproductive. For example, we show
that favouring a group-specific imputation strategy can be misguided and
exacerbate prediction disparities. We then build on our findings to propose a
framework for empirically guiding imputation choices, and an accompanying
reporting framework. Our work constitutes an important contribution to recent
efforts by regulators and practitioners to grapple with the realities of
real-world data, and to foster the responsible and transparent deployment of
machine learning systems. We demonstrate the practical utility of the proposed
framework through experimentation on widely used datasets, where we show how
the proposed framework can guide the selection of imputation strategies,
allowing us to choose among strategies that yield equal overall predictive
performance but present different algorithmic fairness properties.
|
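A sketch of the kind of empirical comparison the record above argues for: train identical models under different imputation strategies and report performance per subgroup rather than only overall. The synthetic data, group-dependent missingness mechanism, and accuracy metric are illustrative assumptions, not the paper's framework.

```python
# Compare imputation strategies by overall and per-group accuracy.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                         # protected attribute
X = rng.normal(size=(n, 3)) + group[:, None] * 0.5
y = (X[:, 0] + X[:, 1] > 0.5).astype(int)
mask = rng.random(n) < np.where(group == 1, 0.4, 0.1)  # group-dependent missingness
X_missing = X.copy()
X_missing[mask, 2] = np.nan

for strategy in ["mean", "median", "most_frequent"]:
    X_imp = SimpleImputer(strategy=strategy).fit_transform(X_missing)
    pred = LogisticRegression().fit(X_imp, y).predict(X_imp)
    per_group = [accuracy_score(y[group == g], pred[group == g]) for g in (0, 1)]
    print(strategy, "overall", round(accuracy_score(y, pred), 3),
          "per-group", [round(a, 3) for a in per_group])
```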
2209.06428 | Kaiqi Chen | Kaiqi Chen, Junhao Xiao, Jialing Liu, Qiyi Tong, Heng Zhang, Ruyu Liu,
Jianhua Zhang, Arash Ajoudani, Shengyong Chen | Semantic Visual Simultaneous Localization and Mapping: A Survey | 14 pages,3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Simultaneous Localization and Mapping (vSLAM) has achieved great
progress in the computer vision and robotics communities, and has been
successfully used in many fields such as autonomous robot navigation and AR/VR.
However, vSLAM cannot achieve good localization in dynamic and complex
environments. In recent years, numerous publications have reported that, by
combining semantic information with vSLAM, semantic vSLAM systems can solve
the above problems. Nevertheless, there is no
comprehensive survey about semantic vSLAM. To fill the gap, this paper first
reviews the development of semantic vSLAM, explicitly focusing on its strengths
and differences. Secondly, we explore three main issues of semantic vSLAM: the
extraction and association of semantic information, the application of semantic
information, and the advantages of semantic vSLAM. Then, we collect and analyze
the current state-of-the-art SLAM datasets which have been widely used in
semantic vSLAM systems. Finally, we discuss future directions that will provide
a blueprint for the future development of semantic vSLAM.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2022 05:45:26 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 01:34:43 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chen",
"Kaiqi",
""
],
[
"Xiao",
"Junhao",
""
],
[
"Liu",
"Jialing",
""
],
[
"Tong",
"Qiyi",
""
],
[
"Zhang",
"Heng",
""
],
[
"Liu",
"Ruyu",
""
],
[
"Zhang",
"Jianhua",
""
],
[
"Ajoudani",
"Arash",
""
],
[
"Chen",
"Shengyong",
""
]
] | TITLE: Semantic Visual Simultaneous Localization and Mapping: A Survey
ABSTRACT: Visual Simultaneous Localization and Mapping (vSLAM) has achieved great
progress in the computer vision and robotics communities, and has been
successfully used in many fields such as autonomous robot navigation and AR/VR.
However, vSLAM cannot achieve good localization in dynamic and complex
environments. In recent years, numerous publications have reported that, by
combining semantic information with vSLAM, semantic vSLAM systems can solve
the above problems. Nevertheless, there is no
comprehensive survey about semantic vSLAM. To fill the gap, this paper first
reviews the development of semantic vSLAM, explicitly focusing on its strengths
and differences. Secondly, we explore three main issues of semantic vSLAM: the
extraction and association of semantic information, the application of semantic
information, and the advantages of semantic vSLAM. Then, we collect and analyze
the current state-of-the-art SLAM datasets which have been widely used in
semantic vSLAM systems. Finally, we discuss future directions that will provide
a blueprint for the future development of semantic vSLAM.
|
2303.10440 | Herv\'e Turlier | Sacha Ichbiah, Anshuman Sinha, Fabrice Delbary, Herv\'e Turlier | Inverse 3D microscopy rendering for cell shape inference with active
mesh | 11 pages, 9 figures | null | null | null | physics.bio-ph q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Traditional methods for biological shape inference, such as deep learning
(DL) and active contour models, face limitations in 3D. DL requires large
labeled datasets, which are difficult to obtain, while active contour models
rely on fine-tuned hyperparameters for intensity attraction and regularization.
We introduce deltaMic, a novel 3D differentiable renderer for fluorescence
microscopy. By leveraging differentiable Fourier-space convolution, deltaMic
accurately models the image formation process, integrating a parameterized
microscope point spread function and a mesh-based object representation. Unlike
DL-based segmentation, it directly optimizes shape and microscopy parameters to
fit real microscopy data, removing the need for large datasets or heuristic
priors. To enhance efficiency, we develop a GPU-accelerated Fourier transform
for triangle meshes, significantly improving speed. We demonstrate deltaMic's
ability to reconstruct cellular shapes from synthetic and real microscopy
images, providing a robust tool for 3D segmentation and biophysical modeling.
This work bridges physics-based rendering with modern optimization techniques,
offering a new paradigm for microscopy image analysis and inverse biophysical
modeling.
| [
{
"version": "v1",
"created": "Sat, 18 Mar 2023 15:45:10 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 21:54:17 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Ichbiah",
"Sacha",
""
],
[
"Sinha",
"Anshuman",
""
],
[
"Delbary",
"Fabrice",
""
],
[
"Turlier",
"Hervé",
""
]
] | TITLE: Inverse 3D microscopy rendering for cell shape inference with active
mesh
ABSTRACT: Traditional methods for biological shape inference, such as deep learning
(DL) and active contour models, face limitations in 3D. DL requires large
labeled datasets, which are difficult to obtain, while active contour models
rely on fine-tuned hyperparameters for intensity attraction and regularization.
We introduce deltaMic, a novel 3D differentiable renderer for fluorescence
microscopy. By leveraging differentiable Fourier-space convolution, deltaMic
accurately models the image formation process, integrating a parameterized
microscope point spread function and a mesh-based object representation. Unlike
DL-based segmentation, it directly optimizes shape and microscopy parameters to
fit real microscopy data, removing the need for large datasets or heuristic
priors. To enhance efficiency, we develop a GPU-accelerated Fourier transform
for triangle meshes, significantly improving speed. We demonstrate deltaMic's
ability to reconstruct cellular shapes from synthetic and real microscopy
images, providing a robust tool for 3D segmentation and biophysical modeling.
This work bridges physics-based rendering with modern optimization techniques,
offering a new paradigm for microscopy image analysis and inverse biophysical
modeling.
|
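A minimal sketch of the image-formation step that the record above differentiates through: convolution of an object with a point spread function (PSF) by multiplication in Fourier space. A Gaussian PSF and a synthetic 2D object stand in for the paper's parameterized microscope model and mesh-based representation.

```python
# Fourier-space convolution of a toy object with a Gaussian PSF.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def render(obj, psf):
    """Convolve object with PSF by multiplying their Fourier transforms."""
    kernel = np.fft.ifftshift(psf)                     # center PSF at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel)))

obj = np.zeros((128, 128))
obj[40:90, 50:80] = 1.0                                # toy binary "cell"
image = render(obj, gaussian_psf(obj.shape, sigma=3.0))
print(image.shape, float(image.max()))
```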
2304.13343 | Xinnian Liang | Bing Wang, Xinnian Liang, Jian Yang, Hui Huang, Shuangzhi Wu, Peihao
Wu, Lu Lu, Zejun Ma, Zhoujun Li | SCM: Enhancing Large Language Model with Self-Controlled Memory
Framework | Accepted by DASFAA 2025 main conference | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) are constrained by their inability to process
lengthy inputs, resulting in the loss of critical historical information. To
address this limitation, in this paper, we propose the Self-Controlled Memory
(SCM) framework to enhance the ability of LLMs to maintain long-term memory and
recall relevant information. Our SCM framework comprises three key components:
an LLM-based agent serving as the backbone of the framework, a memory stream
storing agent memories, and a memory controller updating memories and
determining when and how to utilize memories from the memory stream. Additionally,
the proposed SCM is able to process ultra-long texts without any modification
or fine-tuning, and it can integrate with any instruction-following LLM in a
plug-and-play paradigm. Furthermore, we annotate a dataset to evaluate the
effectiveness of SCM for handling lengthy inputs. The annotated dataset covers
three tasks: long-term dialogues, book summarization, and meeting
summarization. Experimental results demonstrate that our method achieves better
retrieval recall and generates more informative responses compared to
competitive baselines in long-term dialogues.
(https://github.com/wbbeyourself/SCM4LLMs)
| [
{
"version": "v1",
"created": "Wed, 26 Apr 2023 07:25:31 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Feb 2024 16:01:39 GMT"
},
{
"version": "v3",
"created": "Thu, 19 Sep 2024 13:38:51 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 02:16:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Wang",
"Bing",
""
],
[
"Liang",
"Xinnian",
""
],
[
"Yang",
"Jian",
""
],
[
"Huang",
"Hui",
""
],
[
"Wu",
"Shuangzhi",
""
],
[
"Wu",
"Peihao",
""
],
[
"Lu",
"Lu",
""
],
[
"Ma",
"Zejun",
""
],
[
"Li",
"Zhoujun",
""
]
] | TITLE: SCM: Enhancing Large Language Model with Self-Controlled Memory
Framework
ABSTRACT: Large Language Models (LLMs) are constrained by their inability to process
lengthy inputs, resulting in the loss of critical historical information. To
address this limitation, in this paper, we propose the Self-Controlled Memory
(SCM) framework to enhance the ability of LLMs to maintain long-term memory and
recall relevant information. Our SCM framework comprises three key components:
an LLM-based agent serving as the backbone of the framework, a memory stream
storing agent memories, and a memory controller updating memories and
determining when and how to utilize memories from the memory stream. Additionally,
the proposed SCM is able to process ultra-long texts without any modification
or fine-tuning, and it can integrate with any instruction-following LLM in a
plug-and-play paradigm. Furthermore, we annotate a dataset to evaluate the
effectiveness of SCM for handling lengthy inputs. The annotated dataset covers
three tasks: long-term dialogues, book summarization, and meeting
summarization. Experimental results demonstrate that our method achieves better
retrieval recall and generates more informative responses compared to
competitive baselines in long-term dialogues.
(https://github.com/wbbeyourself/SCM4LLMs)
|
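A toy sketch of the memory-stream-plus-controller idea from the record above: past dialogue turns become memory entries, and the most relevant ones are retrieved for the current query. The bag-of-words cosine similarity stands in for whatever retrieval the actual SCM framework uses; it is purely an illustrative assumption.

```python
# Minimal memory stream with similarity-based retrieval.
from collections import Counter
import math

class MemoryStream:
    def __init__(self):
        self.memories = []  # list of (text, bag-of-words Counter) pairs

    def add(self, text: str):
        self.memories.append((text, Counter(text.lower().split())))

    def retrieve(self, query: str, k: int = 2):
        q = Counter(query.lower().split())
        def cosine(c):
            dot = sum(q[t] * c[t] for t in q)
            norm = (math.sqrt(sum(v * v for v in q.values()))
                    * math.sqrt(sum(v * v for v in c.values())))
            return dot / norm if norm else 0.0
        ranked = sorted(self.memories, key=lambda m: cosine(m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

stream = MemoryStream()
stream.add("User said the quarterly report is due Friday.")
stream.add("User prefers short bullet-point summaries.")
stream.add("User asked about the weather in Berlin.")
print(stream.retrieve("when is the report due?"))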
2307.16530 | Novel Certad | Novel Certad, Sebastian Tschernuth, Cristina Olaverri-Monreal | Extraction of Road Users' Behavior From Realistic Data According to
Assumptions in Safety-Related Models for Automated Driving Systems | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we utilized the methodology outlined in the IEEE Standard
2846-2022 for "Assumptions in Safety-Related Models for Automated Driving
Systems" to extract information on the behavior of other road users in driving
scenarios. This method includes defining high-level scenarios, determining
kinematic characteristics, evaluating safety relevance, and making assumptions
on reasonably predictable behaviors. The assumptions were expressed as
kinematic bounds. The numerical values for these bounds were extracted using
Python scripts to process realistic data from the UniD dataset. The resulting
information enables Automated Driving Systems designers to specify the
parameters and limits of a road user's state in a specific scenario. This
information can be utilized to establish starting conditions for testing a
vehicle that is equipped with an Automated Driving System in simulations or on
actual roads.
| [
{
"version": "v1",
"created": "Mon, 31 Jul 2023 09:50:50 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:55:01 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Certad",
"Novel",
""
],
[
"Tschernuth",
"Sebastian",
""
],
[
"Olaverri-Monreal",
"Cristina",
""
]
] | TITLE: Extraction of Road Users' Behavior From Realistic Data According to
Assumptions in Safety-Related Models for Automated Driving Systems
ABSTRACT: In this work, we utilized the methodology outlined in the IEEE Standard
2846-2022 for "Assumptions in Safety-Related Models for Automated Driving
Systems" to extract information on the behavior of other road users in driving
scenarios. This method includes defining high-level scenarios, determining
kinematic characteristics, evaluating safety relevance, and making assumptions
on reasonably predictable behaviors. The assumptions were expressed as
kinematic bounds. The numerical values for these bounds were extracted using
Python scripts to process realistic data from the UniD dataset. The resulting
information enables Automated Driving Systems designers to specify the
parameters and limits of a road user's state in a specific scenario. This
information can be utilized to establish starting conditions for testing a
vehicle that is equipped with an Automated Driving System in simulations or on
actual roads.
|
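A sketch of how kinematic bounds for "reasonably predictable" behavior can be extracted from trajectory data with a short Python script, in the spirit of the workflow above. The column names, road-user classes, and the use of 1st/99th percentiles are assumptions; the UniD dataset's real schema and the exact IEEE 2846-2022 procedure are not reproduced.

```python
# Percentile-based kinematic bounds per road-user class from (synthetic) tracks.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
tracks = pd.DataFrame({
    "road_user": rng.choice(["pedestrian", "cyclist", "car"], size=5000),
    "speed": np.abs(rng.normal(3.0, 2.0, 5000)),      # m/s
    "lon_accel": rng.normal(0.0, 1.0, 5000),          # m/s^2
    "yaw_rate": rng.normal(0.0, 0.3, 5000),           # rad/s
})

bounds = (tracks.groupby("road_user")[["speed", "lon_accel", "yaw_rate"]]
                .quantile([0.01, 0.99])
                .unstack())
print(bounds.round(2))
```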
2308.02000 | Junyan Cheng | Junyan Cheng and Peter Chin | Bridging Neural and Symbolic Representations with Transitional
Dictionary Learning | ICLR 2024 | null | null | null | cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a novel Transitional Dictionary Learning (TDL)
framework that can implicitly learn symbolic knowledge, such as visual parts
and relations, by reconstructing the input as a combination of parts with
implicit relations. We propose a game-theoretic diffusion model to decompose
the input into visual parts using the dictionaries learned by the Expectation
Maximization (EM) algorithm, implemented as the online prototype clustering,
based on the decomposition results. Additionally, two metrics, clustering
information gain, and heuristic shape score are proposed to evaluate the model.
Experiments are conducted on three abstract compositional visual object
datasets, which require the model to utilize the compositionality of data
instead of simply exploiting visual features. Then, three tasks on symbol
grounding to predefined classes of parts and relations, as well as transfer
learning to unseen classes, followed by a human evaluation, were carried out on
these datasets. The results show that the proposed method discovers
compositional patterns, which significantly outperforms the state-of-the-art
unsupervised part segmentation methods that rely on visual features from
pre-trained backbones. Furthermore, the proposed metrics are consistent with
human evaluations.
| [
{
"version": "v1",
"created": "Thu, 3 Aug 2023 19:29:35 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 23:44:57 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Cheng",
"Junyan",
""
],
[
"Chin",
"Peter",
""
]
] | TITLE: Bridging Neural and Symbolic Representations with Transitional
Dictionary Learning
ABSTRACT: This paper introduces a novel Transitional Dictionary Learning (TDL)
framework that can implicitly learn symbolic knowledge, such as visual parts
and relations, by reconstructing the input as a combination of parts with
implicit relations. We propose a game-theoretic diffusion model to decompose
the input into visual parts using the dictionaries learned by the Expectation
Maximization (EM) algorithm, implemented as the online prototype clustering,
based on the decomposition results. Additionally, two metrics, clustering
information gain, and heuristic shape score are proposed to evaluate the model.
Experiments are conducted on three abstract compositional visual object
datasets, which require the model to utilize the compositionality of data
instead of simply exploiting visual features. Then, three tasks on symbol
grounding to predefined classes of parts and relations, as well as transfer
learning to unseen classes, followed by a human evaluation, were carried out on
these datasets. The results show that the proposed method discovers
compositional patterns, which significantly outperforms the state-of-the-art
unsupervised part segmentation methods that rely on visual features from
pre-trained backbones. Furthermore, the proposed metrics are consistent with
human evaluations.
|
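A minimal sketch of online prototype clustering, one ingredient mentioned in the record above (the EM-style dictionary update). The game-theoretic diffusion decomposition is not reproduced here; features simply arrive as plain vectors, and the learning rate and prototype count are illustrative choices.

```python
# Online prototype clustering: assign each sample to its nearest prototype
# and move that prototype toward the sample.
import numpy as np

rng = np.random.default_rng(0)

def online_prototype_update(prototypes, x, lr=0.1):
    k = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    prototypes[k] += lr * (x - prototypes[k])
    return k

prototypes = rng.normal(size=(4, 2))
stream = np.concatenate([rng.normal(loc=c, scale=0.2, size=(100, 2))
                         for c in ([0, 0], [3, 0], [0, 3], [3, 3])])
rng.shuffle(stream)
for x in stream:
    online_prototype_update(prototypes, x)
print(np.round(prototypes, 2))   # should land near the four cluster centers
```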
2309.15329 | Zekai Liang | Shreya Saha, Zekai Liang, Shan Lin, Jingpei Lu, Michael Yip, Sainan
Liu | BASED: Bundle-Adjusting Surgical Endoscopic Dynamic Video Reconstruction
using Neural Radiance Fields | Accepted to WACV 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Reconstruction of deformable scenes from endoscopic videos is important for
many applications such as intraoperative navigation, surgical visual
perception, and robotic surgery. It is a foundational requirement for realizing
autonomous robotic interventions for minimally invasive surgery. However,
previous approaches in this domain have been limited by their modular nature
and are confined to specific camera and scene settings. Our work adopts the
Neural Radiance Fields (NeRF) approach to learning 3D implicit representations
of scenes that are both dynamic and deformable over time, and furthermore with
unknown camera poses. We demonstrate this approach on endoscopic surgical
scenes from robotic surgery. This work removes the constraints of known camera
poses and overcomes the drawbacks of the state-of-the-art unstructured dynamic
scene reconstruction technique, which relies on the static part of the scene
for accurate reconstruction. Through several experimental datasets, we
demonstrate the versatility of our proposed model to adapt to diverse camera
and scene settings, and show its promise for both current and future robotic
surgical systems.
| [
{
"version": "v1",
"created": "Wed, 27 Sep 2023 00:20:36 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Aug 2024 19:51:49 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Mar 2025 19:24:04 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Saha",
"Shreya",
""
],
[
"Liang",
"Zekai",
""
],
[
"Lin",
"Shan",
""
],
[
"Lu",
"Jingpei",
""
],
[
"Yip",
"Michael",
""
],
[
"Liu",
"Sainan",
""
]
] | TITLE: BASED: Bundle-Adjusting Surgical Endoscopic Dynamic Video Reconstruction
using Neural Radiance Fields
ABSTRACT: Reconstruction of deformable scenes from endoscopic videos is important for
many applications such as intraoperative navigation, surgical visual
perception, and robotic surgery. It is a foundational requirement for realizing
autonomous robotic interventions for minimally invasive surgery. However,
previous approaches in this domain have been limited by their modular nature
and are confined to specific camera and scene settings. Our work adopts the
Neural Radiance Fields (NeRF) approach to learning 3D implicit representations
of scenes that are both dynamic and deformable over time, and furthermore with
unknown camera poses. We demonstrate this approach on endoscopic surgical
scenes from robotic surgery. This work removes the constraints of known camera
poses and overcomes the drawbacks of the state-of-the-art unstructured dynamic
scene reconstruction technique, which relies on the static part of the scene
for accurate reconstruction. Through several experimental datasets, we
demonstrate the versatility of our proposed model to adapt to diverse camera
and scene settings, and show its promise for both current and future robotic
surgical systems.
|
2310.07684 | Lev Telyatnikov | Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga
Zaghen, Simone Scardapane, Pietro Lio | Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design | This work has been published in Transactions on Machine Learning
Research (TMLR). Please cite the journal version:
https://openreview.net/forum?id=8rxtL0kZnX Link to bib:
https://jmlr.org/tmlr/papers/bib/8rxtL0kZnX.bib | Transactions on Machine Learning Research, 2025 | null | null | cs.AI cs.SI | http://creativecommons.org/licenses/by/4.0/ | Most of the current hypergraph learning methodologies and benchmarking
datasets in the hypergraph realm are obtained by lifting procedures from their
graph analogs, leading to overshadowing specific characteristics of
hypergraphs. This paper attempts to confront some pending questions in that
regard: Q1 Can the concept of homophily play a crucial role in Hypergraph
Neural Networks (HNNs)? Q2 Is there room for improving current HNN
architectures by carefully addressing specific characteristics of higher-order
networks? Q3 Do existing datasets provide a meaningful benchmark for HNNs? To
address them, we first introduce a novel conceptualization of homophily in
higher-order networks based on a Message Passing (MP) scheme, unifying both the
analytical examination and the modeling of higher-order networks. Further, we
investigate some natural, yet mostly unexplored, strategies for processing
higher-order structures within HNNs such as keeping hyperedge-dependent node
representations, or performing node/hyperedge stochastic samplings, leading us
to the most general MP formulation up to date -MultiSet-, as well as to an
original architecture design, MultiSetMixer. Finally, we conduct an extensive
set of experiments that contextualize our proposals and successfully provide
insights about our inquiries.
| [
{
"version": "v1",
"created": "Wed, 11 Oct 2023 17:35:20 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Feb 2024 12:45:15 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 10:45:21 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Telyatnikov",
"Lev",
""
],
[
"Bucarelli",
"Maria Sofia",
""
],
[
"Bernardez",
"Guillermo",
""
],
[
"Zaghen",
"Olga",
""
],
[
"Scardapane",
"Simone",
""
],
[
"Lio",
"Pietro",
""
]
] | TITLE: Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design
ABSTRACT: Most of the current hypergraph learning methodologies and benchmarking
datasets in the hypergraph realm are obtained by lifting procedures from their
graph analogs, leading to overshadowing specific characteristics of
hypergraphs. This paper attempts to confront some pending questions in that
regard: Q1 Can the concept of homophily play a crucial role in Hypergraph
Neural Networks (HNNs)? Q2 Is there room for improving current HNN
architectures by carefully addressing specific characteristics of higher-order
networks? Q3 Do existing datasets provide a meaningful benchmark for HNNs? To
address them, we first introduce a novel conceptualization of homophily in
higher-order networks based on a Message Passing (MP) scheme, unifying both the
analytical examination and the modeling of higher-order networks. Further, we
investigate some natural, yet mostly unexplored, strategies for processing
higher-order structures within HNNs such as keeping hyperedge-dependent node
representations, or performing node/hyperedge stochastic samplings, leading us
to the most general MP formulation up to date -MultiSet-, as well as to an
original architecture design, MultiSetMixer. Finally, we conduct an extensive
set of experiments that contextualize our proposals and successfully provide
insights about our inquiries.
|
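A sketch of the two-stage message-passing step that hypergraph neural networks build on: nodes aggregate to hyperedges, then hyperedges aggregate back to nodes, here with plain mean aggregation. The paper's MultiSet formulation and hyperedge-dependent node representations are richer than this; the incidence matrix below is a toy example.

```python
# One node -> hyperedge -> node message-passing round with mean aggregation.
import numpy as np

def hypergraph_mp(X, H):
    """X: (n_nodes, d) features; H: (n_nodes, n_edges) binary incidence matrix."""
    edge_deg = H.sum(axis=0, keepdims=True)            # nodes per hyperedge
    node_deg = H.sum(axis=1, keepdims=True)            # hyperedges per node
    edge_feat = (H.T @ X) / np.maximum(edge_deg.T, 1)  # mean of member nodes
    return (H @ edge_feat) / np.maximum(node_deg, 1)   # mean of incident hyperedges

X = np.arange(12, dtype=float).reshape(4, 3)           # 4 nodes, 3-dim features
H = np.array([[1, 0], [1, 1], [0, 1], [1, 1]], float)  # 2 hyperedges
print(hypergraph_mp(X, H))
```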
2310.11040 | Yang Liu | Yang Liu, Shi Gu | Co-Learning Semantic-aware Unsupervised Segmentation for Pathological
Image Registration | 13 pages, 7 figures, published in Medical Image Computing and
Computer Assisted Intervention (MICCAI) 2023 | International Conference on Medical Image Computing and
Computer-Assisted Intervention, pp. 537-547. Cham: Springer Nature
Switzerland, 2023 | 10.1007/978-3-031-43999-5_51 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The registration of pathological images plays an important role in medical
applications. Despite its significance, most researchers in this field
primarily focus on the registration of normal tissue into normal tissue. The
negative impact of focal tissue, such as the loss of spatial correspondence
information and the abnormal distortion of tissue, is rarely considered. In
this paper, we propose GIRNet, a novel unsupervised approach for pathological
image registration by incorporating segmentation and inpainting through the
principles of Generation, Inpainting, and Registration (GIR). The registration,
segmentation, and inpainting modules are trained simultaneously in a
co-learning manner so that the segmentation of the focal area and the
registration of inpainted pairs can improve collaboratively. Overall, the
registration of pathological images is achieved in a completely unsupervised
learning framework. Experimental results on multiple datasets, including
Magnetic Resonance Imaging (MRI) of T1 sequences, demonstrate the efficacy of
our proposed method. Our results show that our method can accurately achieve
the registration of pathological images and identify lesions even in
challenging imaging modalities. Our unsupervised approach offers a promising
solution for the efficient and cost-effective registration of pathological
images. Our code is available at
https://github.com/brain-intelligence-lab/GIRNet.
| [
{
"version": "v1",
"created": "Tue, 17 Oct 2023 07:13:28 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Oct 2023 06:54:58 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Mar 2025 11:26:12 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Gu",
"Shi",
""
]
] | TITLE: Co-Learning Semantic-aware Unsupervised Segmentation for Pathological
Image Registration
ABSTRACT: The registration of pathological images plays an important role in medical
applications. Despite its significance, most researchers in this field
primarily focus on the registration of normal tissue into normal tissue. The
negative impact of focal tissue, such as the loss of spatial correspondence
information and the abnormal distortion of tissue, is rarely considered. In
this paper, we propose GIRNet, a novel unsupervised approach for pathological
image registration by incorporating segmentation and inpainting through the
principles of Generation, Inpainting, and Registration (GIR). The registration,
segmentation, and inpainting modules are trained simultaneously in a
co-learning manner so that the segmentation of the focal area and the
registration of inpainted pairs can improve collaboratively. Overall, the
registration of pathological images is achieved in a completely unsupervised
learning framework. Experimental results on multiple datasets, including
Magnetic Resonance Imaging (MRI) of T1 sequences, demonstrate the efficacy of
our proposed method. Our results show that our method can accurately achieve
the registration of pathological images and identify lesions even in
challenging imaging modalities. Our unsupervised approach offers a promising
solution for the efficient and cost-effective registration of pathological
images. Our code is available at
https://github.com/brain-intelligence-lab/GIRNet.
|
2310.17042 | Juyoung Yun | Juyoung Yun | Stochastic Gradient Sampling for Enhancing Neural Networks Training | null | null | null | null | cs.LG cs.AI cs.CV cs.NE | http://creativecommons.org/publicdomain/zero/1.0/ | In this paper, we introduce StochGradAdam, a novel optimizer designed as an
extension of the Adam algorithm, incorporating stochastic gradient sampling
techniques to improve computational efficiency while maintaining robust
performance. StochGradAdam optimizes by selectively sampling a subset of
gradients during training, reducing the computational cost while preserving the
advantages of adaptive learning rates and bias corrections found in Adam. Our
experimental results, applied to image classification and segmentation tasks,
demonstrate that StochGradAdam can achieve comparable or superior performance
to Adam, even when using fewer gradient updates per iteration. By focusing on
key gradient updates, StochGradAdam offers stable convergence and enhanced
exploration of the loss landscape, while mitigating the impact of noisy
gradients. The results suggest that this approach is particularly effective for
large-scale models and datasets, providing a promising alternative to
traditional optimization techniques for deep learning applications.
| [
{
"version": "v1",
"created": "Wed, 25 Oct 2023 22:45:31 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Feb 2024 23:39:47 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Oct 2024 21:54:46 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Mar 2025 04:05:56 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Yun",
"Juyoung",
""
]
] | TITLE: Stochastic Gradient Sampling for Enhancing Neural Networks Training
ABSTRACT: In this paper, we introduce StochGradAdam, a novel optimizer designed as an
extension of the Adam algorithm, incorporating stochastic gradient sampling
techniques to improve computational efficiency while maintaining robust
performance. StochGradAdam optimizes by selectively sampling a subset of
gradients during training, reducing the computational cost while preserving the
advantages of adaptive learning rates and bias corrections found in Adam. Our
experimental results, applied to image classification and segmentation tasks,
demonstrate that StochGradAdam can achieve comparable or superior performance
to Adam, even when using fewer gradient updates per iteration. By focusing on
key gradient updates, StochGradAdam offers stable convergence and enhanced
exploration of the loss landscape, while mitigating the impact of noisy
gradients. The results suggest that this approach is particularly effective for
large-scale models and datasets, providing a promising alternative to
traditional optimization techniques for deep learning applications.
|
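A sketch of the gradient-sampling idea behind the record above: keep only a random subset of gradient coordinates each step, then apply Adam-style moment updates with bias correction. The masking scheme and hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Adam-style optimizer that applies a random mask to the gradient each step.
import numpy as np

class SampledAdam:
    def __init__(self, shape, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, sample_rate=0.5, seed=0):
        self.m = np.zeros(shape)
        self.v = np.zeros(shape)
        self.t = 0
        self.lr, self.b1, self.b2, self.eps = lr, beta1, beta2, eps
        self.rate = sample_rate
        self.rng = np.random.default_rng(seed)

    def step(self, params, grads):
        self.t += 1
        mask = self.rng.random(grads.shape) < self.rate   # sampled coordinates
        g = np.where(mask, grads, 0.0)
        self.m = self.b1 * self.m + (1 - self.b1) * g
        self.v = self.b2 * self.v + (1 - self.b2) * g * g
        m_hat = self.m / (1 - self.b1 ** self.t)           # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.ones(5)
opt = SampledAdam(w.shape, lr=0.1)
for _ in range(200):
    w = opt.step(w, 2 * w)
print(np.round(w, 4))
```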
2312.02167 | Felix Terhag | F. Terhag, P. Knechtges, A. Basermann, R. Tempone | Uncertainty Quantification in Machine Learning Based Segmentation: A
Post-Hoc Approach for Left Ventricle Volume Estimation in MRI | null | SIAM/ASA Journal on Uncertainty Quantification 13 (1), 2025,
90-113 | 10.1137/23M161433X | null | cs.CV stat.ME | http://creativecommons.org/licenses/by/4.0/ | Recent studies have confirmed cardiovascular diseases remain responsible for
the highest death toll amongst non-communicable diseases. Accurate left ventricular
(LV) volume estimation is critical for valid diagnosis and management of
various cardiovascular conditions, but poses a significant challenge due to
inherent uncertainties associated with segmentation algorithms in magnetic
resonance imaging (MRI). Recent machine learning advancements, particularly
U-Net-like convolutional networks, have facilitated automated segmentation for
medical images, but struggle under certain pathologies and/or different
scanner vendors and imaging protocols. This study proposes a novel methodology
for post-hoc uncertainty estimation in LV volume prediction using It\^{o}
stochastic differential equations (SDEs) to model path-wise behavior for the
prediction error. The model describes the area of the left ventricle along the
heart's long axis. The method is agnostic to the underlying segmentation
algorithm, facilitating its use with various existing and future segmentation
technologies. The proposed approach provides a mechanism for quantifying
uncertainty, enabling medical professionals to intervene for unreliable
predictions. This is of utmost importance in critical applications such as
medical diagnosis, where prediction accuracy and reliability can directly
impact patient outcomes. The method is also robust to dataset changes, enabling
application for medical centers with limited access to labeled data. Our
findings highlight the proposed uncertainty estimation methodology's potential
to enhance automated segmentation robustness and generalizability, paving the
way for more reliable and accurate LV volume estimation in clinical settings as
well as opening new avenues for uncertainty quantification in biomedical image
segmentation, providing promising directions for future research.
| [
{
"version": "v1",
"created": "Mon, 30 Oct 2023 13:44:55 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 14:50:51 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Terhag",
"F.",
""
],
[
"Knechtges",
"P.",
""
],
[
"Basermann",
"A.",
""
],
[
"Tempone",
"R.",
""
]
] | TITLE: Uncertainty Quantification in Machine Learning Based Segmentation: A
Post-Hoc Approach for Left Ventricle Volume Estimation in MRI
ABSTRACT: Recent studies have confirmed cardiovascular diseases remain responsible for
the highest death toll amongst non-communicable diseases. Accurate left ventricular
(LV) volume estimation is critical for valid diagnosis and management of
various cardiovascular conditions, but poses a significant challenge due to
inherent uncertainties associated with segmentation algorithms in magnetic
resonance imaging (MRI). Recent machine learning advancements, particularly
U-Net-like convolutional networks, have facilitated automated segmentation for
medical images, but struggle under certain pathologies and/or different
scanner vendors and imaging protocols. This study proposes a novel methodology
for post-hoc uncertainty estimation in LV volume prediction using It\^{o}
stochastic differential equations (SDEs) to model path-wise behavior for the
prediction error. The model describes the area of the left ventricle along the
heart's long axis. The method is agnostic to the underlying segmentation
algorithm, facilitating its use with various existing and future segmentation
technologies. The proposed approach provides a mechanism for quantifying
uncertainty, enabling medical professionals to intervene for unreliable
predictions. This is of utmost importance in critical applications such as
medical diagnosis, where prediction accuracy and reliability can directly
impact patient outcomes. The method is also robust to dataset changes, enabling
application for medical centers with limited access to labeled data. Our
findings highlight the proposed uncertainty estimation methodology's potential
to enhance automated segmentation robustness and generalizability, paving the
way for more reliable and accurate LV volume estimation in clinical settings as
well as opening new avenues for uncertainty quantification in biomedical image
segmentation, providing promising directions for future research.
|
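A sketch of simulating an Itô SDE with the Euler-Maruyama scheme, the kind of path-wise error model described in the record above (here a mean-reverting Ornstein-Uhlenbeck process along the ventricle's long axis). The drift and diffusion choices are illustrative assumptions, not the paper's fitted model.

```python
# Euler-Maruyama simulation of dX = -theta*X dt + sigma dW over many paths.
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, drift, diffusion, n_steps, dt, n_paths):
    x = np.full(n_paths, x0, dtype=float)
    paths = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)     # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
        paths.append(x.copy())
    return np.array(paths)                             # shape (n_steps+1, n_paths)

theta, sigma = 2.0, 0.5
paths = euler_maruyama(0.0, lambda x: -theta * x,
                       lambda x: sigma * np.ones_like(x),
                       n_steps=100, dt=0.01, n_paths=500)
print("std of error at the final slice:", round(paths[-1].std(), 3))
```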
2312.15686 | Soumick Chatterjee | Soumick Chatterjee, Franziska Gaidzik, Alessandro Sciarra, Hendrik
Mattern, G\'abor Janiga, Oliver Speck, Andreas N\"urnberger and Sahani
Pathiraja | PULASki: Learning inter-rater variability using statistical distances to
improve probabilistic segmentation | null | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of medical imaging, many supervised learning based methods for
segmentation face several challenges such as high variability in annotations
from multiple experts, paucity of labelled data and class imbalanced datasets.
These issues may result in segmentations that lack the requisite precision for
clinical analysis and can be misleadingly overconfident without associated
uncertainty quantification. This work proposes the PULASki method as a
computationally efficient generative tool for biomedical image segmentation
that accurately captures variability in expert annotations, even in small
datasets. This approach makes use of an improved loss function based on
statistical distances in a conditional variational autoencoder structure
(Probabilistic UNet), which improves learning of the conditional decoder
compared to the standard cross-entropy particularly in class imbalanced
problems. The proposed method was analysed for two structurally different
segmentation tasks (intracranial vessel and multiple sclerosis (MS) lesion), and
its results were compared to four well-established baselines in terms of quantitative
metrics and qualitative output. These experiments involve class-imbalanced
datasets characterised by challenging features, including suboptimal
signal-to-noise ratios and high ambiguity. Empirical results demonstrate the
PULASKi method outperforms all baselines at the 5\% significance level. Our
experiments are also among the first to present a comparative study of the
computationally feasible segmentation of complex geometries using 3D patches
and the traditional use of 2D slices. The generated segmentations are shown to
be much more anatomically plausible than in the 2D case, particularly for the
vessel task.
| [
{
"version": "v1",
"created": "Mon, 25 Dec 2023 10:31:22 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 16:50:49 GMT"
}
] | 2025-03-19T00:00:00 | [
[
"Chatterjee",
"Soumick",
""
],
[
"Gaidzik",
"Franziska",
""
],
[
"Sciarra",
"Alessandro",
""
],
[
"Mattern",
"Hendrik",
""
],
[
"Janiga",
"Gábor",
""
],
[
"Speck",
"Oliver",
""
],
[
"Nürnberger",
"Andreas",
""
],
[
"Pathiraja",
"Sahani",
""
]
] | TITLE: PULASki: Learning inter-rater variability using statistical distances to
improve probabilistic segmentation
ABSTRACT: In the domain of medical imaging, many supervised learning based methods for
segmentation face several challenges such as high variability in annotations
from multiple experts, paucity of labelled data and class imbalanced datasets.
These issues may result in segmentations that lack the requisite precision for
clinical analysis and can be misleadingly overconfident without associated
uncertainty quantification. This work proposes the PULASki method as a
computationally efficient generative tool for biomedical image segmentation
that accurately captures variability in expert annotations, even in small
datasets. This approach makes use of an improved loss function based on
statistical distances in a conditional variational autoencoder structure
(Probabilistic UNet), which improves learning of the conditional decoder
compared to the standard cross-entropy particularly in class imbalanced
problems. The proposed method was analysed for two structurally different
segmentation tasks (intracranial vessel and multiple sclerosis (MS) lesion), and
its results were compared to four well-established baselines in terms of quantitative
metrics and qualitative output. These experiments involve class-imbalanced
datasets characterised by challenging features, including suboptimal
signal-to-noise ratios and high ambiguity. Empirical results demonstrate the
PULASKi method outperforms all baselines at the 5\% significance level. Our
experiments are also among the first to present a comparative study of the
computationally feasible segmentation of complex geometries using 3D patches
and the traditional use of 2D slices. The generated segmentations are shown to
be much more anatomically plausible than in the 2D case, particularly for the
vessel task.
|