id (stringlengths 9-16) | submitter (stringlengths 3-64, ⌀) | authors (stringlengths 5-6.63k) | title (stringlengths 7-245) | comments (stringlengths 1-482, ⌀) | journal-ref (stringlengths 4-382, ⌀) | doi (stringlengths 9-151, ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5-108) | license (stringclasses, 9 values) | abstract (stringlengths 83-3.41k) | versions (listlengths 1-20) | update_date (timestamp[s], 2007-05-23 00:00:00 to 2025-04-11 00:00:00) | authors_parsed (listlengths 1-427) | prompt (stringlengths 166-3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.06683 | Xuechao Zou | Xuechao Zou, Yue Li, Shun Zhang, Kai Li, Shiying Wang, Pin Tao,
Junliang Xing, Congyan Lang | Dynamic Dictionary Learning for Remote Sensing Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing image segmentation faces persistent challenges in
distinguishing morphologically similar categories and adapting to diverse scene
variations. While existing methods rely on implicit representation learning
paradigms, they often fail to dynamically adjust semantic embeddings according
to contextual cues, leading to suboptimal performance in fine-grained scenarios
such as cloud thickness differentiation. This work introduces a dynamic
dictionary learning framework that explicitly models class ID embeddings
through iterative refinement. The core contribution lies in a novel dictionary
construction mechanism, where class-aware semantic embeddings are progressively
updated via multi-stage alternating cross-attention querying between image
features and dictionary embeddings. This process enables adaptive
representation learning tailored to input-specific characteristics, effectively
resolving ambiguities in intra-class heterogeneity and inter-class homogeneity.
To further enhance discriminability, a contrastive constraint is applied to the
dictionary space, ensuring compact intra-class distributions while maximizing
inter-class separability. Extensive experiments across both coarse- and
fine-grained datasets demonstrate consistent improvements over state-of-the-art
methods, particularly in two online test benchmarks (LoveDA and UAVid). Code is
available at https://anonymous.4open.science/r/D2LS-8267/.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:25:16 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zou",
"Xuechao",
""
],
[
"Li",
"Yue",
""
],
[
"Zhang",
"Shun",
""
],
[
"Li",
"Kai",
""
],
[
"Wang",
"Shiying",
""
],
[
"Tao",
"Pin",
""
],
[
"Xing",
"Junliang",
""
],
[
"Lang",
"Congyan",
""
]
]
| TITLE: Dynamic Dictionary Learning for Remote Sensing Image Segmentation
ABSTRACT: Remote sensing image segmentation faces persistent challenges in
distinguishing morphologically similar categories and adapting to diverse scene
variations. While existing methods rely on implicit representation learning
paradigms, they often fail to dynamically adjust semantic embeddings according
to contextual cues, leading to suboptimal performance in fine-grained scenarios
such as cloud thickness differentiation. This work introduces a dynamic
dictionary learning framework that explicitly models class ID embeddings
through iterative refinement. The core contribution lies in a novel dictionary
construction mechanism, where class-aware semantic embeddings are progressively
updated via multi-stage alternating cross-attention querying between image
features and dictionary embeddings. This process enables adaptive
representation learning tailored to input-specific characteristics, effectively
resolving ambiguities in intra-class heterogeneity and inter-class homogeneity.
To further enhance discriminability, a contrastive constraint is applied to the
dictionary space, ensuring compact intra-class distributions while maximizing
inter-class separability. Extensive experiments across both coarse- and
fine-grained datasets demonstrate consistent improvements over state-of-the-art
methods, particularly in two online test benchmarks (LoveDA and UAVid). Code is
available at https://anonymous.4open.science/r/D2LS-8267/.
| no_new_dataset | 0.943608 |
2503.06684 | Qingdong He | Yanjie Pan, Qingdong He, Zhengkai Jiang, Pengcheng Xu, Chaoyi Wang,
Jinlong Peng, Haoxuan Wang, Yun Cao, Zhenye Gan, Mingmin Chi, Bo Peng, Yabiao
Wang | PixelPonder: Dynamic Patch Adaptation for Enhanced Multi-Conditional
Text-to-Image Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in diffusion-based text-to-image generation have demonstrated
promising results through visual condition control. However, existing
ControlNet-like methods struggle with compositional visual conditioning -
simultaneously preserving semantic fidelity across multiple heterogeneous
control signals while maintaining high visual quality, where they employ
separate control branches that often introduce conflicting guidance during the
denoising process, leading to structural distortions and artifacts in generated
images. To address this issue, we present PixelPonder, a novel unified control
framework, which allows for effective control of multiple visual conditions
under a single control structure. Specifically, we design a patch-level
adaptive condition selection mechanism that dynamically prioritizes spatially
relevant control signals at the sub-region level, enabling precise local
guidance without global interference. Additionally, a time-aware control
injection scheme is deployed to modulate condition influence according to
denoising timesteps, progressively transitioning from structural preservation
to texture refinement and fully utilizing the control information from
different categories to promote more harmonious image generation. Extensive
experiments demonstrate that PixelPonder surpasses previous methods across
different benchmark datasets, showing superior improvement in spatial alignment
accuracy while maintaining high textual semantic consistency.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:27:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Pan",
"Yanjie",
""
],
[
"He",
"Qingdong",
""
],
[
"Jiang",
"Zhengkai",
""
],
[
"Xu",
"Pengcheng",
""
],
[
"Wang",
"Chaoyi",
""
],
[
"Peng",
"Jinlong",
""
],
[
"Wang",
"Haoxuan",
""
],
[
"Cao",
"Yun",
""
],
[
"Gan",
"Zhenye",
""
],
[
"Chi",
"Mingmin",
""
],
[
"Peng",
"Bo",
""
],
[
"Wang",
"Yabiao",
""
]
]
| TITLE: PixelPonder: Dynamic Patch Adaptation for Enhanced Multi-Conditional
Text-to-Image Generation
ABSTRACT: Recent advances in diffusion-based text-to-image generation have demonstrated
promising results through visual condition control. However, existing
ControlNet-like methods struggle with compositional visual conditioning -
simultaneously preserving semantic fidelity across multiple heterogeneous
control signals while maintaining high visual quality, where they employ
separate control branches that often introduce conflicting guidance during the
denoising process, leading to structural distortions and artifacts in generated
images. To address this issue, we present PixelPonder, a novel unified control
framework, which allows for effective control of multiple visual conditions
under a single control structure. Specifically, we design a patch-level
adaptive condition selection mechanism that dynamically prioritizes spatially
relevant control signals at the sub-region level, enabling precise local
guidance without global interference. Additionally, a time-aware control
injection scheme is deployed to modulate condition influence according to
denoising timesteps, progressively transitioning from structural preservation
to texture refinement and fully utilizing the control information from
different categories to promote more harmonious image generation. Extensive
experiments demonstrate that PixelPonder surpasses previous methods across
different benchmark datasets, showing superior improvement in spatial alignment
accuracy while maintaining high textual semantic consistency.
| no_new_dataset | 0.944485 |
2503.06686 | Sheng Song | Sheng Song, Yiting Chen, Duo Xu, Songhan Ge, Yunqian Huang, Junni Shi,
Man Chen, Hongbo Chen, Rui Zheng | ImplicitCell: Resolution Cell Modeling of Joint Implicit Volume
Reconstruction and Pose Refinement in Freehand 3D Ultrasound | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Freehand 3D ultrasound enables volumetric imaging by tracking a conventional
ultrasound probe during freehand scanning, offering enriched spatial
information that improves clinical diagnosis. However, the quality of
reconstructed volumes is often compromised by tracking system noise and
irregular probe movements, leading to artifacts in the final reconstruction. To
address these challenges, we propose ImplicitCell, a novel framework that
integrates Implicit Neural Representation (INR) with an ultrasound resolution
cell model for joint optimization of volume reconstruction and pose refinement.
Three distinct datasets are used for comprehensive validation, including
phantom, common carotid artery, and carotid atherosclerosis. Experimental
results demonstrate that ImplicitCell significantly reduces reconstruction
artifacts and improves volume quality compared to existing methods,
particularly in challenging scenarios with noisy tracking data. These
improvements enhance the clinical utility of freehand 3D ultrasound by
providing more reliable and precise diagnostic information.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:40:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Song",
"Sheng",
""
],
[
"Chen",
"Yiting",
""
],
[
"Xu",
"Duo",
""
],
[
"Ge",
"Songhan",
""
],
[
"Huang",
"Yunqian",
""
],
[
"Shi",
"Junni",
""
],
[
"Chen",
"Man",
""
],
[
"Chen",
"Hongbo",
""
],
[
"Zheng",
"Rui",
""
]
]
| TITLE: ImplicitCell: Resolution Cell Modeling of Joint Implicit Volume
Reconstruction and Pose Refinement in Freehand 3D Ultrasound
ABSTRACT: Freehand 3D ultrasound enables volumetric imaging by tracking a conventional
ultrasound probe during freehand scanning, offering enriched spatial
information that improves clinical diagnosis. However, the quality of
reconstructed volumes is often compromised by tracking system noise and
irregular probe movements, leading to artifacts in the final reconstruction. To
address these challenges, we propose ImplicitCell, a novel framework that
integrates Implicit Neural Representation (INR) with an ultrasound resolution
cell model for joint optimization of volume reconstruction and pose refinement.
Three distinct datasets are used for comprehensive validation, including
phantom, common carotid artery, and carotid atherosclerosis. Experimental
results demonstrate that ImplicitCell significantly reduces reconstruction
artifacts and improves volume quality compared to existing methods,
particularly in challenging scenarios with noisy tracking data. These
improvements enhance the clinical utility of freehand 3D ultrasound by
providing more reliable and precise diagnostic information.
| no_new_dataset | 0.952794 |
2503.06690 | Animesh Kumar Paul | Animesh Kumar Paul and Russell Greiner | Censoring-Aware Tree-Based Reinforcement Learning for Estimating Dynamic
Treatment Regimes with Censored Outcomes | null | null | null | null | cs.LG cs.AI stat.ME | http://creativecommons.org/licenses/by/4.0/ | Dynamic Treatment Regimes (DTRs) provide a systematic approach for making
sequential treatment decisions that adapt to individual patient
characteristics, particularly in clinical contexts where survival outcomes are
of interest. Censoring-Aware Tree-Based Reinforcement Learning (CA-TRL) is a
novel framework to address the complexities associated with censored data when
estimating optimal DTRs. We explore ways to learn effective DTRs, from
observational data. By enhancing traditional tree-based reinforcement learning
methods with augmented inverse probability weighting (AIPW) and censoring-aware
modifications, CA-TRL delivers robust and interpretable treatment strategies.
We demonstrate its effectiveness through extensive simulations and real-world
applications using the SANAD epilepsy dataset, where it outperformed the
recently proposed ASCL method in key metrics such as restricted mean survival
time (RMST) and decision-making accuracy. This work represents a step forward
in advancing personalized and data-driven treatment strategies across diverse
healthcare settings.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:53:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Paul",
"Animesh Kumar",
""
],
[
"Greiner",
"Russell",
""
]
]
| TITLE: Censoring-Aware Tree-Based Reinforcement Learning for Estimating Dynamic
Treatment Regimes with Censored Outcomes
ABSTRACT: Dynamic Treatment Regimes (DTRs) provide a systematic approach for making
sequential treatment decisions that adapt to individual patient
characteristics, particularly in clinical contexts where survival outcomes are
of interest. Censoring-Aware Tree-Based Reinforcement Learning (CA-TRL) is a
novel framework to address the complexities associated with censored data when
estimating optimal DTRs. We explore ways to learn effective DTRs, from
observational data. By enhancing traditional tree-based reinforcement learning
methods with augmented inverse probability weighting (AIPW) and censoring-aware
modifications, CA-TRL delivers robust and interpretable treatment strategies.
We demonstrate its effectiveness through extensive simulations and real-world
applications using the SANAD epilepsy dataset, where it outperformed the
recently proposed ASCL method in key metrics such as restricted mean survival
time (RMST) and decision-making accuracy. This work represents a step forward
in advancing personalized and data-driven treatment strategies across diverse
healthcare settings.
| no_new_dataset | 0.943764 |
2503.06698 | Xavier Thomas | Xavier Thomas, Deepti Ghadiyaram | What's in a Latent? Leveraging Diffusion Latent Space for Domain
Generalization | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Domain Generalization aims to develop models that can generalize to novel and
unseen data distributions. In this work, we study how model architectures and
pre-training objectives impact feature richness and propose a method to
effectively leverage them for domain generalization. Specifically, given a
pre-trained feature space, we first discover latent domain structures, referred
to as pseudo-domains, that capture domain-specific variations in an
unsupervised manner. Next, we augment existing classifiers with these
complementary pseudo-domain representations making them more amenable to
diverse unseen test domains. We analyze how different pre-training feature
spaces differ in the domain-specific variances they capture. Our empirical
studies reveal that features from diffusion models excel at separating domains
in the absence of explicit domain labels and capture nuanced domain-specific
information. On 5 datasets, we show that our very simple framework improves
generalization to unseen domains by a maximum test accuracy improvement of over
4% compared to the standard baseline Empirical Risk Minimization (ERM).
Crucially, our method outperforms most algorithms that access domain labels
during training.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 17:29:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Thomas",
"Xavier",
""
],
[
"Ghadiyaram",
"Deepti",
""
]
]
| TITLE: What's in a Latent? Leveraging Diffusion Latent Space for Domain
Generalization
ABSTRACT: Domain Generalization aims to develop models that can generalize to novel and
unseen data distributions. In this work, we study how model architectures and
pre-training objectives impact feature richness and propose a method to
effectively leverage them for domain generalization. Specifically, given a
pre-trained feature space, we first discover latent domain structures, referred
to as pseudo-domains, that capture domain-specific variations in an
unsupervised manner. Next, we augment existing classifiers with these
complementary pseudo-domain representations making them more amenable to
diverse unseen test domains. We analyze how different pre-training feature
spaces differ in the domain-specific variances they capture. Our empirical
studies reveal that features from diffusion models excel at separating domains
in the absence of explicit domain labels and capture nuanced domain-specific
information. On 5 datasets, we show that our very simple framework improves
generalization to unseen domains by a maximum test accuracy improvement of over
4% compared to the standard baseline Empirical Risk Minimization (ERM).
Crucially, our method outperforms most algorithms that access domain labels
during training.
| no_new_dataset | 0.952574 |
2503.06699 | Arnaud Demortiere Dr. | Junhao Cao, Nicolas Folastre, Gozde Oney, Edgar Rauch, Stavros
Nicolopoulos, Partha Pratim Das, Arnaud Demorti\`ere | Unsupervised Multi-Clustering and Decision-Making Strategies for 4D-STEM
Orientation Mapping | 32 pages, 5 figures, 5 figures in SI | null | null | null | cs.LG cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | This study presents a novel integration of unsupervised learning and
decision-making strategies for the advanced analysis of 4D-STEM datasets, with
a focus on non-negative matrix factorization (NMF) as the primary clustering
method. Our approach introduces a systematic framework to determine the optimal
number of components (k) required for robust and interpretable orientation
mapping. By leveraging the K-Component Loss method and Image Quality Assessment
(IQA) metrics, we effectively balance reconstruction fidelity and model
complexity. Additionally, we highlight the critical role of dataset
preprocessing in improving clustering stability and accuracy. Furthermore, our
spatial weight matrix analysis provides insights into overlapping regions
within the dataset by employing threshold-based visualization, facilitating a
detailed understanding of cluster interactions. The results demonstrate the
potential of combining NMF with advanced IQA metrics and preprocessing
techniques for reliable orientation mapping and structural analysis in 4D-STEM
datasets, paving the way for future applications in multi-dimensional material
characterization.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 17:31:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Cao",
"Junhao",
""
],
[
"Folastre",
"Nicolas",
""
],
[
"Oney",
"Gozde",
""
],
[
"Rauch",
"Edgar",
""
],
[
"Nicolopoulos",
"Stavros",
""
],
[
"Das",
"Partha Pratim",
""
],
[
"Demortière",
"Arnaud",
""
]
]
| TITLE: Unsupervised Multi-Clustering and Decision-Making Strategies for 4D-STEM
Orientation Mapping
ABSTRACT: This study presents a novel integration of unsupervised learning and
decision-making strategies for the advanced analysis of 4D-STEM datasets, with
a focus on non-negative matrix factorization (NMF) as the primary clustering
method. Our approach introduces a systematic framework to determine the optimal
number of components (k) required for robust and interpretable orientation
mapping. By leveraging the K-Component Loss method and Image Quality Assessment
(IQA) metrics, we effectively balance reconstruction fidelity and model
complexity. Additionally, we highlight the critical role of dataset
preprocessing in improving clustering stability and accuracy. Furthermore, our
spatial weight matrix analysis provides insights into overlapping regions
within the dataset by employing threshold-based visualization, facilitating a
detailed understanding of cluster interactions. The results demonstrate the
potential of combining NMF with advanced IQA metrics and preprocessing
techniques for reliable orientation mapping and structural analysis in 4D-STEM
datasets, paving the way for future applications in multi-dimensional material
characterization.
| no_new_dataset | 0.949342 |
2503.06705 | Apostolos Angelis | Apostolos Angelis and George Kousiouris | A Survey on the Landscape of Self-adaptive Cloud Design and Operations
Patterns: Goals, Strategies, Tooling, Evaluation and Dataset Perspectives | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud-native applications have significantly advanced the development and
scalability of online services through the use of microservices and modular
architectures. However, achieving adaptability, resilience, and efficient
performance management within cloud environments remains a key challenge. This
survey provides an overview of self-adaptive cloud design and operations
patterns published over the last seven years, focusing on a taxonomy of their
objectives, scope of control, decision-making mechanisms approach, automation
level and validation methodologies. Overall, 96 papers have been taken under
consideration, indicating a significant increase in the years since 2023 in the
produced output. The analysis highlights the prevalence of feedback loop
structures, with both reactive and proactive implementations, and underscores
the increasing role of machine learning techniques in predictive management,
especially when it comes to resource provisioning and management of the
executed applications. On the other hand, adaptive application architectures
through direct application-level pattern-based management seem significantly
underrepresented in the current field of research, thus serving as an
uninvestigated area for future research. Furthermore, the current work
highlights practical aspects such as validation datasets per category
(application, resource, network, etc.), tools, technologies and frameworks
usage during the experimentation, in order to guide researchers in the
validation process for comparative and robust experimentation.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 17:41:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Angelis",
"Apostolos",
""
],
[
"Kousiouris",
"George",
""
]
]
| TITLE: A Survey on the Landscape of Self-adaptive Cloud Design and Operations
Patterns: Goals, Strategies, Tooling, Evaluation and Dataset Perspectives
ABSTRACT: Cloud-native applications have significantly advanced the development and
scalability of online services through the use of microservices and modular
architectures. However, achieving adaptability, resilience, and efficient
performance management within cloud environments remains a key challenge. This
survey provides an overview of self-adaptive cloud design and operations
patterns published over the last seven years, focusing on a taxonomy of their
objectives, scope of control, decision-making mechanisms approach, automation
level and validation methodologies. Overall, 96 papers have been taken under
consideration, indicating a significant increase in the years since 2023 in the
produced output. The analysis highlights the prevalence of feedback loop
structures, with both reactive and proactive implementations, and underscores
the increasing role of machine learning techniques in predictive management,
especially when it comes to resource provisioning and management of the
executed applications. On the other hand, adaptive application architectures
through direct application-level pattern-based management seem significantly
underrepresented in the current field of research, thus serving as an
uninvestigated area for future research. Furthermore, the current work
highlights practical aspects such as validation datasets per category
(application, resource, network, etc.), tools, technologies and frameworks
usage during the experimentation, in order to guide researchers in the
validation process for comparative and robust experimentation.
| no_new_dataset | 0.940844 |
2503.06706 | Ming Zhang | Ming Zhang, Yuhui Wang, Yujiong Shen, Tingyi Yang, Changhao Jiang,
Yilong Wu, Shihan Dou, Qinhao Chen, Zhiheng Xi, Zhihao Zhang, Yi Dong, Zhen
Wang, Zhihui Fei, Mingyang Wan, Tao Liang, Guojun Ma, Qi Zhang, Tao Gui and
Xuanjing Huang | PFDial: A Structured Dialogue Instruction Fine-tuning Method Based on
UML Flowcharts | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process-driven dialogue systems, which operate under strict predefined
process constraints, are essential in customer service and equipment
maintenance scenarios. Although Large Language Models (LLMs) have shown
remarkable progress in dialogue and reasoning, they still struggle to solve
these strictly constrained dialogue tasks. To address this challenge, we
construct Process Flow Dialogue (PFDial) dataset, which contains 12,705
high-quality Chinese dialogue instructions derived from 440 flowcharts
containing 5,055 process nodes. Based on PlantUML specification, each UML
flowchart is converted into atomic dialogue units i.e., structured five-tuples.
Experimental results demonstrate that a 7B model trained with merely 800
samples, and a 0.5B model trained on total data both can surpass 90% accuracy.
Additionally, the 8B model can surpass GPT-4o up to 43.88% with an average of
11.00%. We further evaluate models' performance on challenging backward
transitions in process flows and conduct an in-depth analysis of various
dataset formats to reveal their impact on model performance in handling
decision and sequential branches. The data is released in
https://github.com/KongLongGeFDU/PFDial.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 17:43:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Ming",
""
],
[
"Wang",
"Yuhui",
""
],
[
"Shen",
"Yujiong",
""
],
[
"Yang",
"Tingyi",
""
],
[
"Jiang",
"Changhao",
""
],
[
"Wu",
"Yilong",
""
],
[
"Dou",
"Shihan",
""
],
[
"Chen",
"Qinhao",
""
],
[
"Xi",
"Zhiheng",
""
],
[
"Zhang",
"Zhihao",
""
],
[
"Dong",
"Yi",
""
],
[
"Wang",
"Zhen",
""
],
[
"Fei",
"Zhihui",
""
],
[
"Wan",
"Mingyang",
""
],
[
"Liang",
"Tao",
""
],
[
"Ma",
"Guojun",
""
],
[
"Zhang",
"Qi",
""
],
[
"Gui",
"Tao",
""
],
[
"Huang",
"Xuanjing",
""
]
]
| TITLE: PFDial: A Structured Dialogue Instruction Fine-tuning Method Based on
UML Flowcharts
ABSTRACT: Process-driven dialogue systems, which operate under strict predefined
process constraints, are essential in customer service and equipment
maintenance scenarios. Although Large Language Models (LLMs) have shown
remarkable progress in dialogue and reasoning, they still struggle to solve
these strictly constrained dialogue tasks. To address this challenge, we
construct Process Flow Dialogue (PFDial) dataset, which contains 12,705
high-quality Chinese dialogue instructions derived from 440 flowcharts
containing 5,055 process nodes. Based on PlantUML specification, each UML
flowchart is converted into atomic dialogue units i.e., structured five-tuples.
Experimental results demonstrate that a 7B model trained with merely 800
samples, and a 0.5B model trained on total data both can surpass 90% accuracy.
Additionally, the 8B model can surpass GPT-4o up to 43.88% with an average of
11.00%. We further evaluate models' performance on challenging backward
transitions in process flows and conduct an in-depth analysis of various
dataset formats to reveal their impact on model performance in handling
decision and sequential branches. The data is released in
https://github.com/KongLongGeFDU/PFDial.
| new_dataset | 0.962532 |
2503.06709 | Hongshen Xu | Hongshen Xu, Zixv yang, Zichen Zhu, Kunyao Lan, Zihan Wang, Mengyue
Wu, Ziwei Ji, Lu Chen, Pascale Fung, Kai Yu | Delusions of Large Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models often generate factually incorrect but plausible
outputs, known as hallucinations. We identify a more insidious phenomenon, LLM
delusion, defined as high belief hallucinations, incorrect outputs with
abnormally high confidence, making them harder to detect and mitigate. Unlike
ordinary hallucinations, delusions persist with low uncertainty, posing
significant challenges to model reliability. Through empirical analysis across
different model families and sizes on several Question Answering tasks, we show
that delusions are prevalent and distinct from hallucinations. LLMs exhibit
lower honesty with delusions, which are harder to override via finetuning or
self reflection. We link delusion formation with training dynamics and dataset
noise and explore mitigation strategies such as retrieval augmented generation
and multi agent debating to mitigate delusions. By systematically investigating
the nature, prevalence, and mitigation of LLM delusions, our study provides
insights into the underlying causes of this phenomenon and outlines future
directions for improving model reliability.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 17:59:16 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Hongshen",
""
],
[
"yang",
"Zixv",
""
],
[
"Zhu",
"Zichen",
""
],
[
"Lan",
"Kunyao",
""
],
[
"Wang",
"Zihan",
""
],
[
"Wu",
"Mengyue",
""
],
[
"Ji",
"Ziwei",
""
],
[
"Chen",
"Lu",
""
],
[
"Fung",
"Pascale",
""
],
[
"Yu",
"Kai",
""
]
]
| TITLE: Delusions of Large Language Models
ABSTRACT: Large Language Models often generate factually incorrect but plausible
outputs, known as hallucinations. We identify a more insidious phenomenon, LLM
delusion, defined as high belief hallucinations, incorrect outputs with
abnormally high confidence, making them harder to detect and mitigate. Unlike
ordinary hallucinations, delusions persist with low uncertainty, posing
significant challenges to model reliability. Through empirical analysis across
different model families and sizes on several Question Answering tasks, we show
that delusions are prevalent and distinct from hallucinations. LLMs exhibit
lower honesty with delusions, which are harder to override via finetuning or
self reflection. We link delusion formation with training dynamics and dataset
noise and explore mitigation strategies such as retrieval augmented generation
and multi agent debating to mitigate delusions. By systematically investigating
the nature, prevalence, and mitigation of LLM delusions, our study provides
insights into the underlying causes of this phenomenon and outlines future
directions for improving model reliability.
| no_new_dataset | 0.948202 |
2503.06730 | Matthew Shen | Matthew Shen, Aliyah Hsu, Abhineet Agarwal, Bin Yu | Enhancing CBMs Through Binary Distillation with Applications to
Test-Time Intervention | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Concept bottleneck models~(CBM) aim to improve model interpretability by
predicting human level ``concepts" in a bottleneck within a deep learning model
architecture. However, how the predicted concepts are used in predicting the
target still either remains black-box or is simplified to maintain
interpretability at the cost of prediction performance. We propose to use Fast
Interpretable Greedy Sum-Trees~(FIGS) to obtain Binary Distillation~(BD). This
new method, called FIGS-BD, distills a binary-augmented concept-to-target
portion of the CBM into an interpretable tree-based model, while mimicking the
competitive prediction performance of the CBM teacher. FIGS-BD can be used in
downstream tasks to explain and decompose CBM predictions into interpretable
binary-concept-interaction attributions and guide adaptive test-time
intervention. Across $4$ datasets, we demonstrate that adaptive test-time
intervention identifies key concepts that significantly improve performance for
realistic human-in-the-loop settings that allow for limited concept
interventions.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 19:03:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shen",
"Matthew",
""
],
[
"Hsu",
"Aliyah",
""
],
[
"Agarwal",
"Abhineet",
""
],
[
"Yu",
"Bin",
""
]
]
| TITLE: Enhancing CBMs Through Binary Distillation with Applications to
Test-Time Intervention
ABSTRACT: Concept bottleneck models~(CBM) aim to improve model interpretability by
predicting human level ``concepts" in a bottleneck within a deep learning model
architecture. However, how the predicted concepts are used in predicting the
target still either remains black-box or is simplified to maintain
interpretability at the cost of prediction performance. We propose to use Fast
Interpretable Greedy Sum-Trees~(FIGS) to obtain Binary Distillation~(BD). This
new method, called FIGS-BD, distills a binary-augmented concept-to-target
portion of the CBM into an interpretable tree-based model, while mimicking the
competitive prediction performance of the CBM teacher. FIGS-BD can be used in
downstream tasks to explain and decompose CBM predictions into interpretable
binary-concept-interaction attributions and guide adaptive test-time
intervention. Across $4$ datasets, we demonstrate that adaptive test-time
intervention identifies key concepts that significantly improve performance for
realistic human-in-the-loop settings that allow for limited concept
interventions.
| no_new_dataset | 0.9455 |
2503.06737 | Rameshwar Pratap | Bhisham Dev Verma, Rameshwar Pratap | Faster and Space Efficient Indexing for Locality Sensitive Hashing | null | null | null | null | cs.DS cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work suggests faster and space-efficient index construction algorithms
for LSH for Euclidean distance (\textit{a.k.a.}~\ELSH) and cosine similarity
(\textit{a.k.a.}~\SRP). The index construction step of these LSHs relies on
grouping data points into several bins of hash tables based on their hashcode.
To generate an $m$-dimensional hashcode of the $d$-dimensional data point,
these LSHs first project the data point onto a $d$-dimensional random Gaussian
vector and then discretise the resulting inner product. The time and space
complexity of both \ELSH~and \SRP~for computing an $m$-sized hashcode of a
$d$-dimensional vector is $O(md)$, which becomes impractical for large values
of $m$ and $d$. To overcome this problem, we propose two alternative LSH
hashcode generation algorithms both for Euclidean distance and cosine
similarity, namely, \CSELSH, \HCSELSH~and \CSSRP, \HCSSRP, respectively.
\CSELSH~and \CSSRP~are based on count sketch \cite{count_sketch} and
\HCSELSH~and \HCSSRP~utilize higher-order count sketch \cite{shi2019higher}.
These proposals significantly reduce the hashcode computation time from $O(md)$
to $O(d)$. Additionally, both \CSELSH~and \CSSRP~reduce the space complexity
from $O(md)$ to $O(d)$; ~and \HCSELSH, \HCSSRP~ reduce the space complexity
from $O(md)$ to $O(N \sqrt[N]{d})$ respectively, where $N\geq 1$ denotes the
size of the input/reshaped tensor. Our proposals are backed by strong
mathematical guarantees, and we validate their performance through simulations
on various real-world datasets.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 19:33:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Verma",
"Bhisham Dev",
""
],
[
"Pratap",
"Rameshwar",
""
]
]
| TITLE: Faster and Space Efficient Indexing for Locality Sensitive Hashing
ABSTRACT: This work suggests faster and space-efficient index construction algorithms
for LSH for Euclidean distance (\textit{a.k.a.}~\ELSH) and cosine similarity
(\textit{a.k.a.}~\SRP). The index construction step of these LSHs relies on
grouping data points into several bins of hash tables based on their hashcode.
To generate an $m$-dimensional hashcode of the $d$-dimensional data point,
these LSHs first project the data point onto a $d$-dimensional random Gaussian
vector and then discretise the resulting inner product. The time and space
complexity of both \ELSH~and \SRP~for computing an $m$-sized hashcode of a
$d$-dimensional vector is $O(md)$, which becomes impractical for large values
of $m$ and $d$. To overcome this problem, we propose two alternative LSH
hashcode generation algorithms both for Euclidean distance and cosine
similarity, namely, \CSELSH, \HCSELSH~and \CSSRP, \HCSSRP, respectively.
\CSELSH~and \CSSRP~are based on count sketch \cite{count_sketch} and
\HCSELSH~and \HCSSRP~utilize higher-order count sketch \cite{shi2019higher}.
These proposals significantly reduce the hashcode computation time from $O(md)$
to $O(d)$. Additionally, both \CSELSH~and \CSSRP~reduce the space complexity
from $O(md)$ to $O(d)$; ~and \HCSELSH, \HCSSRP~ reduce the space complexity
from $O(md)$ to $O(N \sqrt[N]{d})$ respectively, where $N\geq 1$ denotes the
size of the input/reshaped tensor. Our proposals are backed by strong
mathematical guarantees, and we validate their performance through simulations
on various real-world datasets.
| no_new_dataset | 0.946547 |
2503.06748 | Hantao Zhang | Hantao Zhang, Yuhe Liu, Jiancheng Yang, Weidong Guo, Xinyuan Wang, and
Pascal Fua | DiffAtlas: GenAI-fying Atlas Segmentation via Image-Mask Diffusion | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate medical image segmentation is crucial for precise anatomical
delineation. Deep learning models like U-Net have shown great success but
depend heavily on large datasets and struggle with domain shifts, complex
structures, and limited training samples. Recent studies have explored
diffusion models for segmentation by iteratively refining masks. However, these
methods still retain the conventional image-to-mask mapping, making them highly
sensitive to input data, which hampers stability and generalization. In
contrast, we introduce DiffAtlas, a novel generative framework that models both
images and masks through diffusion during training, effectively ``GenAI-fying''
atlas-based segmentation. During testing, the model is guided to generate a
specific target image-mask pair, from which the corresponding mask is obtained.
DiffAtlas retains the robustness of the atlas paradigm while overcoming its
scalability and domain-specific limitations. Extensive experiments on CT and
MRI across same-domain, cross-modality, varying-domain, and different
data-scale settings using the MMWHS and TotalSegmentator datasets demonstrate
that our approach outperforms existing methods, particularly in limited-data
and zero-shot modality segmentation. Code is available at
https://github.com/M3DV/DiffAtlas.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:06:40 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Hantao",
""
],
[
"Liu",
"Yuhe",
""
],
[
"Yang",
"Jiancheng",
""
],
[
"Guo",
"Weidong",
""
],
[
"Wang",
"Xinyuan",
""
],
[
"Fua",
"Pascal",
""
]
]
| TITLE: DiffAtlas: GenAI-fying Atlas Segmentation via Image-Mask Diffusion
ABSTRACT: Accurate medical image segmentation is crucial for precise anatomical
delineation. Deep learning models like U-Net have shown great success but
depend heavily on large datasets and struggle with domain shifts, complex
structures, and limited training samples. Recent studies have explored
diffusion models for segmentation by iteratively refining masks. However, these
methods still retain the conventional image-to-mask mapping, making them highly
sensitive to input data, which hampers stability and generalization. In
contrast, we introduce DiffAtlas, a novel generative framework that models both
images and masks through diffusion during training, effectively ``GenAI-fying''
atlas-based segmentation. During testing, the model is guided to generate a
specific target image-mask pair, from which the corresponding mask is obtained.
DiffAtlas retains the robustness of the atlas paradigm while overcoming its
scalability and domain-specific limitations. Extensive experiments on CT and
MRI across same-domain, cross-modality, varying-domain, and different
data-scale settings using the MMWHS and TotalSegmentator datasets demonstrate
that our approach outperforms existing methods, particularly in limited-data
and zero-shot modality segmentation. Code is available at
https://github.com/M3DV/DiffAtlas.
| no_new_dataset | 0.947672 |
2503.06754 | Dominik Szcz\c{e}\'sniak PhD | Ewa A. Drzazga-Szcz\c{e}\'sniak and Adam Z. Kaczmarek and Marta Kielak
and Shivam Gupta and Jakub T. Gnyp and Katarzyna Pluta and Zygmunt B\c{a}k
and Piotr Szczepanik and Dominik Szcz\c{e}\'sniak | Signatures of extreme events in the cumulative entropic spectrum | 8 pages, 3 figures | null | null | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, the cumulative effect of the empirical probability
distribution of a random variable is identified as a factor that amplifies the
occurrence of extreme events in datasets. To quantify this observation, a
corresponding information measure is introduced, drawing upon Shannon entropy
for joint probabilities. The proposed approach is validated using selected
market data as case studies, encompassing various instances of extreme events.
In particular, the results indicate that the introduced cumulative measure
exhibits distinctive signatures of such events, even when the data is
relatively noisy. These findings highlight the potential of the discussed
concept for developing a new class of related indicators or classifiers.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:19:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Drzazga-Szczȩśniak",
"Ewa A.",
""
],
[
"Kaczmarek",
"Adam Z.",
""
],
[
"Kielak",
"Marta",
""
],
[
"Gupta",
"Shivam",
""
],
[
"Gnyp",
"Jakub T.",
""
],
[
"Pluta",
"Katarzyna",
""
],
[
"Bcak",
"Zygmunt",
""
],
[
"Szczepanik",
"Piotr",
""
],
[
"Szczȩśniak",
"Dominik",
""
]
]
| TITLE: Signatures of extreme events in the cumulative entropic spectrum
ABSTRACT: In this study, the cumulative effect of the empirical probability
distribution of a random variable is identified as a factor that amplifies the
occurrence of extreme events in datasets. To quantify this observation, a
corresponding information measure is introduced, drawing upon Shannon entropy
for joint probabilities. The proposed approach is validated using selected
market data as case studies, encompassing various instances of extreme events.
In particular, the results indicate that the introduced cumulative measure
exhibits distinctive signatures of such events, even when the data is
relatively noisy. These findings highlight the potential of the discussed
concept for developing a new class of related indicators or classifiers.
| no_new_dataset | 0.955486 |
2503.06757 | Zachary Kingston | Chih H. Huang, Pranav Jadhav, Brian Plancher, Zachary Kingston | pRRTC: GPU-Parallel RRT-Connect for Fast, Consistent, and Low-Cost
Motion Planning | 7 pages, 6 figures, 1 table. Submitted to IEEE/RSJ International
Conference on Intelligent Robots and Systems 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sampling-based motion planning algorithms, like the Rapidly-Exploring Random
Tree (RRT) and its widely used variant, RRT-Connect, provide efficient
solutions for high-dimensional planning problems faced by real-world robots.
However, these methods remain computationally intensive, particularly in
complex environments that require many collision checks. As such, to improve
performance, recent efforts have explored parallelizing specific components of
RRT, such as collision checking or running multiple planners independently, but
no prior work has integrated parallelism at multiple levels of the algorithm
for robotic manipulation. In this work, we present pRRTC, a GPU-accelerated
implementation of RRT-Connect that achieves parallelism across the entire
algorithm through multithreaded expansion and connection, SIMT-optimized
collision checking, and hierarchical parallelism optimization, improving
efficiency, consistency, and initial solution cost. We evaluate the
effectiveness of pRRTC on the MotionBenchMaker dataset using robots with 7, 8,
and 14 degrees-of-freedom, demonstrating up to 6x average speedup on
constrained reaching tasks at high collision checking resolution compared to
state-of-the-art. pRRTC also demonstrates a 5x reduction in solution time
variance and 1.5x improvement in initial path costs compared to
state-of-the-art motion planners in complex environments across all robots.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:23:12 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Chih H.",
""
],
[
"Jadhav",
"Pranav",
""
],
[
"Plancher",
"Brian",
""
],
[
"Kingston",
"Zachary",
""
]
]
| TITLE: pRRTC: GPU-Parallel RRT-Connect for Fast, Consistent, and Low-Cost
Motion Planning
ABSTRACT: Sampling-based motion planning algorithms, like the Rapidly-Exploring Random
Tree (RRT) and its widely used variant, RRT-Connect, provide efficient
solutions for high-dimensional planning problems faced by real-world robots.
However, these methods remain computationally intensive, particularly in
complex environments that require many collision checks. As such, to improve
performance, recent efforts have explored parallelizing specific components of
RRT, such as collision checking or running multiple planners independently, but
no prior work has integrated parallelism at multiple levels of the algorithm
for robotic manipulation. In this work, we present pRRTC, a GPU-accelerated
implementation of RRT-Connect that achieves parallelism across the entire
algorithm through multithreaded expansion and connection, SIMT-optimized
collision checking, and hierarchical parallelism optimization, improving
efficiency, consistency, and initial solution cost. We evaluate the
effectiveness of pRRTC on the MotionBenchMaker dataset using robots with 7, 8,
and 14 degrees-of-freedom, demonstrating up to 6x average speedup on
constrained reaching tasks at high collision checking resolution compared to
state-of-the-art. pRRTC also demonstrates a 5x reduction in solution time
variance and 1.5x improvement in initial path costs compared to
state-of-the-art motion planners in complex environments across all robots.
| no_new_dataset | 0.947235 |
2503.06759 | Hung Vo | Hung Q. Vo, Samira Zare, Son T. Ly, Lin Wang, Chika F. Ezeana, Xiaohui
Yu, Kelvin K. Wong, Stephen T.C. Wong, and Hien V. Nguyen | Revisiting Invariant Learning for Out-of-Domain Generalization on
Multi-Site Mammogram Datasets | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite significant progress in robust deep learning techniques for mammogram
breast cancer classification, their reliability in real-world clinical
development settings remains uncertain. The translation of these models to
clinical practice faces challenges due to variations in medical centers,
imaging protocols, and patient populations. To enhance their robustness,
invariant learning methods have been proposed, prioritizing causal factors over
misleading features. However, their effectiveness in clinical development and
impact on mammogram classification require investigation. This paper reassesses
the application of invariant learning for breast cancer risk estimation based
on mammograms. Utilizing diverse multi-site public datasets, it represents the
first study in this area. The objective is to evaluate invariant learning's
benefits in developing robust models. Invariant learning methods, including
Invariant Risk Minimization and Variance Risk Extrapolation, are compared
quantitatively against Empirical Risk Minimization. Evaluation metrics include
accuracy, average precision, and area under the curve. Additionally,
interpretability is examined through class activation maps and visualization of
learned representations. This research examines the advantages, limitations,
and challenges of invariant learning for mammogram classification, guiding
future studies to develop generalized methods for breast cancer prediction on
whole mammograms in out-of-domain scenarios.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 20:28:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Vo",
"Hung Q.",
""
],
[
"Zare",
"Samira",
""
],
[
"Ly",
"Son T.",
""
],
[
"Wang",
"Lin",
""
],
[
"Ezeana",
"Chika F.",
""
],
[
"Yu",
"Xiaohui",
""
],
[
"Wong",
"Kelvin K.",
""
],
[
"Wong",
"Stephen T. C.",
""
],
[
"Nguyen",
"Hien V.",
""
]
]
| TITLE: Revisiting Invariant Learning for Out-of-Domain Generalization on
Multi-Site Mammogram Datasets
ABSTRACT: Despite significant progress in robust deep learning techniques for mammogram
breast cancer classification, their reliability in real-world clinical
development settings remains uncertain. The translation of these models to
clinical practice faces challenges due to variations in medical centers,
imaging protocols, and patient populations. To enhance their robustness,
invariant learning methods have been proposed, prioritizing causal factors over
misleading features. However, their effectiveness in clinical development and
impact on mammogram classification require investigation. This paper reassesses
the application of invariant learning for breast cancer risk estimation based
on mammograms. Utilizing diverse multi-site public datasets, it represents the
first study in this area. The objective is to evaluate invariant learning's
benefits in developing robust models. Invariant learning methods, including
Invariant Risk Minimization and Variance Risk Extrapolation, are compared
quantitatively against Empirical Risk Minimization. Evaluation metrics include
accuracy, average precision, and area under the curve. Additionally,
interpretability is examined through class activation maps and visualization of
learned representations. This research examines the advantages, limitations,
and challenges of invariant learning for mammogram classification, guiding
future studies to develop generalized methods for breast cancer prediction on
whole mammograms in out-of-domain scenarios.
| no_new_dataset | 0.942507 |
2503.06779 | Kai Ren | Kai Ren, Heejin Ahn, Maryam Kamgarpour | Chance-Constrained Trajectory Planning with Multimodal Environmental
Uncertainty | Published in IEEE Control Systems Letters | in IEEE Control Systems Letters, vol. 7, pp. 13-18, 2023 | 10.1109/LCSYS.2022.3186269 | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | We tackle safe trajectory planning under Gaussian mixture model (GMM)
uncertainty. Specifically, we use a GMM to model the multimodal behaviors of
obstacles' uncertain states. Then, we develop a mixed-integer conic
approximation to the chance-constrained trajectory planning problem with
deterministic linear systems and polyhedral obstacles. When the GMM moments are
estimated via finite samples, we develop a tight concentration bound to ensure
the chance constraint with a desired confidence. Moreover, to limit the amount
of constraint violation, we develop a Conditional Value-at-Risk (CVaR) approach
corresponding to the chance constraints and derive a tractable approximation
for known and estimated GMM moments. We verify our methods with
state-of-the-art trajectory prediction algorithms and autonomous driving
datasets.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 21:18:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ren",
"Kai",
""
],
[
"Ahn",
"Heejin",
""
],
[
"Kamgarpour",
"Maryam",
""
]
]
| TITLE: Chance-Constrained Trajectory Planning with Multimodal Environmental
Uncertainty
ABSTRACT: We tackle safe trajectory planning under Gaussian mixture model (GMM)
uncertainty. Specifically, we use a GMM to model the multimodal behaviors of
obstacles' uncertain states. Then, we develop a mixed-integer conic
approximation to the chance-constrained trajectory planning problem with
deterministic linear systems and polyhedral obstacles. When the GMM moments are
estimated via finite samples, we develop a tight concentration bound to ensure
the chance constraint with a desired confidence. Moreover, to limit the amount
of constraint violation, we develop a Conditional Value-at-Risk (CVaR) approach
corresponding to the chance constraints and derive a tractable approximation
for known and estimated GMM moments. We verify our methods with
state-of-the-art trajectory prediction algorithms and autonomous driving
datasets.
| no_new_dataset | 0.946547 |
2503.06781 | Yufei Li | Yufei Li, John Nham, Ganesh Jawahar, Lei Shu, David Uthus, Yun-Hsuan
Sung, Chengrun Yang, Itai Rolnick, Yi Qiao, Cong Liu | Dr Genre: Reinforcement Learning from Decoupled LLM Feedback for Generic
Text Rewriting | 29 pages, 4 figures, 25 tables | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generic text rewriting is a prevalent large language model (LLM) application
that covers diverse real-world tasks, such as style transfer, fact correction,
and email editing. These tasks vary in rewriting objectives (e.g., factual
consistency vs. semantic preservation), making it challenging to develop a
unified model that excels across all dimensions. Existing methods often
specialize in either a single task or a specific objective, limiting their
generalizability. In this work, we introduce a generic model proficient in
factuality, stylistic, and conversational rewriting tasks. To simulate
real-world user rewrite requests, we construct a conversational rewrite
dataset, ChatRewrite, that presents ``natural''-sounding instructions, from raw
emails using LLMs. Combined with other popular rewrite datasets, including
LongFact for the factuality rewrite task and RewriteLM for the stylistic
rewrite task, this forms a broad benchmark for training and evaluating generic
rewrite models. To align with task-specific objectives, we propose Dr Genre, a
Decoupled-reward learning framework for Generic rewriting, that utilizes
objective-oriented reward models with a task-specific weighting. Evaluation
shows that Dr Genre delivers higher-quality rewrites across all targeted
tasks, improving objectives including instruction following (agreement),
internal consistency (coherence), and minimal unnecessary edits (conciseness).
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 21:23:52 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Yufei",
""
],
[
"Nham",
"John",
""
],
[
"Jawahar",
"Ganesh",
""
],
[
"Shu",
"Lei",
""
],
[
"Uthus",
"David",
""
],
[
"Sung",
"Yun-Hsuan",
""
],
[
"Yang",
"Chengrun",
""
],
[
"Rolnick",
"Itai",
""
],
[
"Qiao",
"Yi",
""
],
[
"Liu",
"Cong",
""
]
]
| TITLE: Dr Genre: Reinforcement Learning from Decoupled LLM Feedback for Generic
Text Rewriting
ABSTRACT: Generic text rewriting is a prevalent large language model (LLM) application
that covers diverse real-world tasks, such as style transfer, fact correction,
and email editing. These tasks vary in rewriting objectives (e.g., factual
consistency vs. semantic preservation), making it challenging to develop a
unified model that excels across all dimensions. Existing methods often
specialize in either a single task or a specific objective, limiting their
generalizability. In this work, we introduce a generic model proficient in
factuality, stylistic, and conversational rewriting tasks. To simulate
real-world user rewrite requests, we construct a conversational rewrite
dataset, ChatRewrite, that presents ``natural''-sounding instructions, from raw
emails using LLMs. Combined with other popular rewrite datasets, including
LongFact for the factuality rewrite task and RewriteLM for the stylistic
rewrite task, this forms a broad benchmark for training and evaluating generic
rewrite models. To align with task-specific objectives, we propose Dr Genre, a
Decoupled-reward learning framework for Generic rewriting, that utilizes
objective-oriented reward models with a task-specific weighting. Evaluation
shows that Dr Genre delivers higher-quality rewrites across all targeted
tasks, improving objectives including instruction following (agreement),
internal consistency (coherence), and minimal unnecessary edits (conciseness).
| new_dataset | 0.958421 |
2503.06795 | Lidia Al-Zogbi | Lidia Al-Zogbi, Deepak Raina, Vinciya Pandian, Thorsten Fleiter, Axel
Krieger | Robotic Ultrasound-Guided Femoral Artery Reconstruction of
Anatomically-Representative Phantoms | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Femoral artery access is essential for numerous clinical procedures,
including diagnostic angiography, therapeutic catheterization, and emergency
interventions. Despite its critical role, successful vascular access remains
challenging due to anatomical variability, overlying adipose tissue, and the
need for precise ultrasound (US) guidance. Errors in needle placement can lead
to severe complications, restricting the procedure to highly skilled clinicians
in controlled hospital settings. While robotic systems have shown promise in
addressing these challenges through autonomous scanning and vessel
reconstruction, clinical translation remains limited due to reliance on
simplified phantom models that fail to capture human anatomical complexity. In
this work, we present a method for autonomous robotic US scanning of bifurcated
femoral arteries, and validate it on five vascular phantoms created from real
patient computed tomography (CT) data. Additionally, we introduce a video-based
deep learning US segmentation network tailored for vascular imaging, enabling
improved 3D arterial reconstruction. The proposed network achieves a Dice score
of 89.21% and an Intersection over Union of 80.54% on a newly developed
vascular dataset. The quality of the reconstructed artery centerline is
evaluated against ground truth CT data, demonstrating an average L2 deviation
of 0.91+/-0.70 mm, with an average Hausdorff distance of 4.36+/-1.11 mm. This
study is the first to validate an autonomous robotic system for US scanning of
the femoral artery on a diverse set of patient-specific phantoms, introducing a
more advanced framework for evaluating robotic performance in vascular imaging
and intervention.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 22:20:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Al-Zogbi",
"Lidia",
""
],
[
"Raina",
"Deepak",
""
],
[
"Pandian",
"Vinciya",
""
],
[
"Fleiter",
"Thorsten",
""
],
[
"Krieger",
"Axel",
""
]
]
| TITLE: Robotic Ultrasound-Guided Femoral Artery Reconstruction of
Anatomically-Representative Phantoms
ABSTRACT: Femoral artery access is essential for numerous clinical procedures,
including diagnostic angiography, therapeutic catheterization, and emergency
interventions. Despite its critical role, successful vascular access remains
challenging due to anatomical variability, overlying adipose tissue, and the
need for precise ultrasound (US) guidance. Errors in needle placement can lead
to severe complications, restricting the procedure to highly skilled clinicians
in controlled hospital settings. While robotic systems have shown promise in
addressing these challenges through autonomous scanning and vessel
reconstruction, clinical translation remains limited due to reliance on
simplified phantom models that fail to capture human anatomical complexity. In
this work, we present a method for autonomous robotic US scanning of bifurcated
femoral arteries, and validate it on five vascular phantoms created from real
patient computed tomography (CT) data. Additionally, we introduce a video-based
deep learning US segmentation network tailored for vascular imaging, enabling
improved 3D arterial reconstruction. The proposed network achieves a Dice score
of 89.21% and an Intersection over Union of 80.54% on a newly developed
vascular dataset. The quality of the reconstructed artery centerline is
evaluated against ground truth CT data, demonstrating an average L2 deviation
of 0.91+/-0.70 mm, with an average Hausdorff distance of 4.36+/-1.11 mm. This
study is the first to validate an autonomous robotic system for US scanning of
the femoral artery on a diverse set of patient-specific phantoms, introducing a
more advanced framework for evaluating robotic performance in vascular imaging
and intervention.
| new_dataset | 0.969353 |
2503.06796 | Anh Nguyen | Tri Le, Toan Nguyen, Quang Tran, Quang Nguyen, Baoru Huang, Hoan
Nguyen, Minh Nhat Vu, Tung D. Ta, Anh Nguyen | RoboDesign1M: A Large-scale Dataset for Robot Design Understanding | 8 pages | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Robot design is a complex and time-consuming process that requires
specialized expertise. Gaining a deeper understanding of robot design data can
enable various applications, including automated design generation, retrieving
example designs from text, and developing AI-powered design assistants. While
recent advancements in foundation models present promising approaches to
addressing these challenges, progress in this field is hindered by the lack of
large-scale design datasets. In this paper, we introduce RoboDesign1M, a
large-scale dataset comprising 1 million samples. Our dataset features
multimodal data collected from scientific literature, covering various robotics
domains. We propose a semi-automated data collection pipeline, enabling
efficient and diverse data acquisition. To assess the effectiveness of
RoboDesign1M, we conduct extensive experiments across multiple tasks, including
design image generation, visual question answering about designs, and design
image retrieval. The results demonstrate that our dataset serves as a
challenging new benchmark for design understanding tasks and has the potential
to advance research in this field. RoboDesign1M will be released to support
further developments in AI-driven robotic design automation.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 22:29:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Le",
"Tri",
""
],
[
"Nguyen",
"Toan",
""
],
[
"Tran",
"Quang",
""
],
[
"Nguyen",
"Quang",
""
],
[
"Huang",
"Baoru",
""
],
[
"Nguyen",
"Hoan",
""
],
[
"Vu",
"Minh Nhat",
""
],
[
"Ta",
"Tung D.",
""
],
[
"Nguyen",
"Anh",
""
]
]
| TITLE: RoboDesign1M: A Large-scale Dataset for Robot Design Understanding
ABSTRACT: Robot design is a complex and time-consuming process that requires
specialized expertise. Gaining a deeper understanding of robot design data can
enable various applications, including automated design generation, retrieving
example designs from text, and developing AI-powered design assistants. While
recent advancements in foundation models present promising approaches to
addressing these challenges, progress in this field is hindered by the lack of
large-scale design datasets. In this paper, we introduce RoboDesign1M, a
large-scale dataset comprising 1 million samples. Our dataset features
multimodal data collected from scientific literature, covering various robotics
domains. We propose a semi-automated data collection pipeline, enabling
efficient and diverse data acquisition. To assess the effectiveness of
RoboDesign1M, we conduct extensive experiments across multiple tasks, including
design image generation, visual question answering about designs, and design
image retrieval. The results demonstrate that our dataset serves as a
challenging new benchmark for design understanding tasks and has the potential
to advance research in this field. RoboDesign1M will be released to support
further developments in AI-driven robotic design automation.
| new_dataset | 0.961714 |
2503.06797 | Sabeen Ahmed | Sabeen Ahmed, Nathan Parker, Margaret Park, Evan W. Davis, Jennifer B.
Permuth, Matthew B. Schabath, Yasin Yilmaz, Ghulam Rasool | Multimodal AI-driven Biomarker for Early Detection of Cancer Cachexia | 17 pages, 6 figures, 3 Tables | null | null | null | eess.IV cs.AI q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Cancer cachexia is a multifactorial syndrome characterized by progressive
muscle wasting, metabolic dysfunction, and systemic inflammation, leading to
reduced quality of life and increased mortality. Despite extensive research, no
single definitive biomarker exists, as cachexia-related indicators such as
serum biomarkers, skeletal muscle measurements, and metabolic abnormalities
often overlap with other conditions. Existing composite indices, including the
Cancer Cachexia Index (CXI), Modified CXI (mCXI), and Cachexia Score (CASCO),
integrate multiple biomarkers but lack standardized thresholds, limiting their
clinical utility. This study proposes a multimodal AI-based biomarker for early
cancer cachexia detection, leveraging open-source large language models (LLMs)
and foundation models trained on medical data. The approach integrates
heterogeneous patient data, including demographics, disease status, lab
reports, radiological imaging (CT scans), and clinical notes, using a machine
learning framework that can handle missing data. Unlike previous AI-based
models trained on curated datasets, this method utilizes routinely collected
clinical data, enhancing real-world applicability. Additionally, the model
incorporates confidence estimation, allowing the identification of cases
requiring expert review for precise clinical interpretation. Preliminary
findings demonstrate that integrating multiple data modalities improves
cachexia prediction accuracy at the time of cancer diagnosis. The AI-based
biomarker dynamically adapts to patient-specific factors such as age, race,
ethnicity, weight, cancer type, and stage, avoiding the limitations of
fixed-threshold biomarkers. This multimodal AI biomarker provides a scalable
and clinically viable solution for early cancer cachexia detection,
facilitating personalized interventions and potentially improving treatment
outcomes and patient survival.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 22:32:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ahmed",
"Sabeen",
""
],
[
"Parker",
"Nathan",
""
],
[
"Park",
"Margaret",
""
],
[
"Davis",
"Evan W.",
""
],
[
"Permuth",
"Jennifer B.",
""
],
[
"Schabath",
"Matthew B.",
""
],
[
"Yilmaz",
"Yasin",
""
],
[
"Rasool",
"Ghulam",
""
]
]
| TITLE: Multimodal AI-driven Biomarker for Early Detection of Cancer Cachexia
ABSTRACT: Cancer cachexia is a multifactorial syndrome characterized by progressive
muscle wasting, metabolic dysfunction, and systemic inflammation, leading to
reduced quality of life and increased mortality. Despite extensive research, no
single definitive biomarker exists, as cachexia-related indicators such as
serum biomarkers, skeletal muscle measurements, and metabolic abnormalities
often overlap with other conditions. Existing composite indices, including the
Cancer Cachexia Index (CXI), Modified CXI (mCXI), and Cachexia Score (CASCO),
integrate multiple biomarkers but lack standardized thresholds, limiting their
clinical utility. This study proposes a multimodal AI-based biomarker for early
cancer cachexia detection, leveraging open-source large language models (LLMs)
and foundation models trained on medical data. The approach integrates
heterogeneous patient data, including demographics, disease status, lab
reports, radiological imaging (CT scans), and clinical notes, using a machine
learning framework that can handle missing data. Unlike previous AI-based
models trained on curated datasets, this method utilizes routinely collected
clinical data, enhancing real-world applicability. Additionally, the model
incorporates confidence estimation, allowing the identification of cases
requiring expert review for precise clinical interpretation. Preliminary
findings demonstrate that integrating multiple data modalities improves
cachexia prediction accuracy at the time of cancer diagnosis. The AI-based
biomarker dynamically adapts to patient-specific factors such as age, race,
ethnicity, weight, cancer type, and stage, avoiding the limitations of
fixed-threshold biomarkers. This multimodal AI biomarker provides a scalable
and clinically viable solution for early cancer cachexia detection,
facilitating personalized interventions and potentially improving treatment
outcomes and patient survival.
| no_new_dataset | 0.949949 |
2503.06800 | Hritik Bansal | Hritik Bansal, Clark Peng, Yonatan Bitton, Roman Goldenberg, Aditya
Grover, Kai-Wei Chang | VideoPhy-2: A Challenging Action-Centric Physical Commonsense Evaluation
in Video Generation | 41 pages, 33 Figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale video generative models, capable of creating realistic videos of
diverse visual concepts, are strong candidates for general-purpose physical
world simulators. However, their adherence to physical commonsense across
real-world actions remains unclear (e.g., playing tennis, backflip). Existing
benchmarks suffer from limitations such as limited size, lack of human
evaluation, sim-to-real gaps, and absence of fine-grained physical rule
analysis. To address this, we introduce VideoPhy-2, an action-centric dataset
for evaluating physical commonsense in generated videos. We curate 200 diverse
actions and detailed prompts for video synthesis from modern generative models.
We perform human evaluation that assesses semantic adherence, physical
commonsense, and grounding of physical rules in the generated videos. Our
findings reveal major shortcomings, with even the best model achieving only 22%
joint performance (i.e., high semantic and physical commonsense adherence) on
the hard subset of VideoPhy-2. We find that the models particularly struggle
with conservation laws like mass and momentum. Finally, we also train
VideoPhy-AutoEval, an automatic evaluator for fast, reliable assessment on our
dataset. Overall, VideoPhy-2 serves as a rigorous benchmark, exposing critical
gaps in video generative models and guiding future research in
physically-grounded video generation. The data and code are available at
https://videophy2.github.io/.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 22:49:12 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bansal",
"Hritik",
""
],
[
"Peng",
"Clark",
""
],
[
"Bitton",
"Yonatan",
""
],
[
"Goldenberg",
"Roman",
""
],
[
"Grover",
"Aditya",
""
],
[
"Chang",
"Kai-Wei",
""
]
]
| TITLE: VideoPhy-2: A Challenging Action-Centric Physical Commonsense Evaluation
in Video Generation
ABSTRACT: Large-scale video generative models, capable of creating realistic videos of
diverse visual concepts, are strong candidates for general-purpose physical
world simulators. However, their adherence to physical commonsense across
real-world actions remains unclear (e.g., playing tennis, backflip). Existing
benchmarks suffer from limitations such as limited size, lack of human
evaluation, sim-to-real gaps, and absence of fine-grained physical rule
analysis. To address this, we introduce VideoPhy-2, an action-centric dataset
for evaluating physical commonsense in generated videos. We curate 200 diverse
actions and detailed prompts for video synthesis from modern generative models.
We perform human evaluation that assesses semantic adherence, physical
commonsense, and grounding of physical rules in the generated videos. Our
findings reveal major shortcomings, with even the best model achieving only 22%
joint performance (i.e., high semantic and physical commonsense adherence) on
the hard subset of VideoPhy-2. We find that the models particularly struggle
with conservation laws like mass and momentum. Finally, we also train
VideoPhy-AutoEval, an automatic evaluator for fast, reliable assessment on our
dataset. Overall, VideoPhy-2 serves as a rigorous benchmark, exposing critical
gaps in video generative models and guiding future research in
physically-grounded video generation. The data and code are available at
https://videophy2.github.io/.
| new_dataset | 0.962143 |
2503.06805 | Aref Farhadipour | Aref Farhadipour, Hossein Ranjbar, Masoumeh Chapariniya, Teodora
Vukovic, Sarah Ebling, Volker Dellwo | Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party
Conversation Contexts | 5 pages | null | null | null | cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition and sentiment analysis are pivotal tasks in speech and
language processing, particularly in real-world scenarios involving
multi-party, conversational data. This paper presents a multimodal approach to
tackle these challenges on a well-known dataset. We propose a system that
integrates four key modalities/channels using pre-trained models: RoBERTa for
text, Wav2Vec2 for speech, a proposed FacialNet for facial expressions, and a
CNN+Transformer architecture trained from scratch for video analysis. Feature
embeddings from each modality are concatenated to form a multimodal vector,
which is then used to predict emotion and sentiment labels. The multimodal
system demonstrates superior performance compared to unimodal approaches,
achieving an accuracy of 66.36% for emotion recognition and 72.15% for
sentiment analysis.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 23:14:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Farhadipour",
"Aref",
""
],
[
"Ranjbar",
"Hossein",
""
],
[
"Chapariniya",
"Masoumeh",
""
],
[
"Vukovic",
"Teodora",
""
],
[
"Ebling",
"Sarah",
""
],
[
"Dellwo",
"Volker",
""
]
]
| TITLE: Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party
Conversation Contexts
ABSTRACT: Emotion recognition and sentiment analysis are pivotal tasks in speech and
language processing, particularly in real-world scenarios involving
multi-party, conversational data. This paper presents a multimodal approach to
tackle these challenges on a well-known dataset. We propose a system that
integrates four key modalities/channels using pre-trained models: RoBERTa for
text, Wav2Vec2 for speech, a proposed FacialNet for facial expressions, and a
CNN+Transformer architecture trained from scratch for video analysis. Feature
embeddings from each modality are concatenated to form a multimodal vector,
which is then used to predict emotion and sentiment labels. The multimodal
system demonstrates superior performance compared to unimodal approaches,
achieving an accuracy of 66.36% for emotion recognition and 72.15% for
sentiment analysis.
| no_new_dataset | 0.947866 |
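Note on the multimodal emotion recognition record above: the system concatenates per-modality embeddings into a single multimodal vector before predicting emotion and sentiment labels. The sketch below illustrates that late-fusion step generically; the embedding dimensions, the seven-class label space, the random placeholder features, and the linear classifier are all assumptions standing in for the authors' actual encoders and head.

```python
# Generic late-fusion sketch: concatenate per-modality embeddings and apply a
# linear classifier. Dimensions and the random "embeddings" are placeholders;
# in the paper these would come from RoBERTa, Wav2Vec2, a facial model, and a
# video encoder.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-modality embeddings for one utterance.
text_emb = rng.normal(size=768)    # e.g., RoBERTa-sized
audio_emb = rng.normal(size=768)   # e.g., Wav2Vec2-sized
face_emb = rng.normal(size=256)    # assumed facial-feature size
video_emb = rng.normal(size=512)   # assumed video-feature size

fused = np.concatenate([text_emb, audio_emb, face_emb, video_emb])  # (2304,)

num_emotions = 7                                   # assumed label count
W = rng.normal(size=(num_emotions, fused.size)) * 0.01
b = np.zeros(num_emotions)

logits = W @ fused + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted emotion index:", int(probs.argmax()))
```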
2503.06809 | Gexin Huang | Gexin Huang, Ruinan Jin, Yucheng Tang, Can Zhao, Tatsuya Harada,
Xiaoxiao Li, Gu Lin | Interactive Tumor Progression Modeling via Sketch-Based Image Editing | 9 pages, 4 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Accurately visualizing and editing tumor progression in medical imaging is
crucial for diagnosis, treatment planning, and clinical communication. To
address the challenges of subjectivity and limited precision in existing
methods, we propose SkEditTumor, a sketch-based diffusion model for
controllable tumor progression editing. By leveraging sketches as structural
priors, our method enables precise modifications of tumor regions while
maintaining structural integrity and visual realism. We evaluate SkEditTumor on
four public datasets - BraTS, LiTS, KiTS, and MSD-Pancreas - covering diverse
organs and imaging modalities. Experimental results demonstrate that our method
outperforms state-of-the-art baselines, achieving superior image fidelity and
segmentation accuracy. Our contributions include a novel integration of
sketches with diffusion models for medical image editing, fine-grained control
over tumor progression visualization, and extensive validation across multiple
datasets, setting a new benchmark in the field.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 00:04:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Gexin",
""
],
[
"Jin",
"Ruinan",
""
],
[
"Tang",
"Yucheng",
""
],
[
"Zhao",
"Can",
""
],
[
"Harada",
"Tatsuya",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Lin",
"Gu",
""
]
]
| TITLE: Interactive Tumor Progression Modeling via Sketch-Based Image Editing
ABSTRACT: Accurately visualizing and editing tumor progression in medical imaging is
crucial for diagnosis, treatment planning, and clinical communication. To
address the challenges of subjectivity and limited precision in existing
methods, we propose SkEditTumor, a sketch-based diffusion model for
controllable tumor progression editing. By leveraging sketches as structural
priors, our method enables precise modifications of tumor regions while
maintaining structural integrity and visual realism. We evaluate SkEditTumor on
four public datasets - BraTS, LiTS, KiTS, and MSD-Pancreas - covering diverse
organs and imaging modalities. Experimental results demonstrate that our method
outperforms state-of-the-art baselines, achieving superior image fidelity and
segmentation accuracy. Our contributions include a novel integration of
sketches with diffusion models for medical image editing, fine-grained control
over tumor progression visualization, and extensive validation across multiple
datasets, setting a new benchmark in the field.
| no_new_dataset | 0.951278 |
2503.06810 | Dhawal Gupta | Dhawal Gupta, Adam Fisch, Christoph Dann, Alekh Agarwal | Mitigating Preference Hacking in Policy Optimization with Pessimism | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This work tackles the problem of overoptimization in reinforcement learning
from human feedback (RLHF), a prevalent technique for aligning models with
human preferences. RLHF relies on reward or preference models trained on
\emph{fixed preference datasets}, and these models are unreliable when
evaluated outside the support of this preference data, leading to the common
reward or preference hacking phenomenon. We propose novel, pessimistic
objectives for RLHF which are provably robust to overoptimization through the
use of pessimism in the face of uncertainty, and design practical algorithms,
P3O and PRPO, to optimize these objectives. Our approach is derived for the
general preference optimization setting, but can be used with reward models as
well. We evaluate P3O and PRPO on the tasks of fine-tuning language models for
document summarization and creating helpful assistants, demonstrating
remarkable resilience to overoptimization.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 00:13:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gupta",
"Dhawal",
""
],
[
"Fisch",
"Adam",
""
],
[
"Dann",
"Christoph",
""
],
[
"Agarwal",
"Alekh",
""
]
]
| TITLE: Mitigating Preference Hacking in Policy Optimization with Pessimism
ABSTRACT: This work tackles the problem of overoptimization in reinforcement learning
from human feedback (RLHF), a prevalent technique for aligning models with
human preferences. RLHF relies on reward or preference models trained on
\emph{fixed preference datasets}, and these models are unreliable when
evaluated outside the support of this preference data, leading to the common
reward or preference hacking phenomenon. We propose novel, pessimistic
objectives for RLHF which are provably robust to overoptimization through the
use of pessimism in the face of uncertainty, and design practical algorithms,
P3O and PRPO, to optimize these objectives. Our approach is derived for the
general preference optimization setting, but can be used with reward models as
well. We evaluate P3O and PRPO on the tasks of fine-tuning language models for
document summarization and creating helpful assistants, demonstrating
remarkable resilience to overoptimization.
| no_new_dataset | 0.950365 |
2503.06816 | Yuchen Mao | Yuchen Mao, Hongwei Li, Yinyi Lai, Giorgos Papanastasiou, Peng Qi,
Yunjie Yang, Chengjia Wang | Semi-Supervised Medical Image Segmentation via Knowledge Mining from
Large Models | 18 pages, 2 figures | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale vision models like SAM have extensive visual knowledge, yet their
general nature and computational demands limit their use in specialized tasks
like medical image segmentation. In contrast, task-specific models such as
U-Net++ often underperform due to sparse labeled data. This study introduces a
strategic knowledge mining method that leverages SAM's broad understanding to
boost the performance of small, locally hosted deep learning models.
In our approach, we trained a U-Net++ model on a limited labeled dataset and
extended its capabilities by converting SAM's output inferred on unlabeled
images into prompts. This process not only harnesses SAM's generalized visual
knowledge but also iteratively improves SAM's predictions to cater to specialized
medical segmentation tasks via U-Net++. The mined knowledge, serving as "pseudo
labels", enriches the training dataset, enabling the fine-tuning of the local
network.
Applied to the Kvasir SEG and COVID-QU-Ex datasets which consist of
gastrointestinal polyp and lung X-ray images respectively, our proposed method
consistently enhanced the segmentation performance on Dice by 3% and 1%
respectively over the baseline U-Net++ model, when the same amount of labelled
data was used during training (75% and 50% of labelled data). Remarkably, our
proposed method surpassed the baseline U-Net++ model even when the latter was
trained exclusively on labeled data (100% of labelled data). These results
underscore the potential of knowledge mining to overcome data limitations in
specialized models by leveraging the broad, albeit general, knowledge of
large-scale models like SAM, all while maintaining operational efficiency
essential for clinical applications.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 00:43:45 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mao",
"Yuchen",
""
],
[
"Li",
"Hongwei",
""
],
[
"Lai",
"Yinyi",
""
],
[
"Papanastasiou",
"Giorgos",
""
],
[
"Qi",
"Peng",
""
],
[
"Yang",
"Yunjie",
""
],
[
"Wang",
"Chengjia",
""
]
]
| TITLE: Semi-Supervised Medical Image Segmentation via Knowledge Mining from
Large Models
ABSTRACT: Large-scale vision models like SAM have extensive visual knowledge, yet their
general nature and computational demands limit their use in specialized tasks
like medical image segmentation. In contrast, task-specific models such as
U-Net++ often underperform due to sparse labeled data. This study introduces a
strategic knowledge mining method that leverages SAM's broad understanding to
boost the performance of small, locally hosted deep learning models.
In our approach, we trained a U-Net++ model on a limited labeled dataset and
extended its capabilities by converting SAM's output inferred on unlabeled
images into prompts. This process not only harnesses SAM's generalized visual
knowledge but also iteratively improves SAM's predictions to cater to specialized
medical segmentation tasks via U-Net++. The mined knowledge, serving as "pseudo
labels", enriches the training dataset, enabling the fine-tuning of the local
network.
Applied to the Kvasir SEG and COVID-QU-Ex datasets which consist of
gastrointestinal polyp and lung X-ray images respectively, our proposed method
consistently enhanced the segmentation performance on Dice by 3% and 1%
respectively over the baseline U-Net++ model, when the same amount of labelled
data was used during training (75% and 50% of labelled data). Remarkably, our
proposed method surpassed the baseline U-Net++ model even when the latter was
trained exclusively on labeled data (100% of labelled data). These results
underscore the potential of knowledge mining to overcome data limitations in
specialized models by leveraging the broad, albeit general, knowledge of
large-scale models like SAM, all while maintaining operational efficiency
essential for clinical applications.
| no_new_dataset | 0.956022 |
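Note on the knowledge-mining record above: SAM outputs on unlabeled images are turned into "pseudo labels" that enrich the training pool of a local U-Net++. The abstract does not specify how masks are filtered or converted, so the loop below is a heavily simplified, hypothetical pseudo-labeling sketch with stand-in functions; it is not the authors' pipeline.

```python
# Schematic pseudo-labeling sketch (not the authors' pipeline): a large model's
# masks on unlabeled images are filtered by a naive confidence rule and added
# to the training pool of a small local model. All functions are stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def large_model_predict(image: np.ndarray) -> np.ndarray:
    """Stand-in for the large model (e.g., SAM): a soft foreground mask in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-image))            # placeholder "segmentation"

def confident(mask: np.ndarray, min_margin: float = 0.15) -> bool:
    """Naive filter: accept masks whose pixels are, on average, far from 0.5."""
    return float(np.abs(mask - 0.5).mean()) > min_margin

# Tiny stand-in pools; real data would be medical images and expert masks.
labeled_pool = [(rng.normal(size=(64, 64)), rng.integers(0, 2, (64, 64)))]
unlabeled_images = [rng.normal(size=(64, 64)) for _ in range(10)]

for image in unlabeled_images:
    soft_mask = large_model_predict(image)
    if confident(soft_mask):
        pseudo_label = (soft_mask > 0.5).astype(np.uint8)
        labeled_pool.append((image, pseudo_label))   # enrich the training set

print(f"training pool grew to {len(labeled_pool)} samples")
# A local model such as U-Net++ would then be fine-tuned on `labeled_pool`;
# that training step is omitted here.
```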
2503.06820 | Wei Dai | Wei Dai, Alan Luo, Zane Durante, Debadutta Dash, Arnold Milstein,
Kevin Schulman, Ehsan Adeli, Li Fei-Fei | Towards Fine-Grained Video Question Answering | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | In the rapidly evolving domain of video understanding, Video Question
Answering (VideoQA) remains a focal point. However, existing datasets exhibit
gaps in temporal and spatial granularity, which consequently limits the
capabilities of existing VideoQA methods. This paper introduces the
Multi-Object Multi-Actor Question Answering (MOMA-QA) dataset, which is
designed to address these shortcomings by emphasizing temporal localization,
spatial relationship reasoning, and entity-centric queries. With ground truth
scene graphs and temporal interval annotations, MOMA-QA is ideal for developing
models for fine-grained video understanding. Furthermore, we present a novel
video-language model, SGVLM, which incorporates a scene graph predictor, an
efficient frame retriever, and a pre-trained large language model for temporal
localization and fine-grained relationship understanding. Evaluations on
MOMA-QA and other public datasets demonstrate the superior performance of our
model, setting new benchmarks for VideoQA.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 01:02:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Dai",
"Wei",
""
],
[
"Luo",
"Alan",
""
],
[
"Durante",
"Zane",
""
],
[
"Dash",
"Debadutta",
""
],
[
"Milstein",
"Arnold",
""
],
[
"Schulman",
"Kevin",
""
],
[
"Adeli",
"Ehsan",
""
],
[
"Fei-Fei",
"Li",
""
]
]
| TITLE: Towards Fine-Grained Video Question Answering
ABSTRACT: In the rapidly evolving domain of video understanding, Video Question
Answering (VideoQA) remains a focal point. However, existing datasets exhibit
gaps in temporal and spatial granularity, which consequently limits the
capabilities of existing VideoQA methods. This paper introduces the
Multi-Object Multi-Actor Question Answering (MOMA-QA) dataset, which is
designed to address these shortcomings by emphasizing temporal localization,
spatial relationship reasoning, and entity-centric queries. With ground truth
scene graphs and temporal interval annotations, MOMA-QA is ideal for developing
models for fine-grained video understanding. Furthermore, we present a novel
video-language model, SGVLM, which incorporates a scene graph predictor, an
efficient frame retriever, and a pre-trained large language model for temporal
localization and fine-grained relationship understanding. Evaluations on
MOMA-QA and other public datasets demonstrate the superior performance of our
model, setting new benchmarks for VideoQA.
| new_dataset | 0.956022 |
2503.06828 | Somayeh Farahani Ph.D. student | Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Emad Fatemizadeh,
Sidong Liu | Towards a Multimodal MRI-Based Foundation Model for Multi-Level Feature
Exploration in Segmentation, Molecular Subtyping, and Grading of Glioma | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate, noninvasive glioma characterization is crucial for effective
clinical management. Traditional methods, dependent on invasive tissue
sampling, often fail to capture the spatial heterogeneity of the tumor. While
deep learning has improved segmentation and molecular profiling, few approaches
simultaneously integrate tumor morphology and molecular features. Foundation
deep learning models, which learn robust, task-agnostic representations from
large-scale datasets, hold great promise but remain underutilized in glioma
imaging biomarkers. We propose the Multi-Task SWIN-UNETR (MTS-UNET) model, a
novel foundation-based framework built on the BrainSegFounder model, pretrained
on large-scale neuroimaging data. MTS-UNET simultaneously performs glioma
segmentation, histological grading, and molecular subtyping (IDH mutation and
1p/19q co-deletion). It incorporates two key modules: Tumor-Aware Feature
Encoding (TAFE) for multi-scale, tumor-focused feature extraction and
Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch
signals associated with IDH mutation. The model was trained and validated on a
diverse, multi-center cohort of 2,249 glioma patients from seven public
datasets. MTS-UNET achieved a mean Dice score of 84% for segmentation, along
with AUCs of 90.58% for IDH mutation, 69.22% for 1p/19q co-deletion prediction,
and 87.54% for grading, significantly outperforming baseline models (p<=0.05).
Ablation studies validated the essential contributions of the TAFE and CMD
modules and demonstrated the robustness of the framework. The foundation-based
MTS-UNET model effectively integrates tumor segmentation with multi-level
classification, exhibiting strong generalizability across diverse MRI datasets.
This framework shows significant potential for advancing noninvasive,
personalized glioma management by improving predictive accuracy and
interpretability.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 01:27:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Farahani",
"Somayeh",
""
],
[
"Hejazi",
"Marjaneh",
""
],
[
"Di Ieva",
"Antonio",
""
],
[
"Fatemizadeh",
"Emad",
""
],
[
"Liu",
"Sidong",
""
]
]
| TITLE: Towards a Multimodal MRI-Based Foundation Model for Multi-Level Feature
Exploration in Segmentation, Molecular Subtyping, and Grading of Glioma
ABSTRACT: Accurate, noninvasive glioma characterization is crucial for effective
clinical management. Traditional methods, dependent on invasive tissue
sampling, often fail to capture the spatial heterogeneity of the tumor. While
deep learning has improved segmentation and molecular profiling, few approaches
simultaneously integrate tumor morphology and molecular features. Foundation
deep learning models, which learn robust, task-agnostic representations from
large-scale datasets, hold great promise but remain underutilized in glioma
imaging biomarkers. We propose the Multi-Task SWIN-UNETR (MTS-UNET) model, a
novel foundation-based framework built on the BrainSegFounder model, pretrained
on large-scale neuroimaging data. MTS-UNET simultaneously performs glioma
segmentation, histological grading, and molecular subtyping (IDH mutation and
1p/19q co-deletion). It incorporates two key modules: Tumor-Aware Feature
Encoding (TAFE) for multi-scale, tumor-focused feature extraction and
Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch
signals associated with IDH mutation. The model was trained and validated on a
diverse, multi-center cohort of 2,249 glioma patients from seven public
datasets. MTS-UNET achieved a mean Dice score of 84% for segmentation, along
with AUCs of 90.58% for IDH mutation, 69.22% for 1p/19q co-deletion prediction,
and 87.54% for grading, significantly outperforming baseline models (p<=0.05).
Ablation studies validated the essential contributions of the TAFE and CMD
modules and demonstrated the robustness of the framework. The foundation-based
MTS-UNET model effectively integrates tumor segmentation with multi-level
classification, exhibiting strong generalizability across diverse MRI datasets.
This framework shows significant potential for advancing noninvasive,
personalized glioma management by improving predictive accuracy and
interpretability.
| no_new_dataset | 0.951369 |
2503.06832 | Sungsik Kim | Sungsik Kim, Janghyun Baek, Jinkyu Kim and Jaekoo Lee | GUIDE-CoT: Goal-driven and User-Informed Dynamic Estimation for
Pedestrian Trajectory using Chain-of-Thought | 10 pages, 5 figures, will be published on The 24th International
Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While Large Language Models (LLMs) have recently shown impressive results in
reasoning tasks, their application to pedestrian trajectory prediction remains
challenging due to two key limitations: insufficient use of visual information
and the difficulty of predicting entire trajectories. To address these
challenges, we propose Goal-driven and User-Informed Dynamic Estimation for
pedestrian trajectory using Chain-of-Thought (GUIDE-CoT). Our approach
integrates two innovative modules: (1) a goal-oriented visual prompt, which
enhances goal prediction accuracy combining visual prompts with a pretrained
visual encoder, and (2) a chain-of-thought (CoT) LLM for trajectory generation,
which generates realistic trajectories toward the predicted goal. Moreover, our
method introduces controllable trajectory generation, allowing for flexible and
user-guided modifications to the predicted paths. Through extensive experiments
on the ETH/UCY benchmark datasets, our method achieves state-of-the-art
performance, delivering both high accuracy and greater adaptability in
pedestrian trajectory prediction. Our code is publicly available at
https://github.com/ai-kmu/GUIDE-CoT.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 01:39:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kim",
"Sungsik",
""
],
[
"Baek",
"Janghyun",
""
],
[
"Kim",
"Jinkyu",
""
],
[
"Lee",
"Jaekoo",
""
]
]
| TITLE: GUIDE-CoT: Goal-driven and User-Informed Dynamic Estimation for
Pedestrian Trajectory using Chain-of-Thought
ABSTRACT: While Large Language Models (LLMs) have recently shown impressive results in
reasoning tasks, their application to pedestrian trajectory prediction remains
challenging due to two key limitations: insufficient use of visual information
and the difficulty of predicting entire trajectories. To address these
challenges, we propose Goal-driven and User-Informed Dynamic Estimation for
pedestrian trajectory using Chain-of-Thought (GUIDE-CoT). Our approach
integrates two innovative modules: (1) a goal-oriented visual prompt, which
enhances goal prediction accuracy combining visual prompts with a pretrained
visual encoder, and (2) a chain-of-thought (CoT) LLM for trajectory generation,
which generates realistic trajectories toward the predicted goal. Moreover, our
method introduces controllable trajectory generation, allowing for flexible and
user-guided modifications to the predicted paths. Through extensive experiments
on the ETH/UCY benchmark datasets, our method achieves state-of-the-art
performance, delivering both high accuracy and greater adaptability in
pedestrian trajectory prediction. Our code is publicly available at
https://github.com/ai-kmu/GUIDE-CoT.
| no_new_dataset | 0.948822 |
2503.06839 | Zhuowen Zheng | Zhuowen Zheng, Yain-Whar Si, Xiaochen Yuan, Junwei Duan, Ke Wang,
Xiaofan Li, Xinyuan Zhang, Xueyuan Gong | AttFC: Attention Fully-Connected Layer for Large-Scale Face Recognition
with One GPU | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, with the advancement of deep neural networks (DNNs) and the
availability of large-scale datasets, the face recognition (FR) model has
achieved exceptional performance. However, the parameter magnitude of the
fully connected (FC) layer directly depends on the number of identities in the
dataset. When training the FR model on large-scale datasets, the size of the
model parameters becomes excessively large, leading to substantial demand for
computational resources such as time and memory. This paper proposes the
attention fully connected (AttFC) layer, which can significantly reduce
computational resources. AttFC employs an attention loader to generate the
generative class center (GCC) and dynamically stores the class centers with a
Dynamic Class Container (DCC). DCC only stores a small subset of all class
centers in FC, so its parameter count is substantially less than that of the FC
layer. Also, training face recognition models on large-scale datasets with one
GPU often encounters out-of-memory (OOM) issues. AttFC overcomes this and
achieves comparable performance to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 01:59:11 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zheng",
"Zhuowen",
""
],
[
"Si",
"Yain-Whar",
""
],
[
"Yuan",
"Xiaochen",
""
],
[
"Duan",
"Junwei",
""
],
[
"Wang",
"Ke",
""
],
[
"Li",
"Xiaofan",
""
],
[
"Zhang",
"Xinyuan",
""
],
[
"Gong",
"Xueyuan",
""
]
]
| TITLE: AttFC: Attention Fully-Connected Layer for Large-Scale Face Recognition
with One GPU
ABSTRACT: Nowadays, with the advancement of deep neural networks (DNNs) and the
availability of large-scale datasets, the face recognition (FR) model has
achieved exceptional performance. However, the parameter magnitude of the
fully connected (FC) layer directly depends on the number of identities in the
dataset. When training the FR model on large-scale datasets, the size of the
model parameters becomes excessively large, leading to substantial demand for
computational resources such as time and memory. This paper proposes the
attention fully connected (AttFC) layer, which can significantly reduce
computational resources. AttFC employs an attention loader to generate the
generative class center (GCC) and dynamically stores the class centers with a
Dynamic Class Container (DCC). DCC only stores a small subset of all class
centers in FC, so its parameter count is substantially less than that of the FC
layer. Also, training face recognition models on large-scale datasets with one
GPU often encounters out-of-memory (OOM) issues. AttFC overcomes this and
achieves comparable performance to state-of-the-art methods.
| no_new_dataset | 0.946498 |
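Note on the AttFC record above: the memory saving comes from a Dynamic Class Container (DCC) that holds only a small subset of class centers rather than one row per identity. The abstract does not describe the storage policy, so the sketch below uses a hypothetical fixed-capacity container with FIFO eviction, purely to show why the parameter footprint stays bounded; it is not the paper's DCC.

```python
# Hypothetical fixed-capacity container of class centers (FIFO eviction),
# illustrating how storing only a subset keeps memory bounded regardless of
# the number of identities. Capacity, dimensions, and policy are assumptions.
from collections import OrderedDict
import numpy as np

class DynamicClassContainer:
    def __init__(self, capacity: int, dim: int):
        self.capacity = capacity
        self.dim = dim
        self.centers: "OrderedDict[int, np.ndarray]" = OrderedDict()

    def update(self, class_id: int, center: np.ndarray) -> None:
        if class_id in self.centers:
            self.centers.move_to_end(class_id)
        elif len(self.centers) >= self.capacity:
            self.centers.popitem(last=False)       # evict the oldest class
        self.centers[class_id] = center

    def matrix(self) -> np.ndarray:
        """Stacked centers, used instead of a full |identities| x dim FC weight."""
        if not self.centers:
            return np.empty((0, self.dim), dtype=np.float32)
        return np.stack(list(self.centers.values()))

rng = np.random.default_rng(0)
dcc = DynamicClassContainer(capacity=1000, dim=512)   # vs. millions of identities
for cid in range(5000):                               # stream of class updates
    dcc.update(cid, rng.normal(size=512).astype(np.float32))
print(dcc.matrix().shape)   # (1000, 512): bounded, regardless of identity count
```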
2503.06840 | Somayeh Hussaini | Somayeh Hussaini, Tobias Fischer and Michael Milford | Improving Visual Place Recognition with Sequence-Matching Receptiveness
Prediction | 8 pages, 5 figures, under review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In visual place recognition (VPR), filtering and sequence-based matching
approaches can improve performance by integrating temporal information across
image sequences, especially in challenging conditions. While these methods are
commonly applied, their effects on system behavior can be unpredictable and can
actually make performance worse in certain situations. In this work, we present
a new supervised learning approach that learns to predict the per-frame
sequence matching receptiveness (SMR) of VPR techniques, enabling the system to
selectively decide when to trust the output of a sequence matching system. The
approach is agnostic to the underlying VPR technique. Our approach predicts
SMR-and hence significantly improves VPR performance-across a large range of
state-of-the-art and classical VPR techniques (namely CosPlace, MixVPR,
EigenPlaces, SALAD, AP-GeM, NetVLAD and SAD), and across three benchmark VPR
datasets (Nordland, Oxford RobotCar, and SFU-Mountain). We also provide
insights into a complementary approach that uses the predictor to replace
discarded matches, as well as ablation studies, including an analysis of the
interactions between our SMR predictor and the selected sequence length. We
will release our code upon acceptance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:01:24 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hussaini",
"Somayeh",
""
],
[
"Fischer",
"Tobias",
""
],
[
"Milford",
"Michael",
""
]
]
| TITLE: Improving Visual Place Recognition with Sequence-Matching Receptiveness
Prediction
ABSTRACT: In visual place recognition (VPR), filtering and sequence-based matching
approaches can improve performance by integrating temporal information across
image sequences, especially in challenging conditions. While these methods are
commonly applied, their effects on system behavior can be unpredictable and can
actually make performance worse in certain situations. In this work, we present
a new supervised learning approach that learns to predict the per-frame
sequence matching receptiveness (SMR) of VPR techniques, enabling the system to
selectively decide when to trust the output of a sequence matching system. The
approach is agnostic to the underlying VPR technique. Our approach predicts
SMR-and hence significantly improves VPR performance-across a large range of
state-of-the-art and classical VPR techniques (namely CosPlace, MixVPR,
EigenPlaces, SALAD, AP-GeM, NetVLAD and SAD), and across three benchmark VPR
datasets (Nordland, Oxford RobotCar, and SFU-Mountain). We also provide
insights into a complementary approach that uses the predictor to replace
discarded matches, as well as ablation studies, including an analysis of the
interactions between our SMR predictor and the selected sequence length. We
will release our code upon acceptance.
| no_new_dataset | 0.944536 |
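Note on the sequence-matching receptiveness record above: the predictor decides when to trust a sequence matcher, and the matcher itself is the classical ingredient of accumulating frame distances along constant-velocity diagonals of a query-to-reference cost matrix. The sketch below shows only that generic diagonal accumulation with random placeholder descriptors; it is not the SMR predictor and not tied to any specific VPR backbone.

```python
# Classical sequence-matching sketch for VPR (not the SMR predictor from the
# record above): accumulate frame-to-frame distances along constant-velocity
# diagonals of the cost matrix and pick the best-scoring reference index.
import numpy as np

rng = np.random.default_rng(0)
ref_desc = rng.normal(size=(200, 128))     # placeholder reference descriptors
qry_desc = rng.normal(size=(5, 128))       # placeholder query sequence (L = 5)

# Pairwise distance matrix: rows = query frames, cols = reference frames.
dists = np.linalg.norm(qry_desc[:, None, :] - ref_desc[None, :, :], axis=-1)

L = qry_desc.shape[0]
scores = np.full(ref_desc.shape[0], np.inf)
for start in range(ref_desc.shape[0] - L + 1):
    # Sum along the diagonal starting at reference index `start`.
    scores[start] = sum(dists[i, start + i] for i in range(L))

best = int(np.argmin(scores))
print("best-matching reference start index:", best)
```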
2503.06860 | Cagri Gungor | Cagri Gungor, Derek Eppinger, Adriana Kovashka | Towards Generalization of Tactile Image Generation: Reference-Free
Evaluation in a Leakage-Free Setting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Tactile sensing, which relies on direct physical contact, is critical for
human perception and underpins applications in computer vision, robotics, and
multimodal learning. Because tactile data is often scarce and costly to
acquire, generating synthetic tactile images provides a scalable solution to
augment real-world measurements. However, ensuring robust generalization in
synthesizing tactile images-capturing subtle, material-specific contact
features-remains challenging. We demonstrate that overlapping training and test
samples in commonly used datasets inflate performance metrics, obscuring the
true generalizability of tactile models. To address this, we propose a
leakage-free evaluation protocol coupled with novel, reference-free
metrics-TMMD, I-TMMD, CI-TMMD, and D-TMMD-tailored for tactile generation.
Moreover, we propose a vision-to-touch generation method that leverages text as
an intermediate modality by incorporating concise, material-specific
descriptions during training to better capture essential tactile features.
Experiments on two popular visuo-tactile datasets, Touch and Go and HCT, show
that our approach achieves superior performance and enhanced generalization in
a leakage-free setting.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:37:22 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gungor",
"Cagri",
""
],
[
"Eppinger",
"Derek",
""
],
[
"Kovashka",
"Adriana",
""
]
]
| TITLE: Towards Generalization of Tactile Image Generation: Reference-Free
Evaluation in a Leakage-Free Setting
ABSTRACT: Tactile sensing, which relies on direct physical contact, is critical for
human perception and underpins applications in computer vision, robotics, and
multimodal learning. Because tactile data is often scarce and costly to
acquire, generating synthetic tactile images provides a scalable solution to
augment real-world measurements. However, ensuring robust generalization in
synthesizing tactile images-capturing subtle, material-specific contact
features-remains challenging. We demonstrate that overlapping training and test
samples in commonly used datasets inflate performance metrics, obscuring the
true generalizability of tactile models. To address this, we propose a
leakage-free evaluation protocol coupled with novel, reference-free
metrics-TMMD, I-TMMD, CI-TMMD, and D-TMMD-tailored for tactile generation.
Moreover, we propose a vision-to-touch generation method that leverages text as
an intermediate modality by incorporating concise, material-specific
descriptions during training to better capture essential tactile features.
Experiments on two popular visuo-tactile datasets, Touch and Go and HCT, show
that our approach achieves superior performance and enhanced generalization in
a leakage-free setting.
| no_new_dataset | 0.950041 |
2503.06861 | Mengzhe Hei | Mengzhe Hei, Zhouran Zhang, Qingbao Liu, Yan Pan, Xiang Zhao, Yongqian
Peng, Yicong Ye, Xin Zhang, Shuxin Bai | Enhanced Multi-Tuple Extraction for Alloys: Integrating Pointer Networks
and Augmented Attention | 17 pages, 5 figures | null | null | 410072 | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Extracting high-quality structured information from scientific literature is
crucial for advancing material design through data-driven methods. Despite the
considerable research in natural language processing for dataset extraction,
effective approaches for multi-tuple extraction in scientific literature remain
scarce due to the complex interrelations of tuples and contextual ambiguities.
In this study, we illustrate the multi-tuple extraction of mechanical properties
from multi-principal-element alloys and present a novel framework that
combines an entity extraction model based on MatSciBERT with pointer networks
and an allocation model utilizing inter- and intra-entity attention. Our
rigorous experiments on tuple extraction demonstrate impressive F1 scores of
0.963, 0.947, 0.848, and 0.753 across datasets with 1, 2, 3, and 4 tuples,
confirming the effectiveness of the model. Furthermore, an F1 score of 0.854
was achieved on a randomly curated dataset. These results highlight the model's
capacity to deliver precise and structured information, offering a robust
alternative to large language models and equipping researchers with essential
data for fostering data-driven innovations.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:39:06 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hei",
"Mengzhe",
""
],
[
"Zhang",
"Zhouran",
""
],
[
"Liu",
"Qingbao",
""
],
[
"Pan",
"Yan",
""
],
[
"Zhao",
"Xiang",
""
],
[
"Peng",
"Yongqian",
""
],
[
"Ye",
"Yicong",
""
],
[
"Zhang",
"Xin",
""
],
[
"Bai",
"Shuxin",
""
]
]
| TITLE: Enhanced Multi-Tuple Extraction for Alloys: Integrating Pointer Networks
and Augmented Attention
ABSTRACT: Extracting high-quality structured information from scientific literature is
crucial for advancing material design through data-driven methods. Despite the
considerable research in natural language processing for dataset extraction,
effective approaches for multi-tuple extraction in scientific literature remain
scarce due to the complex interrelations of tuples and contextual ambiguities.
In this study, we illustrate the multi-tuple extraction of mechanical properties
from multi-principal-element alloys and present a novel framework that
combines an entity extraction model based on MatSciBERT with pointer networks
and an allocation model utilizing inter- and intra-entity attention. Our
rigorous experiments on tuple extraction demonstrate impressive F1 scores of
0.963, 0.947, 0.848, and 0.753 across datasets with 1, 2, 3, and 4 tuples,
confirming the effectiveness of the model. Furthermore, an F1 score of 0.854
was achieved on a randomly curated dataset. These results highlight the model's
capacity to deliver precise and structured information, offering a robust
alternative to large language models and equipping researchers with essential
data for fostering data-driven innovations.
| no_new_dataset | 0.933613 |
2503.06863 | Tao Jiang | Shufang Zhang, Tao Jiang, Jiazheng Wu, Ziyu Meng, Ziyang Zhang and
Shan An | HIF: Height Interval Filtering for Efficient Dynamic Points Removal | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D point cloud mapping plays an essential role in localization and autonomous
navigation. However, dynamic objects often leave residual traces during the map
construction process, which undermine the performance of subsequent tasks.
Therefore, dynamic object removal has become a critical challenge in point
cloud based map construction within dynamic scenarios. Existing approaches,
however, often incur significant computational overhead, making it difficult to
meet the real-time processing requirements. To address this issue, we introduce
the Height Interval Filtering (HIF) method. This approach constructs
pillar-based height interval representations to probabilistically model the
vertical dimension, with interval probabilities updated through Bayesian
inference. It ensures real-time performance while achieving high accuracy and
improving robustness in complex environments. Additionally, we propose a
low-height preservation strategy that enhances the detection of unknown spaces,
reducing misclassification in areas blocked by obstacles (occluded regions).
Experiments on public datasets demonstrate that HIF delivers a 7.7 times
improvement in time efficiency with comparable accuracy to existing SOTA
methods. The code will be publicly available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:40:49 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Shufang",
""
],
[
"Jiang",
"Tao",
""
],
[
"Wu",
"Jiazheng",
""
],
[
"Meng",
"Ziyu",
""
],
[
"Zhang",
"Ziyang",
""
],
[
"An",
"Shan",
""
]
]
| TITLE: HIF: Height Interval Filtering for Efficient Dynamic Points Removal
ABSTRACT: 3D point cloud mapping plays an essential role in localization and autonomous
navigation. However, dynamic objects often leave residual traces during the map
construction process, which undermine the performance of subsequent tasks.
Therefore, dynamic object removal has become a critical challenge in point
cloud based map construction within dynamic scenarios. Existing approaches,
however, often incur significant computational overhead, making it difficult to
meet the real-time processing requirements. To address this issue, we introduce
the Height Interval Filtering (HIF) method. This approach constructs
pillar-based height interval representations to probabilistically model the
vertical dimension, with interval probabilities updated through Bayesian
inference. It ensures real-time performance while achieving high accuracy and
improving robustness in complex environments. Additionally, we propose a
low-height preservation strategy that enhances the detection of unknown spaces,
reducing misclassification in areas blocked by obstacles (occluded regions).
Experiments on public datasets demonstrate that HIF delivers a 7.7 times
improvement in time efficiency with comparable accuracy to existing SOTA
methods. The code will be publicly available.
| no_new_dataset | 0.949902 |
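Note on the HIF record above: each pillar's vertical extent is modeled as height intervals whose probabilities are updated by Bayesian inference. The update rule is not given in the abstract, so the sketch below uses a standard log-odds occupancy update over discretized height bins within one pillar; the height range, bin size, and hit/miss constants are assumptions, not values from the paper.

```python
# Standard log-odds occupancy update over height bins within a single pillar,
# as one plausible reading of "interval probabilities updated through Bayesian
# inference". Range, resolution, and sensor-model log-odds are assumed.
import numpy as np

Z_MIN, Z_MAX, BIN = -2.0, 4.0, 0.2            # assumed height range and resolution
N_BINS = int((Z_MAX - Z_MIN) / BIN)
LO_HIT, LO_MISS = 0.85, -0.4                  # assumed sensor-model log-odds

log_odds = np.zeros(N_BINS)                   # prior p = 0.5 per height bin

def update_pillar(log_odds: np.ndarray, point_heights: np.ndarray) -> None:
    """Bayesian update: bins containing returns get LO_HIT, the rest LO_MISS."""
    hit = np.zeros(N_BINS, dtype=bool)
    idx = ((point_heights - Z_MIN) / BIN).astype(int)
    idx = idx[(idx >= 0) & (idx < N_BINS)]
    hit[idx] = True
    log_odds += np.where(hit, LO_HIT, LO_MISS)

# Two scans: a static structure near 1.0 m, plus a transient return at 0.2 m
update_pillar(log_odds, np.array([1.0, 1.05, 0.2]))
update_pillar(log_odds, np.array([1.0, 0.95]))          # transient object gone

prob = 1.0 / (1.0 + np.exp(-log_odds))
print("most likely occupied height bin:", int(np.argmax(prob)),
      "with probability", round(float(prob.max()), 3))
```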
2503.06868 | Junhao Zhang | Junhao Zhang, Richong Zhang, Fanshuang Kong, Ziyang Miao, Yanhan Ye,
Yaowei Zheng | Lost-in-the-Middle in Long-Text Generation: Synthetic Dataset,
Evaluation Framework, and Mitigation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Existing long-text generation methods primarily concentrate on producing
lengthy texts from short inputs, neglecting the long-input and long-output
tasks. Such tasks have numerous practical applications while lacking available
benchmarks. Moreover, as the input grows in length, existing methods inevitably
encounter the "lost-in-the-middle" phenomenon. In this paper, we first
introduce a Long Input and Output Benchmark (LongInOutBench), including a
synthetic dataset and a comprehensive evaluation framework, addressing the
challenge of the missing benchmark. We then develop the Retrieval-Augmented
Long-Text Writer (RAL-Writer), which retrieves and restates important yet
overlooked content, mitigating the "lost-in-the-middle" issue by constructing
explicit prompts. We finally employ the proposed LongInOutBench to evaluate our
RAL-Writer against comparable baselines, and the results demonstrate the
effectiveness of our approach. Our code has been released at
https://github.com/OnlyAR/RAL-Writer.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:44:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Junhao",
""
],
[
"Zhang",
"Richong",
""
],
[
"Kong",
"Fanshuang",
""
],
[
"Miao",
"Ziyang",
""
],
[
"Ye",
"Yanhan",
""
],
[
"Zheng",
"Yaowei",
""
]
]
| TITLE: Lost-in-the-Middle in Long-Text Generation: Synthetic Dataset,
Evaluation Framework, and Mitigation
ABSTRACT: Existing long-text generation methods primarily concentrate on producing
lengthy texts from short inputs, neglecting the long-input and long-output
tasks. Such tasks have numerous practical applications while lacking available
benchmarks. Moreover, as the input grows in length, existing methods inevitably
encounter the "lost-in-the-middle" phenomenon. In this paper, we first
introduce a Long Input and Output Benchmark (LongInOutBench), including a
synthetic dataset and a comprehensive evaluation framework, addressing the
challenge of the missing benchmark. We then develop the Retrieval-Augmented
Long-Text Writer (RAL-Writer), which retrieves and restates important yet
overlooked content, mitigating the "lost-in-the-middle" issue by constructing
explicit prompts. We finally employ the proposed LongInOutBench to evaluate our
RAL-Writer against comparable baselines, and the results demonstrate the
effectiveness of our approach. Our code has been released at
https://github.com/OnlyAR/RAL-Writer.
| new_dataset | 0.962214 |
2503.06882 | Tingyang Chen | Tingyang Chen, Cong Fu, Kun Wang, Xiangyu Ke, Yunjun Gao, Wenchao
Zhou, Yabo Ni, Anxiang Zeng | Maximum Inner Product is Query-Scaled Nearest Neighbor | Accepted by VLDB 2025 | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Maximum Inner Product Search (MIPS) for high-dimensional vectors is pivotal
across databases, information retrieval, and artificial intelligence. Existing
methods either reduce MIPS to Nearest Neighbor Search (NNS) while suffering
from harmful vector space transformations, or attempt to tackle MIPS directly
but struggle to mitigate redundant computations due to the absence of the
triangle inequality. This paper presents a novel theoretical framework that
equates MIPS with NNS without requiring space transformation, thereby allowing
us to leverage advanced graph-based indices for NNS and efficient edge pruning
strategies, significantly reducing unnecessary computations. Despite a strong
baseline set by our theoretical analysis, we identify and address two
persistent challenges to further refine our method: the introduction of the
Proximity Graph with Spherical Pathway (PSP), designed to mitigate the issue of
MIPS solutions clustering around large-norm vectors, and the implementation of
Adaptive Early Termination (AET), which efficiently curtails the excessive
exploration once an accuracy bottleneck is reached. Extensive experiments
reveal the superiority of our method over existing state-of-the-art techniques
in search efficiency, scalability, and practical applicability. Compared with
state-of-the-art graph based methods, it achieves an average 35% speed-up in
query processing and a 3x reduction in index size. Notably, our approach has
been validated and deployed in the search engines of Shopee, a well-known
online shopping platform. Our code and an industrial-scale dataset for offline
evaluation will also be released to address the absence of e-commerce data in
public benchmarks.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 03:17:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Tingyang",
""
],
[
"Fu",
"Cong",
""
],
[
"Wang",
"Kun",
""
],
[
"Ke",
"Xiangyu",
""
],
[
"Gao",
"Yunjun",
""
],
[
"Zhou",
"Wenchao",
""
],
[
"Ni",
"Yabo",
""
],
[
"Zeng",
"Anxiang",
""
]
]
| TITLE: Maximum Inner Product is Query-Scaled Nearest Neighbor
ABSTRACT: Maximum Inner Product Search (MIPS) for high-dimensional vectors is pivotal
across databases, information retrieval, and artificial intelligence. Existing
methods either reduce MIPS to Nearest Neighbor Search (NNS) while suffering
from harmful vector space transformations, or attempt to tackle MIPS directly
but struggle to mitigate redundant computations due to the absence of the
triangle inequality. This paper presents a novel theoretical framework that
equates MIPS with NNS without requiring space transformation, thereby allowing
us to leverage advanced graph-based indices for NNS and efficient edge pruning
strategies, significantly reducing unnecessary computations. Despite a strong
baseline set by our theoretical analysis, we identify and address two
persistent challenges to further refine our method: the introduction of the
Proximity Graph with Spherical Pathway (PSP), designed to mitigate the issue of
MIPS solutions clustering around large-norm vectors, and the implementation of
Adaptive Early Termination (AET), which efficiently curtails the excessive
exploration once an accuracy bottleneck is reached. Extensive experiments
reveal the superiority of our method over existing state-of-the-art techniques
in search efficiency, scalability, and practical applicability. Compared with
state-of-the-art graph-based methods, it achieves an average 35% speed-up in
query processing and a 3x reduction in index size. Notably, our approach has
been validated and deployed in the search engines of Shopee, a well-known
online shopping platform. Our code and an industrial-scale dataset for offline
evaluation will also be released to address the absence of e-commerce data in
public benchmarks.
| no_new_dataset | 0.943815 |
2503.06897 | Xingzu Zhan | Xingzu Zhan, Chen Xie, Haoran Sun, Xiaochun Mai | HiSTF Mamba: Hierarchical Spatiotemporal Fusion with Multi-Granular
Body-Spatial Modeling for High-Fidelity Text-to-Motion Generation | 11 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-motion generation is a rapidly growing field at the nexus of
multimodal learning and computer graphics, promising flexible and
cost-effective applications in gaming, animation, robotics, and virtual
reality. Existing approaches often rely on simple spatiotemporal stacking,
which introduces feature redundancy, while subtle joint-level details remain
overlooked from a spatial perspective. To this end, we propose a novel HiSTF
Mamba framework. The framework is composed of three key modules: Dual-Spatial
Mamba, Bi-Temporal Mamba, and Dynamic Spatiotemporal Fusion Module (DSFM).
Dual-Spatial Mamba incorporates ``Part-based + Whole-based'' parallel modeling
to represent both whole-body coordination and fine-grained joint dynamics.
Bi-Temporal Mamba adopts a bidirectional scanning strategy, effectively
encoding short-term motion details and long-term dependencies. DSFM further
performs redundancy removal and extraction of complementary information for
temporal features, then fuses them with spatial features, yielding an
expressive spatio-temporal representation. Experimental results on the
HumanML3D dataset demonstrate that HiSTF Mamba achieves state-of-the-art
performance across multiple metrics. In particular, it reduces the FID score
from 0.283 to 0.189, a relative decrease of nearly 30%. These findings validate
the effectiveness of HiSTF Mamba in achieving high fidelity and strong semantic
alignment in text-to-motion generation.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:01:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhan",
"Xingzu",
""
],
[
"Xie",
"Chen",
""
],
[
"Sun",
"Haoran",
""
],
[
"Mai",
"Xiaochun",
""
]
]
| TITLE: HiSTF Mamba: Hierarchical Spatiotemporal Fusion with Multi-Granular
Body-Spatial Modeling for High-Fidelity Text-to-Motion Generation
ABSTRACT: Text-to-motion generation is a rapidly growing field at the nexus of
multimodal learning and computer graphics, promising flexible and
cost-effective applications in gaming, animation, robotics, and virtual
reality. Existing approaches often rely on simple spatiotemporal stacking,
which introduces feature redundancy, while subtle joint-level details remain
overlooked from a spatial perspective. To this end, we propose a novel HiSTF
Mamba framework. The framework is composed of three key modules: Dual-Spatial
Mamba, Bi-Temporal Mamba, and Dynamic Spatiotemporal Fusion Module (DSFM).
Dual-Spatial Mamba incorporates ``Part-based + Whole-based'' parallel modeling
to represent both whole-body coordination and fine-grained joint dynamics.
Bi-Temporal Mamba adopts a bidirectional scanning strategy, effectively
encoding short-term motion details and long-term dependencies. DSFM further
performs redundancy removal and extraction of complementary information for
temporal features, then fuses them with spatial features, yielding an
expressive spatio-temporal representation. Experimental results on the
HumanML3D dataset demonstrate that HiSTF Mamba achieves state-of-the-art
performance across multiple metrics. In particular, it reduces the FID score
from 0.283 to 0.189, a relative decrease of nearly 30%. These findings validate
the effectiveness of HiSTF Mamba in achieving high fidelity and strong semantic
alignment in text-to-motion generation.
| no_new_dataset | 0.952618 |
2503.06898 | Sharif S M A | S M A Sharif, Abdur Rehman, Zain Ul Abidin, Rizwan Ali Naqvi, Fayaz
Ali Dharejo, Radu Timofte | Illuminating Darkness: Enhancing Real-world Low-light Scenes with
Smartphone Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Digital cameras often struggle to produce plausible images in low-light
conditions. Improving these single-shot images remains challenging due to a
lack of diverse real-world pair data samples. To address this limitation, we
propose a large-scale high-resolution (i.e., beyond 4k) pair Single-Shot
Low-Light Enhancement (SLLIE) dataset. Our dataset comprises 6,425 unique
focus-aligned image pairs captured with smartphone sensors in dynamic settings
under challenging lighting conditions (0.1--200 lux), covering various indoor
and outdoor scenes with varying noise and intensity. We extracted and refined
around 180,000 non-overlapping patches from 6,025 collected scenes for training
while reserving 400 pairs for benchmarking. In addition to that, we collected
2,117 low-light scenes from different sources for extensive real-world
aesthetic evaluation. To our knowledge, this is the largest real-world dataset
available for SLLIE research. We also propose learning luminance-chrominance
(LC) attributes separately through a tuning fork-shaped transformer model to
enhance real-world low-light images, addressing challenges like denoising and
over-enhancement in complex scenes. We also propose an LC cross-attention block
for feature fusion, an LC refinement block for enhanced reconstruction, and
LC-guided supervision to ensure perceptually coherent enhancements. We
demonstrated our method's effectiveness across various hardware and scenarios,
proving its practicality in real-world applications. Code and dataset available
at https://github.com/sharif-apu/LSD-TFFormer.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:01:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sharif",
"S M A",
""
],
[
"Rehman",
"Abdur",
""
],
[
"Abidin",
"Zain Ul",
""
],
[
"Naqvi",
"Rizwan Ali",
""
],
[
"Dharejo",
"Fayaz Ali",
""
],
[
"Timofte",
"Radu",
""
]
]
| TITLE: Illuminating Darkness: Enhancing Real-world Low-light Scenes with
Smartphone Images
ABSTRACT: Digital cameras often struggle to produce plausible images in low-light
conditions. Improving these single-shot images remains challenging due to a
lack of diverse real-world pair data samples. To address this limitation, we
propose a large-scale high-resolution (i.e., beyond 4k) pair Single-Shot
Low-Light Enhancement (SLLIE) dataset. Our dataset comprises 6,425 unique
focus-aligned image pairs captured with smartphone sensors in dynamic settings
under challenging lighting conditions (0.1--200 lux), covering various indoor
and outdoor scenes with varying noise and intensity. We extracted and refined
around 180,000 non-overlapping patches from 6,025 collected scenes for training
while reserving 400 pairs for benchmarking. In addition to that, we collected
2,117 low-light scenes from different sources for extensive real-world
aesthetic evaluation. To our knowledge, this is the largest real-world dataset
available for SLLIE research. We also propose learning luminance-chrominance
(LC) attributes separately through a tuning fork-shaped transformer model to
enhance real-world low-light images, addressing challenges like denoising and
over-enhancement in complex scenes. We also propose an LC cross-attention block
for feature fusion, an LC refinement block for enhanced reconstruction, and
LC-guided supervision to ensure perceptually coherent enhancements. We
demonstrated our method's effectiveness across various hardware and scenarios,
proving its practicality in real-world applications. Code and dataset available
at https://github.com/sharif-apu/LSD-TFFormer.
| new_dataset | 0.960025 |
2503.06912 | Zeinab Ebrahimi | Zeinab Ebrahimi and Mohammad Deghat | Distributed Pose Graph Optimization using the Splitting Method based on
the Alternating Direction Method of Multipliers | 20 pages, 4 figures | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Distributed optimization aims to leverage the local computation and
communication capabilities of each agent to achieve a desired global objective.
This paper addresses the distributed pose graph optimization (PGO) problem
under non-convex constraints, with the goal of approximating the rotation and
translation of each pose given relevant noisy measurements. To achieve this
goal, the splitting method based on the concepts of the alternating direction
method of multipliers (ADMM) and Bregman iteration are applied to solve the
rotation subproblems. The proposed approach enables the iterative resolution of
constrained problems, achieved through solving unconstrained problems and
orthogonality-constrained quadratic problems that have analytical solutions.
The performance of the proposed algorithm is compared against two practical
methods in pose graph optimization: the Distributed Gauss-Seidel (DGS)
algorithm and the centralized pose graph optimizer with an optimality
certificate (SE-Sync). The efficiency of the proposed method is verified
through its application to several simulated and real-world pose graph
datasets. Unlike the DGS method, our approach attempts to solve distributed PGO
problems without relaxing the non-convex constraints.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:28:47 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ebrahimi",
"Zeinab",
""
],
[
"Deghat",
"Mohammad",
""
]
]
| TITLE: Distributed Pose Graph Optimization using the Splitting Method based on
the Alternating Direction Method of Multipliers
ABSTRACT: Distributed optimization aims to leverage the local computation and
communication capabilities of each agent to achieve a desired global objective.
This paper addresses the distributed pose graph optimization (PGO) problem
under non-convex constraints, with the goal of approximating the rotation and
translation of each pose given relevant noisy measurements. To achieve this
goal, the splitting method based on the concepts of the alternating direction
method of multipliers (ADMM) and Bregman iteration are applied to solve the
rotation subproblems. The proposed approach enables the iterative resolution of
constrained problems, achieved through solving unconstrained problems and
orthogonality-constrained quadratic problems that have analytical solutions.
The performance of the proposed algorithm is compared against two practical
methods in pose graph optimization: the Distributed Gauss-Seidel (DGS)
algorithm and the centralized pose graph optimizer with an optimality
certificate (SE-Sync). The efficiency of the proposed method is verified
through its application to several simulated and real-world pose graph
datasets. Unlike the DGS method, our approach attempts to solve distributed PGO
problems without relaxing the non-convex constraints.
| no_new_dataset | 0.94699 |
2503.06916 | Yang Lu | Shanshan Yan, Zexi Li, Chao Wu, Meng Pang, Yang Lu, Yan Yan, Hanzi
Wang | You Are Your Own Best Teacher: Achieving Centralized-level Performance
in Federated Learning under Heterogeneous and Long-tailed Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data heterogeneity, stemming from local non-IID data and global long-tailed
distributions, is a major challenge in federated learning (FL), leading to
significant performance gaps compared to centralized learning. Previous
research found that poor representations and biased classifiers are the main
problems and proposed neural-collapse-inspired synthetic simplex ETF to help
representations be closer to neural collapse optima. However, we find that the
neural-collapse-inspired methods are not strong enough to reach neural collapse
and still have huge gaps to centralized training. In this paper, we rethink
this issue from a self-bootstrap perspective and propose FedYoYo (You Are Your
Own Best Teacher), introducing Augmented Self-bootstrap Distillation (ASD) to
improve representation learning by distilling knowledge between weakly and
strongly augmented local samples, without needing extra datasets or models. We
further introduce Distribution-aware Logit Adjustment (DLA) to balance the
self-bootstrap process and correct biased feature representations. FedYoYo
nearly eliminates the performance gap, achieving centralized-level performance
even under mixed heterogeneity. It enhances local representation learning,
reducing model drift and improving convergence, with feature prototypes closer
to neural collapse optimality. Extensive experiments show FedYoYo achieves
state-of-the-art results, even surpassing centralized logit adjustment methods
by 5.4% under global long-tailed settings.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:57:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Yan",
"Shanshan",
""
],
[
"Li",
"Zexi",
""
],
[
"Wu",
"Chao",
""
],
[
"Pang",
"Meng",
""
],
[
"Lu",
"Yang",
""
],
[
"Yan",
"Yan",
""
],
[
"Wang",
"Hanzi",
""
]
]
| TITLE: You Are Your Own Best Teacher: Achieving Centralized-level Performance
in Federated Learning under Heterogeneous and Long-tailed Data
ABSTRACT: Data heterogeneity, stemming from local non-IID data and global long-tailed
distributions, is a major challenge in federated learning (FL), leading to
significant performance gaps compared to centralized learning. Previous
research found that poor representations and biased classifiers are the main
problems and proposed neural-collapse-inspired synthetic simplex ETF to help
representations be closer to neural collapse optima. However, we find that the
neural-collapse-inspired methods are not strong enough to reach neural collapse
and still have huge gaps to centralized training. In this paper, we rethink
this issue from a self-bootstrap perspective and propose FedYoYo (You Are Your
Own Best Teacher), introducing Augmented Self-bootstrap Distillation (ASD) to
improve representation learning by distilling knowledge between weakly and
strongly augmented local samples, without needing extra datasets or models. We
further introduce Distribution-aware Logit Adjustment (DLA) to balance the
self-bootstrap process and correct biased feature representations. FedYoYo
nearly eliminates the performance gap, achieving centralized-level performance
even under mixed heterogeneity. It enhances local representation learning,
reducing model drift and improving convergence, with feature prototypes closer
to neural collapse optimality. Extensive experiments show FedYoYo achieves
state-of-the-art results, even surpassing centralized logit adjustment methods
by 5.4% under global long-tailed settings.
| no_new_dataset | 0.952353 |
2503.06919 | Weidong Guo | Weidong Guo, Hantao Zhang, Shouhong Wan, Bingbing Zou, Wanqin Wang,
Chenyang Qiu and Peiquan Jin | CAFusion: Controllable Anatomical Synthesis of Perirectal Lymph Nodes
via SDF-guided Diffusion | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lesion synthesis methods have made significant progress in generating
large-scale synthetic datasets. However, existing approaches predominantly
focus on texture synthesis and often fail to accurately model masks for
anatomically complex lesions. Additionally, these methods typically lack
precise control over the synthesis process. For example, perirectal lymph
nodes, which range in diameter from 1 mm to 10 mm, exhibit irregular and
intricate contours that are challenging for current techniques to replicate
faithfully. To address these limitations, we introduce CAFusion, a novel
approach for synthesizing perirectal lymph nodes. By leveraging Signed Distance
Functions (SDF), CAFusion generates highly realistic 3D anatomical structures.
Furthermore, it offers flexible control over both anatomical and textural
features by decoupling the generation of morphological attributes (such as
shape, size, and position) from textural characteristics, including signal
intensity. Experimental results demonstrate that our synthetic data
substantially improve segmentation performance, achieving a 6.45% increase in
the Dice coefficient. In the visual Turing test, experienced radiologists found
it challenging to distinguish between synthetic and real lesions, highlighting
the high degree of realism and anatomical accuracy achieved by our approach.
These findings validate the effectiveness of our method in generating
high-quality synthetic lesions for advancing medical image processing
applications.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 04:59:54 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Guo",
"Weidong",
""
],
[
"Zhang",
"Hantao",
""
],
[
"Wan",
"Shouhong",
""
],
[
"Zou",
"Bingbing",
""
],
[
"Wang",
"Wanqin",
""
],
[
"Qiu",
"Chenyang",
""
],
[
"Jin",
"Peiquan",
""
]
]
| TITLE: CAFusion: Controllable Anatomical Synthesis of Perirectal Lymph Nodes
via SDF-guided Diffusion
ABSTRACT: Lesion synthesis methods have made significant progress in generating
large-scale synthetic datasets. However, existing approaches predominantly
focus on texture synthesis and often fail to accurately model masks for
anatomically complex lesions. Additionally, these methods typically lack
precise control over the synthesis process. For example, perirectal lymph
nodes, which range in diameter from 1 mm to 10 mm, exhibit irregular and
intricate contours that are challenging for current techniques to replicate
faithfully. To address these limitations, we introduce CAFusion, a novel
approach for synthesizing perirectal lymph nodes. By leveraging Signed Distance
Functions (SDF), CAFusion generates highly realistic 3D anatomical structures.
Furthermore, it offers flexible control over both anatomical and textural
features by decoupling the generation of morphological attributes (such as
shape, size, and position) from textural characteristics, including signal
intensity. Experimental results demonstrate that our synthetic data
substantially improve segmentation performance, achieving a 6.45% increase in
the Dice coefficient. In the visual Turing test, experienced radiologists found
it challenging to distinguish between synthetic and real lesions, highlighting
the high degree of realism and anatomical accuracy achieved by our approach.
These findings validate the effectiveness of our method in generating
high-quality synthetic lesions for advancing medical image processing
applications.
| no_new_dataset | 0.951323 |
2503.06928 | Yanlong Wang | Yanlong Wang, Jian Xu, Tiantian Gao, Hongkang Zhang, Shao-Lun Huang,
Danny Dongning Sun, Xiao-Ping Zhang | FinTSBridge: A New Evaluation Suite for Real-world Financial Prediction
with Advanced Time Series Models | ICLR 2025 Workshop Advances in Financial AI | null | null | null | cs.LG q-fin.TR | http://creativecommons.org/licenses/by/4.0/ | Despite the growing attention to time series forecasting in recent years,
many studies have proposed various solutions to address the challenges
encountered in time series prediction, aiming to improve forecasting
performance. However, effectively applying these time series forecasting models
to the field of financial asset pricing remains a challenging issue. There is
still a need for a bridge to connect cutting-edge time series forecasting
models with financial asset pricing. To bridge this gap, we have undertaken the
following efforts: 1) We constructed three datasets from the financial domain;
2) We selected over ten time series forecasting models from recent studies and
validated their performance in financial time series; 3) We developed new
metrics, msIC and msIR, in addition to MSE and MAE, to showcase the time series
correlation captured by the models; 4) We designed financial-specific tasks for
these three datasets and assessed the practical performance and application
potential of these forecasting models in important financial problems. We hope
the developed new evaluation suite, FinTSBridge, can provide valuable insights
into the effectiveness and robustness of advanced forecasting models in
financial domains.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:19:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Yanlong",
""
],
[
"Xu",
"Jian",
""
],
[
"Gao",
"Tiantian",
""
],
[
"Zhang",
"Hongkang",
""
],
[
"Huang",
"Shao-Lun",
""
],
[
"Sun",
"Danny Dongning",
""
],
[
"Zhang",
"Xiao-Ping",
""
]
]
| TITLE: FinTSBridge: A New Evaluation Suite for Real-world Financial Prediction
with Advanced Time Series Models
ABSTRACT: Despite the growing attention to time series forecasting in recent years,
many studies have proposed various solutions to address the challenges
encountered in time series prediction, aiming to improve forecasting
performance. However, effectively applying these time series forecasting models
to the field of financial asset pricing remains a challenging issue. There is
still a need for a bridge to connect cutting-edge time series forecasting
models with financial asset pricing. To bridge this gap, we have undertaken the
following efforts: 1) We constructed three datasets from the financial domain;
2) We selected over ten time series forecasting models from recent studies and
validated their performance in financial time series; 3) We developed new
metrics, msIC and msIR, in addition to MSE and MAE, to showcase the time series
correlation captured by the models; 4) We designed financial-specific tasks for
these three datasets and assessed the practical performance and application
potential of these forecasting models in important financial problems. We hope
the developed new evaluation suite, FinTSBridge, can provide valuable insights
into the effectiveness and robustness of advanced forecasting models in
financial domains.
| no_new_dataset | 0.922482 |
2503.06934 | Hanyu Zhou | Hanyu Zhou, Gim Hee Lee | LLaFEA: Frame-Event Complementary Fusion for Fine-Grained Spatiotemporal
Understanding in LMMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large multimodal models (LMMs) excel in scene understanding but struggle with
fine-grained spatiotemporal reasoning due to weak alignment between linguistic
and visual representations. Existing methods map textual positions and
durations into the visual space encoded from frame-based videos, but suffer
from temporal sparsity that limits language-vision temporal coordination. To
address this issue, we introduce LLaFEA (Large Language and Frame-Event
Assistant) to leverage event cameras for temporally dense perception and
frame-event fusion. Our approach employs a cross-attention mechanism to
integrate complementary spatial and temporal features, followed by
self-attention matching for global spatio-temporal associations. We further
embed textual position and duration tokens into the fused visual space to
enhance fine-grained alignment. This unified framework ensures robust
spatio-temporal coordinate alignment, enabling LMMs to interpret scenes at any
position and any time. In addition, we construct a dataset of real-world
frames-events with coordinate instructions and conduct extensive experiments to
validate the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:30:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhou",
"Hanyu",
""
],
[
"Lee",
"Gim Hee",
""
]
]
| TITLE: LLaFEA: Frame-Event Complementary Fusion for Fine-Grained Spatiotemporal
Understanding in LMMs
ABSTRACT: Large multimodal models (LMMs) excel in scene understanding but struggle with
fine-grained spatiotemporal reasoning due to weak alignment between linguistic
and visual representations. Existing methods map textual positions and
durations into the visual space encoded from frame-based videos, but suffer
from temporal sparsity that limits language-vision temporal coordination. To
address this issue, we introduce LLaFEA (Large Language and Frame-Event
Assistant) to leverage event cameras for temporally dense perception and
frame-event fusion. Our approach employs a cross-attention mechanism to
integrate complementary spatial and temporal features, followed by
self-attention matching for global spatio-temporal associations. We further
embed textual position and duration tokens into the fused visual space to
enhance fine-grained alignment. This unified framework ensures robust
spatio-temporal coordinate alignment, enabling LMMs to interpret scenes at any
position and any time. In addition, we construct a dataset of real-world
frames-events with coordinate instructions and conduct extensive experiments to
validate the effectiveness of the proposed method.
| new_dataset | 0.951997 |
2503.06938 | Sania Zahan | Sania Zahan, Ghulam Mubashar Hassan, Ajmal Mian | Modeling Human Skeleton Joint Dynamics for Fall Detection | Published in 2021 Digital Image Computing: Techniques and
Applications (DICTA) | Digital Image Computing: Techniques and Applications (DICTA), Gold
Coast, Australia, 2021, pp. 01-07 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The increasing pace of population aging calls for better care and support
systems. Falling is a frequent and critical problem for elderly people causing
serious long-term health issues. Fall detection from video streams is not an
attractive option for real-life applications due to privacy issues. Existing
methods try to resolve this issue by using very low-resolution cameras or video
encryption. However, privacy cannot be ensured completely with such approaches.
Key points on the body, such as skeleton joints, can convey significant
information about motion dynamics and successive posture changes which are
crucial for fall detection. Skeleton joints have been explored for feature
extraction but with image recognition models that ignore joint dependency
across frames, which is important for the classification of actions. Moreover,
existing models are over-parameterized or evaluated on small datasets with very
few activity classes. We propose an efficient graph convolution network model
that exploits spatio-temporal joint dependencies and dynamics of human skeleton
joints for accurate fall detection. Our method leverages dynamic representation
with robust concurrent spatio-temporal characteristics of skeleton joints. We
performed extensive experiments on three large-scale datasets. With a
significantly smaller model size than most existing methods, our proposed
method achieves state-of-the-art results on the large scale NTU datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:35:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zahan",
"Sania",
""
],
[
"Hassan",
"Ghulam Mubashar",
""
],
[
"Mian",
"Ajmal",
""
]
]
| TITLE: Modeling Human Skeleton Joint Dynamics for Fall Detection
ABSTRACT: The increasing pace of population aging calls for better care and support
systems. Falling is a frequent and critical problem for elderly people causing
serious long-term health issues. Fall detection from video streams is not an
attractive option for real-life applications due to privacy issues. Existing
methods try to resolve this issue by using very low-resolution cameras or video
encryption. However, privacy cannot be ensured completely with such approaches.
Key points on the body, such as skeleton joints, can convey significant
information about motion dynamics and successive posture changes which are
crucial for fall detection. Skeleton joints have been explored for feature
extraction but with image recognition models that ignore joint dependency
across frames, which is important for the classification of actions. Moreover,
existing models are over-parameterized or evaluated on small datasets with very
few activity classes. We propose an efficient graph convolution network model
that exploits spatio-temporal joint dependencies and dynamics of human skeleton
joints for accurate fall detection. Our method leverages dynamic representation
with robust concurrent spatio-temporal characteristics of skeleton joints. We
performed extensive experiments on three large-scale datasets. With a
significantly smaller model size than most existing methods, our proposed
method achieves state-of-the-art results on the large scale NTU datasets.
| no_new_dataset | 0.950041 |
2503.06940 | Jianxiong Gao | Jianxiong Gao, Yichang Liu, Baofeng Yang, Jianfeng Feng and Yanwei Fu | CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic
Audiovisual Narrative Processing | 14 pages, 13 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce CineBrain, the first large-scale dataset
featuring simultaneous EEG and fMRI recordings during dynamic audiovisual
stimulation. Recognizing the complementary strengths of EEG's high temporal
resolution and fMRI's deep-brain spatial coverage, CineBrain provides
approximately six hours of narrative-driven content from the popular television
series The Big Bang Theory for each of six participants. Building upon this
unique dataset, we propose CineSync, an innovative multimodal decoding
framework integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural
Latent Decoder. Our approach effectively fuses EEG and fMRI signals,
significantly improving the reconstruction quality of complex audiovisual
stimuli. To facilitate rigorous evaluation, we introduce Cine-Benchmark, a
comprehensive evaluation protocol that assesses reconstructions across semantic
and perceptual dimensions. Experimental results demonstrate that CineSync
achieves state-of-the-art video reconstruction performance and highlight our
initial success in combining fMRI and EEG for reconstructing both video and
audio stimuli. Project Page: https://jianxgao.github.io/CineBrain.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:39:43 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gao",
"Jianxiong",
""
],
[
"Liu",
"Yichang",
""
],
[
"Yang",
"Baofeng",
""
],
[
"Feng",
"Jianfeng",
""
],
[
"Fu",
"Yanwei",
""
]
]
| TITLE: CineBrain: A Large-Scale Multi-Modal Brain Dataset During Naturalistic
Audiovisual Narrative Processing
ABSTRACT: In this paper, we introduce CineBrain, the first large-scale dataset
featuring simultaneous EEG and fMRI recordings during dynamic audiovisual
stimulation. Recognizing the complementary strengths of EEG's high temporal
resolution and fMRI's deep-brain spatial coverage, CineBrain provides
approximately six hours of narrative-driven content from the popular television
series The Big Bang Theory for each of six participants. Building upon this
unique dataset, we propose CineSync, an innovative multimodal decoding
framework that integrates a Multi-Modal Fusion Encoder with a diffusion-based Neural
Latent Decoder. Our approach effectively fuses EEG and fMRI signals,
significantly improving the reconstruction quality of complex audiovisual
stimuli. To facilitate rigorous evaluation, we introduce Cine-Benchmark, a
comprehensive evaluation protocol that assesses reconstructions across semantic
and perceptual dimensions. Experimental results demonstrate that CineSync
achieves state-of-the-art video reconstruction performance and highlight our
initial success in combining fMRI and EEG for reconstructing both video and
audio stimuli. Project Page: https://jianxgao.github.io/CineBrain.
| new_dataset | 0.959193 |
2503.06945 | Feng Gao | Junyan Lin, Feng Gao, Lin Qi, Junyu Dong, Qian Du, Xinbo Gao | Dynamic Cross-Modal Feature Interaction Network for Hyperspectral and
LiDAR Data Classification | Accepted by IEEE TGRS 2025 | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Hyperspectral image (HSI) and LiDAR data joint classification is a
challenging task. Existing multi-source remote sensing data classification
methods often rely on human-designed frameworks for feature extraction, which
heavily depend on expert knowledge. To address these limitations, we propose a
novel Dynamic Cross-Modal Feature Interaction Network (DCMNet), the first
framework leveraging a dynamic routing mechanism for HSI and LiDAR
classification. Specifically, our approach introduces three feature interaction
blocks: Bilinear Spatial Attention Block (BSAB), Bilinear Channel Attention
Block (BCAB), and Integration Convolutional Block (ICB). These blocks are
designed to effectively enhance spatial, spectral, and discriminative feature
interactions. A multi-layer routing space with routing gates is designed to
determine optimal computational paths, enabling data-dependent feature fusion.
Additionally, bilinear attention mechanisms are employed to enhance feature
interactions in spatial and channel representations. Extensive experiments on
three public HSI and LiDAR datasets demonstrate the superiority of DCMNet over
state-of-the-art methods. Our code will be available at
https://github.com/oucailab/DCMNet.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:50:13 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lin",
"Junyan",
""
],
[
"Gap",
"Feng",
""
],
[
"Qi",
"Lin",
""
],
[
"Dong",
"Junyu",
""
],
[
"Du",
"Qian",
""
],
[
"Gao",
"Xinbo",
""
]
]
| TITLE: Dynamic Cross-Modal Feature Interaction Network for Hyperspectral and
LiDAR Data Classification
ABSTRACT: Hyperspectral image (HSI) and LiDAR data joint classification is a
challenging task. Existing multi-source remote sensing data classification
methods often rely on human-designed frameworks for feature extraction, which
heavily depend on expert knowledge. To address these limitations, we propose a
novel Dynamic Cross-Modal Feature Interaction Network (DCMNet), the first
framework leveraging a dynamic routing mechanism for HSI and LiDAR
classification. Specifically, our approach introduces three feature interaction
blocks: Bilinear Spatial Attention Block (BSAB), Bilinear Channel Attention
Block (BCAB), and Integration Convolutional Block (ICB). These blocks are
designed to effectively enhance spatial, spectral, and discriminative feature
interactions. A multi-layer routing space with routing gates is designed to
determine optimal computational paths, enabling data-dependent feature fusion.
Additionally, bilinear attention mechanisms are employed to enhance feature
interactions in spatial and channel representations. Extensive experiments on
three public HSI and LiDAR datasets demonstrate the superiority of DCMNet over
state-of-the-art methods. Our code will be available at
https://github.com/oucailab/DCMNet.
| no_new_dataset | 0.949106 |
2503.06948 | Xiao Wang | Wentao Wu, Chenglong Li, Xiao Wang, Bin Luo, Qi Liu | Large Language Model Guided Progressive Feature Alignment for Multimodal
UAV Object Detection | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing multimodal UAV object detection methods often overlook the impact of
semantic gaps between modalities, which makes it difficult to achieve accurate
semantic and spatial alignments, limiting detection performance. To address
this problem, we propose a Large Language Model (LLM) guided Progressive
feature Alignment Network called LPANet, which leverages the semantic features
extracted from a large language model to guide the progressive semantic and
spatial alignment between modalities for multimodal UAV object detection. To
employ the powerful semantic representation of LLM, we generate the
fine-grained text descriptions of each object category by ChatGPT and then
extract the semantic features using the large language model MPNet. Based on
the semantic features, we guide the semantic and spatial alignments in a
progressive manner as follows. First, we design the Semantic Alignment Module
(SAM) to pull the semantic features and multimodal visual features of each
object closer, alleviating the semantic differences of objects between
modalities. Second, we design the Explicit Spatial alignment Module (ESM) by
integrating the semantic relations into the estimation of feature-level
offsets, alleviating the coarse spatial misalignment between modalities.
Finally, we design the Implicit Spatial alignment Module (ISM), which leverages
the cross-modal correlations to aggregate key features from neighboring regions
to achieve implicit spatial alignment. Comprehensive experiments on two public
multimodal UAV object detection datasets demonstrate that our approach
outperforms state-of-the-art multimodal UAV object detectors.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 05:53:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wu",
"Wentao",
""
],
[
"Li",
"Chenglong",
""
],
[
"Wang",
"Xiao",
""
],
[
"Luo",
"Bin",
""
],
[
"Liu",
"Qi",
""
]
]
| TITLE: Large Language Model Guided Progressive Feature Alignment for Multimodal
UAV Object Detection
ABSTRACT: Existing multimodal UAV object detection methods often overlook the impact of
semantic gaps between modalities, which makes it difficult to achieve accurate
semantic and spatial alignments, limiting detection performance. To address
this problem, we propose a Large Language Model (LLM) guided Progressive
feature Alignment Network called LPANet, which leverages the semantic features
extracted from a large language model to guide the progressive semantic and
spatial alignment between modalities for multimodal UAV object detection. To
employ the powerful semantic representation of LLM, we generate the
fine-grained text descriptions of each object category by ChatGPT and then
extract the semantic features using the large language model MPNet. Based on
the semantic features, we guide the semantic and spatial alignments in a
progressive manner as follows. First, we design the Semantic Alignment Module
(SAM) to pull the semantic features and multimodal visual features of each
object closer, alleviating the semantic differences of objects between
modalities. Second, we design the Explicit Spatial alignment Module (ESM) by
integrating the semantic relations into the estimation of feature-level
offsets, alleviating the coarse spatial misalignment between modalities.
Finally, we design the Implicit Spatial alignment Module (ISM), which leverages
the cross-modal correlations to aggregate key features from neighboring regions
to achieve implicit spatial alignment. Comprehensive experiments on two public
multimodal UAV object detection datasets demonstrate that our approach
outperforms state-of-the-art multimodal UAV object detectors.
| no_new_dataset | 0.950869 |
2503.06973 | Zhaoxiang Liu | Xiang Liu, Zhaoxiang Liu, Huan Hu, Zezhou Chen, Kohou Wang, Kai Wang,
and Shiguo Lian | A Multimodal Benchmark Dataset and Model for Crop Disease Diagnosis | Accepted by ECCV 2024 (14 pages, 8 figures) | null | 10.1007/978-3-031-73016-0_10 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While conversational generative AI has shown considerable potential in
enhancing decision-making for agricultural professionals, its exploration has
predominantly been anchored in text-based interactions. The evolution of
multimodal conversational AI, leveraging vast amounts of image-text data from
diverse sources, marks a significant stride forward. However, the application
of such advanced vision-language models in the agricultural domain,
particularly for crop disease diagnosis, remains underexplored. In this work,
we present the crop disease domain multimodal (CDDM) dataset, a pioneering
resource designed to advance the field of agricultural research through the
application of multimodal learning techniques. The dataset comprises 137,000
images of various crop diseases, accompanied by 1 million question-answer pairs
that span a broad spectrum of agricultural knowledge, from disease
identification to management practices. By integrating visual and textual data,
CDDM facilitates the development of sophisticated question-answering systems
capable of providing precise, useful advice to farmers and agricultural
professionals. We demonstrate the utility of the dataset by finetuning
state-of-the-art multimodal models, showcasing significant improvements in crop
disease diagnosis. Specifically, we employed a novel finetuning strategy that
utilizes low-rank adaptation (LoRA) to finetune the visual encoder, adapter and
language model simultaneously. Our contributions include not only the dataset
but also a finetuning strategy and a benchmark to stimulate further research in
agricultural technology, aiming to bridge the gap between advanced AI
techniques and practical agricultural applications. The dataset is available at
https://github.com/UnicomAI/UnicomBenchmark/tree/main/CDDMBench.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:37:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Xiang",
""
],
[
"Liu",
"Zhaoxiang",
""
],
[
"Hu",
"Huan",
""
],
[
"Chen",
"Zezhou",
""
],
[
"Wang",
"Kohou",
""
],
[
"Wang",
"Kai",
""
],
[
"Lian",
"Shiguo",
""
]
]
| TITLE: A Multimodal Benchmark Dataset and Model for Crop Disease Diagnosis
ABSTRACT: While conversational generative AI has shown considerable potential in
enhancing decision-making for agricultural professionals, its exploration has
predominantly been anchored in text-based interactions. The evolution of
multimodal conversational AI, leveraging vast amounts of image-text data from
diverse sources, marks a significant stride forward. However, the application
of such advanced vision-language models in the agricultural domain,
particularly for crop disease diagnosis, remains underexplored. In this work,
we present the crop disease domain multimodal (CDDM) dataset, a pioneering
resource designed to advance the field of agricultural research through the
application of multimodal learning techniques. The dataset comprises 137,000
images of various crop diseases, accompanied by 1 million question-answer pairs
that span a broad spectrum of agricultural knowledge, from disease
identification to management practices. By integrating visual and textual data,
CDDM facilitates the development of sophisticated question-answering systems
capable of providing precise, useful advice to farmers and agricultural
professionals. We demonstrate the utility of the dataset by finetuning
state-of-the-art multimodal models, showcasing significant improvements in crop
disease diagnosis. Specifically, we employed a novel finetuning strategy that
utilizes low-rank adaptation (LoRA) to finetune the visual encoder, adapter and
language model simultaneously. Our contributions include not only the dataset
but also a finetuning strategy and a benchmark to stimulate further research in
agricultural technology, aiming to bridge the gap between advanced AI
techniques and practical agricultural applications. The dataset is available at
https://github.com/UnicomAI/UnicomBenchmark/tree/main/CDDMBench.
| new_dataset | 0.974067 |
2503.06974 | Yang Liu | Yang Liu, Mengyuan Liu, Shudong Huang, and Jiancheng Lv | Asymmetric Visual Semantic Embedding Framework for Efficient
Vision-Language Alignment | 9 pages, 5 figures, The 39th Annual AAAI Conference on Artificial
Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning visual semantic similarity is a critical challenge in bridging the
gap between images and texts. However, there exist inherent variations between
vision and language data, such as information density, i.e., images can contain
textual information from multiple different views, which makes it difficult to
compute the similarity between these two modalities accurately and efficiently.
In this paper, we propose a novel framework called Asymmetric Visual Semantic
Embedding (AVSE) to dynamically select features from various regions of images
tailored to different textual inputs for similarity calculation. To capture
information from different views in the image, we design a radial bias sampling
module to sample image patches and obtain image features from various views.
Furthermore, AVSE introduces a novel module for efficient computation of visual
semantic similarity between asymmetric image and text embeddings. Central to
this module is the presumption of foundational semantic units within the
embeddings, denoted as ``meta-semantic embeddings." It segments all embeddings
into meta-semantic embeddings with the same dimension and calculates visual
semantic similarity by finding the optimal match of meta-semantic embeddings of
two modalities. Our proposed AVSE model is extensively evaluated on the
large-scale MS-COCO and Flickr30K datasets, demonstrating its superiority over
recent state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:38:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Yang",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Huang",
"Shudong",
""
],
[
"Lv",
"Jiancheng",
""
]
]
| TITLE: Asymmetric Visual Semantic Embedding Framework for Efficient
Vision-Language Alignment
ABSTRACT: Learning visual semantic similarity is a critical challenge in bridging the
gap between images and texts. However, there exist inherent variations between
vision and language data, such as information density, i.e., images can contain
textual information from multiple different views, which makes it difficult to
compute the similarity between these two modalities accurately and efficiently.
In this paper, we propose a novel framework called Asymmetric Visual Semantic
Embedding (AVSE) to dynamically select features from various regions of images
tailored to different textual inputs for similarity calculation. To capture
information from different views in the image, we design a radial bias sampling
module to sample image patches and obtain image features from various views.
Furthermore, AVSE introduces a novel module for efficient computation of visual
semantic similarity between asymmetric image and text embeddings. Central to
this module is the presumption of foundational semantic units within the
embeddings, denoted as ``meta-semantic embeddings." It segments all embeddings
into meta-semantic embeddings with the same dimension and calculates visual
semantic similarity by finding the optimal match of meta-semantic embeddings of
two modalities. Our proposed AVSE model is extensively evaluated on the
large-scale MS-COCO and Flickr30K datasets, demonstrating its superiority over
recent state-of-the-art methods.
| no_new_dataset | 0.947332 |
2503.06976 | Haishan Huang | Pengchen Liang, Haishan Huang, Bin Pu, Jianguo Chen, Xiang Hua, Jing
Zhang, Weibo Ma, Zhuangzhuang Chen, Yiwei Li, Qing Chang | Task-Specific Knowledge Distillation from the Vision Foundation Model
for Enhanced Medical Image Segmentation | 29 pages, 10 figures, 16 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale pre-trained models, such as Vision Foundation Models (VFMs), have
demonstrated impressive performance across various downstream tasks by
transferring generalized knowledge, especially when target data is limited.
However, their high computational cost and the domain gap between natural and
medical images limit their practical application in medical segmentation tasks.
Motivated by this, we pose the following important question: "How can we
effectively utilize the knowledge of large pre-trained VFMs to train a small,
task-specific model for medical image segmentation when training data is
limited?" To address this problem, we propose a novel and generalizable
task-specific knowledge distillation framework. Our method fine-tunes the VFM
on the target segmentation task to capture task-specific features before
distilling the knowledge to smaller models, leveraging Low-Rank Adaptation
(LoRA) to reduce the computational cost of fine-tuning. Additionally, we
incorporate synthetic data generated by diffusion models to augment the
transfer set, enhancing model performance in data-limited scenarios.
Experimental results across five medical image datasets demonstrate that our
method consistently outperforms task-agnostic knowledge distillation and
self-supervised pretraining approaches like MoCo v3 and Masked Autoencoders
(MAE). For example, on the KidneyUS dataset, our method achieved a 28% higher
Dice score than task-agnostic KD using 80 labeled samples for fine-tuning. On
the CHAOS dataset, it achieved an 11% improvement over MAE with 100 labeled
samples. These results underscore the potential of task-specific knowledge
distillation to train accurate, efficient models for medical image segmentation
in data-constrained settings.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 06:39:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liang",
"Pengchen",
""
],
[
"Huang",
"Haishan",
""
],
[
"Pu",
"Bin",
""
],
[
"Chen",
"Jianguo",
""
],
[
"Hua",
"Xiang",
""
],
[
"Zhang",
"Jing",
""
],
[
"Ma",
"Weibo",
""
],
[
"Chen",
"Zhuangzhuang",
""
],
[
"Li",
"Yiwei",
""
],
[
"Chang",
"Qing",
""
]
]
| TITLE: Task-Specific Knowledge Distillation from the Vision Foundation Model
for Enhanced Medical Image Segmentation
ABSTRACT: Large-scale pre-trained models, such as Vision Foundation Models (VFMs), have
demonstrated impressive performance across various downstream tasks by
transferring generalized knowledge, especially when target data is limited.
However, their high computational cost and the domain gap between natural and
medical images limit their practical application in medical segmentation tasks.
Motivated by this, we pose the following important question: "How can we
effectively utilize the knowledge of large pre-trained VFMs to train a small,
task-specific model for medical image segmentation when training data is
limited?" To address this problem, we propose a novel and generalizable
task-specific knowledge distillation framework. Our method fine-tunes the VFM
on the target segmentation task to capture task-specific features before
distilling the knowledge to smaller models, leveraging Low-Rank Adaptation
(LoRA) to reduce the computational cost of fine-tuning. Additionally, we
incorporate synthetic data generated by diffusion models to augment the
transfer set, enhancing model performance in data-limited scenarios.
Experimental results across five medical image datasets demonstrate that our
method consistently outperforms task-agnostic knowledge distillation and
self-supervised pretraining approaches like MoCo v3 and Masked Autoencoders
(MAE). For example, on the KidneyUS dataset, our method achieved a 28% higher
Dice score than task-agnostic KD using 80 labeled samples for fine-tuning. On
the CHAOS dataset, it achieved an 11% improvement over MAE with 100 labeled
samples. These results underscore the potential of task-specific knowledge
distillation to train accurate, efficient models for medical image segmentation
in data-constrained settings.
| no_new_dataset | 0.94699 |
2503.06983 | Jiahao Wang | Jiahao Wang, Xiangyu Cao, Jiaru Zhong, Yuner Zhang, Haibao Yu, Lei He
and Shaobing Xu | Griffin: Aerial-Ground Cooperative Detection and Tracking Dataset and
Benchmark | 8 pages, 7 figures. This work has been submitted to IROS 2025 for
possible publication | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant advancements, autonomous driving systems continue to
struggle with occluded objects and long-range detection due to the inherent
limitations of single-perspective sensing. Aerial-ground cooperation offers a
promising solution by integrating UAVs' aerial views with ground vehicles'
local observations. However, progress in this emerging field has been hindered
by the absence of public datasets and standardized evaluation benchmarks. To
address this gap, this paper presents a comprehensive solution for
aerial-ground cooperative 3D perception through three key contributions: (1)
Griffin, a large-scale multi-modal dataset featuring over 200 dynamic scenes
(30k+ frames) with varied UAV altitudes (20-60m), diverse weather conditions,
and occlusion-aware 3D annotations, enhanced by CARLA-AirSim co-simulation for
realistic UAV dynamics; (2) A unified benchmarking framework for aerial-ground
cooperative detection and tracking tasks, including protocols for evaluating
communication efficiency, latency tolerance, and altitude adaptability; (3)
AGILE, an instance-level intermediate fusion baseline that dynamically aligns
cross-view features through query-based interaction, achieving an advantageous
balance between communication overhead and perception accuracy. Extensive
experiments prove the effectiveness of aerial-ground cooperative perception and
demonstrate the direction of further research. The dataset and codes are
available at https://github.com/wang-jh18-SVM/Griffin.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:00:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Jiahao",
""
],
[
"Cao",
"Xiangyu",
""
],
[
"Zhong",
"Jiaru",
""
],
[
"Zhang",
"Yuner",
""
],
[
"Yu",
"Haibao",
""
],
[
"He",
"Lei",
""
],
[
"Xu",
"Shaobing",
""
]
]
| TITLE: Griffin: Aerial-Ground Cooperative Detection and Tracking Dataset and
Benchmark
ABSTRACT: Despite significant advancements, autonomous driving systems continue to
struggle with occluded objects and long-range detection due to the inherent
limitations of single-perspective sensing. Aerial-ground cooperation offers a
promising solution by integrating UAVs' aerial views with ground vehicles'
local observations. However, progress in this emerging field has been hindered
by the absence of public datasets and standardized evaluation benchmarks. To
address this gap, this paper presents a comprehensive solution for
aerial-ground cooperative 3D perception through three key contributions: (1)
Griffin, a large-scale multi-modal dataset featuring over 200 dynamic scenes
(30k+ frames) with varied UAV altitudes (20-60m), diverse weather conditions,
and occlusion-aware 3D annotations, enhanced by CARLA-AirSim co-simulation for
realistic UAV dynamics; (2) A unified benchmarking framework for aerial-ground
cooperative detection and tracking tasks, including protocols for evaluating
communication efficiency, latency tolerance, and altitude adaptability; (3)
AGILE, an instance-level intermediate fusion baseline that dynamically aligns
cross-view features through query-based interaction, achieving an advantageous
balance between communication overhead and perception accuracy. Extensive
experiments prove the effectiveness of aerial-ground cooperative perception and
demonstrate the direction of further research. The dataset and codes are
available at https://github.com/wang-jh18-SVM/Griffin.
| new_dataset | 0.966092 |
2503.06986 | Youngseok Kim | Youngseok Kim, Sunwook Hwang, Hyung-Sin Kim, and Saewoong Bahk | ConcreTizer: Model Inversion Attack via Occupancy Classification and
Dispersion Control for 3D Point Cloud Restoration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing use of 3D point cloud data in autonomous vehicles (AVs) has
raised serious privacy concerns, particularly due to the sensitive information
that can be extracted from 3D data. While model inversion attacks have been
widely studied in the context of 2D data, their application to 3D point clouds
remains largely unexplored. To fill this gap, we present the first in-depth
study of model inversion attacks aimed at restoring 3D point cloud scenes. Our
analysis reveals the unique challenges: the inherent sparsity of 3D point
clouds and the ambiguity between empty and non-empty voxels after voxelization,
which are further exacerbated by the dispersion of non-empty voxels across
feature extractor layers. To address these challenges, we introduce
ConcreTizer, a simple yet effective model inversion attack designed
specifically for voxel-based 3D point cloud data. ConcreTizer incorporates
Voxel Occupancy Classification to distinguish between empty and non-empty
voxels and Dispersion-Controlled Supervision to mitigate non-empty voxel
dispersion. Extensive experiments on widely used 3D feature extractors and
benchmark datasets, such as KITTI and Waymo, demonstrate that ConcreTizer
concretely restores the original 3D point cloud scene from disrupted 3D feature
data. Our findings highlight both the vulnerability of 3D data to inversion
attacks and the urgent need for robust defense strategies.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:05:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kim",
"Youngseok",
""
],
[
"Hwang",
"Sunwook",
""
],
[
"Kim",
"Hyung-Sin",
""
],
[
"Bahk",
"Saewoong",
""
]
]
| TITLE: ConcreTizer: Model Inversion Attack via Occupancy Classification and
Dispersion Control for 3D Point Cloud Restoration
ABSTRACT: The growing use of 3D point cloud data in autonomous vehicles (AVs) has
raised serious privacy concerns, particularly due to the sensitive information
that can be extracted from 3D data. While model inversion attacks have been
widely studied in the context of 2D data, their application to 3D point clouds
remains largely unexplored. To fill this gap, we present the first in-depth
study of model inversion attacks aimed at restoring 3D point cloud scenes. Our
analysis reveals the unique challenges: the inherent sparsity of 3D point
clouds and the ambiguity between empty and non-empty voxels after voxelization,
which are further exacerbated by the dispersion of non-empty voxels across
feature extractor layers. To address these challenges, we introduce
ConcreTizer, a simple yet effective model inversion attack designed
specifically for voxel-based 3D point cloud data. ConcreTizer incorporates
Voxel Occupancy Classification to distinguish between empty and non-empty
voxels and Dispersion-Controlled Supervision to mitigate non-empty voxel
dispersion. Extensive experiments on widely used 3D feature extractors and
benchmark datasets, such as KITTI and Waymo, demonstrate that ConcreTizer
concretely restores the original 3D point cloud scene from disrupted 3D feature
data. Our findings highlight both the vulnerability of 3D data to inversion
attacks and the urgent need for robust defense strategies.
| no_new_dataset | 0.944382 |
2503.06993 | Yang Lu | Shihao Hou, Xinyi Shang, Shreyank N Gowda, Yang Lu, Chao Wu, Yan Yan,
Hanzi Wang | CAPT: Class-Aware Prompt Tuning for Federated Long-Tailed Learning with
Vision-Language Model | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effectively handling the co-occurrence of non-IID data and long-tailed
distributions remains a critical challenge in federated learning. While
fine-tuning vision-language models (VLMs) like CLIP has shown promise
in addressing non-IID data challenges, this approach leads to severe
degradation of tail classes in federated long-tailed scenarios. Under the
composite effects of strong non-IID data distribution and long-tailed class
imbalances, VLM fine-tuning may even fail to yield any improvement. To address
this issue, we propose Class-Aware Prompt Learning for Federated Long-tailed
Learning (CAPT), a novel framework that leverages a pre-trained VLM to
effectively handle both data heterogeneity and long-tailed distributions. CAPT
introduces a dual-prompt mechanism that synergizes general and class-aware
prompts, enabling the framework to capture global trends while preserving
class-specific knowledge. To better aggregate and share knowledge across
clients, we introduce a heterogeneity-aware client clustering strategy that
groups clients based on their data distributions, enabling efficient
collaboration and knowledge sharing. Extensive experiments on various
long-tailed datasets with different levels of data heterogeneity demonstrate
that CAPT significantly improves tail class performance without compromising
overall accuracy, outperforming state-of-the-art methods in federated
long-tailed learning scenarios.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:17:15 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hou",
"Shihao",
""
],
[
"Shang",
"Xinyi",
""
],
[
"Gowda",
"Shreyank N",
""
],
[
"Lu",
"Yang",
""
],
[
"Wu",
"Chao",
""
],
[
"Yan",
"Yan",
""
],
[
"Wang",
"Hanzi",
""
]
]
| TITLE: CAPT: Class-Aware Prompt Tuning for Federated Long-Tailed Learning with
Vision-Language Model
ABSTRACT: Effectively handling the co-occurrence of non-IID data and long-tailed
distributions remains a critical challenge in federated learning. While
fine-tuning vision-language models (VLMs) like CLIP has shown promise
in addressing non-IID data challenges, this approach leads to severe
degradation of tail classes in federated long-tailed scenarios. Under the
composite effects of strong non-IID data distribution and long-tailed class
imbalances, VLM fine-tuning may even fail to yield any improvement. To address
this issue, we propose Class-Aware Prompt Learning for Federated Long-tailed
Learning (CAPT), a novel framework that leverages a pre-trained VLM to
effectively handle both data heterogeneity and long-tailed distributions. CAPT
introduces a dual-prompt mechanism that synergizes general and class-aware
prompts, enabling the framework to capture global trends while preserving
class-specific knowledge. To better aggregate and share knowledge across
clients, we introduce a heterogeneity-aware client clustering strategy that
groups clients based on their data distributions, enabling efficient
collaboration and knowledge sharing. Extensive experiments on various
long-tailed datasets with different levels of data heterogeneity demonstrate
that CAPT significantly improves tail class performance without compromising
overall accuracy, outperforming state-of-the-art methods in federated
long-tailed learning scenarios.
| no_new_dataset | 0.951278 |
2503.06997 | Qian Liu | Qian Liu, Lan Wang, Bing Yang and Hao Wu | Water Quality Data Imputation via A Fast Latent Factorization of Tensors
with PID-based Optimizer | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Water quality data can supply a substantial decision support for water
resources utilization and pollution prevention. However, there are numerous
missing values in water quality data due to inescapable factors like sensor
failure, thereby leading to biased results for hydrological analysis and failing
to support environmental governance decisions accurately. A Latent Factorization
of Tensors (LFT) with Stochastic Gradient Descent (SGD) proves to be an
efficient imputation method. However, a standard SGD-based LFT model commonly
suffers from slow convergence, which impairs its efficiency. To tackle this
issue, this paper proposes a Fast Latent Factorization of Tensors (FLFT) model.
It constructs an adjusted instance error into SGD by leveraging a nonlinear
PID controller to incorporate the past, current, and future information of the
prediction error, thereby improving the convergence rate. Compared with
state-of-the-art models on real-world datasets, the experimental results indicate
that the FLFT model achieves a better convergence rate and higher accuracy.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:22:54 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Qian",
""
],
[
"Wang",
"Lan",
""
],
[
"Yang",
"Bing",
""
],
[
"Wu",
"Hao",
""
]
]
| TITLE: Water Quality Data Imputation via A Fast Latent Factorization of Tensors
with PID-based Optimizer
ABSTRACT: Water quality data can supply a substantial decision support for water
resources utilization and pollution prevention. However, there are numerous
missing values in water quality data due to inescapable factors like sensor
failure, thereby leading to biased results for hydrological analysis and failing
to support environmental governance decisions accurately. A Latent Factorization
of Tensors (LFT) with Stochastic Gradient Descent (SGD) proves to be an
efficient imputation method. However, a standard SGD-based LFT model commonly
suffers from slow convergence, which impairs its efficiency. To tackle this
issue, this paper proposes a Fast Latent Factorization of Tensors (FLFT) model.
It constructs an adjusted instance error into SGD by leveraging a nonlinear
PID controller to incorporate the past, current, and future information of the
prediction error, thereby improving the convergence rate. Compared with
state-of-the-art models on real-world datasets, the experimental results indicate
that the FLFT model achieves a better convergence rate and higher accuracy.
| no_new_dataset | 0.94743 |
2503.07000 | Zhaojie Zeng | Zhaojie Zeng, Yuesong Wang, Lili Ju, Tao Guan | Frequency-Aware Density Control via Reparameterization for High-Quality
Rendering of 3D Gaussian Splatting | Accepted to AAAI2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | By adaptively controlling the density and generating more Gaussians in
regions with high-frequency information, 3D Gaussian Splatting (3DGS) can
better represent scene details. From the signal processing perspective,
representing details usually needs more Gaussians with relatively smaller
scales. However, 3DGS currently lacks an explicit constraint linking the
density and scale of 3D Gaussians across the domain, leading to 3DGS using
improper-scale Gaussians to express frequency information, resulting in the
loss of accuracy. In this paper, we propose to establish a direct relation
between density and scale through the reparameterization of the scaling
parameters and ensure the consistency between them via explicit constraints
(i.e., density responds well to changes in frequency). Furthermore, we develop
a frequency-aware density control strategy, consisting of densification and
deletion, to improve representation quality with fewer Gaussians. A dynamic
threshold encourages densification in high-frequency regions, while a
scale-based filter deletes Gaussians with improper scale. Experimental results
on various datasets demonstrate that our method outperforms existing
state-of-the-art methods quantitatively and qualitatively.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:30:45 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zeng",
"Zhaojie",
""
],
[
"Wang",
"Yuesong",
""
],
[
"Ju",
"Lili",
""
],
[
"Guan",
"Tao",
""
]
]
| TITLE: Frequency-Aware Density Control via Reparameterization for High-Quality
Rendering of 3D Gaussian Splatting
ABSTRACT: By adaptively controlling the density and generating more Gaussians in
regions with high-frequency information, 3D Gaussian Splatting (3DGS) can
better represent scene details. From the signal processing perspective,
representing details usually needs more Gaussians with relatively smaller
scales. However, 3DGS currently lacks an explicit constraint linking the
density and scale of 3D Gaussians across the domain, leading to 3DGS using
improper-scale Gaussians to express frequency information, resulting in the
loss of accuracy. In this paper, we propose to establish a direct relation
between density and scale through the reparameterization of the scaling
parameters and ensure the consistency between them via explicit constraints
(i.e., density responds well to changes in frequency). Furthermore, we develop
a frequency-aware density control strategy, consisting of densification and
deletion, to improve representation quality with fewer Gaussians. A dynamic
threshold encourages densification in high-frequency regions, while a
scale-based filter deletes Gaussians with improper scale. Experimental results
on various datasets demonstrate that our method outperforms existing
state-of-the-art methods quantitatively and qualitatively.
| no_new_dataset | 0.949809 |
2503.07002 | Zongqing Lu | Jiazheng Liu, Sipeng Zheng, B\"orje F. Karlsson, and Zongqing Lu | Taking Notes Brings Focus? Towards Multi-Turn Multimodal Dialogue
Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs), built on large-scale pre-trained
vision towers and language models, have shown great capabilities in multimodal
understanding. However, most existing MLLMs are trained on single-turn vision
question-answering tasks, which do not accurately reflect real-world human
conversations. In this paper, we introduce MMDiag, a multi-turn multimodal
dialogue dataset. This dataset is collaboratively generated through
deliberately designed rules and GPT assistance, featuring strong correlations
between questions, between questions and images, and among different image
regions; thus aligning more closely with real-world scenarios. MMDiag serves as
a strong benchmark for multi-turn multimodal dialogue learning and brings more
challenges to the grounding and reasoning capabilities of MLLMs. Further,
inspired by human vision processing, we present DiagNote, an MLLM equipped with
multimodal grounding and reasoning capabilities. DiagNote consists of two
modules (Deliberate and Gaze) interacting with each other to perform
Chain-of-Thought and annotations respectively, throughout multi-turn dialogues.
We empirically demonstrate the advantages of DiagNote in both grounding and
jointly processing and reasoning with vision and language information over
existing MLLMs.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:32:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Jiazheng",
""
],
[
"Zheng",
"Sipeng",
""
],
[
"Karlsson",
"Börje F.",
""
],
[
"Lu",
"Zongqing",
""
]
]
| TITLE: Taking Notes Brings Focus? Towards Multi-Turn Multimodal Dialogue
Learning
ABSTRACT: Multimodal large language models (MLLMs), built on large-scale pre-trained
vision towers and language models, have shown great capabilities in multimodal
understanding. However, most existing MLLMs are trained on single-turn vision
question-answering tasks, which do not accurately reflect real-world human
conversations. In this paper, we introduce MMDiag, a multi-turn multimodal
dialogue dataset. This dataset is collaboratively generated through
deliberately designed rules and GPT assistance, featuring strong correlations
between questions, between questions and images, and among different image
regions; thus aligning more closely with real-world scenarios. MMDiag serves as
a strong benchmark for multi-turn multimodal dialogue learning and brings more
challenges to the grounding and reasoning capabilities of MLLMs. Further,
inspired by human vision processing, we present DiagNote, an MLLM equipped with
multimodal grounding and reasoning capabilities. DiagNote consists of two
modules (Deliberate and Gaze) interacting with each other to perform
Chain-of-Thought and annotations respectively, throughout multi-turn dialogues.
We empirically demonstrate the advantages of DiagNote in both grounding and
jointly processing and reasoning with vision and language information over
existing MLLMs.
| new_dataset | 0.957437 |
2503.07008 | Sania Zahan | Sania Zahan, Ghulam Mubashar Hassan, Ajmal Mian | SDFA: Structure Aware Discriminative Feature Aggregation for Efficient
Human Fall Detection in Video | Published IEEE Transactions on Industrial Informatics | in IEEE Transactions on Industrial Informatics, vol. 19, no. 8,
pp. 8713-8721, Aug. 2023 | 10.1109/TII.2022.3221208 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Older people are susceptible to falls due to instability in posture and
deteriorating health. Immediate access to medical support can greatly reduce
repercussions. Hence, there is an increasing interest in automated fall
detection, often incorporated into a smart healthcare system to provide better
monitoring. Existing systems focus on wearable devices, which are inconvenient,
or video monitoring, which raises privacy concerns. Moreover, these systems provide
a limited perspective of their generalization ability as they are tested on
datasets containing few activities that have wide disparity in the action space
and are easy to differentiate. Complex daily life scenarios pose much greater
challenges with activities that overlap in action spaces due to similar posture
or motion. To overcome these limitations, we propose a fall detection model,
coined SDFA, based on human skeletons extracted from low-resolution videos. The
use of skeleton data ensures privacy, and low-resolution videos ensure low
hardware and computational cost. Our model captures discriminative structural
displacements and motion trends using unified joint and motion features
projected onto a shared high dimensional space. Particularly, the use of
separable convolution combined with a powerful GCN architecture provides
improved performance. Extensive experiments on five large-scale datasets with a
wide range of evaluation settings show that our model achieves competitive
performance with extremely low computational complexity and runs faster than
existing models.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:46:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zahan",
"Sania",
""
],
[
"Hassan",
"Ghulam Mubashar",
""
],
[
"Mian",
"Ajmal",
""
]
]
| TITLE: SDFA: Structure Aware Discriminative Feature Aggregation for Efficient
Human Fall Detection in Video
ABSTRACT: Older people are susceptible to falls due to instability in posture and
deteriorating health. Immediate access to medical support can greatly reduce
repercussions. Hence, there is an increasing interest in automated fall
detection, often incorporated into a smart healthcare system to provide better
monitoring. Existing systems focus on wearable devices, which are inconvenient,
or video monitoring, which raises privacy concerns. Moreover, these systems provide
a limited perspective of their generalization ability as they are tested on
datasets containing few activities that have wide disparity in the action space
and are easy to differentiate. Complex daily life scenarios pose much greater
challenges with activities that overlap in action spaces due to similar posture
or motion. To overcome these limitations, we propose a fall detection model,
coined SDFA, based on human skeletons extracted from low-resolution videos. The
use of skeleton data ensures privacy, and low-resolution videos ensure low
hardware and computational cost. Our model captures discriminative structural
displacements and motion trends using unified joint and motion features
projected onto a shared high dimensional space. Particularly, the use of
separable convolution combined with a powerful GCN architecture provides
improved performance. Extensive experiments on five large-scale datasets with a
wide range of evaluation settings show that our model achieves competitive
performance with extremely low computational complexity and runs faster than
existing models.
| no_new_dataset | 0.949106 |
2503.07017 | Yuchen Cui | Haozhuo Li, Yuchen Cui, Dorsa Sadigh | How to Train Your Robots? The Impact of Demonstration Modality on
Imitation Learning | 8 pages, ICRA | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Imitation learning is a promising approach for learning robot policies with
user-provided data. The way demonstrations are provided, i.e., demonstration
modality, influences the quality of the data. While existing research shows
that kinesthetic teaching (physically guiding the robot) is preferred by users
for the intuitiveness and ease of use, the majority of existing manipulation
datasets were collected through teleoperation via a VR controller or
spacemouse. In this work, we investigate how different demonstration modalities
impact downstream learning performance as well as user experience.
Specifically, we compare low-cost demonstration modalities including
kinesthetic teaching, teleoperation with a VR controller, and teleoperation
with a spacemouse controller. We experiment with three table-top manipulation
tasks with different motion constraints. We evaluate and compare imitation
learning performance using data from different demonstration modalities, and
collected subjective feedback on user experience. Our results show that
kinesthetic teaching is rated the most intuitive for controlling the robot and
provides the cleanest data for the best downstream learning performance. However, it is
not preferred for large-scale data collection due to the physical
load. Based on this insight, we propose a simple data collection scheme that
relies on a small number of kinesthetic demonstrations mixed with data
collected through teleoperation to achieve the best overall learning
performance while maintaining low data-collection effort.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:57:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Haozhuo",
""
],
[
"Cui",
"Yuchen",
""
],
[
"Sadigh",
"Dorsa",
""
]
]
| TITLE: How to Train Your Robots? The Impact of Demonstration Modality on
Imitation Learning
ABSTRACT: Imitation learning is a promising approach for learning robot policies with
user-provided data. The way demonstrations are provided, i.e., demonstration
modality, influences the quality of the data. While existing research shows
that kinesthetic teaching (physically guiding the robot) is preferred by users
for the intuitiveness and ease of use, the majority of existing manipulation
datasets were collected through teleoperation via a VR controller or
spacemouse. In this work, we investigate how different demonstration modalities
impact downstream learning performance as well as user experience.
Specifically, we compare low-cost demonstration modalities including
kinesthetic teaching, teleoperation with a VR controller, and teleoperation
with a spacemouse controller. We experiment with three table-top manipulation
tasks with different motion constraints. We evaluate and compare imitation
learning performance using data from different demonstration modalities, and
collected subjective feedback on user experience. Our results show that
kinesthetic teaching is rated the most intuitive for controlling the robot and
provides the cleanest data for the best downstream learning performance. However, it is
not preferred for large-scale data collection due to the physical
load. Based on this insight, we propose a simple data collection scheme that
relies on a small number of kinesthetic demonstrations mixed with data
collected through teleoperation to achieve the best overall learning
performance while maintaining low data-collection effort.
| no_new_dataset | 0.949856 |
2503.07018 | Xintong Li | Xintong Li, Jalend Bantupalli, Ria Dharmani, Yuwei Zhang, Jingbo Shang | Toward Multi-Session Personalized Conversation: A Large-Scale Dataset
and Hierarchical Tree Framework for Implicit Reasoning | Preprint | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | There has been a surge in the use of large language models (LLM)
conversational agents to generate responses based on long-term history from
multiple sessions. However, existing long-term open-domain dialogue datasets
lack complex, real-world personalization and fail to capture implicit
reasoning, where relevant information is embedded in subtle, syntactic, or
semantically distant connections rather than explicit statements. In such
cases, traditional retrieval methods fail to capture relevant context, and
long-context modeling also becomes inefficient due to numerous complicated
persona-related details. To address this gap, we introduce ImplexConv, a
large-scale long-term dataset with 2,500 examples, each containing
approximately 100 conversation sessions, designed to study implicit reasoning
in personalized dialogues. Additionally, we propose TaciTree, a novel
hierarchical tree framework that structures conversation history into multiple
levels of summarization. Instead of brute-force searching all data, TaciTree
enables an efficient, level-based retrieval process where models refine their
search by progressively selecting relevant details. Our experiments demonstrate
that TaciTree significantly improves the ability of LLMs to reason over
long-term conversations with implicit contextual dependencies.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 07:59:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Xintong",
""
],
[
"Bantupalli",
"Jalend",
""
],
[
"Dharmani",
"Ria",
""
],
[
"Zhang",
"Yuwei",
""
],
[
"Shang",
"Jingbo",
""
]
]
| TITLE: Toward Multi-Session Personalized Conversation: A Large-Scale Dataset
and Hierarchical Tree Framework for Implicit Reasoning
ABSTRACT: There has been a surge in the use of large language models (LLM)
conversational agents to generate responses based on long-term history from
multiple sessions. However, existing long-term open-domain dialogue datasets
lack complex, real-world personalization and fail to capture implicit
reasoning, where relevant information is embedded in subtle, syntactic, or
semantically distant connections rather than explicit statements. In such
cases, traditional retrieval methods fail to capture relevant context, and
long-context modeling also becomes inefficient due to numerous complicated
persona-related details. To address this gap, we introduce ImplexConv, a
large-scale long-term dataset with 2,500 examples, each containing
approximately 100 conversation sessions, designed to study implicit reasoning
in personalized dialogues. Additionally, we propose TaciTree, a novel
hierarchical tree framework that structures conversation history into multiple
levels of summarization. Instead of brute-force searching all data, TaciTree
enables an efficient, level-based retrieval process where models refine their
search by progressively selecting relevant details. Our experiments demonstrate
that TaciTree significantly improves the ability of LLMs to reason over
long-term conversations with implicit contextual dependencies.
| new_dataset | 0.956594 |
2503.07019 | Keyu Du | Keyu Du, Hao Xu, Haipeng Li, Hong Qu, Chi-Wing Fu, Shuaicheng Liu | HybridReg: Robust 3D Point Cloud Registration with Hybrid Motions | 2025, Association for the Advancement of Artificial Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene-level point cloud registration is very challenging when considering
dynamic foregrounds. Existing indoor datasets mostly assume rigid motions, so
the trained models cannot robustly handle scenes with non-rigid motions. On the
other hand, non-rigid datasets are mainly object-level, so the trained models
cannot generalize well to complex scenes. This paper presents HybridReg, a new
approach to 3D point cloud registration, learning uncertainty mask to account
for hybrid motions: rigid for backgrounds and non-rigid/rigid for
instance-level foregrounds. First, we build a scene-level 3D registration
dataset, namely HybridMatch, designed specifically with strategies to arrange
diverse deforming foregrounds in a controllable manner. Second, we account for
different motion types and formulate a mask-learning module to alleviate the
interference of deforming outliers. Third, we exploit a simple yet effective
negative log-likelihood loss to adopt uncertainty to guide the feature
extraction and correlation computation. To the best of our knowledge, HybridReg is the
first work that exploits hybrid motions for robust point cloud registration.
Extensive experiments show HybridReg's strengths, leading it to achieve
state-of-the-art performance on both widely-used indoor and outdoor datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:01:32 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Du",
"Keyu",
""
],
[
"Xu",
"Hao",
""
],
[
"Li",
"Haipeng",
""
],
[
"Qu",
"Hong",
""
],
[
"Fu",
"Chi-Wing",
""
],
[
"Liu",
"Shuaicheng",
""
]
]
| TITLE: HybridReg: Robust 3D Point Cloud Registration with Hybrid Motions
ABSTRACT: Scene-level point cloud registration is very challenging when considering
dynamic foregrounds. Existing indoor datasets mostly assume rigid motions, so
the trained models cannot robustly handle scenes with non-rigid motions. On the
other hand, non-rigid datasets are mainly object-level, so the trained models
cannot generalize well to complex scenes. This paper presents HybridReg, a new
approach to 3D point cloud registration, learning uncertainty mask to account
for hybrid motions: rigid for backgrounds and non-rigid/rigid for
instance-level foregrounds. First, we build a scene-level 3D registration
dataset, namely HybridMatch, designed specifically with strategies to arrange
diverse deforming foregrounds in a controllable manner. Second, we account for
different motion types and formulate a mask-learning module to alleviate the
interference of deforming outliers. Third, we exploit a simple yet effective
negative log-likelihood loss to adopt uncertainty to guide the feature
extraction and correlation computation. To the best of our knowledge, HybridReg is the
first work that exploits hybrid motions for robust point cloud registration.
Extensive experiments show HybridReg's strengths, leading it to achieve
state-of-the-art performance on both widely-used indoor and outdoor datasets.
| new_dataset | 0.867092 |
2503.07020 | Yuting Hu | Yuting Hu, Chenhui Xu, Ruiyang Qin, Dancheng Liu, Amir Nassereldine,
Yiyu Shi, Jinjun Xiong | Combating Partial Perception Deficit in Autonomous Driving with
Multimodal LLM Commonsense | null | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partial perception deficits can compromise autonomous vehicle safety by
disrupting environmental understanding. Current protocols typically respond
with immediate stops or minimal-risk maneuvers, worsening traffic flow and
lacking flexibility for rare driving scenarios. In this paper, we propose
LLM-RCO, a framework leveraging large language models to integrate human-like
driving commonsense into autonomous systems facing perception deficits. LLM-RCO
features four key modules: hazard inference, short-term motion planner, action
condition verifier, and safety constraint generator. These modules interact
with the dynamic driving environment, enabling proactive and context-aware
control actions to override the original control policy of autonomous agents.
To improve safety in such challenging conditions, we construct DriveLM-Deficit,
a dataset of 53,895 video clips featuring deficits of safety-critical objects,
complete with annotations for LLM-based hazard inference and motion planning
fine-tuning. Extensive experiments in adverse driving conditions with the CARLA
simulator demonstrate that systems equipped with LLM-RCO significantly improve
driving performance, highlighting its potential for enhancing autonomous
driving resilience against adverse perception deficits. Our results also show
that LLMs fine-tuned with DriveLM-Deficit can enable more proactive movements
instead of conservative stops in the context of perception deficits.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:01:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hu",
"Yuting",
""
],
[
"Xu",
"Chenhui",
""
],
[
"Qin",
"Ruiyang",
""
],
[
"Liu",
"Dancheng",
""
],
[
"Nassereldine",
"Amir",
""
],
[
"Shi",
"Yiyu",
""
],
[
"Xiong",
"Jinjun",
""
]
]
| TITLE: Combating Partial Perception Deficit in Autonomous Driving with
Multimodal LLM Commonsense
ABSTRACT: Partial perception deficits can compromise autonomous vehicle safety by
disrupting environmental understanding. Current protocols typically respond
with immediate stops or minimal-risk maneuvers, worsening traffic flow and
lacking flexibility for rare driving scenarios. In this paper, we propose
LLM-RCO, a framework leveraging large language models to integrate human-like
driving commonsense into autonomous systems facing perception deficits. LLM-RCO
features four key modules: hazard inference, short-term motion planner, action
condition verifier, and safety constraint generator. These modules interact
with the dynamic driving environment, enabling proactive and context-aware
control actions to override the original control policy of autonomous agents.
To improve safety in such challenging conditions, we construct DriveLM-Deficit,
a dataset of 53,895 video clips featuring deficits of safety-critical objects,
complete with annotations for LLM-based hazard inference and motion planning
fine-tuning. Extensive experiments in adverse driving conditions with the CARLA
simulator demonstrate that systems equipped with LLM-RCO significantly improve
driving performance, highlighting its potential for enhancing autonomous
driving resilience against adverse perception deficits. Our results also show
that LLMs fine-tuned with DriveLM-Deficit can enable more proactive movements
instead of conservative stops in the context of perception deficits.
| new_dataset | 0.958382 |
2503.07025 | Sriram Vasudevan | Sriram Vasudevan | Weak Supervision for Improved Precision in Search Systems | Accepted to the AAAI 2025 Workshop on Computational Jobs Marketplace | null | null | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Labeled datasets are essential for modern search engines, which increasingly
rely on supervised learning methods like Learning to Rank and massive amounts
of data to power deep learning models. However, creating these datasets is both
time-consuming and costly, leading to the common use of user click and activity
logs as proxies for relevance. In this paper, we present a weak supervision
approach to infer the quality of query-document pairs and apply it within a
Learning to Rank framework to enhance the precision of a large-scale search
system.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:06:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Vasudevan",
"Sriram",
""
]
]
| TITLE: Weak Supervision for Improved Precision in Search Systems
ABSTRACT: Labeled datasets are essential for modern search engines, which increasingly
rely on supervised learning methods like Learning to Rank and massive amounts
of data to power deep learning models. However, creating these datasets is both
time-consuming and costly, leading to the common use of user click and activity
logs as proxies for relevance. In this paper, we present a weak supervision
approach to infer the quality of query-document pairs and apply it within a
Learning to Rank framework to enhance the precision of a large-scale search
system.
| no_new_dataset | 0.951369 |
2503.07029 | Dong-Hee Paek | Dong-Hee Paek, Seung-Hyun Kong | Availability-aware Sensor Fusion via Unified Canonical Space for 4D
Radar, LiDAR, and Camera | Arxiv preprint | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensor fusion of camera, LiDAR, and 4-dimensional (4D) Radar has brought a
significant performance improvement in autonomous driving (AD). However, there
still exist fundamental challenges: deeply coupled fusion methods assume
continuous sensor availability, making them vulnerable to sensor degradation
and failure, whereas sensor-wise cross-attention fusion methods struggle with
computational cost and unified feature representation. This paper presents
availability-aware sensor fusion (ASF), a novel method that employs unified
canonical projection (UCP) to enable consistency in all sensor features for
fusion and cross-attention across sensors along patches (CASAP) to enhance
robustness of sensor fusion against sensor degradation and failure. As a
result, the proposed ASF shows a superior object detection performance to the
existing state-of-the-art fusion methods under various weather and sensor
degradation (or failure) conditions; Extensive experiments on the K-Radar
dataset demonstrate that ASF achieves improvements of 9.7% in AP BEV (87.2%)
and 20.1% in AP 3D (73.6%) in object detection at IoU=0.5, while requiring a
low computational cost. The code will be available at
https://github.com/kaist-avelab/K-Radar.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:10:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Paek",
"Dong-Hee",
""
],
[
"Kong",
"Seung-Hyun",
""
]
]
| TITLE: Availability-aware Sensor Fusion via Unified Canonical Space for 4D
Radar, LiDAR, and Camera
ABSTRACT: Sensor fusion of camera, LiDAR, and 4-dimensional (4D) Radar has brought a
significant performance improvement in autonomous driving (AD). However, there
still exist fundamental challenges: deeply coupled fusion methods assume
continuous sensor availability, making them vulnerable to sensor degradation
and failure, whereas sensor-wise cross-attention fusion methods struggle with
computational cost and unified feature representation. This paper presents
availability-aware sensor fusion (ASF), a novel method that employs unified
canonical projection (UCP) to enable consistency in all sensor features for
fusion and cross-attention across sensors along patches (CASAP) to enhance
robustness of sensor fusion against sensor degradation and failure. As a
result, the proposed ASF shows a superior object detection performance to the
existing state-of-the-art fusion methods under various weather and sensor
degradation (or failure) conditions; Extensive experiments on the K-Radar
dataset demonstrate that ASF achieves improvements of 9.7% in AP BEV (87.2%)
and 20.1% in AP 3D (73.6%) in object detection at IoU=0.5, while requiring a
low computational cost. The code will be available at
https://github.com/kaist-avelab/K-Radar.
| no_new_dataset | 0.947769 |
2503.07032 | Jie Xu | Zhi Qin, Qianhui Gui, Mouxiao Bian, Rui Wang, Hong Ge, Dandan Yao,
Ziying Sun, Yuan Zhao, Yu Zhang, Hui Shi, Dongdong Wang, Chenxin Song,
Shenghong Ju, Lihao Liu, Junjun He, Jie Xu, Yuan-Cheng Wang | Multimodal Human-AI Synergy for Medical Imaging Quality Control: A
Hybrid Intelligence Framework with Adaptive Dataset Curation and Closed-Loop
Evaluation | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical imaging quality control (QC) is essential for accurate diagnosis, yet
traditional QC methods remain labor-intensive and subjective. To address this
challenge, in this study, we establish a standardized dataset and evaluation
framework for medical imaging QC, systematically assessing large language
models (LLMs) in image quality assessment and report standardization.
Specifically, we first constructed and anonymized a dataset of 161 chest X-ray
(CXR) radiographs and 219 CT reports for evaluation. Then, multiple LLMs,
including Gemini 2.0-Flash, GPT-4o, and DeepSeek-R1, were evaluated based on
recall, precision, and F1 score to detect technical errors and inconsistencies.
Experimental results show that Gemini 2.0-Flash achieved a Macro F1 score of 90
in CXR tasks, demonstrating strong generalization but limited fine-grained
performance. DeepSeek-R1 excelled in CT report auditing with a 62.23% recall
rate, outperforming other models. However, its distilled variants performed
poorly, while InternLM2.5-7B-chat exhibited the highest additional discovery
rate, indicating broader but less precise error detection. These findings
highlight the potential of LLMs in medical imaging QC, with DeepSeek-R1 and
Gemini 2.0-Flash demonstrating superior performance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:16:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qin",
"Zhi",
""
],
[
"Gui",
"Qianhui",
""
],
[
"Bian",
"Mouxiao",
""
],
[
"Wang",
"Rui",
""
],
[
"Ge",
"Hong",
""
],
[
"Yao",
"Dandan",
""
],
[
"Sun",
"Ziying",
""
],
[
"Zhao",
"Yuan",
""
],
[
"Zhang",
"Yu",
""
],
[
"Shi",
"Hui",
""
],
[
"Wang",
"Dongdong",
""
],
[
"Song",
"Chenxin",
""
],
[
"Ju",
"Shenghong",
""
],
[
"Liu",
"Lihao",
""
],
[
"He",
"Junjun",
""
],
[
"Xu",
"Jie",
""
],
[
"Wang",
"Yuan-Cheng",
""
]
]
| TITLE: Multimodal Human-AI Synergy for Medical Imaging Quality Control: A
Hybrid Intelligence Framework with Adaptive Dataset Curation and Closed-Loop
Evaluation
ABSTRACT: Medical imaging quality control (QC) is essential for accurate diagnosis, yet
traditional QC methods remain labor-intensive and subjective. To address this
challenge, in this study, we establish a standardized dataset and evaluation
framework for medical imaging QC, systematically assessing large language
models (LLMs) in image quality assessment and report standardization.
Specifically, we first constructed and anonymized a dataset of 161 chest X-ray
(CXR) radiographs and 219 CT reports for evaluation. Then, multiple LLMs,
including Gemini 2.0-Flash, GPT-4o, and DeepSeek-R1, were evaluated based on
recall, precision, and F1 score to detect technical errors and inconsistencies.
Experimental results show that Gemini 2.0-Flash achieved a Macro F1 score of 90
in CXR tasks, demonstrating strong generalization but limited fine-grained
performance. DeepSeek-R1 excelled in CT report auditing with a 62.23% recall
rate, outperforming other models. However, its distilled variants performed
poorly, while InternLM2.5-7B-chat exhibited the highest additional discovery
rate, indicating broader but less precise error detection. These findings
highlight the potential of LLMs in medical imaging QC, with DeepSeek-R1 and
Gemini 2.0-Flash demonstrating superior performance.
| new_dataset | 0.956917 |
2503.07036 | Nardine Basta | Nardine Basta, Conor Atkins, and Dali Kaafar | Bot Wars Evolved: Orchestrating Competing LLMs in a Counterstrike
Against Phone Scams | null | Pacific-Asia Conference on Knowledge Discovery and Data Mining,
PAKDD 2025 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present "Bot Wars," a framework using Large Language Models (LLMs)
scam-baiters to counter phone scams through simulated adversarial dialogues.
Our key contribution is a formal foundation for strategy emergence through
chain-of-thought reasoning without explicit optimization. Through a novel
two-layer prompt architecture, our framework enables LLMs to craft
demographically authentic victim personas while maintaining strategic
coherence. We evaluate our approach using a dataset of 3,200 scam dialogues
validated against 179 hours of human scam-baiting interactions, demonstrating
its effectiveness in capturing complex adversarial dynamics. Our systematic
evaluation through cognitive, quantitative, and content-specific metrics shows
that GPT-4 excels in dialogue naturalness and persona authenticity, while
Deepseek demonstrates superior engagement sustainability.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:21:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Basta",
"Nardine",
""
],
[
"Atkins",
"Conor",
""
],
[
"Kaafar",
"Dali",
""
]
]
| TITLE: Bot Wars Evolved: Orchestrating Competing LLMs in a Counterstrike
Against Phone Scams
ABSTRACT: We present "Bot Wars," a framework using Large Language Models (LLMs)
scam-baiters to counter phone scams through simulated adversarial dialogues.
Our key contribution is a formal foundation for strategy emergence through
chain-of-thought reasoning without explicit optimization. Through a novel
two-layer prompt architecture, our framework enables LLMs to craft
demographically authentic victim personas while maintaining strategic
coherence. We evaluate our approach using a dataset of 3,200 scam dialogues
validated against 179 hours of human scam-baiting interactions, demonstrating
its effectiveness in capturing complex adversarial dynamics. Our systematic
evaluation through cognitive, quantitative, and content-specific metrics shows
that GPT-4 excels in dialogue naturalness and persona authenticity, while
Deepseek demonstrates superior engagement sustainability.
| new_dataset | 0.971645 |
2503.07047 | Yongle Zhang | Yongle Zhang, Yimin Liu, Qiang Wu | Recovering Partially Corrupted Major Objects through Tri-modality Based
Image Completion | 17 pages, 6 page supplementary | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models have become widely adopted in image completion tasks, with
text prompts commonly employed to ensure semantic coherence by providing
high-level guidance. However, a persistent challenge arises when an object is
partially obscured in the damaged region, yet its remaining parts are still
visible in the background. While text prompts offer semantic direction, they
often fail to precisely recover fine-grained structural details, such as the
object's overall posture, ensuring alignment with the visible object
information in the background. This limitation stems from the inability of text
prompts to provide pixel-level specificity. To address this, we propose
supplementing text-based guidance with a novel visual aid: a casual sketch,
which can be roughly drawn by anyone based on visible object parts. This sketch
supplies critical structural cues, enabling the generative model to produce an
object structure that seamlessly integrates with the existing background. We
introduce the Visual Sketch Self-Aware (VSSA) model, which integrates the
casual sketch into each iterative step of the diffusion process, offering
distinct advantages for partially corrupted scenarios. By blending
sketch-derived features with those of the corrupted image, and leveraging text
prompt guidance, the VSSA assists the diffusion model in generating images that
preserve both the intended object semantics and structural consistency across
the restored objects and original regions. To support this research, we created
two datasets, CUB-sketch and MSCOCO-sketch, each combining images, sketches,
and text. Extensive qualitative and quantitative experiments demonstrate that
our approach outperforms several state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:34:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Yongle",
""
],
[
"Liu",
"Yimin",
""
],
[
"Wu",
"Qiang",
""
]
]
| TITLE: Recovering Partially Corrupted Major Objects through Tri-modality Based
Image Completion
ABSTRACT: Diffusion models have become widely adopted in image completion tasks, with
text prompts commonly employed to ensure semantic coherence by providing
high-level guidance. However, a persistent challenge arises when an object is
partially obscured in the damaged region, yet its remaining parts are still
visible in the background. While text prompts offer semantic direction, they
often fail to precisely recover fine-grained structural details, such as the
object's overall posture, ensuring alignment with the visible object
information in the background. This limitation stems from the inability of text
prompts to provide pixel-level specificity. To address this, we propose
supplementing text-based guidance with a novel visual aid: a casual sketch,
which can be roughly drawn by anyone based on visible object parts. This sketch
supplies critical structural cues, enabling the generative model to produce an
object structure that seamlessly integrates with the existing background. We
introduce the Visual Sketch Self-Aware (VSSA) model, which integrates the
casual sketch into each iterative step of the diffusion process, offering
distinct advantages for partially corrupted scenarios. By blending
sketch-derived features with those of the corrupted image, and leveraging text
prompt guidance, the VSSA assists the diffusion model in generating images that
preserve both the intended object semantics and structural consistency across
the restored objects and original regions. To support this research, we created
two datasets, CUB-sketch and MSCOCO-sketch, each combining images, sketches,
and text. Extensive qualitative and quantitative experiments demonstrate that
our approach outperforms several state-of-the-art methods.
| no_new_dataset | 0.920932 |
2503.07066 | Xiaotian Han | Xiaotian Han, Tianlong Chen, Kaixiong Zhou, Zhimeng Jiang, Zhangyang
Wang, Xia Hu | You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at
Inference Time | CPAL2025(Oral) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are prone to various bias issues, jeopardizing their
applications for high-stake decision-making. Existing fairness methods
typically offer a fixed accuracy-fairness trade-off, since the weight of the
well-trained model is a fixed point (fairness-optimum) in the weight space.
Nevertheless, more flexible accuracy-fairness trade-offs at inference time are
practically desired since: 1) stakes of the same downstream task can vary for
different individuals, and 2) different regions have diverse laws or
regularization for fairness. If using the previous fairness methods, we have to
train multiple models, each offering a specific level of accuracy-fairness
trade-off. This is often computationally expensive, time-consuming, and
difficult to deploy, making it less practical for real-world applications. To
address this problem, we propose You Only Debias Once (YODO) to achieve in-situ
flexible accuracy-fairness trade-offs at inference time, using a single model
that is trained only once. Instead of pursuing one individual fixed point
(fairness-optimum) in the weight space, we aim to find a "line" in the weight
space that connects the accuracy-optimum and fairness-optimum points using a
single model. Points (models) on this line implement varying levels of
accuracy-fairness trade-offs. At inference time, by manually selecting the
specific position of the learned "line", our proposed method can achieve
arbitrary accuracy-fairness trade-offs for different end-users and scenarios.
Experimental results on tabular and image datasets show that YODO achieves
flexible trade-offs between model accuracy and fairness, at ultra-low
overheads. For example, if we need $100$ levels of trade-off on the \acse
dataset, YODO takes $3.53$ seconds while training $100$ fixed models consumes
$425$ seconds. The code is available at https://github.com/ahxt/yodo.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:50:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Han",
"Xiaotian",
""
],
[
"Chen",
"Tianlong",
""
],
[
"Zhou",
"Kaixiong",
""
],
[
"Jiang",
"Zhimeng",
""
],
[
"Wang",
"Zhangyang",
""
],
[
"Hu",
"Xia",
""
]
]
| TITLE: You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at
Inference Time
ABSTRACT: Deep neural networks are prone to various bias issues, jeopardizing their
applications for high-stake decision-making. Existing fairness methods
typically offer a fixed accuracy-fairness trade-off, since the weight of the
well-trained model is a fixed point (fairness-optimum) in the weight space.
Nevertheless, more flexible accuracy-fairness trade-offs at inference time are
practically desired since: 1) stakes of the same downstream task can vary for
different individuals, and 2) different regions have diverse laws or
regularization for fairness. If using the previous fairness methods, we have to
train multiple models, each offering a specific level of accuracy-fairness
trade-off. This is often computationally expensive, time-consuming, and
difficult to deploy, making it less practical for real-world applications. To
address this problem, we propose You Only Debias Once (YODO) to achieve in-situ
flexible accuracy-fairness trade-offs at inference time, using a single model
that is trained only once. Instead of pursuing one individual fixed point
(fairness-optimum) in the weight space, we aim to find a "line" in the weight
space that connects the accuracy-optimum and fairness-optimum points using a
single model. Points (models) on this line implement varying levels of
accuracy-fairness trade-offs. At inference time, by manually selecting the
specific position of the learned "line", our proposed method can achieve
arbitrary accuracy-fairness trade-offs for different end-users and scenarios.
Experimental results on tabular and image datasets show that YODO achieves
flexible trade-offs between model accuracy and fairness, at ultra-low
overheads. For example, if we need $100$ levels of trade-off on the \acse
dataset, YODO takes $3.53$ seconds while training $100$ fixed models consumes
$425$ seconds. The code is available at https://github.com/ahxt/yodo.
| no_new_dataset | 0.950595 |
2503.07075 | Chuanming Wang | Chuanming Wang, Henming Mao, Huanhuan Zhang, Huiyuan Fu, Huadong Ma | XR-VLM: Cross-Relationship Modeling with Multi-part Prompts and Visual
Features for Fine-Grained Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-Language Models (VLMs) have demonstrated impressive performance on
various visual tasks, yet they still require adaptation on downstream tasks to
achieve optimal performance. Recently, various adaptation technologies have
been proposed, but we observe they often underperform in fine-grained visual
recognition, which requires models to capture subtle yet discriminative
features to distinguish similar sub-categories. Current adaptation methods
typically rely on an alignment-based prediction framework, i.e., the visual
feature is compared with each class prompt for similarity calculation as the
final prediction, which lacks class interaction during the forward pass.
Besides, learning a single uni-modal feature further restricts the model's
expressive capacity. Therefore, we propose a novel mechanism, XR-VLM, to
discover subtle differences by modeling cross-relationships, which specifically
excels in scenarios involving multiple features. Our method introduces a
unified multi-part visual feature extraction module designed to seamlessly
integrate with the diverse backbones inherent in VLMs. Additionally, we develop
a multi-part prompt learning module to capture multi-perspective descriptions
of sub-categories. To further enhance discriminative capability, we propose a
cross relationship modeling pattern that combines visual feature with all class
prompt features, enabling a deeper exploration of the relationships between
these two modalities. Extensive experiments have been conducted on various
fine-grained datasets, and the results demonstrate that our method achieves
significant improvements compared to current state-of-the-art approaches. Code
will be released.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 08:58:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Chuanming",
""
],
[
"Mao",
"Henming",
""
],
[
"Zhang",
"Huanhuan",
""
],
[
"Fu",
"Huiyuan",
""
],
[
"Ma",
"Huadong",
""
]
]
| TITLE: XR-VLM: Cross-Relationship Modeling with Multi-part Prompts and Visual
Features for Fine-Grained Recognition
ABSTRACT: Vision-Language Models (VLMs) have demonstrated impressive performance on
various visual tasks, yet they still require adaptation on downstream tasks to
achieve optimal performance. Recently, various adaptation technologies have
been proposed, but we observe they often underperform in fine-grained visual
recognition, which requires models to capture subtle yet discriminative
features to distinguish similar sub-categories. Current adaptation methods
typically rely on an alignment-based prediction framework, i.e., the visual
feature is compared with each class prompt for similarity calculation as the
final prediction, which lacks class interaction during the forward pass.
Besides, learning a single uni-modal feature further restricts the model's
expressive capacity. Therefore, we propose a novel mechanism, XR-VLM, to
discover subtle differences by modeling cross-relationships, which specifically
excels in scenarios involving multiple features. Our method introduces a
unified multi-part visual feature extraction module designed to seamlessly
integrate with the diverse backbones inherent in VLMs. Additionally, we develop
a multi-part prompt learning module to capture multi-perspective descriptions
of sub-categories. To further enhance discriminative capability, we propose a
cross relationship modeling pattern that combines visual feature with all class
prompt features, enabling a deeper exploration of the relationships between
these two modalities. Extensive experiments have been conducted on various
fine-grained datasets, and the results demonstrate that our method achieves
significant improvements compared to current state-of-the-art approaches. Code
will be released.
| no_new_dataset | 0.944638 |
2503.07078 | Kuo Hsuan Hung | Kuo-Hsuan Hung and Xugang Lu and Szu-Wei Fu and Huan-Hsin Tseng and
Hsin-Yi Lin and Chii-Wann Lin and Yu Tsao | Linguistic Knowledge Transfer Learning for Speech Enhancement | 11 pages, 6 figures | null | null | null | cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linguistic knowledge plays a crucial role in spoken language comprehension.
It provides essential semantic and syntactic context for speech perception in
noisy environments. However, most speech enhancement (SE) methods predominantly
rely on acoustic features to learn the mapping relationship between noisy and
clean speech, with limited exploration of linguistic integration. While
text-informed SE approaches have been investigated, they often require explicit
speech-text alignment or externally provided textual data, constraining their
practicality in real-world scenarios. Additionally, using text as input poses
challenges in aligning linguistic and acoustic representations due to their
inherent differences. In this study, we propose the Cross-Modality Knowledge
Transfer (CMKT) learning framework, which leverages pre-trained large language
models (LLMs) to infuse linguistic knowledge into SE models without requiring
text input or LLMs during inference. Furthermore, we introduce a misalignment
strategy to improve knowledge transfer. This strategy applies controlled
temporal shifts, encouraging the model to learn more robust representations.
Experimental evaluations demonstrate that CMKT consistently outperforms
baseline models across various SE architectures and LLM embeddings,
highlighting its adaptability to different configurations. Additionally,
results on Mandarin and English datasets confirm its effectiveness across
diverse linguistic conditions, further validating its robustness. Moreover,
CMKT remains effective even in scenarios without textual data, underscoring its
practicality for real-world applications. By bridging the gap between
linguistic and acoustic modalities, CMKT offers a scalable and innovative
solution for integrating linguistic knowledge into SE models, leading to
substantial improvements in both intelligibility and enhancement performance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:00:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hung",
"Kuo-Hsuan",
""
],
[
"Lu",
"Xugang",
""
],
[
"Fu",
"Szu-Wei",
""
],
[
"Tseng",
"Huan-Hsin",
""
],
[
"Lin",
"Hsin-Yi",
""
],
[
"Lin",
"Chii-Wann",
""
],
[
"Tsao",
"Yu",
""
]
]
| TITLE: Linguistic Knowledge Transfer Learning for Speech Enhancement
ABSTRACT: Linguistic knowledge plays a crucial role in spoken language comprehension.
It provides essential semantic and syntactic context for speech perception in
noisy environments. However, most speech enhancement (SE) methods predominantly
rely on acoustic features to learn the mapping relationship between noisy and
clean speech, with limited exploration of linguistic integration. While
text-informed SE approaches have been investigated, they often require explicit
speech-text alignment or externally provided textual data, constraining their
practicality in real-world scenarios. Additionally, using text as input poses
challenges in aligning linguistic and acoustic representations due to their
inherent differences. In this study, we propose the Cross-Modality Knowledge
Transfer (CMKT) learning framework, which leverages pre-trained large language
models (LLMs) to infuse linguistic knowledge into SE models without requiring
text input or LLMs during inference. Furthermore, we introduce a misalignment
strategy to improve knowledge transfer. This strategy applies controlled
temporal shifts, encouraging the model to learn more robust representations.
Experimental evaluations demonstrate that CMKT consistently outperforms
baseline models across various SE architectures and LLM embeddings,
highlighting its adaptability to different configurations. Additionally,
results on Mandarin and English datasets confirm its effectiveness across
diverse linguistic conditions, further validating its robustness. Moreover,
CMKT remains effective even in scenarios without textual data, underscoring its
practicality for real-world applications. By bridging the gap between
linguistic and acoustic modalities, CMKT offers a scalable and innovative
solution for integrating linguistic knowledge into SE models, leading to
substantial improvements in both intelligibility and enhancement performance.
| no_new_dataset | 0.943348 |
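To illustrate the misalignment strategy mentioned in the CMKT abstract above, the toy function below applies a controlled temporal shift to frame-aligned LLM embeddings before computing a cross-modal loss against SE features. The shift range, the mean-squared-error objective, and all names are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def misaligned_alignment_loss(se_feats, llm_feats, max_shift=3, rng=None):
    """Illustrative cross-modal loss with a controlled temporal shift.

    se_feats : (T, D) frame-level features from the SE model.
    llm_feats: (T, D) frame-aligned linguistic embeddings (assumed given).
    The shift amount and the MSE objective are assumptions, not the
    paper's exact knowledge-transfer loss.
    """
    rng = rng or np.random.default_rng()
    shift = int(rng.integers(-max_shift, max_shift + 1))
    target = np.roll(llm_feats, shift, axis=0)   # controlled temporal shift
    return float(np.mean((se_feats - target) ** 2)), shift

rng = np.random.default_rng(1)
loss, shift = misaligned_alignment_loss(
    rng.normal(size=(100, 32)), rng.normal(size=(100, 32)), rng=rng)
print(f"shift={shift}, loss={loss:.3f}")
```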
2503.07082 | Nikolaos Ioannis Bountos | Spyros Kondylatos, Nikolaos Ioannis Bountos, Dimitrios Michail, Xiao
Xiang Zhu, Gustau Camps-Valls, Ioannis Papoutsis | On the Generalization of Representation Uncertainty in Earth Observation | 18 pages | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advances in Computer Vision have introduced the concept of pretrained
representation uncertainty, enabling zero-shot uncertainty estimation. This
holds significant potential for Earth Observation (EO), where trustworthiness
is critical, yet the complexity of EO data poses challenges to
uncertainty-aware methods. In this work, we investigate the generalization of
representation uncertainty in EO, considering the domain's unique semantic
characteristics. We pretrain uncertainties on large EO datasets and propose an
evaluation framework to assess their zero-shot performance in multi-label
classification and segmentation EO tasks. Our findings reveal that, unlike
uncertainties pretrained on natural images, EO-pretraining exhibits strong
generalization across unseen EO domains, geographic locations, and target
granularities, while maintaining sensitivity to variations in ground sampling
distance. We demonstrate the practical utility of pretrained uncertainties
showcasing their alignment with task-specific uncertainties in downstream
tasks, their sensitivity to real-world EO image noise, and their ability to
generate spatial uncertainty estimates out-of-the-box. Initiating the
discussion on representation uncertainty in EO, our study provides insights
into its strengths and limitations, paving the way for future research in the
field. Code and weights are available at:
https://github.com/Orion-AI-Lab/EOUncertaintyGeneralization.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:04:50 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kondylatos",
"Spyros",
""
],
[
"Bountos",
"Nikolaos Ioannis",
""
],
[
"Michail",
"Dimitrios",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Camps-Valls",
"Gustau",
""
],
[
"Papoutsis",
"Ioannis",
""
]
]
| TITLE: On the Generalization of Representation Uncertainty in Earth Observation
ABSTRACT: Recent advances in Computer Vision have introduced the concept of pretrained
representation uncertainty, enabling zero-shot uncertainty estimation. This
holds significant potential for Earth Observation (EO), where trustworthiness
is critical, yet the complexity of EO data poses challenges to
uncertainty-aware methods. In this work, we investigate the generalization of
representation uncertainty in EO, considering the domain's unique semantic
characteristics. We pretrain uncertainties on large EO datasets and propose an
evaluation framework to assess their zero-shot performance in multi-label
classification and segmentation EO tasks. Our findings reveal that, unlike
uncertainties pretrained on natural images, EO-pretraining exhibits strong
generalization across unseen EO domains, geographic locations, and target
granularities, while maintaining sensitivity to variations in ground sampling
distance. We demonstrate the practical utility of pretrained uncertainties
showcasing their alignment with task-specific uncertainties in downstream
tasks, their sensitivity to real-world EO image noise, and their ability to
generate spatial uncertainty estimates out-of-the-box. Initiating the
discussion on representation uncertainty in EO, our study provides insights
into its strengths and limitations, paving the way for future research in the
field. Code and weights are available at:
https://github.com/Orion-AI-Lab/EOUncertaintyGeneralization.
| no_new_dataset | 0.946349 |
2503.07094 | Jie Xu | Xiaoyi Liang, Mouxiao Bian, Moxin Chen, Lihao Liu, Junjun He, Jie Xu,
Lin Li | A Novel Ophthalmic Benchmark for Evaluating Multimodal Large Language
Models with Fundus Photographs and OCT Images | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, large language models (LLMs) have demonstrated remarkable
potential across various medical applications. Building on this foundation,
multimodal large language models (MLLMs) integrate LLMs with visual models to
process diverse inputs, including clinical data and medical images. In
ophthalmology, LLMs have been explored for analyzing optical coherence
tomography (OCT) reports, assisting in disease classification, and even
predicting treatment outcomes. However, existing MLLM benchmarks often fail to
capture the complexities of real-world clinical practice, particularly in the
analysis of OCT images. Many suffer from limitations such as small sample
sizes, a lack of diverse OCT datasets, and insufficient expert validation.
These shortcomings hinder the accurate assessment of MLLMs' ability to
interpret OCT scans and their broader applicability in ophthalmology. Our
dataset, curated through rigorous quality control and expert annotation,
consists of 439 fundus images and 75 OCT images. Using a standardized API-based
framework, we assessed seven mainstream MLLMs and observed significant
variability in diagnostic accuracy across different diseases. While some models
performed well in diagnosing conditions such as diabetic retinopathy and
age-related macular degeneration, they struggled with others, including
choroidal neovascularization and myopia, highlighting inconsistencies in
performance and the need for further refinement. Our findings emphasize the
importance of developing clinically relevant benchmarks to provide a more
accurate assessment of MLLMs' capabilities. By refining these models and
expanding their scope, we can enhance their potential to transform ophthalmic
diagnosis and treatment.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:19:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liang",
"Xiaoyi",
""
],
[
"Bian",
"Mouxiao",
""
],
[
"Chen",
"Moxin",
""
],
[
"Liu",
"Lihao",
""
],
[
"He",
"Junjun",
""
],
[
"Xu",
"Jie",
""
],
[
"Li",
"Lin",
""
]
]
| TITLE: A Novel Ophthalmic Benchmark for Evaluating Multimodal Large Language
Models with Fundus Photographs and OCT Images
ABSTRACT: In recent years, large language models (LLMs) have demonstrated remarkable
potential across various medical applications. Building on this foundation,
multimodal large language models (MLLMs) integrate LLMs with visual models to
process diverse inputs, including clinical data and medical images. In
ophthalmology, LLMs have been explored for analyzing optical coherence
tomography (OCT) reports, assisting in disease classification, and even
predicting treatment outcomes. However, existing MLLM benchmarks often fail to
capture the complexities of real-world clinical practice, particularly in the
analysis of OCT images. Many suffer from limitations such as small sample
sizes, a lack of diverse OCT datasets, and insufficient expert validation.
These shortcomings hinder the accurate assessment of MLLMs' ability to
interpret OCT scans and their broader applicability in ophthalmology. Our
dataset, curated through rigorous quality control and expert annotation,
consists of 439 fundus images and 75 OCT images. Using a standardized API-based
framework, we assessed seven mainstream MLLMs and observed significant
variability in diagnostic accuracy across different diseases. While some models
performed well in diagnosing conditions such as diabetic retinopathy and
age-related macular degeneration, they struggled with others, including
choroidal neovascularization and myopia, highlighting inconsistencies in
performance and the need for further refinement. Our findings emphasize the
importance of developing clinically relevant benchmarks to provide a more
accurate assessment of MLLMs' capabilities. By refining these models and
expanding their scope, we can enhance their potential to transform ophthalmic
diagnosis and treatment.
| new_dataset | 0.566191 |
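A hedged sketch of how such an API-based evaluation loop could tally per-disease diagnostic accuracy for one MLLM. The record fields and the query_model callable are hypothetical placeholders, not the benchmark's actual interface.

```python
from collections import defaultdict

def per_disease_accuracy(records, query_model):
    """Aggregate diagnostic accuracy per disease for one model.

    records: iterable of dicts with keys 'image_path', 'question',
             'disease', 'answer'. `query_model` stands in for whatever
             API wrapper returns the model's answer string; both are
             assumptions about how such a benchmark could be driven.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        pred = query_model(rec["image_path"], rec["question"])
        total[rec["disease"]] += 1
        correct[rec["disease"]] += int(pred.strip().lower()
                                       == rec["answer"].strip().lower())
    return {d: correct[d] / total[d] for d in total}

# Tiny dry run with a stub model that always answers "diabetic retinopathy".
demo = [
    {"image_path": "img1.jpg", "question": "Diagnosis?",
     "disease": "diabetic retinopathy", "answer": "diabetic retinopathy"},
    {"image_path": "img2.jpg", "question": "Diagnosis?",
     "disease": "myopia", "answer": "myopia"},
]
print(per_disease_accuracy(demo, lambda img, q: "diabetic retinopathy"))
```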
2503.07097 | Zijie Fan | Xiaoyan Kui, Zijie Fan, Zexin Ji, Qinsong Li, Chengtao Liu, Weixin Si,
Beiji Zou | A Comprehensive Survey on Magnetic Resonance Image Reconstruction | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic resonance imaging (MRI) reconstruction is a fundamental task aimed
at recovering high-quality images from undersampled or low-quality MRI data.
This process enhances diagnostic accuracy and optimizes clinical applications.
In recent years, deep learning-based MRI reconstruction has made significant
progress. Advancements include single-modality feature extraction using
different network architectures, the integration of multimodal information, and
the adoption of unsupervised or semi-supervised learning strategies. However,
despite extensive research, MRI reconstruction remains a challenging problem
that has yet to be fully resolved. This survey provides a systematic review of
MRI reconstruction methods, covering key aspects such as data acquisition and
preprocessing, publicly available datasets, single and multi-modal
reconstruction models, training strategies, and evaluation metrics based on
image reconstruction and downstream tasks. Additionally, we analyze the major
challenges in this field and explore potential future directions.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:20:53 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Kui",
"Xiaoyan",
""
],
[
"Fan",
"Zijie",
""
],
[
"Ji",
"Zexin",
""
],
[
"Li",
"Qinsong",
""
],
[
"Liu",
"Chengtao",
""
],
[
"Si",
"Weixin",
""
],
[
"Zou",
"Beiji",
""
]
]
| TITLE: A Comprehensive Survey on Magnetic Resonance Image Reconstruction
ABSTRACT: Magnetic resonance imaging (MRI) reconstruction is a fundamental task aimed
at recovering high-quality images from undersampled or low-quality MRI data.
This process enhances diagnostic accuracy and optimizes clinical applications.
In recent years, deep learning-based MRI reconstruction has made significant
progress. Advancements include single-modality feature extraction using
different network architectures, the integration of multimodal information, and
the adoption of unsupervised or semi-supervised learning strategies. However,
despite extensive research, MRI reconstruction remains a challenging problem
that has yet to be fully resolved. This survey provides a systematic review of
MRI reconstruction methods, covering key aspects such as data acquisition and
preprocessing, publicly available datasets, single and multi-modal
reconstruction models, training strategies, and evaluation metrics based on
image reconstruction and downstream tasks. Additionally, we analyze the major
challenges in this field and explore potential future directions.
| no_new_dataset | 0.947914 |
2503.07103 | Alessandro Giagnorio | Alessandro Giagnorio and Antonio Mastropaolo and Saima Afrin and
Massimiliano Di Penta and Gabriele Bavota | Quantizing Large Language Models for Code Generation: A Differentiated
Replication | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have shown an impressive capability in code
generation and, specifically, to automatically implement requirements described
in natural language. The LLM effectiveness generally increases with its size:
The higher the number of LLM's trainable parameters the better its ability to
implement code. However, when it comes to deploying LLM-based code generators,
larger LLMs pose significant challenges related to their memory (and,
consequently, carbon) footprint. A previous work by Wei et al. proposed to
leverage quantization techniques to reduce the memory footprint of LLM-based
code generators without substantially degrading their effectiveness. In short,
they studied LLMs featuring up to 16B parameters, quantizing their precision
from floating point 32 bits down to int 8 bits and showing their limited impact
on code generation performance. Given the fast pace at which LLM capabilities
and quantization techniques are evolving, in this work we present a
differentiated replication of the work by Wei et al. in which we consider (i)
on the one side, more recent and larger code-related LLMs, of up to 34B
parameters; (ii) the latest advancements in model quantization techniques,
which allow pushing the compression to the extreme quantization level of 2 bits
per model parameter; and (iii) different types of calibration datasets to guide
the quantization process, including code-specific ones. Our empirical
evaluation reveals that the new frontier for LLM quantization is 4-bit
precision, resulting in an average memory footprint reduction of 70% compared
to the original model without observing any significant decrease in
performance. Additionally, when the quantization becomes even more extreme (3
and 2 bits), a code-specific calibration dataset helps to limit the loss of
performance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:26:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Giagnorio",
"Alessandro",
""
],
[
"Mastropaolo",
"Antonio",
""
],
[
"Afrin",
"Saima",
""
],
[
"Di Penta",
"Massimiliano",
""
],
[
"Bavota",
"Gabriele",
""
]
]
| TITLE: Quantizing Large Language Models for Code Generation: A Differentiated
Replication
ABSTRACT: Large Language Models (LLMs) have shown an impressive capability in code
generation and, specifically, to automatically implement requirements described
in natural language. The LLM effectiveness generally increases with its size:
The higher the number of LLM's trainable parameters the better its ability to
implement code. However, when it comes to deploying LLM-based code generators,
larger LLMs pose significant challenges related to their memory (and,
consequently, carbon) footprint. A previous work by Wei et al. proposed to
leverage quantization techniques to reduce the memory footprint of LLM-based
code generators without substantially degrading their effectiveness. In short,
they studied LLMs featuring up to 16B parameters, quantizing their precision
from floating point 32 bits down to int 8 bits and showing their limited impact
on code generation performance. Given the fast pace at which LLM capabilities
and quantization techniques are evolving, in this work we present a
differentiated replication of the work by Wei et al. in which we consider (i)
on the one side, more recent and larger code-related LLMs, of up to 34B
parameters; (ii) the latest advancements in model quantization techniques,
which allow pushing the compression to the extreme quantization level of 2 bits
per model parameter; and (iii) different types of calibration datasets to guide
the quantization process, including code-specific ones. Our empirical
evaluation reveals that the new frontier for LLM quantization is 4-bit
precision, resulting in an average memory footprint reduction of 70% compared
to the original model without observing any significant decrease in
performance. Additionally, when the quantization becomes even more extreme (3
and 2 bits), a code-specific calibration dataset helps to limit the loss of
performance.
| no_new_dataset | 0.949106 |
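A simple illustration of what low-bit weight quantization does: a symmetric uniform k-bit quantizer plus a back-of-the-envelope footprint estimate. Real 2-4 bit schemes (e.g. GPTQ/AWQ-style methods with per-group scales and calibration data) are far more sophisticated; the weights-only 75% reduction below is only roughly comparable to the ~70% average reduction reported above, which accounts for additional overheads.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.

    Deliberately minimal: one scale for the whole tensor, no calibration.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized ("fake-quantized") weights

def footprint_reduction(bits, baseline_bits=16):
    """Weights-only memory reduction versus a fp16 baseline (no overheads)."""
    return 1.0 - bits / baseline_bits

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
w4 = quantize_symmetric(w, bits=4)
print(f"max abs error: {np.max(np.abs(w - w4)):.4f}")
print(f"4-bit vs fp16 footprint reduction: {footprint_reduction(4):.0%}")
```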
2503.07107 | William Guicquero | Yanis Basso-Bert, Anca Molnos, Romain Lemaire, William Guicquero and
Antoine Dupret | Towards Experience Replay for Class-Incremental Learning in Fully-Binary
Networks | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Binary Neural Networks (BNNs) are a promising approach to enable Artificial
Neural Network (ANN) implementation on ultra-low power edge devices. Such
devices may compute data in highly dynamic environments, in which the classes
targeted for inference can evolve or even novel classes may arise, requiring
continual learning. Class Incremental Learning (CIL) is a common type of
continual learning for classification problems that has been scarcely
addressed in the context of BNNs. Furthermore, most existing BNN models are
not fully binary, as they require several real-valued network layers, at the
input, the output, and for batch normalization. This paper goes a step further,
enabling class incremental learning in Fully-Binarized NNs (FBNNs) through four
main contributions. We firstly revisit the FBNN design and its training
procedure that is suitable for CIL. Secondly, we explore loss balancing, a
method to trade off the performance of past and current classes. Thirdly, we
propose a semi-supervised method to pre-train the feature extractor of the FBNN
for transferable representations. Fourthly, two conventional CIL methods, i.e.,
Latent and Native replay, are thoroughly compared. These contributions are
exemplified first on the CIFAR100 dataset, before being scaled up to address
the CORE50 continual learning benchmark. The final results based on our 3Mb
FBNN on CORE50 exhibit on-par or better performance than conventional, larger
real-valued NN models.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:31:32 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Basso-Bert",
"Yanis",
""
],
[
"Molnos",
"Anca",
""
],
[
"Lemaire",
"Romain",
""
],
[
"Guicquero",
"William",
""
],
[
"Dupret",
"Antoine",
""
]
]
| TITLE: Towards Experience Replay for Class-Incremental Learning in Fully-Binary
Networks
ABSTRACT: Binary Neural Networks (BNNs) are a promising approach to enable Artificial
Neural Network (ANN) implementation on ultra-low power edge devices. Such
devices may compute data in highly dynamic environments, in which the classes
targeted for inference can evolve or even novel classes may arise, requiring
continual learning. Class Incremental Learning (CIL) is a common type of
continual learning for classification problems that has been scarcely
addressed in the context of BNNs. Furthermore, most existing BNN models are
not fully binary, as they require several real-valued network layers, at the
input, the output, and for batch normalization. This paper goes a step further,
enabling class incremental learning in Fully-Binarized NNs (FBNNs) through four
main contributions. We firstly revisit the FBNN design and its training
procedure that is suitable for CIL. Secondly, we explore loss balancing, a
method to trade off the performance of past and current classes. Thirdly, we
propose a semi-supervised method to pre-train the feature extractor of the FBNN
for transferable representations. Fourthly, two conventional CIL methods, i.e.,
Latent and Native replay, are thoroughly compared. These contributions are
exemplified first on the CIFAR100 dataset, before being scaled up to address
the CORE50 continual learning benchmark. The final results based on our 3Mb
FBNN on CORE50 exhibit on-par or better performance than conventional, larger
real-valued NN models.
| no_new_dataset | 0.941601 |
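A minimal sketch of the loss-balancing idea mentioned above: a weighted combination of the current-task loss and a replay loss on past classes, trading plasticity against stability. The alpha weighting and the plain cross-entropy are assumptions, not the paper's exact objective.

```python
import numpy as np

def cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def balanced_cil_loss(cur_logits, cur_labels, replay_logits, replay_labels,
                      alpha=0.5):
    """Trade off current-task and replayed past-class objectives.

    `alpha` balances plasticity (current classes) against stability
    (past classes); the exact weighting used in the paper is not assumed.
    """
    return (alpha * cross_entropy(cur_logits, cur_labels)
            + (1.0 - alpha) * cross_entropy(replay_logits, replay_labels))

rng = np.random.default_rng(0)
loss = balanced_cil_loss(rng.normal(size=(8, 10)), rng.integers(0, 10, 8),
                         rng.normal(size=(8, 10)), rng.integers(0, 10, 8))
print(round(loss, 3))
```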
2503.07109 | Merve Cigdem Ipek | Merve Cigdem Ipek and Sevil Sen | Explainable Android Malware Detection and Malicious Code Localization
Using Graph Attention | This paper has 13 pages and contains 5 images (3 figures within the
paper and 2 author photos). It is being submitted to IEEE Transactions on
Information Forensics and Security for consideration | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the escalating threat of malware, particularly on mobile devices, the
demand for effective analysis methods has never been higher. While existing
security solutions, including AI-based approaches, offer promise, their lack of
transparency constrains the understanding of detected threats. Manual analysis
remains time-consuming and reliant on scarce expertise. To address these
challenges, we propose a novel approach called XAIDroid that leverages graph
neural networks (GNNs) and graph attention mechanisms for automatically
locating malicious code snippets within malware. By representing code as API
call graphs, XAIDroid captures semantic context and enhances resilience against
obfuscation. Utilizing the Graph Attention Model (GAM) and Graph Attention
Network (GAT), we assign importance scores to API nodes, facilitating focused
attention on critical information for malicious code localization. Evaluation
on synthetic and real-world malware datasets demonstrates the efficacy of our
approach, achieving high recall and F1-score rates for malicious code
localization. The successful implementation of automatic malicious code
localization enhances the scalability, interpretability, and reliability of
malware analysis.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:33:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ipek",
"Merve Cigdem",
""
],
[
"Sen",
"Sevil",
""
]
]
| TITLE: Explainable Android Malware Detection and Malicious Code Localization
Using Graph Attention
ABSTRACT: With the escalating threat of malware, particularly on mobile devices, the
demand for effective analysis methods has never been higher. While existing
security solutions, including AI-based approaches, offer promise, their lack of
transparency constrains the understanding of detected threats. Manual analysis
remains time-consuming and reliant on scarce expertise. To address these
challenges, we propose a novel approach called XAIDroid that leverages graph
neural networks (GNNs) and graph attention mechanisms for automatically
locating malicious code snippets within malware. By representing code as API
call graphs, XAIDroid captures semantic context and enhances resilience against
obfuscation. Utilizing the Graph Attention Model (GAM) and Graph Attention
Network (GAT), we assign importance scores to API nodes, facilitating focused
attention on critical information for malicious code localization. Evaluation
on synthetic and real-world malware datasets demonstrates the efficacy of our
approach, achieving high recall and F1-score rates for malicious code
localization. The successful implementation of automatic malicious code
localization enhances the scalability, interpretability, and reliability of
malware analysis.
| no_new_dataset | 0.945096 |
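A toy, NumPy-only illustration of how single-head graph-attention coefficients can act as importance scores over the nodes of an API call graph, in the spirit of the GAT component described above. The weights are random here (learned in a real model), and the actual system's architecture is not reproduced.

```python
import numpy as np

def gat_attention(h, adj, W, a, alpha=0.2):
    """Single-head GAT-style attention coefficients (toy sketch).

    h  : (N, F) node features (e.g. API-node embeddings).
    adj: (N, N) binary adjacency of the API call graph.
    W  : (F, F2) projection, a: (2*F2,) attention vector.
    Returns the (N, N) attention matrix; row i gives how much node i
    attends to each neighbour, usable as a per-node importance score.
    """
    z = h @ W                                               # (N, F2)
    N = z.shape[0]
    pair = np.concatenate(
        [np.repeat(z, N, 0), np.tile(z, (N, 1))], axis=1)   # (N*N, 2*F2)
    e = (pair @ a).reshape(N, N)
    e = np.where(e > 0, e, alpha * e)                       # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                          # mask non-edges
    e = e - e.max(axis=1, keepdims=True)
    return np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, F, F2 = 5, 8, 4
adj = (rng.random((N, N)) > 0.5).astype(float)
np.fill_diagonal(adj, 1)
att = gat_attention(rng.normal(size=(N, F)), adj,
                    rng.normal(size=(F, F2)), rng.normal(size=2 * F2))
print(att.round(2))
```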
2503.07110 | Chaoran E | Chaoran E, Chenghan Chen, Yuyang Shi, Haiyun Wang, Peixin Hua, Xiwen
Zhang | A LSTM-Transformer Model for pulsation control of pVADs | null | null | null | null | physics.med-ph cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Methods: A pulsation control method for a pVAD is proposed (AP-pVAD Model).
AP-pVAD Model consists of two parts: NPQ Model and LSTM-Transformer Model.
(1)The NPQ Model determines the mathematical relationship between motor speed,
pressure, and flow rate for the pVAD. (2)The Attention module of Transformer
neural network is integrated into the LSTM neural network to form the new
LSTM-Transformer Model to predict the pulsation time characteristic points for
adjusting the motor speed of the pVAD. Results: The AP-pVAD Model is validated
in three hydraulic experiments and an animal experiment. (1)The pressure
provided by pVAD calculated with the NPQ Model has a maximum error of only 2.15
mmHg compared to the expected values. (2)The pulsation time characteristic
points predicted by the LSTM-Transformer Model show a maximum prediction error
of 1.78ms, which is significantly lower than other methods. (3)The in-vivo test
of pVAD in the animal experiment shows significant improvements in aortic pressure.
Animals survived for over 27 hours after the initiation of pVAD operation.
Conclusion: (1)For a given pVAD, motor speed has a linear relationship with
pressure and a quadratic relationship with flow. (2)Deep learning can be used
to predict pulsation characteristic time points, with the LSTM-Transformer
Model demonstrating minimal prediction error and more robust performance
under conditions of limited dataset sizes, elevated noise levels, and diverse
hyperparameter combinations, demonstrating its feasibility and effectiveness.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:33:59 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"E",
"Chaoran",
""
],
[
"Chen",
"Chenghan",
""
],
[
"Shi",
"Yuyang",
""
],
[
"Wang",
"Haiyun",
""
],
[
"Hua",
"Peixin",
""
],
[
"Zhang",
"Xiwen",
""
]
]
| TITLE: A LSTM-Transformer Model for pulsation control of pVADs
ABSTRACT: Methods: A pulsation control method for a pVAD is proposed (AP-pVAD Model).
AP-pVAD Model consists of two parts: NPQ Model and LSTM-Transformer Model.
(1)The NPQ Model determines the mathematical relationship between motor speed,
pressure, and flow rate for the pVAD. (2)The Attention module of Transformer
neural network is integrated into the LSTM neural network to form the new
LSTM-Transformer Model to predict the pulsation time characteristic points for
adjusting the motor speed of the pVAD. Results: The AP-pVAD Model is validated
in three hydraulic experiments and an animal experiment. (1)The pressure
provided by pVAD calculated with the NPQ Model has a maximum error of only 2.15
mmHg compared to the expected values. (2)The pulsation time characteristic
points predicted by the LSTM-Transformer Model show a maximum prediction error
of 1.78ms, which is significantly lower than other methods. (3)The in-vivo test
of pVAD in the animal experiment shows significant improvements in aortic pressure.
Animals survived for over 27 hours after the initiation of pVAD operation.
Conclusion: (1)For a given pVAD, motor speed has a linear relationship with
pressure and a quadratic relationship with flow. (2)Deep learning can be used
to predict pulsation characteristic time points, with the LSTM-Transformer
Model demonstrating minimal prediction error and more robust performance
under conditions of limited dataset sizes, elevated noise levels, and diverse
hyperparameter combinations, demonstrating its feasibility and effectiveness.
| no_new_dataset | 0.953057 |
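The conclusion above states that motor speed is linearly related to pressure and quadratically related to flow; the sketch below fits exactly those two relationships with numpy.polyfit on synthetic calibration data (the data and coefficients are made up for illustration) and inverts the linear fit to pick a speed for a target pressure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data (illustrative only, not the paper's data):
speed = np.linspace(2000, 9000, 30)                     # rpm
pressure = 0.01 * speed + 5 + rng.normal(0, 1, 30)      # ~linear in speed
flow = 1e-7 * speed**2 + 0.5 + rng.normal(0, 0.2, 30)   # ~quadratic in speed

# Fit the relationships reported in the conclusion above:
p_lin = np.polyfit(speed, pressure, deg=1)   # pressure ~ a*speed + b
q_quad = np.polyfit(speed, flow, deg=2)      # flow ~ c*speed^2 + d*speed + e

def required_speed_for_pressure(target_mmHg):
    """Invert the linear speed-pressure fit to pick a motor speed."""
    a, b = p_lin
    return (target_mmHg - b) / a

print("pressure fit:", np.round(p_lin, 4))
print("flow fit:", np.round(q_quad, 10))
print("speed for 80 mmHg:", round(required_speed_for_pressure(80.0), 1))
```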
2503.07115 | Hanqing Guo | Hanqing Guo, Xiuxiu Lin, Shiyu Zhao | YOLOMG: Vision-based Drone-to-Drone Detection with Appearance and
Pixel-Level Motion Fusion | 9 pages, 8 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Vision-based drone-to-drone detection has attracted increasing attention due
to its importance in numerous tasks such as vision-based swarming, aerial
see-and-avoid, and malicious drone detection. However, existing methods often
encounter failures when the background is complex or the target is tiny. This
paper proposes a novel end-to-end framework that accurately identifies small
drones in complex environments using motion guidance. It starts by creating a
motion difference map to capture the motion characteristics of tiny drones.
Next, this motion difference map is combined with an RGB image using a bimodal
fusion module, allowing for adaptive feature learning of the drone. Finally,
the fused feature map is processed through an enhanced backbone and detection
head based on the YOLOv5 framework to achieve accurate detection results. To
validate our method, we propose a new dataset, named ARD100, which comprises
100 videos (202,467 frames) covering various challenging conditions and has the
smallest average object size compared with the existing drone detection
datasets. Extensive experiments on the ARD100 and NPS-Drones datasets show that
our proposed detector performs exceptionally well under challenging conditions
and surpasses state-of-the-art algorithms across various metrics. We publicly
release the codes and ARD100 dataset at https://github.com/Irisky123/YOLOMG.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 09:44:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Guo",
"Hanqing",
""
],
[
"Lin",
"Xiuxiu",
""
],
[
"Zhao",
"Shiyu",
""
]
]
| TITLE: YOLOMG: Vision-based Drone-to-Drone Detection with Appearance and
Pixel-Level Motion Fusion
ABSTRACT: Vision-based drone-to-drone detection has attracted increasing attention due
to its importance in numerous tasks such as vision-based swarming, aerial
see-and-avoid, and malicious drone detection. However, existing methods often
encounter failures when the background is complex or the target is tiny. This
paper proposes a novel end-to-end framework that accurately identifies small
drones in complex environments using motion guidance. It starts by creating a
motion difference map to capture the motion characteristics of tiny drones.
Next, this motion difference map is combined with an RGB image using a bimodal
fusion module, allowing for adaptive feature learning of the drone. Finally,
the fused feature map is processed through an enhanced backbone and detection
head based on the YOLOv5 framework to achieve accurate detection results. To
validate our method, we propose a new dataset, named ARD100, which comprises
100 videos (202,467 frames) covering various challenging conditions and has the
smallest average object size compared with the existing drone detection
datasets. Extensive experiments on the ARD100 and NPS-Drones datasets show that
our proposed detector performs exceptionally well under challenging conditions
and surpasses state-of-the-art algorithms across various metrics. We publicly
release the codes and ARD100 dataset at https://github.com/Irisky123/YOLOMG.
| new_dataset | 0.956513 |
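A small sketch of the input side of the approach above: build a frame-difference motion map and pair it with the RGB frame. The channel stacking here only stands in for the learned bimodal fusion module; the threshold and shapes are illustrative assumptions.

```python
import numpy as np

def motion_difference_map(prev_gray, curr_gray, thresh=15):
    """Absolute frame difference, lightly thresholded to suppress noise."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    diff[diff < thresh] = 0
    return diff.astype(np.uint8)

def stack_rgb_and_motion(rgb, motion):
    """Early-fusion stand-in: append the motion map as a 4th channel."""
    return np.concatenate([rgb, motion[..., None]], axis=-1)

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 640), dtype=np.uint8)
curr = rng.integers(0, 256, (480, 640), dtype=np.uint8)
rgb = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
fused = stack_rgb_and_motion(rgb, motion_difference_map(prev, curr))
print(fused.shape)  # (480, 640, 4)
```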
2503.07137 | Siyuan Mu | Siyuan Mu and Sen Lin | A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and
Applications | 28 pages, 3 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) has achieved astonishing successes in many
domains, especially with the recent breakthroughs in the development of
foundational large models. These large models, leveraging their extensive
training data, provide versatile solutions for a wide range of downstream
tasks. However, as modern datasets become increasingly diverse and complex, the
development of large AI models faces two major challenges: (1) the enormous
consumption of computational resources and deployment difficulties, and (2) the
difficulty in fitting heterogeneous and complex data, which limits the
usability of the models. Mixture of Experts (MoE) models have recently attracted
much attention in addressing these challenges, by dynamically selecting and
activating the most relevant sub-models to process input data. It has been
shown that MoEs can significantly improve model performance and efficiency with
fewer resources, particularly excelling in handling large-scale, multimodal
data. Given the tremendous potential MoE has demonstrated across various
domains, it is urgent to provide a comprehensive summary of recent advancements
of MoEs in many important fields. Existing surveys on MoE have their
limitations, e.g., being outdated or lacking discussion on certain key areas,
and we aim to address these gaps. In this paper, we first introduce the basic
design of MoE, including gating functions, expert networks, routing mechanisms,
training strategies, and system design. We then explore the algorithm design of
MoE in important machine learning paradigms such as continual learning,
meta-learning, multi-task learning, and reinforcement learning. Additionally,
we summarize theoretical studies aimed at understanding MoE and review its
applications in computer vision and natural language processing. Finally, we
discuss promising future research directions.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:08:55 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Mu",
"Siyuan",
""
],
[
"Lin",
"Sen",
""
]
]
| TITLE: A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and
Applications
ABSTRACT: Artificial intelligence (AI) has achieved astonishing successes in many
domains, especially with the recent breakthroughs in the development of
foundational large models. These large models, leveraging their extensive
training data, provide versatile solutions for a wide range of downstream
tasks. However, as modern datasets become increasingly diverse and complex, the
development of large AI models faces two major challenges: (1) the enormous
consumption of computational resources and deployment difficulties, and (2) the
difficulty in fitting heterogeneous and complex data, which limits the
usability of the models. Mixture of Experts (MoE) models have recently attracted
much attention in addressing these challenges, by dynamically selecting and
activating the most relevant sub-models to process input data. It has been
shown that MoEs can significantly improve model performance and efficiency with
fewer resources, particularly excelling in handling large-scale, multimodal
data. Given the tremendous potential MoE has demonstrated across various
domains, it is urgent to provide a comprehensive summary of recent advancements
of MoEs in many important fields. Existing surveys on MoE have their
limitations, e.g., being outdated or lacking discussion on certain key areas,
and we aim to address these gaps. In this paper, we first introduce the basic
design of MoE, including gating functions, expert networks, routing mechanisms,
training strategies, and system design. We then explore the algorithm design of
MoE in important machine learning paradigms such as continual learning,
meta-learning, multi-task learning, and reinforcement learning. Additionally,
we summarize theoretical studies aimed at understanding MoE and review its
applications in computer vision and natural language processing. Finally, we
discuss promising future research directions.
| no_new_dataset | 0.943243 |
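Since the survey above centers on gating functions and expert routing, here is a minimal top-k softmax-gated MoE layer in NumPy; the experts are random linear maps and the renormalized top-k weighting is one common design choice among many, not a definitive implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=axis, keepdims=True)

def moe_forward(x, gate_w, expert_ws, k=2):
    """Minimal top-k gated Mixture-of-Experts layer (NumPy sketch).

    x        : (B, D) tokens, gate_w: (D, E) gating weights,
    expert_ws: list of E (D, D) expert weight matrices.
    Only the top-k experts per token are evaluated, which is what gives
    MoE its sparse-compute advantage.
    """
    scores = softmax(x @ gate_w)                  # (B, E) gate probabilities
    topk = np.argsort(-scores, axis=1)[:, :k]     # chosen experts per token
    out = np.zeros_like(x)
    for b in range(x.shape[0]):
        sel = scores[b, topk[b]]
        sel = sel / sel.sum()                     # renormalize over top-k
        for w, e in zip(sel, topk[b]):
            out[b] += w * (x[b] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
B, D, E = 4, 8, 6
y = moe_forward(rng.normal(size=(B, D)), rng.normal(size=(D, E)),
                [rng.normal(size=(D, D)) for _ in range(E)])
print(y.shape)  # (4, 8)
```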
2503.07144 | Shengkun Ma | Shengkun Ma, Hao Peng, Lei Hou, Juanzi Li | MRCEval: A Comprehensive, Challenging and Accessible Machine Reading
Comprehension Benchmark | Under review | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine Reading Comprehension (MRC) is an essential task in evaluating
natural language understanding. Existing MRC datasets primarily assess specific
aspects of reading comprehension (RC), lacking a comprehensive MRC benchmark.
To fill this gap, we first introduce a novel taxonomy that categorizes the key
capabilities required for RC. Based on this taxonomy, we construct MRCEval, an
MRC benchmark that leverages advanced Large Language Models (LLMs) as both
sample generators and selection judges. MRCEval is a comprehensive, challenging
and accessible benchmark designed to assess the RC capabilities of LLMs
thoroughly, covering 13 distinct RC skills with a total of 2.1K high-quality
multi-choice questions. We perform an extensive evaluation of 28 widely used
open-source and proprietary models, highlighting that MRC continues to present
significant challenges even in the era of LLMs.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:20:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ma",
"Shengkun",
""
],
[
"Peng",
"Hao",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
]
]
| TITLE: MRCEval: A Comprehensive, Challenging and Accessible Machine Reading
Comprehension Benchmark
ABSTRACT: Machine Reading Comprehension (MRC) is an essential task in evaluating
natural language understanding. Existing MRC datasets primarily assess specific
aspects of reading comprehension (RC), lacking a comprehensive MRC benchmark.
To fill this gap, we first introduce a novel taxonomy that categorizes the key
capabilities required for RC. Based on this taxonomy, we construct MRCEval, an
MRC benchmark that leverages advanced Large Language Models (LLMs) as both
sample generators and selection judges. MRCEval is a comprehensive, challenging
and accessible benchmark designed to assess the RC capabilities of LLMs
thoroughly, covering 13 distinct RC skills with a total of 2.1K high-quality
multi-choice questions. We perform an extensive evaluation of 28 widely used
open-source and proprietary models, highlighting that MRC continues to present
significant challenges even in the era of LLMs.
| new_dataset | 0.87397 |
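A hedged sketch of how per-skill accuracy over multi-choice questions could be tallied for such a benchmark; the field names and the predict callable are assumptions about the data layout, not MRCEval's actual schema.

```python
from collections import defaultdict

def skill_accuracy(questions, predict):
    """Per-skill accuracy on multi-choice questions.

    questions: iterable of dicts with 'context', 'question', 'options',
               'answer' (option index) and 'skill' -- assumed field names.
    predict  : callable returning a predicted option index.
    """
    hit, tot = defaultdict(int), defaultdict(int)
    for q in questions:
        tot[q["skill"]] += 1
        hit[q["skill"]] += int(predict(q) == q["answer"])
    return {s: hit[s] / tot[s] for s in tot}

demo = [
    {"context": "Tom fed the cat.", "question": "Who fed the cat?",
     "options": ["Tom", "The cat"], "answer": 0, "skill": "coreference"},
    {"context": "It rained, so the match stopped.",
     "question": "Why did the match stop?",
     "options": ["Rain", "Darkness"], "answer": 0, "skill": "causal"},
]
print(skill_accuracy(demo, lambda q: 0))
```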
2503.07152 | Yuheng Liu | Yuheng Liu, Xinke Li, Yuning Zhang, Lu Qi, Xin Li, Wenping Wang,
Chongshou Li, Xueting Li, Ming-Hsuan Yang | Controllable 3D Outdoor Scene Generation via Scene Graphs | Project Page: https://yuheng.ink/project-page/control-3d-scene/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three-dimensional scene generation is crucial in computer vision, with
applications spanning autonomous driving, gaming and the metaverse. Current
methods either lack user control or rely on imprecise, non-intuitive
conditions. In this work, we propose a method that uses, scene graphs, an
accessible, user friendly control format to generate outdoor 3D scenes. We
develop an interactive system that transforms a sparse scene graph into a dense
BEV (Bird's Eye View) Embedding Map, which guides a conditional diffusion model
to generate 3D scenes that match the scene graph description. During inference,
users can easily create or modify scene graphs to generate large-scale outdoor
scenes. We create a large-scale dataset with paired scene graphs and 3D
semantic scenes to train the BEV embedding and diffusion models. Experimental
results show that our approach consistently produces high-quality 3D urban
scenes closely aligned with the input scene graphs. To the best of our
knowledge, this is the first approach to generate 3D outdoor scenes conditioned
on scene graphs.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:26:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Yuheng",
""
],
[
"Li",
"Xinke",
""
],
[
"Zhang",
"Yuning",
""
],
[
"Qi",
"Lu",
""
],
[
"Li",
"Xin",
""
],
[
"Wang",
"Wenping",
""
],
[
"Li",
"Chongshou",
""
],
[
"Li",
"Xueting",
""
],
[
"Yang",
"Ming-Hsuan",
""
]
]
| TITLE: Controllable 3D Outdoor Scene Generation via Scene Graphs
ABSTRACT: Three-dimensional scene generation is crucial in computer vision, with
applications spanning autonomous driving, gaming and the metaverse. Current
methods either lack user control or rely on imprecise, non-intuitive
conditions. In this work, we propose a method that uses scene graphs, an
accessible, user-friendly control format, to generate outdoor 3D scenes. We
develop an interactive system that transforms a sparse scene graph into a dense
BEV (Bird's Eye View) Embedding Map, which guides a conditional diffusion model
to generate 3D scenes that match the scene graph description. During inference,
users can easily create or modify scene graphs to generate large-scale outdoor
scenes. We create a large-scale dataset with paired scene graphs and 3D
semantic scenes to train the BEV embedding and diffusion models. Experimental
results show that our approach consistently produces high-quality 3D urban
scenes closely aligned with the input scene graphs. To the best of our
knowledge, this is the first approach to generate 3D outdoor scenes conditioned
on scene graphs.
| new_dataset | 0.951278 |
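To make the sparse-graph-to-dense-map step above concrete, the sketch below splats per-class embeddings of scene-graph nodes onto a BEV grid with a Gaussian footprint. The grid size, embedding table, and splatting rule are illustrative assumptions; the actual BEV Embedding Map is produced by a learned model.

```python
import numpy as np

def scene_graph_to_bev(nodes, class_emb, grid=64, sigma=2.0):
    """Rasterize a sparse scene graph into a dense BEV embedding map.

    nodes    : list of (class_id, x, y) with x, y in [0, 1) BEV coordinates.
    class_emb: (num_classes, C) embedding table (random here, learned in
               the actual system).
    Returns a (grid, grid, C) map; each node splats its class embedding
    with a Gaussian footprint so nearby cells share context.
    """
    C = class_emb.shape[1]
    bev = np.zeros((grid, grid, C))
    ys, xs = np.mgrid[0:grid, 0:grid]
    for cls, x, y in nodes:
        d2 = (xs - x * grid) ** 2 + (ys - y * grid) ** 2
        w = np.exp(-d2 / (2 * sigma**2))
        bev += w[..., None] * class_emb[cls]
    return bev

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))        # 4 hypothetical classes (road, tree, ...)
nodes = [(0, 0.2, 0.5), (1, 0.7, 0.3), (2, 0.5, 0.8)]
print(scene_graph_to_bev(nodes, emb).shape)   # (64, 64, 16)
```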
2503.07153 | Yuanlong Wu | Yuanlong Wu, Mingxing Nie, Tao Zhu, Liming Chen, Huansheng Ning,
Yaping Wan | PTMs-TSCIL Pre-Trained Models Based Class-Incremental Learning | 13 pages,6 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class-incremental learning (CIL) for time series data faces critical
challenges in balancing stability against catastrophic forgetting and
plasticity for new knowledge acquisition, particularly under real-world
constraints where historical data access is restricted. While pre-trained
models (PTMs) have shown promise in CIL for vision and NLP domains, their
potential in time series class-incremental learning (TSCIL) remains
underexplored due to the scarcity of large-scale time series pre-trained
models. Prompted by the recent emergence of large-scale pre-trained models
(PTMs) for time series data, we present the first exploration of PTM-based Time
Series Class-Incremental Learning (TSCIL). Our approach leverages frozen PTM
backbones coupled with incrementally tuning the shared adapter, preserving
generalization capabilities while mitigating feature drift through knowledge
distillation. Furthermore, we introduce a Feature Drift Compensation Network
(DCN), designed with a novel two-stage training strategy to precisely model
feature space transformations across incremental tasks. This allows for
accurate projection of old class prototypes into the new feature space. By
employing DCN-corrected prototypes, we effectively enhance the unified
classifier retraining, mitigating model feature drift and alleviating
catastrophic forgetting. Extensive experiments on five real-world datasets
demonstrate state-of-the-art performance, with our method yielding final
accuracy gains of 1.4%-6.1% across all datasets compared to existing PTM-based
approaches. Our work establishes a new paradigm for TSCIL, providing insights
into stability-plasticity optimization for continual learning systems.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:27:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wu",
"Yuanlong",
""
],
[
"Nie",
"Mingxing",
""
],
[
"Zhu",
"Tao",
""
],
[
"Chen",
"Liming",
""
],
[
"Ning",
"Huansheng",
""
],
[
"Wan",
"Yaping",
""
]
]
| TITLE: PTMs-TSCIL Pre-Trained Models Based Class-Incremental Learning
ABSTRACT: Class-incremental learning (CIL) for time series data faces critical
challenges in balancing stability against catastrophic forgetting and
plasticity for new knowledge acquisition, particularly under real-world
constraints where historical data access is restricted. While pre-trained
models (PTMs) have shown promise in CIL for vision and NLP domains, their
potential in time series class-incremental learning (TSCIL) remains
underexplored due to the scarcity of large-scale time series pre-trained
models. Prompted by the recent emergence of large-scale pre-trained models
(PTMs) for time series data, we present the first exploration of PTM-based Time
Series Class-Incremental Learning (TSCIL). Our approach leverages frozen PTM
backbones coupled with incrementally tuning the shared adapter, preserving
generalization capabilities while mitigating feature drift through knowledge
distillation. Furthermore, we introduce a Feature Drift Compensation Network
(DCN), designed with a novel two-stage training strategy to precisely model
feature space transformations across incremental tasks. This allows for
accurate projection of old class prototypes into the new feature space. By
employing DCN-corrected prototypes, we effectively enhance the unified
classifier retraining, mitigating model feature drift and alleviating
catastrophic forgetting. Extensive experiments on five real-world datasets
demonstrate state-of-the-art performance, with our method yielding final
accuracy gains of 1.4%-6.1% across all datasets compared to existing PTM-based
approaches. Our work establishes a new paradigm for TSCIL, providing insights
into stability-plasticity optimization for continual learning systems.
| no_new_dataset | 0.948489 |
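A minimal stand-in for the drift-compensation idea above: estimate a map from the old feature space to the new one and push stored class prototypes through it. A closed-form linear least-squares map replaces the paper's learned two-stage DCN purely for illustration.

```python
import numpy as np

def fit_drift_map(feats_old, feats_new):
    """Least-squares linear map M with feats_old @ M ~ feats_new.

    Stands in for the Feature Drift Compensation Network, which is a
    learned, two-stage model; a linear map is only an illustration.
    """
    M, *_ = np.linalg.lstsq(feats_old, feats_new, rcond=None)
    return M

def correct_prototypes(prototypes_old, M):
    """Project stored old-class prototypes into the new feature space."""
    return prototypes_old @ M

rng = np.random.default_rng(0)
D = 32
true_drift = np.eye(D) + 0.1 * rng.normal(size=(D, D))
feats_old = rng.normal(size=(200, D))
feats_new = feats_old @ true_drift              # simulated drifted features
M = fit_drift_map(feats_old, feats_new)
protos = rng.normal(size=(5, D))                # 5 old-class prototypes
print(np.allclose(correct_prototypes(protos, M), protos @ true_drift,
                  atol=1e-5))
```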
2503.07170 | Ming Wang | Ming Wang, Fang Wang, Minghao Hu, Li He, Haiyang Wang, Jun Zhang,
Tianwei Yan, Li Li, Zhunchen Luo, Wei Luo, Xiaoying Bai, Guotong Geng | DeFine: A Decomposed and Fine-Grained Annotated Dataset for Long-form
Article Generation | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-form article generation (LFAG) presents challenges such as maintaining
logical consistency, comprehensive topic coverage, and narrative coherence
across extended articles. Existing datasets often lack both the hierarchical
structure and fine-grained annotation needed to effectively decompose tasks,
resulting in shallow, disorganized article generation. To address these
limitations, we introduce DeFine, a Decomposed and Fine-grained annotated
dataset for long-form article generation. DeFine is characterized by its
hierarchical decomposition strategy and the integration of domain-specific
knowledge with multi-level annotations, ensuring granular control and enhanced
depth in article generation. To construct the dataset, a multi-agent
collaborative pipeline is proposed, which systematically segments the
generation process into four parts: Data Miner, Cite Retriever, Q&A Annotator
and Data Cleaner. To validate the effectiveness of DeFine, we designed and
tested three LFAG baselines: the web retrieval, the local retrieval, and the
grounded reference. We fine-tuned the Qwen2-7b-Instruct model using the DeFine
training dataset. The experimental results showed significant improvements in
text quality, specifically in topic coverage, depth of information, and content
fidelity. Our dataset is publicly available to facilitate future research.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:48:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wang",
"Ming",
""
],
[
"Wang",
"Fang",
""
],
[
"Hu",
"Minghao",
""
],
[
"He",
"Li",
""
],
[
"Wang",
"Haiyang",
""
],
[
"Zhang",
"Jun",
""
],
[
"Yan",
"Tianwei",
""
],
[
"Li",
"Li",
""
],
[
"Luo",
"Zhunchen",
""
],
[
"Luo",
"Wei",
""
],
[
"Bai",
"Xiaoying",
""
],
[
"Geng",
"Guotong",
""
]
]
| TITLE: DeFine: A Decomposed and Fine-Grained Annotated Dataset for Long-form
Article Generation
ABSTRACT: Long-form article generation (LFAG) presents challenges such as maintaining
logical consistency, comprehensive topic coverage, and narrative coherence
across extended articles. Existing datasets often lack both the hierarchical
structure and fine-grained annotation needed to effectively decompose tasks,
resulting in shallow, disorganized article generation. To address these
limitations, we introduce DeFine, a Decomposed and Fine-grained annotated
dataset for long-form article generation. DeFine is characterized by its
hierarchical decomposition strategy and the integration of domain-specific
knowledge with multi-level annotations, ensuring granular control and enhanced
depth in article generation. To construct the dataset, a multi-agent
collaborative pipeline is proposed, which systematically segments the
generation process into four parts: Data Miner, Cite Retriever, Q&A Annotator
and Data Cleaner. To validate the effectiveness of DeFine, we designed and
tested three LFAG baselines: the web retrieval, the local retrieval, and the
grounded reference. We fine-tuned the Qwen2-7b-Instruct model using the DeFine
training dataset. The experimental results showed significant improvements in
text quality, specifically in topic coverage, depth of information, and content
fidelity. Our dataset is publicly available to facilitate future research.
| new_dataset | 0.967808 |
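A skeleton of the four-stage construction pipeline named above (Data Miner, Cite Retriever, Q&A Annotator, Data Cleaner); every stage is a stub returning toy records, whereas the real pipeline drives LLM agents.

```python
def data_miner(topic):
    """Stub: collect raw domain material for a topic (agent-driven in the
    real pipeline)."""
    return [f"raw passage about {topic}"]

def cite_retriever(passages):
    """Stub: attach citation/reference metadata to each passage."""
    return [{"text": p, "citations": ["src-1"]} for p in passages]

def qa_annotator(docs):
    """Stub: add multi-level Q&A annotations used for fine-grained control."""
    for d in docs:
        d["qa"] = [{"q": "What is the key claim?", "a": d["text"]}]
    return docs

def data_cleaner(docs):
    """Stub: drop empty or malformed records."""
    return [d for d in docs if d["text"].strip()]

def build_define_record(topic):
    return data_cleaner(qa_annotator(cite_retriever(data_miner(topic))))

print(build_define_record("long-form article generation"))
```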
2503.07173 | Kazuya Nishimura | Kazuya Nishimura, Ryoma Bise, Yasuhiro Kojima | Towards Spatial Transcriptomics-guided Pathological Image Recognition
with Batch-Agnostic Encoder | Accepted to ISBI 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial transcriptomics (ST) is a novel technique that simultaneously
captures pathological images and gene expression profiling with spatial
coordinates. Since ST is closely related to pathological features such as
disease subtypes, it may be valuable to augment image representation with
pathological information. However, there are no attempts to leverage ST for
image recognition ({\it i.e,} patch-level classification of subtypes of
pathological image.). One of the big challenges is significant batch effects in
spatial transcriptomics that make it difficult to extract pathological features
of images from ST. In this paper, we propose a batch-agnostic contrastive
learning framework that can extract consistent signals from gene expression of
ST in multiple patients. To extract consistent signals from ST, we utilize the
batch-agnostic gene encoder that is trained in a variational inference manner.
Experiments demonstrated the effectiveness of our framework on a publicly
available dataset. Code is publicly available at
https://github.com/naivete5656/TPIRBAE
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:50:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Nishimura",
"Kazuya",
""
],
[
"Bise",
"Ryoma",
""
],
[
"Kojima",
"Yasuhiro",
""
]
]
| TITLE: Towards Spatial Transcriptomics-guided Pathological Image Recognition
with Batch-Agnostic Encoder
ABSTRACT: Spatial transcriptomics (ST) is a novel technique that simultaneously
captures pathological images and gene expression profiling with spatial
coordinates. Since ST is closely related to pathological features such as
disease subtypes, it may be valuable to augment image representation with
pathological information. However, there are no attempts to leverage ST for
image recognition (i.e., patch-level classification of subtypes of
pathological images). A major challenge lies in the significant batch effects in
spatial transcriptomics that make it difficult to extract pathological features
of images from ST. In this paper, we propose a batch-agnostic contrastive
learning framework that can extract consistent signals from gene expression of
ST in multiple patients. To extract consistent signals from ST, we utilize the
batch-agnostic gene encoder that is trained in a variational inference manner.
Experiments demonstrated the effectiveness of our framework on a publicly
available dataset. Code is publicly available at
https://github.com/naivete5656/TPIRBAE
| no_new_dataset | 0.9462 |
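A sketch of a symmetric InfoNCE-style objective between matched image-patch and gene-expression embeddings, to illustrate the cross-modal contrastive learning described above; the batch-agnostic, variationally trained gene encoder itself is not modelled, and the temperature and loss form are assumptions.

```python
import numpy as np

def info_nce(img_emb, gene_emb, temperature=0.1):
    """Symmetric InfoNCE between matched image-patch and gene embeddings.

    img_emb, gene_emb: (N, D) embeddings of the same N spots; matched
    pairs are treated as positives, all other pairs as negatives.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    gene = gene_emb / np.linalg.norm(gene_emb, axis=1, keepdims=True)
    logits = img @ gene.T / temperature            # (N, N) similarity matrix
    labels = np.arange(len(img))

    def xent(l):
        z = l - l.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
print(round(info_nce(rng.normal(size=(16, 32)), rng.normal(size=(16, 32))), 3))
```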
2503.07181 | Valentin Guillaume | Maxime Maria, Valentin Guillaume, Simon Guionniere, Nicolas Dacquay,
Cyprien Plateau Holleville, Vincent Larroque, Jean Larde, Yassine Naimi, Jean
Philip Piquemal, Guillaume Levieux, Nathalie Lagarde, Stephane Merillou,
Matthieu Montes | Interactive visualization of large molecular systems with VTX: example
with a minimal whole-cell model | See Free-fly navigation of Marrink23 cell model with VTX at:
https://youtu.be/zMrAFuqxL3Y | null | null | null | physics.chem-ph physics.bio-ph q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | VTX is an open-source molecular visualization software designed to overcome
the scaling limitations of existing real-time molecular visualization software
when handling massive molecular datasets. VTX employs a meshless molecular
graphics engine utilizing impostor-based techniques and adaptive
level-of-detail (LOD) rendering. This approach significantly reduces memory
usage and enables real-time visualization and manipulation of large molecular
systems. Performance benchmarks against VMD, PyMOL, and ChimeraX using a
114-million-bead Martini minimal whole-cell model demonstrate VTX's efficiency,
maintaining consistent frame rates even under interactive manipulation on
standard computer hardware. VTX incorporates features such as screen-space
ambient occlusion (SSAO) for enhanced depth perception and free-fly navigation
for intuitive exploration of large molecular systems. VTX is open-source and
free for non-commercial use. Binaries for Windows and Ubuntu Linux are
available at \href{http://vtx.drugdesign.fr}{http://vtx.drugdesign.fr}. VTX
source code is available at
\href{https://github.com/VTX-Molecular-Visualization}{https://github.com/VTX-Molecular-Visualization}.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 10:58:28 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Maria",
"Maxime",
""
],
[
"Guillaume",
"Valentin",
""
],
[
"Guionniere",
"Simon",
""
],
[
"Dacquay",
"Nicolas",
""
],
[
"Holleville",
"Cyprien Plateau",
""
],
[
"Larroque",
"Vincent",
""
],
[
"Larde",
"Jean",
""
],
[
"Naimi",
"Yassine",
""
],
[
"Piquemal",
"Jean Philip",
""
],
[
"Levieux",
"Guillaume",
""
],
[
"Lagarde",
"Nathalie",
""
],
[
"Merillou",
"Stephane",
""
],
[
"Montes",
"Matthieu",
""
]
]
| TITLE: Interactive visualization of large molecular systems with VTX: example
with a minimal whole-cell model
ABSTRACT: VTX is an open-source molecular visualization software designed to overcome
the scaling limitations of existing real-time molecular visualization software
when handling massive molecular datasets. VTX employs a meshless molecular
graphics engine utilizing impostor-based techniques and adaptive
level-of-detail (LOD) rendering. This approach significantly reduces memory
usage and enables real-time visualization and manipulation of large molecular
systems. Performance benchmarks against VMD, PyMOL, and ChimeraX using a
114-million-bead Martini minimal whole-cell model demonstrate VTX's efficiency,
maintaining consistent frame rates even under interactive manipulation on
standard computer hardware. VTX incorporates features such as screen-space
ambient occlusion (SSAO) for enhanced depth perception and free-fly navigation
for intuitive exploration of large molecular systems. VTX is open-source and
free for non-commercial use. Binaries for Windows and Ubuntu Linux are
available at \href{http://vtx.drugdesign.fr}{http://vtx.drugdesign.fr}. VTX
source code is available at
\href{https://github.com/VTX-Molecular-Visualization}{https://github.com/VTX-Molecular-Visualization}.
| no_new_dataset | 0.948442 |
2503.07185 | Vasiliki Sideri-Lampretsa | Vasiliki Sideri-Lampretsa, Daniel Rueckert, Huaqi Qiu | Evaluation of Alignment-Regularity Characteristics in Deformable Image
Registration | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Evaluating deformable image registration (DIR) is challenging due to the
inherent trade-off between achieving high alignment accuracy and maintaining
deformation regularity. In this work, we introduce a novel evaluation scheme
based on the alignment-regularity characteristic (ARC) to systematically
capture and analyze this trade-off. We first introduce the ARC curves, which
describe the performance of a given registration algorithm as a spectrum
measured by alignment and regularity metrics. We further adopt a
HyperNetwork-based approach that learns to continuously interpolate across the
full regularization range, accelerating the construction and improving the
sample density of ARC curves. We empirically demonstrate our evaluation scheme
using representative learning-based deformable image registration methods with
various network architectures and transformation models on two public datasets.
We present a range of findings not evident from existing evaluation practices
and provide general recommendations for model evaluation and selection using
our evaluation scheme. All relevant code is made publicly available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:10:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sideri-Lampretsa",
"Vasiliki",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Qiu",
"Huaqi",
""
]
]
| TITLE: Evaluation of Alignment-Regularity Characteristics in Deformable Image
Registration
ABSTRACT: Evaluating deformable image registration (DIR) is challenging due to the
inherent trade-off between achieving high alignment accuracy and maintaining
deformation regularity. In this work, we introduce a novel evaluation scheme
based on the alignment-regularity characteristic (ARC) to systematically
capture and analyze this trade-off. We first introduce the ARC curves, which
describe the performance of a given registration algorithm as a spectrum
measured by alignment and regularity metrics. We further adopt a
HyperNetwork-based approach that learns to continuously interpolate across the
full regularization range, accelerating the construction and improving the
sample density of ARC curves. We empirically demonstrate our evaluation scheme
using representative learning-based deformable image registration methods with
various network architectures and transformation models on two public datasets.
We present a range of findings not evident from existing evaluation practices
and provide general recommendations for model evaluation and selection using
our evaluation scheme. All relevant code is made publicly available.
| no_new_dataset | 0.948394 |
2503.07190 | Tessa Pulli | Melvin Reka, Tessa Pulli, Markus Vincze | Multi-Modal 3D Mesh Reconstruction from Images and Text | under review | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | 6D object pose estimation for unseen objects is essential in robotics but
traditionally relies on trained models that require large datasets, high
computational costs, and struggle to generalize. Zero-shot approaches eliminate
the need for training but depend on pre-existing 3D object models, which are
often impractical to obtain. To address this, we propose a language-guided
few-shot 3D reconstruction method, reconstructing a 3D mesh from few input
images. The proposed pipeline receives a set of input images and a language
query. A combination of GroundingDINO and Segment Anything Model outputs
segmented masks from which a sparse point cloud is reconstructed with VGGSfM.
Subsequently, the mesh is reconstructed with the Gaussian Splatting method
SuGAR. In a final cleaning step, artifacts are removed, resulting in the final
3D mesh of the queried object. We evaluate the method in terms of accuracy and
quality of the geometry and texture. Furthermore, we study the impact of
imaging conditions such as viewing angle, number of input images, and image
overlap on 3D object reconstruction quality, efficiency, and computational
scalability.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:18:17 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Reka",
"Melvin",
""
],
[
"Pulli",
"Tessa",
""
],
[
"Vincze",
"Markus",
""
]
]
| TITLE: Multi-Modal 3D Mesh Reconstruction from Images and Text
ABSTRACT: 6D object pose estimation for unseen objects is essential in robotics but
traditionally relies on trained models that require large datasets, high
computational costs, and struggle to generalize. Zero-shot approaches eliminate
the need for training but depend on pre-existing 3D object models, which are
often impractical to obtain. To address this, we propose a language-guided
few-shot 3D reconstruction method, reconstructing a 3D mesh from few input
images. The proposed pipeline receives a set of input images and a language
query. A combination of GroundingDINO and Segment Anything Model outputs
segmented masks from which a sparse point cloud is reconstructed with VGGSfM.
Subsequently, the mesh is reconstructed with the Gaussian Splatting method
SuGAR. In a final cleaning step, artifacts are removed, resulting in the final
3D mesh of the queried object. We evaluate the method in terms of accuracy and
quality of the geometry and texture. Furthermore, we study the impact of
imaging conditions such as viewing angle, number of input images, and image
overlap on 3D object reconstruction quality, efficiency, and computational
scalability.
| no_new_dataset | 0.951504 |
2503.07195 | Lia Shahnazaryan | Lia Shahnazaryan, Patrick Simianer, Joern Wuebker | Contextual Cues in Machine Translation: Investigating the Potential of
Multi-Source Input Strategies in LLMs and NMT Systems | 11 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | We explore the impact of multi-source input strategies on machine translation
(MT) quality, comparing GPT-4o, a large language model (LLM), with a
traditional multilingual neural machine translation (NMT) system. Using
intermediate language translations as contextual cues, we evaluate their
effectiveness in enhancing English and Chinese translations into Portuguese.
Results suggest that contextual information significantly improves translation
quality for domain-specific datasets and potentially for linguistically distant
language pairs, with diminishing returns observed in benchmarks with high
linguistic variability. Additionally, we demonstrate that shallow fusion, a
multi-source approach we apply within the NMT system, shows improved results
when using high-resource languages as context for other translation pairs,
highlighting the importance of strategic context language selection.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:23:44 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Shahnazaryan",
"Lia",
""
],
[
"Simianer",
"Patrick",
""
],
[
"Wuebker",
"Joern",
""
]
]
| TITLE: Contextual Cues in Machine Translation: Investigating the Potential of
Multi-Source Input Strategies in LLMs and NMT Systems
ABSTRACT: We explore the impact of multi-source input strategies on machine translation
(MT) quality, comparing GPT-4o, a large language model (LLM), with a
traditional multilingual neural machine translation (NMT) system. Using
intermediate language translations as contextual cues, we evaluate their
effectiveness in enhancing English and Chinese translations into Portuguese.
Results suggest that contextual information significantly improves translation
quality for domain-specific datasets and potentially for linguistically distant
language pairs, with diminishing returns observed in benchmarks with high
linguistic variability. Additionally, we demonstrate that shallow fusion, a
multi-source approach we apply within the NMT system, shows improved results
when using high-resource languages as context for other translation pairs,
highlighting the importance of strategic context language selection.
| no_new_dataset | 0.951729 |
2503.07209 | Ruochen Pi | Ruochen Pi and Lianlei Shan | Synthetic Lung X-ray Generation through Cross-Attention and Affinity
Transformation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collecting and annotating medical images is a time-consuming and
resource-intensive task. However, generating synthetic data through models such
as Diffusion offers a cost-effective alternative. This paper introduces a new
method for the automatic generation of accurate semantic masks from synthetic
lung X-ray images based on a stable diffusion model trained on text-image
pairs. This method uses cross-attention mapping between text and image to
extend text-driven image synthesis to semantic mask generation. It employs
text-guided cross-attention information to identify specific areas in an image
and combines this with innovative techniques to produce high-resolution,
class-differentiated pixel masks. This approach significantly reduces the costs
associated with data collection and annotation. The experimental results
demonstrate that segmentation models trained on synthetic data generated using
the method are comparable to, and in some cases even better than, models
trained on real datasets. This shows the effectiveness of the method and its
potential to revolutionize medical image analysis.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:48:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Pi",
"Ruochen",
""
],
[
"Shan",
"Lianlei",
""
]
]
| TITLE: Synthetic Lung X-ray Generation through Cross-Attention and Affinity
Transformation
ABSTRACT: Collecting and annotating medical images is a time-consuming and
resource-intensive task. However, generating synthetic data through models such
as Diffusion offers a cost-effective alternative. This paper introduces a new
method for the automatic generation of accurate semantic masks from synthetic
lung X-ray images based on a stable diffusion model trained on text-image
pairs. This method uses cross-attention mapping between text and image to
extend text-driven image synthesis to semantic mask generation. It employs
text-guided cross-attention information to identify specific areas in an image
and combines this with innovative techniques to produce high-resolution,
class-differentiated pixel masks. This approach significantly reduces the costs
associated with data collection and annotation. The experimental results
demonstrate that segmentation models trained on synthetic data generated using
the method are comparable to, and in some cases even better than, models
trained on real datasets. This shows the effectiveness of the method and its
potential to revolutionize medical image analysis.
| no_new_dataset | 0.957358 |
2503.07214 | Jimin Sohn Ms. | Jimin Sohn, David R. Mortensen | Cross-Lingual IPA Contrastive Learning for Zero-Shot NER | 17 pages, 6 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Existing approaches to zero-shot Named Entity Recognition (NER) for
low-resource languages have primarily relied on machine translation, whereas
more recent methods have shifted focus to phonemic representation. Building
upon this, we investigate how reducing the phonemic representation gap in IPA
transcription between languages with similar phonetic characteristics enables
models trained on high-resource languages to perform effectively on
low-resource languages. In this work, we propose the CONtrastive Learning with IPA
(CONLIPA) dataset, containing IPA pairs of English and 10 high-resource languages
from 10 frequently used language families. We also propose a cross-lingual IPA
Contrastive learning method (IPAC) using the CONLIPA dataset. Furthermore, our
proposed dataset and methodology demonstrate a substantial average gain when
compared to the best performing baseline.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:52:33 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Sohn",
"Jimin",
""
],
[
"Mortensen",
"David R.",
""
]
]
| TITLE: Cross-Lingual IPA Contrastive Learning for Zero-Shot NER
ABSTRACT: Existing approaches to zero-shot Named Entity Recognition (NER) for
low-resource languages have primarily relied on machine translation, whereas
more recent methods have shifted focus to phonemic representation. Building
upon this, we investigate how reducing the phonemic representation gap in IPA
transcription between languages with similar phonetic characteristics enables
models trained on high-resource languages to perform effectively on
low-resource languages. In this work, we propose the CONtrastive Learning with IPA
(CONLIPA) dataset, containing IPA pairs of English and 10 high-resource languages
from 10 frequently used language families. We also propose a cross-lingual IPA
Contrastive learning method (IPAC) using the CONLIPA dataset. Furthermore, our
proposed dataset and methodology demonstrate a substantial average gain when
compared to the best performing baseline.
| new_dataset | 0.964656 |
2503.07215 | Peipei Liu | Peipei Liu, Jian Sun, Li Chen, Zhaoteng Yan, Peizheng Zhang, Dapeng
Sun, Dawei Wang, Dan Li | Control Flow-Augmented Decompiler based on Large Language Model | null | null | null | null | cs.SE cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary decompilation plays a crucial role in various tasks related to
security threat analysis and software engineering, such as binary vulnerability
detection and software supply chain analysis. Current prevalent binary
decompilation methods primarily rely on large language models (LLMs) and can be
broadly classified into two main approaches: prompt-based decompilation and
end-to-end decompilation. Prompt-based methods typically require significant
effort to analyze and summarize the predicted data to extract aspect-specific
expert knowledge, which is then fed into a general-purpose large language model
to address specific decompilation tasks. End-to-end methods, on the other hand,
carefully construct training datasets or neural networks to perform
post-training on general-purpose large language models, thereby obtaining
domain-specific large language models for decompiling the predicted data.
However, both existing approaches still face significant challenges, including
the absence of rich semantic representations of the input code and the neglect
of control flow information, which is crucial for accurate decompilation.
Furthermore, most current decompilation techniques are specifically tailored
for the x86 architecture, making it difficult to efficiently adapt and
generalize them to other bit width or instruction architectures. To address
these limitations, we propose a novel end-to-end decompilation LLM, CFADecLLM,
which aims to enhance existing end-to-end decompilation methods. We conduct
extensive experiments on the public dataset Humaneval and Exebench across four
optimization levels, and results demonstrate that our approach outperforms
existing methods in multiple metrics, validating its effectiveness and
superiority.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:52:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Peipei",
""
],
[
"Sun",
"Jian",
""
],
[
"Chen",
"Li",
""
],
[
"Yan",
"Zhaoteng",
""
],
[
"Zhang",
"Peizheng",
""
],
[
"Sun",
"Dapeng",
""
],
[
"Wang",
"Dawei",
""
],
[
"Li",
"Dan",
""
]
]
| TITLE: Control Flow-Augmented Decompiler based on Large Language Model
ABSTRACT: Binary decompilation plays a crucial role in various tasks related to
security threat analysis and software engineering, such as binary vulnerability
detection and software supply chain analysis. Current prevalent binary
decompilation methods primarily rely on large language models (LLMs) and can be
broadly classified into two main approaches: prompt-based decompilation and
end-to-end decompilation. Prompt-based methods typically require significant
effort to analyze and summarize the predicted data to extract aspect-specific
expert knowledge, which is then fed into a general-purpose large language model
to address specific decompilation tasks. End-to-end methods, on the other hand,
carefully construct training datasets or neural networks to perform
post-training on general-purpose large language models, thereby obtaining
domain-specific large language models for decompiling the predicted data.
However, both existing approaches still face significant challenges, including
the absence of rich semantic representations of the input code and the neglect
of control flow information, which is crucial for accurate decompilation.
Furthermore, most current decompilation techniques are specifically tailored
for the x86 architecture, making it difficult to efficiently adapt and
generalize them to other bit width or instruction architectures. To address
these limitations, we propose a novel end-to-end decompilation LLM, CFADecLLM,
which aims to enhance existing end-to-end decompilation methods. We conduct
extensive experiments on the public dataset Humaneval and Exebench across four
optimization levels, and results demonstrate that our approach outperforms
existing methods in multiple metrics, validating its effectiveness and
superiority.
| no_new_dataset | 0.940844 |
2503.07227 | Ben Jourdan | Ben Jourdan, Gregory Schwartzman, Peter Macgregor, He Sun | Coreset Spectral Clustering | null | null | null | null | cs.LG cs.DS | http://creativecommons.org/licenses/by/4.0/ | Coresets have become an invaluable tool for solving $k$-means and kernel
$k$-means clustering problems on large datasets with small numbers of clusters.
On the other hand, spectral clustering works well on sparse graphs and has
recently been extended to scale efficiently to large numbers of clusters. We
exploit the connection between kernel $k$-means and the normalised cut problem
to combine the benefits of both. Our main result is a coreset spectral
clustering algorithm for graphs that clusters a coreset graph to infer a good
labelling of the original graph. We prove that an $\alpha$-approximation for
the normalised cut problem on the coreset graph is an $O(\alpha)$-approximation
on the original. We also improve the running time of the state-of-the-art
coreset algorithm for kernel $k$-means on sparse kernels, from $\tilde{O}(nk)$
to $\tilde{O}(n\cdot \min \{k, d_{avg}\})$, where $d_{avg}$ is the average
number of non-zero entries in each row of the $n\times n$ kernel matrix. Our
experiments confirm our coreset algorithm is asymptotically faster on large
real-world graphs with many clusters, and show that our clustering algorithm
overcomes the main challenge faced by coreset kernel $k$-means on sparse
kernels which is getting stuck in local optima.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:14:02 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Jourdan",
"Ben",
""
],
[
"Schwartzman",
"Gregory",
""
],
[
"Macgregor",
"Peter",
""
],
[
"Sun",
"He",
""
]
]
| TITLE: Coreset Spectral Clustering
ABSTRACT: Coresets have become an invaluable tool for solving $k$-means and kernel
$k$-means clustering problems on large datasets with small numbers of clusters.
On the other hand, spectral clustering works well on sparse graphs and has
recently been extended to scale efficiently to large numbers of clusters. We
exploit the connection between kernel $k$-means and the normalised cut problem
to combine the benefits of both. Our main result is a coreset spectral
clustering algorithm for graphs that clusters a coreset graph to infer a good
labelling of the original graph. We prove that an $\alpha$-approximation for
the normalised cut problem on the coreset graph is an $O(\alpha)$-approximation
on the original. We also improve the running time of the state-of-the-art
coreset algorithm for kernel $k$-means on sparse kernels, from $\tilde{O}(nk)$
to $\tilde{O}(n\cdot \min \{k, d_{avg}\})$, where $d_{avg}$ is the average
number of non-zero entries in each row of the $n\times n$ kernel matrix. Our
experiments confirm our coreset algorithm is asymptotically faster on large
real-world graphs with many clusters, and show that our clustering algorithm
overcomes the main challenge faced by coreset kernel $k$-means on sparse
kernels which is getting stuck in local optima.
| no_new_dataset | 0.949435 |
2503.07234 | Haicheng Liao | Haicheng Liao, Hanlin Kong, Bonan Wang, Chengyue Wang, Wang Ye,
Zhengbing He, Chengzhong Xu, Zhenning Li | CoT-Drive: Efficient Motion Forecasting for Autonomous Driving with LLMs
and Chain-of-Thought Prompting | null | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate motion forecasting is crucial for safe autonomous driving (AD). This
study proposes CoT-Drive, a novel approach that enhances motion forecasting by
leveraging large language models (LLMs) and a chain-of-thought (CoT) prompting
method. We introduce a teacher-student knowledge distillation strategy to
effectively transfer LLMs' advanced scene understanding capabilities to
lightweight language models (LMs), ensuring that CoT-Drive operates in
real-time on edge devices while maintaining comprehensive scene understanding
and generalization capabilities. By leveraging CoT prompting techniques for
LLMs without additional training, CoT-Drive generates semantic annotations that
significantly improve the understanding of complex traffic environments,
thereby boosting the accuracy and robustness of predictions. Additionally, we
present two new scene description datasets, Highway-Text and Urban-Text,
designed for fine-tuning lightweight LMs to generate context-specific semantic
annotations. Comprehensive evaluations of five real-world datasets demonstrate
that CoT-Drive outperforms existing models, highlighting its effectiveness and
efficiency in handling complex traffic scenarios. Overall, this study is the
first to consider the practical application of LLMs in this field. It pioneers
the training and use of a lightweight LLM surrogate for motion forecasting,
setting a new benchmark and showcasing the potential of integrating LLMs into
AD systems.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:17:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liao",
"Haicheng",
""
],
[
"Kong",
"Hanlin",
""
],
[
"Wang",
"Bonan",
""
],
[
"Wang",
"Chengyue",
""
],
[
"Ye",
"Wang",
""
],
[
"He",
"Zhengbing",
""
],
[
"Xu",
"Chengzhong",
""
],
[
"Li",
"Zhenning",
""
]
]
| TITLE: CoT-Drive: Efficient Motion Forecasting for Autonomous Driving with LLMs
and Chain-of-Thought Prompting
ABSTRACT: Accurate motion forecasting is crucial for safe autonomous driving (AD). This
study proposes CoT-Drive, a novel approach that enhances motion forecasting by
leveraging large language models (LLMs) and a chain-of-thought (CoT) prompting
method. We introduce a teacher-student knowledge distillation strategy to
effectively transfer LLMs' advanced scene understanding capabilities to
lightweight language models (LMs), ensuring that CoT-Drive operates in
real-time on edge devices while maintaining comprehensive scene understanding
and generalization capabilities. By leveraging CoT prompting techniques for
LLMs without additional training, CoT-Drive generates semantic annotations that
significantly improve the understanding of complex traffic environments,
thereby boosting the accuracy and robustness of predictions. Additionally, we
present two new scene description datasets, Highway-Text and Urban-Text,
designed for fine-tuning lightweight LMs to generate context-specific semantic
annotations. Comprehensive evaluations of five real-world datasets demonstrate
that CoT-Drive outperforms existing models, highlighting its effectiveness and
efficiency in handling complex traffic scenarios. Overall, this study is the
first to consider the practical application of LLMs in this field. It pioneers
the training and use of a lightweight LLM surrogate for motion forecasting,
setting a new benchmark and showcasing the potential of integrating LLMs into
AD systems.
| new_dataset | 0.961678 |
2503.07235 | Haowen Bai | Haowen Bai, Jiangshe Zhang, Zixiang Zhao, Lilun Deng, Yukun Cui,
Shuang Xu | Retinex-MEF: Retinex-based Glare Effects Aware Unsupervised
Multi-Exposure Image Fusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-exposure image fusion consolidates multiple low dynamic range images of
the same scene into a singular high dynamic range image. Retinex theory, which
separates image illumination from scene reflectance, is naturally adopted to
ensure consistent scene representation and effective information fusion across
varied exposure levels. However, the conventional pixel-wise multiplication of
illumination and reflectance inadequately models the glare effect induced by
overexposure. To better adapt this theory for multi-exposure image fusion, we
introduce an unsupervised and controllable method
termed~\textbf{(Retinex-MEF)}. Specifically, our method decomposes
multi-exposure images into separate illumination components and a shared
reflectance component, and effectively models the glare induced by
overexposure. Employing a bidirectional loss constraint to learn the common
reflectance component, our approach effectively mitigates the glare effect.
Furthermore, we establish a controllable exposure fusion criterion, enabling
global exposure adjustments while preserving contrast, thus overcoming the
constraints of fixed-level fusion. A series of experiments across multiple
datasets, including underexposure-overexposure fusion, exposure control fusion,
and homogeneous extreme exposure fusion, demonstrate the effective
decomposition and flexible fusion capability of our model.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:19:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bai",
"Haowen",
""
],
[
"Zhang",
"Jiangshe",
""
],
[
"Zhao",
"Zixiang",
""
],
[
"Deng",
"Lilun",
""
],
[
"Cui",
"Yukun",
""
],
[
"Xu",
"Shuang",
""
]
]
| TITLE: Retinex-MEF: Retinex-based Glare Effects Aware Unsupervised
Multi-Exposure Image Fusion
ABSTRACT: Multi-exposure image fusion consolidates multiple low dynamic range images of
the same scene into a singular high dynamic range image. Retinex theory, which
separates image illumination from scene reflectance, is naturally adopted to
ensure consistent scene representation and effective information fusion across
varied exposure levels. However, the conventional pixel-wise multiplication of
illumination and reflectance inadequately models the glare effect induced by
overexposure. To better adapt this theory for multi-exposure image fusion, we
introduce an unsupervised and controllable method
termed~\textbf{(Retinex-MEF)}. Specifically, our method decomposes
multi-exposure images into separate illumination components and a shared
reflectance component, and effectively models the glare induced by
overexposure. Employing a bidirectional loss constraint to learn the common
reflectance component, our approach effectively mitigates the glare effect.
Furthermore, we establish a controllable exposure fusion criterion, enabling
global exposure adjustments while preserving contrast, thus overcoming the
constraints of fixed-level fusion. A series of experiments across multiple
datasets, including underexposure-overexposure fusion, exposure control fusion,
and homogeneous extreme exposure fusion, demonstrate the effective
decomposition and flexible fusion capability of our model.
| no_new_dataset | 0.950365 |
2503.07237 | Seyoung Song | Junyeong Park, Seogyeong Jeong, Seyoung Song, Yohan Lee, Alice Oh | LLM-C3MOD: A Human-LLM Collaborative System for Cross-Cultural Hate
Speech Moderation | Accepted to NAACL 2025 Workshop - C3NLP (Workshop on Cross-Cultural
Considerations in NLP) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Content moderation is a global challenge, yet major tech platforms prioritize
high-resource languages, leaving low-resource languages with scarce native
moderators. Since effective moderation depends on understanding contextual
cues, this imbalance increases the risk of improper moderation due to
non-native moderators' limited cultural understanding. Through a user study, we
identify that non-native moderators struggle with interpreting
culturally-specific knowledge, sentiment, and internet culture in the hate
speech moderation. To assist them, we present LLM-C3MOD, a human-LLM
collaborative pipeline with three steps: (1) RAG-enhanced cultural context
annotations; (2) initial LLM-based moderation; and (3) targeted human
moderation for cases lacking LLM consensus. Evaluated on a Korean hate speech
dataset with Indonesian and German participants, our system achieves 78%
accuracy (surpassing GPT-4o's 71% baseline), while reducing human workload by
83.6%. Notably, human moderators excel at nuanced content where LLMs struggle.
Our findings suggest that non-native moderators, when properly supported by
LLMs, can effectively contribute to cross-cultural hate speech moderation.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:20:20 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Park",
"Junyeong",
""
],
[
"Jeong",
"Seogyeong",
""
],
[
"Song",
"Seyoung",
""
],
[
"Lee",
"Yohan",
""
],
[
"Oh",
"Alice",
""
]
]
| TITLE: LLM-C3MOD: A Human-LLM Collaborative System for Cross-Cultural Hate
Speech Moderation
ABSTRACT: Content moderation is a global challenge, yet major tech platforms prioritize
high-resource languages, leaving low-resource languages with scarce native
moderators. Since effective moderation depends on understanding contextual
cues, this imbalance increases the risk of improper moderation due to
non-native moderators' limited cultural understanding. Through a user study, we
identify that non-native moderators struggle with interpreting
culturally-specific knowledge, sentiment, and internet culture in the hate
speech moderation. To assist them, we present LLM-C3MOD, a human-LLM
collaborative pipeline with three steps: (1) RAG-enhanced cultural context
annotations; (2) initial LLM-based moderation; and (3) targeted human
moderation for cases lacking LLM consensus. Evaluated on a Korean hate speech
dataset with Indonesian and German participants, our system achieves 78%
accuracy (surpassing GPT-4o's 71% baseline), while reducing human workload by
83.6%. Notably, human moderators excel at nuanced content where LLMs struggle.
Our findings suggest that non-native moderators, when properly supported by
LLMs, can effectively contribute to cross-cultural hate speech moderation.
| no_new_dataset | 0.93233 |
2503.07243 | Gangyang Li | Gangyang Li, Xiuwei Shang, Shaoyin Cheng, Junqi Zhang, Li Hu, Xu Zhu,
Weiming Zhang, Nenghai Yu | Beyond the Edge of Function: Unraveling the Patterns of Type Recovery in
Binary Code | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Type recovery is a crucial step in binary code analysis, holding significant
importance for reverse engineering and various security applications. Existing
works typically simply target type identifiers within binary code and achieve
type recovery by analyzing variable characteristics within functions. However,
we find that the types in real-world binary programs are more complex and often
follow specific distribution patterns.
In this paper, to gain a profound understanding of the variable type recovery
problem in binary code, we first conduct a comprehensive empirical study. We
utilize the TYDA dataset, which includes 163,643 binary programs across four
architectures and four compiler optimization options, fully reflecting the
complexity and diversity of real-world programs. We carefully study the unique
patterns that characterize types and variables in binary code, and also
investigate the impact of compiler optimizations on them, yielding many
valuable insights.
Based on our empirical findings, we propose ByteTR, a framework for
recovering variable types in binary code. We decouple the target type set to
address the issue of unbalanced type distribution and perform static program
analysis to tackle the impact of compiler optimizations on variable storage. In
light of the ubiquity of variable propagation across functions observed in our
study, ByteTR conducts inter-procedural analysis to trace variable propagation
and employs a gated graph neural network to capture long-range data flow
dependencies for variable type recovery. We conduct extensive experiments to
evaluate the performance of ByteTR. The results demonstrate that ByteTR leads
state-of-the-art works in both effectiveness and efficiency. Moreover, in a real
CTF challenge case, the pseudocode optimized by ByteTR significantly improves
readability, surpassing the leading tools IDA and Ghidra.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:27:05 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Gangyang",
""
],
[
"Shang",
"Xiuwei",
""
],
[
"Cheng",
"Shaoyin",
""
],
[
"Zhang",
"Junqi",
""
],
[
"Hu",
"Li",
""
],
[
"Zhu",
"Xu",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
]
| TITLE: Beyond the Edge of Function: Unraveling the Patterns of Type Recovery in
Binary Code
ABSTRACT: Type recovery is a crucial step in binary code analysis, holding significant
importance for reverse engineering and various security applications. Existing
works typically simply target type identifiers within binary code and achieve
type recovery by analyzing variable characteristics within functions. However,
we find that the types in real-world binary programs are more complex and often
follow specific distribution patterns.
In this paper, to gain a profound understanding of the variable type recovery
problem in binary code, we first conduct a comprehensive empirical study. We
utilize the TYDA dataset, which includes 163,643 binary programs across four
architectures and four compiler optimization options, fully reflecting the
complexity and diversity of real-world programs. We carefully study the unique
patterns that characterize types and variables in binary code, and also
investigate the impact of compiler optimizations on them, yielding many
valuable insights.
Based on our empirical findings, we propose ByteTR, a framework for
recovering variable types in binary code. We decouple the target type set to
address the issue of unbalanced type distribution and perform static program
analysis to tackle the impact of compiler optimizations on variable storage. In
light of the ubiquity of variable propagation across functions observed in our
study, ByteTR conducts inter-procedural analysis to trace variable propagation
and employs a gated graph neural network to capture long-range data flow
dependencies for variable type recovery. We conduct extensive experiments to
evaluate the performance of ByteTR. The results demonstrate that ByteTR leads
state-of-the-art works in both effectiveness and efficiency. Moreover, in a real
CTF challenge case, the pseudocode optimized by ByteTR significantly improves
readability, surpassing the leading tools IDA and Ghidra.
| no_new_dataset | 0.94256 |
2503.07249 | Shuyuan Zheng | Feng Huang, Shuyuan Zheng, Zhaobing Qiu, Huanxian Liu, Huanxin Bai,
Liqiong Chen | Text-IRSTD: Leveraging Semantic Text to Promote Infrared Small Target
Detection in Complex Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infrared small target detection is currently a hot and challenging task in
computer vision. Existing methods usually focus on mining visual features of
targets, which struggles to cope with complex and diverse detection scenarios.
The main reason is that infrared small targets have limited image information
on their own, thus relying only on visual features fails to discriminate
targets and interferences, leading to lower detection performance. To address
this issue, we introduce a novel approach leveraging semantic text to guide
infrared small target detection, called Text-IRSTD. It innovatively expands
classical IRSTD to text-guided IRSTD, providing a new research idea. On the one
hand, we devise a novel fuzzy semantic text prompt to accommodate ambiguous
target categories. On the other hand, we propose a progressive cross-modal
semantic interaction decoder (PCSID) to facilitate information fusion between
texts and images. In addition, we construct a new benchmark consisting of 2,755
infrared images of different scenarios with fuzzy semantic textual annotations,
called FZDT. Extensive experimental results demonstrate that our method
achieves better detection performance and target contour recovery than the
state-of-the-art methods. Moreover, proposed Text-IRSTD shows strong
generalization and wide application prospects in unseen detection scenarios.
The dataset and code will be publicly released after acceptance of this paper.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:33:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Feng",
""
],
[
"Zheng",
"Shuyuan",
""
],
[
"Qiu",
"Zhaobing",
""
],
[
"Liu",
"Huanxian",
""
],
[
"Bai",
"Huanxin",
""
],
[
"Chen",
"Liqiong",
""
]
]
| TITLE: Text-IRSTD: Leveraging Semantic Text to Promote Infrared Small Target
Detection in Complex Scenes
ABSTRACT: Infrared small target detection is currently a hot and challenging task in
computer vision. Existing methods usually focus on mining visual features of
targets, which struggles to cope with complex and diverse detection scenarios.
The main reason is that infrared small targets have limited image information
on their own, thus relying only on visual features fails to discriminate
targets and interferences, leading to lower detection performance. To address
this issue, we introduce a novel approach leveraging semantic text to guide
infrared small target detection, called Text-IRSTD. It innovatively expands
classical IRSTD to text-guided IRSTD, providing a new research idea. On the one
hand, we devise a novel fuzzy semantic text prompt to accommodate ambiguous
target categories. On the other hand, we propose a progressive cross-modal
semantic interaction decoder (PCSID) to facilitate information fusion between
texts and images. In addition, we construct a new benchmark consisting of 2,755
infrared images of different scenarios with fuzzy semantic textual annotations,
called FZDT. Extensive experimental results demonstrate that our method
achieves better detection performance and target contour recovery than the
state-of-the-art methods. Moreover, proposed Text-IRSTD shows strong
generalization and wide application prospects in unseen detection scenarios.
The dataset and code will be publicly released after acceptance of this paper.
| new_dataset | 0.959837 |
2503.07269 | Nedjma Ousidhoum | Shamsuddeen Hassan Muhammad, Nedjma Ousidhoum, Idris Abdulmumin, Seid
Muhie Yimam, Jan Philip Wahle, Terry Ruas, Meriem Beloucif, Christine De
Kock, Tadesse Destaw Belay, Ibrahim Said Ahmad, Nirmal Surange, Daniela
Teodorescu, David Ifeoluwa Adelani, Alham Fikri Aji, Felermino Ali, Vladimir
Araujo, Abinew Ali Ayele, Oana Ignat, Alexander Panchenko, Yi Zhou, Saif M.
Mohammad | SemEval-2025 Task 11: Bridging the Gap in Text-Based Emotion Detection | SemEval2025 Task11 (Task Description Paper). arXiv admin note: text
overlap with arXiv:2502.11926 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present our shared task on text-based emotion detection, covering more
than 30 languages from seven distinct language families. These languages are
predominantly low-resource and spoken across various continents. The data
instances are multi-labeled into six emotional classes, with additional
datasets in 11 languages annotated for emotion intensity. Participants were
asked to predict labels in three tracks: (a) emotion labels in monolingual
settings, (b) emotion intensity scores, and (c) emotion labels in cross-lingual
settings. The task attracted over 700 participants. We received final
submissions from more than 200 teams and 93 system description papers. We
report baseline results, as well as findings on the best-performing systems,
the most common approaches, and the most effective methods across various
tracks and languages. The datasets for this task are publicly available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 12:49:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Ousidhoum",
"Nedjma",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Yimam",
"Seid Muhie",
""
],
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Beloucif",
"Meriem",
""
],
[
"De Kock",
"Christine",
""
],
[
"Belay",
"Tadesse Destaw",
""
],
[
"Ahmad",
"Ibrahim Said",
""
],
[
"Surange",
"Nirmal",
""
],
[
"Teodorescu",
"Daniela",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Ali",
"Felermino",
""
],
[
"Araujo",
"Vladimir",
""
],
[
"Ayele",
"Abinew Ali",
""
],
[
"Ignat",
"Oana",
""
],
[
"Panchenko",
"Alexander",
""
],
[
"Zhou",
"Yi",
""
],
[
"Mohammad",
"Saif M.",
""
]
]
| TITLE: SemEval-2025 Task 11: Bridging the Gap in Text-Based Emotion Detection
ABSTRACT: We present our shared task on text-based emotion detection, covering more
than 30 languages from seven distinct language families. These languages are
predominantly low-resource and spoken across various continents. The data
instances are multi-labeled into six emotional classes, with additional
datasets in 11 languages annotated for emotion intensity. Participants were
asked to predict labels in three tracks: (a) emotion labels in monolingual
settings, (b) emotion intensity scores, and (c) emotion labels in cross-lingual
settings. The task attracted over 700 participants. We received final
submissions from more than 200 teams and 93 system description papers. We
report baseline results, as well as findings on the best-performing systems,
the most common approaches, and the most effective methods across various
tracks and languages. The datasets for this task are publicly available.
| no_new_dataset | 0.675818 |
2503.07282 | Yani Huang | Yani Huang, Richong Zhang, Zhijie Nie, Junfan Chen, Xuefeng Zhang | A Graph-based Verification Framework for Fact-Checking | 13 pages, 4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fact-checking plays a crucial role in combating misinformation. Existing
methods using large language models (LLMs) for claim decomposition face two key
limitations: (1) insufficient decomposition, introducing unnecessary complexity
to the verification process, and (2) ambiguity of mentions, leading to
incorrect verification results. To address these challenges, we suggest
introducing a claim graph consisting of triplets to address the insufficient
decomposition problem and reduce mention ambiguity through graph structure.
Based on this core idea, we propose a graph-based framework, GraphFC, for
fact-checking. The framework features three key components: graph construction,
which builds both claim and evidence graphs; graph-guided planning, which
prioritizes the triplet verification order; and graph-guided checking, which
verifies the triples one by one between claim and evidence graphs. Extensive
experiments show that GraphFC enables fine-grained decomposition while
resolving referential ambiguities through relational constraints, achieving
state-of-the-art performance across three datasets.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:02:29 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Yani",
""
],
[
"Zhang",
"Richong",
""
],
[
"Nie",
"Zhijie",
""
],
[
"Chen",
"Junfan",
""
],
[
"Zhang",
"Xuefeng",
""
]
]
| TITLE: A Graph-based Verification Framework for Fact-Checking
ABSTRACT: Fact-checking plays a crucial role in combating misinformation. Existing
methods using large language models (LLMs) for claim decomposition face two key
limitations: (1) insufficient decomposition, introducing unnecessary complexity
to the verification process, and (2) ambiguity of mentions, leading to
incorrect verification results. To address these challenges, we suggest
introducing a claim graph consisting of triplets to address the insufficient
decomposition problem and reduce mention ambiguity through graph structure.
Based on this core idea, we propose a graph-based framework, GraphFC, for
fact-checking. The framework features three key components: graph construction,
which builds both claim and evidence graphs; graph-guided planning, which
prioritizes the triplet verification order; and graph-guided checking, which
verifies the triples one by one between claim and evidence graphs. Extensive
experiments show that GraphFC enables fine-grained decomposition while
resolving referential ambiguities through relational constraints, achieving
state-of-the-art performance across three datasets.
| no_new_dataset | 0.943556 |
2503.07294 | Thomas Boucher | Thomas Boucher and Evangelos B. Mazomenos | Distilling Knowledge into Quantum Vision Transformers for Biomedical
Image Classification | Submitted for MICCAI 2025 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum vision transformers (QViTs) build on vision transformers (ViTs) by
replacing linear layers within the self-attention mechanism with parameterised
quantum neural networks (QNNs), harnessing quantum mechanical properties to
improve feature representation. This hybrid approach aims to achieve superior
performance, with significantly reduced model complexity as a result of the
enriched feature representation, requiring fewer parameters. This paper
proposes a novel QViT model for biomedical image classification and
investigates its performance against comparable ViTs across eight diverse
datasets, encompassing various modalities and classification tasks. We assess
models trained from scratch and those pre-trained using knowledge distillation
(KD) from high-quality teacher models. Our findings demonstrate that QViTs
outperform comparable ViTs with average ROC AUC (0.863 vs 0.846) and accuracy
(0.710 vs 0.687) when trained from scratch, and even compete with
state-of-the-art classical models in multiple tasks, whilst being significantly
more efficient (89% reduction in GFLOPs and 99.99% in parameter number).
Additionally, we find that QViTs and ViTs respond equally well to KD, with QViT
pre-training performance scaling with model complexity. This is the first
investigation into the efficacy of deploying QViTs with KD for computer-aided
diagnosis. Our results highlight the enormous potential of quantum machine
learning (QML) in biomedical image analysis.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:16:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Boucher",
"Thomas",
""
],
[
"Mazomenos",
"Evangelos B.",
""
]
]
| TITLE: Distilling Knowledge into Quantum Vision Transformers for Biomedical
Image Classification
ABSTRACT: Quantum vision transformers (QViTs) build on vision transformers (ViTs) by
replacing linear layers within the self-attention mechanism with parameterised
quantum neural networks (QNNs), harnessing quantum mechanical properties to
improve feature representation. This hybrid approach aims to achieve superior
performance, with significantly reduced model complexity as a result of the
enriched feature representation, requiring fewer parameters. This paper
proposes a novel QViT model for biomedical image classification and
investigates its performance against comparable ViTs across eight diverse
datasets, encompassing various modalities and classification tasks. We assess
models trained from scratch and those pre-trained using knowledge distillation
(KD) from high-quality teacher models. Our findings demonstrate that QViTs
outperform comparable ViTs with average ROC AUC (0.863 vs 0.846) and accuracy
(0.710 vs 0.687) when trained from scratch, and even compete with
state-of-the-art classical models in multiple tasks, whilst being significantly
more efficient (89% reduction in GFLOPs and 99.99% in parameter number).
Additionally, we find that QViTs and ViTs respond equally well to KD, with QViT
pre-training performance scaling with model complexity. This is the first
investigation into the efficacy of deploying QViTs with KD for computer-aided
diagnosis. Our results highlight the enormous potential of quantum machine
learning (QML) in biomedical image analysis.
| no_new_dataset | 0.947962 |
2503.07307 | Bo Huang | Bo Huang, Wenlun Xu, Qizhuo Han, Haodong Jing, Ying Li | AttenST: A Training-Free Attention-Driven Style Transfer Framework with
Pre-Trained Diffusion Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While diffusion models have achieved remarkable progress in style transfer
tasks, existing methods typically rely on fine-tuning or optimizing pre-trained
models during inference, leading to high computational costs and challenges in
balancing content preservation with style integration. To address these
limitations, we introduce AttenST, a training-free attention-driven style
transfer framework. Specifically, we propose a style-guided self-attention
mechanism that conditions self-attention on the reference style by retaining
the query of the content image while substituting its key and value with those
from the style image, enabling effective style feature integration. To mitigate
style information loss during inversion, we introduce a style-preserving
inversion strategy that refines inversion accuracy through multiple resampling
steps. Additionally, we propose a content-aware adaptive instance
normalization, which integrates content statistics into the normalization
process to optimize style fusion while mitigating the content degradation.
Furthermore, we introduce a dual-feature cross-attention mechanism to fuse
content and style features, ensuring a harmonious synthesis of structural
fidelity and stylistic expression. Extensive experiments demonstrate that
AttenST outperforms existing methods, achieving state-of-the-art performance on
the style transfer dataset.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:28:36 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Huang",
"Bo",
""
],
[
"Xu",
"Wenlun",
""
],
[
"Han",
"Qizhuo",
""
],
[
"Jing",
"Haodong",
""
],
[
"Li",
"Ying",
""
]
]
| TITLE: AttenST: A Training-Free Attention-Driven Style Transfer Framework with
Pre-Trained Diffusion Models
ABSTRACT: While diffusion models have achieved remarkable progress in style transfer
tasks, existing methods typically rely on fine-tuning or optimizing pre-trained
models during inference, leading to high computational costs and challenges in
balancing content preservation with style integration. To address these
limitations, we introduce AttenST, a training-free attention-driven style
transfer framework. Specifically, we propose a style-guided self-attention
mechanism that conditions self-attention on the reference style by retaining
the query of the content image while substituting its key and value with those
from the style image, enabling effective style feature integration. To mitigate
style information loss during inversion, we introduce a style-preserving
inversion strategy that refines inversion accuracy through multiple resampling
steps. Additionally, we propose a content-aware adaptive instance
normalization, which integrates content statistics into the normalization
process to optimize style fusion while mitigating the content degradation.
Furthermore, we introduce a dual-feature cross-attention mechanism to fuse
content and style features, ensuring a harmonious synthesis of structural
fidelity and stylistic expression. Extensive experiments demonstrate that
AttenST outperforms existing methods, achieving state-of-the-art performance on
the style transfer dataset.
| no_new_dataset | 0.943815 |
2503.07313 | Aeysha Bhatti | Aeysha Bhatti, Trudie Sandrock, Johane Nienkemper-Swanepoel | The influence of missing data mechanisms and simple missing data
handling techniques on fairness | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | Fairness of machine learning algorithms is receiving increasing attention, as
such algorithms permeate the day-to-day aspects of our lives. One way in which
bias can manifest in a dataset is through missing values. If data are missing,
these data are often assumed to be missing completely randomly; in reality the
propensity of data being missing is often tied to the demographic
characteristics of individuals. There is limited research into how missing
values and the handling thereof can impact the fairness of an algorithm. Most
researchers either apply listwise deletion or tend to use the simpler methods
of imputation (e.g. mean or mode) compared to the more advanced ones (e.g.
multiple imputation); we therefore study the impact of the simpler methods on
the fairness of algorithms. The starting point of the study is the mechanism of
missingness, leading into how the missing data are processed and finally how
this impacts fairness. Three popular datasets in the field of fairness are
amputed in a simulation study. The results show that under certain scenarios
the impact on fairness can be pronounced when the missingness mechanism is
missing at random. Furthermore, elementary missing data handling techniques
like listwise deletion and mode imputation can lead to higher fairness compared
to more complex imputation methods like k-nearest neighbour imputation, albeit
often at the cost of lower accuracy.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:32:25 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Bhatti",
"Aeysha",
""
],
[
"Sandrock",
"Trudie",
""
],
[
"Nienkemper-Swanepoel",
"Johane",
""
]
]
| TITLE: The influence of missing data mechanisms and simple missing data
handling techniques on fairness
ABSTRACT: Fairness of machine learning algorithms is receiving increasing attention, as
such algorithms permeate the day-to-day aspects of our lives. One way in which
bias can manifest in a dataset is through missing values. If data are missing,
these data are often assumed to be missing completely randomly; in reality the
propensity of data being missing is often tied to the demographic
characteristics of individuals. There is limited research into how missing
values and the handling thereof can impact the fairness of an algorithm. Most
researchers either apply listwise deletion or tend to use the simpler methods
of imputation (e.g. mean or mode) compared to the more advanced ones (e.g.
multiple imputation); we therefore study the impact of the simpler methods on
the fairness of algorithms. The starting point of the study is the mechanism of
missingness, leading into how the missing data are processed and finally how
this impacts fairness. Three popular datasets in the field of fairness are
amputed in a simulation study. The results show that under certain scenarios
the impact on fairness can be pronounced when the missingness mechanism is
missing at random. Furthermore, elementary missing data handling techniques
like listwise deletion and mode imputation can lead to higher fairness compared
to more complex imputation methods like k-nearest neighbour imputation, albeit
often at the cost of lower accuracy.
| no_new_dataset | 0.942718 |