id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2503.12383 | Songen Gu | Songen Gu, Haoxuan Song, Binjie Liu, Qian Yu, Sanyi Zhang, Haiyong
Jiang, Jin Huang, Feng Tian | VRsketch2Gaussian: 3D VR Sketch Guided 3D Object Generation with
Gaussian Splatting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose VRSketch2Gaussian, the first VR sketch-guided, multi-modal, native
3D object generation framework that incorporates a 3D Gaussian Splatting
representation. As part of our work, we introduce VRSS, the first large-scale
paired dataset containing VR sketches, text, images, and 3DGS, bridging the gap
in multi-modal VR sketch-based generation. Our approach features the following
key innovations: 1) Sketch-CLIP feature alignment. We propose a two-stage
alignment strategy that bridges the domain gap between sparse VR sketch
embeddings and rich CLIP embeddings, facilitating both VR sketch-based
retrieval and generation tasks. 2) Fine-Grained multi-modal conditioning. We
disentangle the 3D generation process by using explicit VR sketches for
geometric conditioning and text descriptions for appearance control. To
facilitate this, we propose a generalizable VR sketch encoder that effectively
aligns different modalities. 3) Efficient and high-fidelity 3D native
generation. Our method leverages a 3D-native generation approach that enables
fast and texture-rich 3D object synthesis. Experiments conducted on our VRSS
dataset demonstrate that our method achieves high-quality, multi-modal VR
sketch-based 3D generation. We believe our VRSS dataset and VRsketch2Gaussian
method will be beneficial for the 3D generation community.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 07:03:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Gu",
"Songen",
""
],
[
"Song",
"Haoxuan",
""
],
[
"Liu",
"Binjie",
""
],
[
"Yu",
"Qian",
""
],
[
"Zhang",
"Sanyi",
""
],
[
"Jiang",
"Haiyong",
""
],
[
"Huang",
"Jin",
""
],
[
"Tian",
"Feng",
""
]
] | TITLE: VRsketch2Gaussian: 3D VR Sketch Guided 3D Object Generation with
Gaussian Splatting
ABSTRACT: We propose VRSketch2Gaussian, the first VR sketch-guided, multi-modal, native
3D object generation framework that incorporates a 3D Gaussian Splatting
representation. As part of our work, we introduce VRSS, the first large-scale
paired dataset containing VR sketches, text, images, and 3DGS, bridging the gap
in multi-modal VR sketch-based generation. Our approach features the following
key innovations: 1) Sketch-CLIP feature alignment. We propose a two-stage
alignment strategy that bridges the domain gap between sparse VR sketch
embeddings and rich CLIP embeddings, facilitating both VR sketch-based
retrieval and generation tasks. 2) Fine-Grained multi-modal conditioning. We
disentangle the 3D generation process by using explicit VR sketches for
geometric conditioning and text descriptions for appearance control. To
facilitate this, we propose a generalizable VR sketch encoder that effectively
aligns different modalities. 3) Efficient and high-fidelity 3D native
generation. Our method leverages a 3D-native generation approach that enables
fast and texture-rich 3D object synthesis. Experiments conducted on our VRSS
dataset demonstrate that our method achieves high-quality, multi-modal VR
sketch-based 3D generation. We believe our VRSS dataset and VRsketch2Gaussian
method will be beneficial for the 3D generation community.
|
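The Sketch-CLIP feature alignment described in the record above maps naturally onto a symmetric contrastive (InfoNCE-style) objective. The paper's own training code is not included here, so the following is only an illustrative sketch under assumed names (`sketch_embeds`, `clip_embeds`, temperature `tau`); it aligns sparse VR-sketch embeddings with their paired CLIP embeddings by treating matching pairs in a batch as positives.

```python
import torch
import torch.nn.functional as F

def sketch_clip_alignment_loss(sketch_embeds: torch.Tensor,
                               clip_embeds: torch.Tensor,
                               tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between paired sketch and CLIP embeddings.

    sketch_embeds, clip_embeds: (B, D) tensors where row i of each is a pair.
    """
    s = F.normalize(sketch_embeds, dim=-1)
    c = F.normalize(clip_embeds, dim=-1)
    logits = s @ c.t() / tau                      # (B, B) cosine similarities
    targets = torch.arange(s.size(0), device=s.device)
    # Matching (i, i) pairs are positives; every other pair is a negative.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Random features stand in for the (hypothetical) encoder outputs.
    sketch = torch.randn(8, 512)
    clip = torch.randn(8, 512)
    print(sketch_clip_alignment_loss(sketch, clip).item())
```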
2503.12385 | Yutao Hu | Yutao Hu, Sen Li, Jincheng Yan, Wenqi Shao, Xiaoyan Luo | Car-1000: A New Large Scale Fine-Grained Visual Categorization Dataset | accepted to The Eleventh Workshop on Fine-Grained Visual
Categorization in CVPR 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Fine-grained visual categorization (FGVC) is a challenging but significant
task in computer vision, which aims to recognize different sub-categories of
birds, cars, airplanes, etc. Among them, recognizing models of different cars
has significant application value in autonomous driving, traffic surveillance
and scene understanding, which has received considerable attention in the past
few years. However, Stanford-Car, the most widely used fine-grained dataset for
car recognition, only has 196 different categories and only includes vehicle
models produced earlier than 2013. Due to the rapid advancements in the
automotive industry during recent years, the appearances of various car models
have become increasingly intricate and sophisticated. Consequently, the
previous Stanford-Car dataset fails to capture this evolving landscape and
cannot satisfy the requirements of the automotive industry. To address these
challenges, in our paper, we introduce Car-1000, a large-scale dataset designed
specifically for fine-grained visual categorization of diverse car models.
Car-1000 encompasses vehicles from 165 different automakers, spanning a wide
range of 1000 distinct car models. Additionally, we have reproduced several
state-of-the-art FGVC methods on the Car-1000 dataset, establishing a new
benchmark for research in this field. We hope that our work will offer a fresh
perspective for future FGVC researchers. Our dataset is available at
https://github.com/toggle1995/Car-1000.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 07:14:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Hu",
"Yutao",
""
],
[
"Li",
"Sen",
""
],
[
"Yan",
"Jincheng",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Luo",
"Xiaoyan",
""
]
] | TITLE: Car-1000: A New Large Scale Fine-Grained Visual Categorization Dataset
ABSTRACT: Fine-grained visual categorization (FGVC) is a challenging but significant
task in computer vision, which aims to recognize different sub-categories of
birds, cars, airplanes, etc. Among them, recognizing models of different cars
has significant application value in autonomous driving, traffic surveillance
and scene understanding, which has received considerable attention in the past
few years. However, Stanford-Car, the most widely used fine-grained dataset for
car recognition, only has 196 different categories and only includes vehicle
models produced earlier than 2013. Due to the rapid advancements in the
automotive industry during recent years, the appearances of various car models
have become increasingly intricate and sophisticated. Consequently, the
previous Stanford-Car dataset fails to capture this evolving landscape and
cannot satisfy the requirements of the automotive industry. To address these
challenges, in our paper, we introduce Car-1000, a large-scale dataset designed
specifically for fine-grained visual categorization of diverse car models.
Car-1000 encompasses vehicles from 165 different automakers, spanning a wide
range of 1000 distinct car models. Additionally, we have reproduced several
state-of-the-art FGVC methods on the Car-1000 dataset, establishing a new
benchmark for research in this field. We hope that our work will offer a fresh
perspective for future FGVC researchers. Our dataset is available at
https://github.com/toggle1995/Car-1000.
|
2503.12387 | Yanpeng Jia | Yanpeng Jia, Shiyi Wang, Shiliang Shao, Yue Wang, Fu Zhang, and Ting
Wang | M2UD: A Multi-model, Multi-scenario, Uneven-terrain Dataset for Ground
Robot with Localization and Mapping Evaluation | 18 pages, 12 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Ground robots play a crucial role in inspection, exploration, rescue, and
other applications. In recent years, advancements in LiDAR technology have made
sensors more accurate, lightweight, and cost-effective. Therefore, researchers
increasingly integrate sensors for SLAM studies, providing robust technical
support for ground robots and expanding their application domains. Public
datasets are essential for advancing SLAM technology. However, existing
datasets for ground robots are typically restricted to flat-terrain motion with
3 DOF and cover only a limited range of scenarios. Although handheld devices
and UAVs exhibit richer and more aggressive movements, their datasets are
predominantly confined to small-scale environments due to endurance
limitations. To fill this gap, we introduce M2UD, a multi-modal,
multi-scenario, uneven-terrain SLAM dataset for ground robots. This dataset
contains a diverse range of highly challenging environments, including cities,
open fields, long corridors, and mixed scenarios. Additionally, it presents
extreme weather conditions. The aggressive motion and degradation
characteristics of this dataset not only pose challenges for testing and
evaluating existing SLAM methods but also advance the development of more
advanced SLAM algorithms. To benchmark SLAM algorithms, M2UD provides smoothed
ground truth localization data obtained via RTK and introduces a novel
localization evaluation metric that considers both accuracy and efficiency.
Additionally, we utilize a high-precision laser scanner to acquire ground truth
maps of two representative scenes, facilitating the development and evaluation
of mapping algorithms. We select 12 localization sequences and 2 mapping
sequences to evaluate several classical SLAM algorithms, verifying usability of
the dataset. To enhance usability, the dataset is accompanied by a suite of
development kits.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 07:16:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jia",
"Yanpeng",
""
],
[
"Wang",
"Shiyi",
""
],
[
"Shao",
"Shiliang",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhang",
"Fu",
""
],
[
"Wang",
"Ting",
""
]
] | TITLE: M2UD: A Multi-model, Multi-scenario, Uneven-terrain Dataset for Ground
Robot with Localization and Mapping Evaluation
ABSTRACT: Ground robots play a crucial role in inspection, exploration, rescue, and
other applications. In recent years, advancements in LiDAR technology have made
sensors more accurate, lightweight, and cost-effective. Therefore, researchers
increasingly integrate sensors for SLAM studies, providing robust technical
support for ground robots and expanding their application domains. Public
datasets are essential for advancing SLAM technology. However, existing
datasets for ground robots are typically restricted to flat-terrain motion with
3 DOF and cover only a limited range of scenarios. Although handheld devices
and UAVs exhibit richer and more aggressive movements, their datasets are
predominantly confined to small-scale environments due to endurance
limitations. To fill this gap, we introduce M2UD, a multi-modal,
multi-scenario, uneven-terrain SLAM dataset for ground robots. This dataset
contains a diverse range of highly challenging environments, including cities,
open fields, long corridors, and mixed scenarios. Additionally, it presents
extreme weather conditions. The aggressive motion and degradation
characteristics of this dataset not only pose challenges for testing and
evaluating existing SLAM methods but also advance the development of more
advanced SLAM algorithms. To benchmark SLAM algorithms, M2UD provides smoothed
ground truth localization data obtained via RTK and introduces a novel
localization evaluation metric that considers both accuracy and efficiency.
Additionally, we utilize a high-precision laser scanner to acquire ground truth
maps of two representative scenes, facilitating the development and evaluation
of mapping algorithms. We select 12 localization sequences and 2 mapping
sequences to evaluate several classical SLAM algorithms, verifying usability of
the dataset. To enhance usability, the dataset is accompanied by a suite of
development kits.
|
2503.12404 | Jianhao Yang | Jianhao Yang, Wenshuo Yu, Yuanchao Lv, Jiance Sun, Bokang Sun and
Mingyang Liu | SAM2-ELNet: Label Enhancement and Automatic Annotation for Remote
Sensing Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing image segmentation is crucial for environmental monitoring,
disaster assessment, and resource management, directly affecting the accuracy
and efficiency of surface information extraction. The performance of existing
supervised models in remote sensing image segmentation tasks highly depends on
the quality of label data. However, current label data mainly relies on manual
annotation, which comes with high time costs and is subject to subjective
interference, resulting in distortion of label boundaries and often a loss of
detail. To solve the above problems, our work proposes an Edge-enhanced
Labeling Network, called SAM2-ELNet, which incorporates a labeling module and
an edge attention mechanism. This model effectively addresses issues such as
label detail loss, fragmentation, and inaccurate boundaries. Due to the
scarcity of manually annotated remote sensing data, the feature extraction
capabilities of traditional neural networks are limited. Our method uses the
Hiera backbone of the pre-trained self-supervised large model segment anything
model 2 (SAM2) as the encoder and achieves high-quality and efficient feature
extraction even with small samples by fine-tuning on downstream tasks. This
study compared the training effects of original and enhanced labels on the
manually annotated Deep-SAR Oil Spill (SOS) dataset. Results showed that the
model trained with enhanced labels performed better and had a lower final loss,
indicating closer alignment with the real data distribution. Our work also
explores the potential of extending the model into an efficient automatic
annotation framework through generalization experiments, facilitating
large-scale remote sensing image interpretation and intelligent recognition.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 08:11:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Jianhao",
""
],
[
"Yu",
"Wenshuo",
""
],
[
"Lv",
"Yuanchao",
""
],
[
"Sun",
"Jiance",
""
],
[
"Sun",
"Bokang",
""
],
[
"Liu",
"Mingyang",
""
]
] | TITLE: SAM2-ELNet: Label Enhancement and Automatic Annotation for Remote
Sensing Segmentation
ABSTRACT: Remote sensing image segmentation is crucial for environmental monitoring,
disaster assessment, and resource management, directly affecting the accuracy
and efficiency of surface information extraction. The performance of existing
supervised models in remote sensing image segmentation tasks highly depends on
the quality of label data. However, current label data mainly relies on manual
annotation, which comes with high time costs and is subject to subjective
interference, resulting in distortion of label boundaries and often a loss of
detail. To solve the above problems, our work proposes an Edge-enhanced
Labeling Network, called SAM2-ELNet, which incorporates a labeling module and
an edge attention mechanism. This model effectively addresses issues such as
label detail loss, fragmentation, and inaccurate boundaries. Due to the
scarcity of manually annotated remote sensing data, the feature extraction
capabilities of traditional neural networks are limited. Our method uses the
Hiera backbone of the pre-trained self-supervised large model segment anything
model 2 (SAM2) as the encoder and achieves high-quality and efficient feature
extraction even with small samples by fine-tuning on downstream tasks. This
study compared the training effects of original and enhanced labels on the
manually annotated Deep-SAR Oil Spill (SOS) dataset. Results showed that the
model trained with enhanced labels performed better and had a lower final loss,
indicating closer alignment with the real data distribution. Our work also
explores the potential of extending the model into an efficient automatic
annotation framework through generalization experiments, facilitating
large-scale remote sensing image interpretation and intelligent recognition.
|
2503.12418 | Shuo Gao | Shuo Gao, Jingyang Zhang, Jun Xue, Meng Yang, Yang Chen, and Guangquan
Zhou | A Causality-Inspired Model for Intima-Media Thickening Assessment in
Ultrasound Videos | 10 pages, 5 figures, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Carotid atherosclerosis represents a significant health risk, with its early
diagnosis primarily dependent on ultrasound-based assessments of carotid
intima-media thickening. However, during carotid ultrasound screening,
significant view variations cause style shifts, impairing content cues related
to thickening, such as lumen anatomy, which introduces spurious correlations
that hinder assessment. Therefore, we propose a novel causal-inspired method
for assessing carotid intima-media thickening in frame-wise ultrasound videos,
which focuses on two aspects: eliminating spurious correlations caused by style
and enhancing causal content correlations. Specifically, we introduce a novel
Spurious Correlation Elimination (SCE) module to remove non-causal style
effects by enforcing prediction invariance with style perturbations.
Simultaneously, we propose a Causal Equivalence Consolidation (CEC) module to
strengthen causal content correlation through adversarial optimization during
content randomization. Simultaneously, we design a Causal Transition
Augmentation (CTA) module to ensure smooth causal flow by integrating an
auxiliary pathway with text prompts and connecting it through contrastive
learning. The experimental results on our in-house carotid ultrasound video
dataset achieved an accuracy of 86.93%, demonstrating the superior performance
of the proposed method. Code is available at
https://github.com/xielaobanyy/causal-imt.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 09:07:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Gao",
"Shuo",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Xue",
"Jun",
""
],
[
"Yang",
"Meng",
""
],
[
"Chen",
"Yang",
""
],
[
"Zhou",
"Guangquan",
""
]
] | TITLE: A Causality-Inspired Model for Intima-Media Thickening Assessment in
Ultrasound Videos
ABSTRACT: Carotid atherosclerosis represents a significant health risk, with its early
diagnosis primarily dependent on ultrasound-based assessments of carotid
intima-media thickening. However, during carotid ultrasound screening,
significant view variations cause style shifts, impairing content cues related
to thickening, such as lumen anatomy, which introduces spurious correlations
that hinder assessment. Therefore, we propose a novel causal-inspired method
for assessing carotid intima-media thickening in frame-wise ultrasound videos,
which focuses on two aspects: eliminating spurious correlations caused by style
and enhancing causal content correlations. Specifically, we introduce a novel
Spurious Correlation Elimination (SCE) module to remove non-causal style
effects by enforcing prediction invariance with style perturbations.
Simultaneously, we propose a Causal Equivalence Consolidation (CEC) module to
strengthen causal content correlation through adversarial optimization during
content randomization. Simultaneously, we design a Causal Transition
Augmentation (CTA) module to ensure smooth causal flow by integrating an
auxiliary pathway with text prompts and connecting it through contrastive
learning. The experimental results on our in-house carotid ultrasound video
dataset achieved an accuracy of 86.93%, demonstrating the superior performance
of the proposed method. Code is available at
https://github.com/xielaobanyy/causal-imt.
|
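The Spurious Correlation Elimination idea in the record above, keeping predictions invariant when only style is perturbed, can be pictured with a simple consistency term. This is not the authors' implementation; it is a minimal sketch assuming a generic `model`, a `style_perturb` augmentation, and a consistency weight `lam`.

```python
import torch
import torch.nn.functional as F

def style_invariance_loss(model, frames, labels, style_perturb, lam=1.0):
    """Cross-entropy on clean frames plus a consistency penalty that ties the
    prediction on style-perturbed frames to the clean prediction."""
    logits_clean = model(frames)
    logits_pert = model(style_perturb(frames))
    task_loss = F.cross_entropy(logits_clean, labels)
    # KL divergence: the prediction should not move when only style changes.
    consistency = F.kl_div(F.log_softmax(logits_pert, dim=-1),
                           F.softmax(logits_clean, dim=-1).detach(),
                           reduction="batchmean")
    return task_loss + lam * consistency

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
    frames = torch.randn(4, 3, 32, 32)                # stand-in ultrasound frames
    labels = torch.randint(0, 2, (4,))
    jitter = lambda x: x + 0.1 * torch.randn_like(x)  # crude "style" perturbation
    print(style_invariance_loss(model, frames, labels, jitter).item())
```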
2503.12419 | Kailun Yang | Luming Wang, Hao Shi, Xiaoting Yin, Kailun Yang, Kaiwei Wang | EgoEvGesture: Gesture Recognition Based on Egocentric Event Camera | The dataset and models are made publicly available at
https://github.com/3190105222/EgoEv_Gesture | null | null | null | cs.CV cs.RO eess.IV physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric gesture recognition is a pivotal technology for enhancing natural
human-computer interaction, yet traditional RGB-based solutions suffer from
motion blur and illumination variations in dynamic scenarios. While event
cameras show distinct advantages in handling high dynamic range with ultra-low
power consumption, existing RGB-based architectures face inherent limitations
in processing asynchronous event streams due to their synchronous frame-based
nature. Moreover, from an egocentric perspective, event cameras record data
that include events generated by both head movements and hand gestures, thereby
increasing the complexity of gesture recognition. To address this, we propose a
novel network architecture specifically designed for event data processing,
incorporating (1) a lightweight CNN with asymmetric depthwise convolutions to
reduce parameters while preserving spatiotemporal features, (2) a plug-and-play
state-space model as context block that decouples head movement noise from
gesture dynamics, and (3) a parameter-free Bins-Temporal Shift Module (BSTM)
that shifts features along bins and temporal dimensions to fuse sparse events
efficiently. We further build the EgoEvGesture dataset, the first large-scale
dataset for egocentric gesture recognition using event cameras. Experimental
results demonstrate that our method achieves 62.7% accuracy in heterogeneous
testing with only 7M parameters, 3.1% higher than state-of-the-art approaches.
Notable misclassifications in freestyle motions stem from high inter-personal
variability and unseen test patterns differing from training data. Moreover,
our approach achieved a remarkable accuracy of 96.97% on DVS128 Gesture,
demonstrating strong cross-dataset generalization capability. The dataset and
models are made publicly available at
https://github.com/3190105222/EgoEv_Gesture.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 09:08:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Luming",
""
],
[
"Shi",
"Hao",
""
],
[
"Yin",
"Xiaoting",
""
],
[
"Yang",
"Kailun",
""
],
[
"Wang",
"Kaiwei",
""
]
] | TITLE: EgoEvGesture: Gesture Recognition Based on Egocentric Event Camera
ABSTRACT: Egocentric gesture recognition is a pivotal technology for enhancing natural
human-computer interaction, yet traditional RGB-based solutions suffer from
motion blur and illumination variations in dynamic scenarios. While event
cameras show distinct advantages in handling high dynamic range with ultra-low
power consumption, existing RGB-based architectures face inherent limitations
in processing asynchronous event streams due to their synchronous frame-based
nature. Moreover, from an egocentric perspective, event cameras record data
that include events generated by both head movements and hand gestures, thereby
increasing the complexity of gesture recognition. To address this, we propose a
novel network architecture specifically designed for event data processing,
incorporating (1) a lightweight CNN with asymmetric depthwise convolutions to
reduce parameters while preserving spatiotemporal features, (2) a plug-and-play
state-space model as context block that decouples head movement noise from
gesture dynamics, and (3) a parameter-free Bins-Temporal Shift Module (BSTM)
that shifts features along bins and temporal dimensions to fuse sparse events
efficiently. We further build the EgoEvGesture dataset, the first large-scale
dataset for egocentric gesture recognition using event cameras. Experimental
results demonstrate that our method achieves 62.7% accuracy in heterogeneous
testing with only 7M parameters, 3.1% higher than state-of-the-art approaches.
Notable misclassifications in freestyle motions stem from high inter-personal
variability and unseen test patterns differing from training data. Moreover,
our approach achieved a remarkable accuracy of 96.97% on DVS128 Gesture,
demonstrating strong cross-dataset generalization capability. The dataset and
models are made publicly available at
https://github.com/3190105222/EgoEv_Gesture.
|
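The Bins-Temporal Shift Module in the record above is described as parameter-free: it mixes information by shifting a fraction of channels along the bin and temporal axes, in the spirit of TSM-style shifts. The sketch below is an assumption about how such a shift could look; the actual layout of the authors' event feature tensors may differ.

```python
import torch

def bins_temporal_shift(x: torch.Tensor, frac: int = 8) -> torch.Tensor:
    """Parameter-free shift over an event feature tensor of shape (B, T, K, C):
    batch, temporal steps, event bins, channels. 1/frac of the channels are
    shifted forward in time, 1/frac backward, and another 2/frac along bins."""
    out = torch.zeros_like(x)
    c = x.size(-1)
    s = c // frac
    out[:, 1:, :, :s] = x[:, :-1, :, :s]                     # forward in time
    out[:, :-1, :, s:2 * s] = x[:, 1:, :, s:2 * s]           # backward in time
    out[:, :, 1:, 2 * s:3 * s] = x[:, :, :-1, 2 * s:3 * s]   # forward along bins
    out[:, :, :-1, 3 * s:4 * s] = x[:, :, 1:, 3 * s:4 * s]   # backward along bins
    out[..., 4 * s:] = x[..., 4 * s:]                        # rest left untouched
    return out

if __name__ == "__main__":
    feats = torch.randn(2, 6, 5, 32)   # hypothetical (batch, time, bins, channels)
    print(bins_temporal_shift(feats).shape)
```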
2503.12427 | Bocheng Wang | Bocheng Wang, Chusheng Zeng, Mulin Chen, Xuelong Li | Towards Learnable Anchor for Deep Multi-View Clustering | Accepted by AAAI25 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep multi-view clustering incorporating graph learning has presented
tremendous potential. Most methods encounter costly quadratic time consumption
w.r.t. data size. Theoretically, anchor-based graph learning can alleviate this
limitation, but related deep models mainly rely on manual discretization
approaches to select anchors, which indicates that 1) the anchors are fixed
during model training and 2) they may deviate from the true cluster
distribution. Consequently, the unreliable anchors may corrupt clustering
results. In this paper, we propose the Deep Multi-view Anchor Clustering (DMAC)
model that performs clustering in linear time. Concretely, the initial anchors
are intervened by the positive-incentive noise sampled from Gaussian
distribution, such that they can be optimized with a newly designed anchor
learning loss, which promotes a clear relationship between samples and anchors.
Afterwards, anchor graph convolution is devised to model the cluster structure
formed by the anchors, and the mutual information maximization loss is built to
provide cross-view clustering guidance. In this way, the learned anchors can
better represent clusters. With the optimal anchors, the full sample graph is
calculated to derive a discriminative embedding for clustering. Extensive
experiments on several datasets demonstrate the superior performance and
efficiency of DMAC compared to state-of-the-art competitors.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 09:38:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Bocheng",
""
],
[
"Zeng",
"Chusheng",
""
],
[
"Chen",
"Mulin",
""
],
[
"Li",
"Xuelong",
""
]
] | TITLE: Towards Learnable Anchor for Deep Multi-View Clustering
ABSTRACT: Deep multi-view clustering incorporating graph learning has presented
tremendous potential. Most methods encounter costly quadratic time consumption
w.r.t. data size. Theoretically, anchor-based graph learning can alleviate this
limitation, but related deep models mainly rely on manual discretization
approaches to select anchors, which indicates that 1) the anchors are fixed
during model training and 2) they may deviate from the true cluster
distribution. Consequently, the unreliable anchors may corrupt clustering
results. In this paper, we propose the Deep Multi-view Anchor Clustering (DMAC)
model that performs clustering in linear time. Concretely, the initial anchors
are intervened by the positive-incentive noise sampled from Gaussian
distribution, such that they can be optimized with a newly designed anchor
learning loss, which promotes a clear relationship between samples and anchors.
Afterwards, anchor graph convolution is devised to model the cluster structure
formed by the anchors, and the mutual information maximization loss is built to
provide cross-view clustering guidance. In this way, the learned anchors can
better represent clusters. With the optimal anchors, the full sample graph is
calculated to derive a discriminative embedding for clustering. Extensive
experiments on several datasets demonstrate the superior performance and
efficiency of DMAC compared to state-of-the-art competitors.
|
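The reason anchor-based graph learning in the record above scales linearly is that the full n x n sample graph is never materialized directly; it is factored through an n x m sample-anchor affinity with m much smaller than n. The snippet below illustrates that standard construction; it is a generic sketch, not the DMAC model itself, which additionally learns the anchors with positive-incentive noise and an anchor loss.

```python
import numpy as np

def anchor_affinity(X: np.ndarray, anchors: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Row-normalized Gaussian affinities Z (n x m) between samples and anchors."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # (n, m) squared dists
    Z = np.exp(-d2 / (2.0 * sigma ** 2))
    return Z / Z.sum(axis=1, keepdims=True)

def anchor_graph_embedding(Z: np.ndarray, k: int) -> np.ndarray:
    """Spectral embedding of the implicit graph S = Z diag(Z^T 1)^{-1} Z^T,
    computed without ever forming the n x n matrix (cost linear in n)."""
    col = Z.sum(axis=0)                      # anchor degrees
    A = Z / np.sqrt(col)                     # n x m factor, so S = A A^T
    # Left singular vectors of A are the eigenvectors of S.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k]                          # n x k embedding for k-means, etc.

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))          # toy single-view features
    anchors = X[rng.choice(1000, 32, replace=False)]
    emb = anchor_graph_embedding(anchor_affinity(X, anchors), k=4)
    print(emb.shape)
```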
2503.12434 | Shangheng Du | Shangheng Du, Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xin Jiang, Yanhong
Bai, Liang He | A Survey on the Optimization of Large Language Model-based Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of Large Language Models (LLMs), LLM-based agents
have been widely adopted in various fields, becoming essential for autonomous
decision-making and interactive tasks. However, current work typically relies
on prompt design or fine-tuning strategies applied to vanilla LLMs, which often
leads to limited effectiveness or suboptimal performance in complex
agent-related environments. Although LLM optimization techniques can improve
model performance across many general tasks, they lack specialized optimization
towards critical agent functionalities such as long-term planning, dynamic
environmental interaction, and complex decision-making. Although numerous
recent studies have explored various strategies to optimize LLM-based agents
for complex agent tasks, a systematic review summarizing and comparing these
methods from a holistic perspective is still lacking. In this survey, we
provide a comprehensive review of LLM-based agent optimization approaches,
categorizing them into parameter-driven and parameter-free methods. We first
focus on parameter-driven optimization, covering fine-tuning-based
optimization, reinforcement learning-based optimization, and hybrid strategies,
analyzing key aspects such as trajectory data construction, fine-tuning
techniques, reward function design, and optimization algorithms. Additionally,
we briefly discuss parameter-free strategies that optimize agent behavior
through prompt engineering and external knowledge retrieval. Finally, we
summarize the datasets and benchmarks used for evaluation and tuning, review
key applications of LLM-based agents, and discuss major challenges and
promising future directions. Our repository for related references is available
at https://github.com/YoungDubbyDu/LLM-Agent-Optimization.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 10:09:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Du",
"Shangheng",
""
],
[
"Zhao",
"Jiabao",
""
],
[
"Shi",
"Jinxin",
""
],
[
"Xie",
"Zhentao",
""
],
[
"Jiang",
"Xin",
""
],
[
"Bai",
"Yanhong",
""
],
[
"He",
"Liang",
""
]
] | TITLE: A Survey on the Optimization of Large Language Model-based Agents
ABSTRACT: With the rapid development of Large Language Models (LLMs), LLM-based agents
have been widely adopted in various fields, becoming essential for autonomous
decision-making and interactive tasks. However, current work typically relies
on prompt design or fine-tuning strategies applied to vanilla LLMs, which often
leads to limited effectiveness or suboptimal performance in complex
agent-related environments. Although LLM optimization techniques can improve
model performance across many general tasks, they lack specialized optimization
towards critical agent functionalities such as long-term planning, dynamic
environmental interaction, and complex decision-making. Although numerous
recent studies have explored various strategies to optimize LLM-based agents
for complex agent tasks, a systematic review summarizing and comparing these
methods from a holistic perspective is still lacking. In this survey, we
provide a comprehensive review of LLM-based agent optimization approaches,
categorizing them into parameter-driven and parameter-free methods. We first
focus on parameter-driven optimization, covering fine-tuning-based
optimization, reinforcement learning-based optimization, and hybrid strategies,
analyzing key aspects such as trajectory data construction, fine-tuning
techniques, reward function design, and optimization algorithms. Additionally,
we briefly discuss parameter-free strategies that optimize agent behavior
through prompt engineering and external knowledge retrieval. Finally, we
summarize the datasets and benchmarks used for evaluation and tuning, review
key applications of LLM-based agents, and discuss major challenges and
promising future directions. Our repository for related references is available
at https://github.com/YoungDubbyDu/LLM-Agent-Optimization.
|
2503.12437 | Zhiyuan Xi | Zhiyuan Xi, Kun Zhu, Yuanyuan Xu, Tong Zhang | Mentor-Telemachus Bond: Transferring Knowledge in Semantic Communication
via Contrastive Learning | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Encoder, decoder and knowledge base are three major components for semantic
communication. Recent advances have achieved significant progress in the
encoder-decoder design. However, there remains a considerable gap in the
construction and utilization of knowledge base, which plays important roles in
establishing consensus among communication participants through knowledge
transferring and sharing. Current knowledge base designs typically involve
complex structures, which lead to significant computational overheads and heavy
reliance on manually annotated datasets, making it difficult to adapt to
existing encoder-decoder models. Hence, the absence of knowledge transferring and
sharing within the network results in poor generalization of the encoder-decoder.
This necessitates model training for specific tasks and datasets, significantly
limiting the scalability of semantic communication systems to larger networks.
To address these challenges, we propose an innovative Contrastive
Representations Learning based Semantic Communication Framework (CRLSC). In
CRLSC, the server-side pre-trained large model utilizes large-scale public
datasets to construct shared knowledge base. Local-side encoders in terminal
devices conduct training guided by shared knowledge base. These trained
encoders can then build private knowledge bases from private datasets and
fine-tune decoders for specific tasks. This simple and effective approach can
facilitate the knowledge transferring across large-scale heterogeneous
networks.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 10:16:51 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xi",
"Zhiyuan",
""
],
[
"Zhu",
"Kun",
""
],
[
"Xu",
"Yuanyuan",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Mentor-Telemachus Bond: Transferring Knowledge in Semantic Communication
via Contrastive Learning
ABSTRACT: Encoder, decoder and knowledge base are three major components for semantic
communication. Recent advances have achieved significant progress in the
encoder-decoder design. However, there remains a considerable gap in the
construction and utilization of knowledge base, which plays important roles in
establishing consensus among communication participants through knowledge
transferring and sharing. Current knowledge base designs typically involve
complex structures, which lead to significant computational overheads and heavy
reliance on manually annotated datasets, making it difficult to adapt to
existing encoder-decoder models. Hence, the absence of knowledge transferring and
sharing within the network results in poor generalization of the encoder-decoder.
This necessitates model training for specific tasks and datasets, significantly
limiting the scalability of semantic communication systems to larger networks.
To address these challenges, we propose an innovative Contrastive
Representations Learning based Semantic Communication Framework (CRLSC). In
CRLSC, the server-side pre-trained large model utilizes large-scale public
datasets to construct shared knowledge base. Local-side encoders in terminal
devices conduct training guided by shared knowledge base. These trained
encoders can then build private knowledge bases from private datasets and
fine-tune decoders for specific tasks. This simple and effective approach can
facilitate the knowledge transferring across large-scale heterogeneous
networks.
|
2503.12440 | Tsz Chung Cheng | Tsz Chung Cheng, Chung Shing Cheng, Chaak Ming Lau, Eugene Tin-Ho Lam,
Chun Yat Wong, Hoi On Yu and Cheuk Hei Chong | HKCanto-Eval: A Benchmark for Evaluating Cantonese Language
Understanding and Cultural Comprehension in LLMs | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The ability of language models to comprehend and interact in diverse
linguistic and cultural landscapes is crucial. The Cantonese language used in
Hong Kong presents unique challenges for natural language processing due to its
rich cultural nuances and lack of dedicated evaluation datasets. The
HKCanto-Eval benchmark addresses this gap by evaluating the performance of
large language models (LLMs) on Cantonese language understanding tasks,
extending to English and Written Chinese for cross-lingual evaluation.
HKCanto-Eval integrates cultural and linguistic nuances intrinsic to Hong Kong,
providing a robust framework for assessing language models in realistic
scenarios. Additionally, the benchmark includes questions designed to tap into
the underlying linguistic metaknowledge of the models. Our findings indicate
that while proprietary models generally outperform open-weight models,
significant limitations remain in handling Cantonese-specific linguistic and
cultural knowledge, highlighting the need for more targeted training data and
evaluation methods. The code can be accessed at
https://github.com/hon9kon9ize/hkeval2025
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 10:26:24 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cheng",
"Tsz Chung",
""
],
[
"Cheng",
"Chung Shing",
""
],
[
"Lau",
"Chaak Ming",
""
],
[
"Lam",
"Eugene Tin-Ho",
""
],
[
"Wong",
"Chun Yat",
""
],
[
"Yu",
"Hoi On",
""
],
[
"Chong",
"Cheuk Hei",
""
]
] | TITLE: HKCanto-Eval: A Benchmark for Evaluating Cantonese Language
Understanding and Cultural Comprehension in LLMs
ABSTRACT: The ability of language models to comprehend and interact in diverse
linguistic and cultural landscapes is crucial. The Cantonese language used in
Hong Kong presents unique challenges for natural language processing due to its
rich cultural nuances and lack of dedicated evaluation datasets. The
HKCanto-Eval benchmark addresses this gap by evaluating the performance of
large language models (LLMs) on Cantonese language understanding tasks,
extending to English and Written Chinese for cross-lingual evaluation.
HKCanto-Eval integrates cultural and linguistic nuances intrinsic to Hong Kong,
providing a robust framework for assessing language models in realistic
scenarios. Additionally, the benchmark includes questions designed to tap into
the underlying linguistic metaknowledge of the models. Our findings indicate
that while proprietary models generally outperform open-weight models,
significant limitations remain in handling Cantonese-specific linguistic and
cultural knowledge, highlighting the need for more targeted training data and
evaluation methods. The code can be accessed at
https://github.com/hon9kon9ize/hkeval2025
|
2503.12441 | Yuda Zou | Yuda Zou, Zelong Liu, Yuliang Gu, Bo Du, Yongchao Xu | Consistent-Point: Consistent Pseudo-Points for Semi-Supervised Crowd
Counting and Localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowd counting and localization are important in applications such as public
security and traffic management. Existing methods have achieved impressive
results thanks to extensive laborious annotations. This paper proposes a novel
point-localization-based semi-supervised crowd counting and localization method
termed Consistent-Point. We identify and address two inconsistencies of
pseudo-points, which have not been adequately explored. To enhance their
position consistency, we aggregate the positions of neighboring auxiliary
proposal-points. Additionally, an instance-wise uncertainty calibration is
proposed to improve the class consistency of pseudo-points. By generating more
consistent pseudo-points, Consistent-Point provides more stable supervision to
the training process, yielding improved results. Extensive experiments across
five widely used datasets and three different labeled ratio settings
demonstrate that our method achieves state-of-the-art performance in crowd
localization while also attaining impressive crowd counting results. The code
will be available.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 10:31:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zou",
"Yuda",
""
],
[
"Liu",
"Zelong",
""
],
[
"Gu",
"Yuliang",
""
],
[
"Du",
"Bo",
""
],
[
"Xu",
"Yongchao",
""
]
] | TITLE: Consistent-Point: Consistent Pseudo-Points for Semi-Supervised Crowd
Counting and Localization
ABSTRACT: Crowd counting and localization are important in applications such as public
security and traffic management. Existing methods have achieved impressive
results thanks to extensive laborious annotations. This paper proposes a novel
point-localization-based semi-supervised crowd counting and localization method
termed Consistent-Point. We identify and address two inconsistencies of
pseudo-points, which have not been adequately explored. To enhance their
position consistency, we aggregate the positions of neighboring auxiliary
proposal-points. Additionally, an instance-wise uncertainty calibration is
proposed to improve the class consistency of pseudo-points. By generating more
consistent pseudo-points, Consistent-Point provides more stable supervision to
the training process, yielding improved results. Extensive experiments across
five widely used datasets and three different labeled ratio settings
demonstrate that our method achieves state-of-the-art performance in crowd
localization while also attaining impressive crowd counting results. The code
will be available.
|
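The position-consistency step in the record above, aggregating the positions of neighboring auxiliary proposal-points into a steadier pseudo-point, can be pictured as a radius-based averaging. The helper below is purely illustrative; the neighborhood rule and weighting used in the paper may differ.

```python
import numpy as np

def aggregate_pseudo_points(pseudo_pts: np.ndarray,
                            proposal_pts: np.ndarray,
                            radius: float = 8.0) -> np.ndarray:
    """For each pseudo-point, average the auxiliary proposal-points that fall
    within `radius` pixels of it; keep the original location if none do."""
    out = pseudo_pts.astype(float).copy()
    for i, p in enumerate(pseudo_pts):
        d = np.linalg.norm(proposal_pts - p, axis=1)
        near = proposal_pts[d < radius]
        if len(near) > 0:
            out[i] = near.mean(axis=0)
    return out

if __name__ == "__main__":
    heads = np.array([[50.0, 40.0], [120.0, 80.0]])           # noisy pseudo-points
    proposals = np.array([[52.0, 41.0], [49.0, 38.0], [118.0, 83.0]])
    print(aggregate_pseudo_points(heads, proposals))
```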
2503.12451 | Hossein Ranjbar | Hossein Ranjbar and Alireza Taheri | ISLR101: an Iranian Word-Level Sign Language Recognition Dataset | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sign language recognition involves modeling complex multichannel information,
such as hand shapes and movements while relying on sufficient sign
language-specific data. However, sign languages are often under-resourced,
posing a significant challenge for research and development in this field. To
address this gap, we introduce ISLR101, the first publicly available Iranian
Sign Language dataset for isolated sign language recognition. This
comprehensive dataset includes 4,614 videos covering 101 distinct signs,
recorded by 10 different signers (3 deaf individuals, 2 sign language
interpreters, and 5 L2 learners) against varied backgrounds, with a resolution
of 800x600 pixels and a frame rate of 25 frames per second. It also includes
skeleton pose information extracted using OpenPose. We establish both a visual
appearance-based and a skeleton-based framework as baseline models, thoroughly
training and evaluating them on ISLR101. These models achieve 97.01% and 94.02%
accuracy on the test set, respectively. Additionally, we publish the train,
validation, and test splits to facilitate fair comparisons.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 10:57:01 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ranjbar",
"Hossein",
""
],
[
"Taheri",
"Alireza",
""
]
] | TITLE: ISLR101: an Iranian Word-Level Sign Language Recognition Dataset
ABSTRACT: Sign language recognition involves modeling complex multichannel information,
such as hand shapes and movements while relying on sufficient sign
language-specific data. However, sign languages are often under-resourced,
posing a significant challenge for research and development in this field. To
address this gap, we introduce ISLR101, the first publicly available Iranian
Sign Language dataset for isolated sign language recognition. This
comprehensive dataset includes 4,614 videos covering 101 distinct signs,
recorded by 10 different signers (3 deaf individuals, 2 sign language
interpreters, and 5 L2 learners) against varied backgrounds, with a resolution
of 800x600 pixels and a frame rate of 25 frames per second. It also includes
skeleton pose information extracted using OpenPose. We establish both a visual
appearance-based and a skeleton-based framework as baseline models, thoroughly
training and evaluating them on ISLR101. These models achieve 97.01% and 94.02%
accuracy on the test set, respectively. Additionally, we publish the train,
validation, and test splits to facilitate fair comparisons.
|
2503.12453 | Annika M\"utze | Edgar Heinert, Thomas Gottwald, Annika M\"utze, Matthias Rottmann | Shape Bias and Robustness Evaluation via Cue Decomposition for Image
Classification and Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous works studied how deep neural networks (DNNs) perceive image content
in terms of their biases towards different image cues, such as texture and
shape. Previous methods to measure shape and texture biases are typically
style-transfer-based and limited to DNNs for image classification. In this
work, we provide a new evaluation procedure consisting of 1) a
cue-decomposition method that comprises two AI-free data pre-processing methods
extracting shape and texture cues, respectively, and 2) a novel
cue-decomposition shape bias evaluation metric that leverages the
cue-decomposition data. For application purposes we introduce a corresponding
cue-decomposition robustness metric that allows for the estimation of the
robustness of a DNN w.r.t. image corruptions. In our numerical experiments, our
findings for biases in image classification DNNs align with those of previous
evaluation metrics. However, our cue-decomposition robustness metric shows
superior results in terms of estimating the robustness of DNNs. Furthermore,
our results for DNNs on the semantic segmentation datasets Cityscapes and
ADE20k for the first time shed light on the biases of semantic segmentation
DNNs.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 11:17:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Heinert",
"Edgar",
""
],
[
"Gottwald",
"Thomas",
""
],
[
"Mütze",
"Annika",
""
],
[
"Rottmann",
"Matthias",
""
]
] | TITLE: Shape Bias and Robustness Evaluation via Cue Decomposition for Image
Classification and Segmentation
ABSTRACT: Previous works studied how deep neural networks (DNNs) perceive image content
in terms of their biases towards different image cues, such as texture and
shape. Previous methods to measure shape and texture biases are typically
style-transfer-based and limited to DNNs for image classification. In this
work, we provide a new evaluation procedure consisting of 1) a
cue-decomposition method that comprises two AI-free data pre-processing methods
extracting shape and texture cues, respectively, and 2) a novel
cue-decomposition shape bias evaluation metric that leverages the
cue-decomposition data. For application purposes we introduce a corresponding
cue-decomposition robustness metric that allows for the estimation of the
robustness of a DNN w.r.t. image corruptions. In our numerical experiments, our
findings for biases in image classification DNNs align with those of previous
evaluation metrics. However, our cue-decomposition robustness metric shows
superior results in terms of estimating the robustness of DNNs. Furthermore,
our results for DNNs on the semantic segmentation datasets Cityscapes and
ADE20k for the first time shed light on the biases of semantic segmentation
DNNs.
|
2503.12466 | Jiahang Cao | Jiahang Cao, Qiang Zhang, Hanzhong Guo, Jiaxu Wang, Hao Cheng, Renjing
Xu | Modality-Composable Diffusion Policy via Inference-Time
Distribution-level Composition | Accepted to ICLR 2025 Generative Models for Robot Learning Workshop | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion Policy (DP) has attracted significant attention as an effective
method for policy representation due to its capacity to model
multi-distribution dynamics. However, current DPs are often based on a single
visual modality (e.g., RGB or point cloud), limiting their accuracy and
generalization potential. Although training a generalized DP capable of
handling heterogeneous multimodal data would enhance performance, it entails
substantial computational and data-related costs. To address these challenges,
we propose a novel policy composition method: by leveraging multiple
pre-trained DPs based on individual visual modalities, we can combine their
distributional scores to form a more expressive Modality-Composable Diffusion
Policy (MCDP), without the need for additional training. Through extensive
empirical experiments on the RoboTwin dataset, we demonstrate the potential of
MCDP to improve both adaptability and performance. This exploration aims to
provide valuable insights into the flexible composition of existing DPs,
facilitating the development of generalizable cross-modality, cross-domain, and
even cross-embodiment policies. Our code is open-sourced at
https://github.com/AndyCao1125/MCDP.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 11:40:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cao",
"Jiahang",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Guo",
"Hanzhong",
""
],
[
"Wang",
"Jiaxu",
""
],
[
"Cheng",
"Hao",
""
],
[
"Xu",
"Renjing",
""
]
] | TITLE: Modality-Composable Diffusion Policy via Inference-Time
Distribution-level Composition
ABSTRACT: Diffusion Policy (DP) has attracted significant attention as an effective
method for policy representation due to its capacity to model
multi-distribution dynamics. However, current DPs are often based on a single
visual modality (e.g., RGB or point cloud), limiting their accuracy and
generalization potential. Although training a generalized DP capable of
handling heterogeneous multimodal data would enhance performance, it entails
substantial computational and data-related costs. To address these challenges,
we propose a novel policy composition method: by leveraging multiple
pre-trained DPs based on individual visual modalities, we can combine their
distributional scores to form a more expressive Modality-Composable Diffusion
Policy (MCDP), without the need for additional training. Through extensive
empirical experiments on the RoboTwin dataset, we demonstrate the potential of
MCDP to improve both adaptability and performance. This exploration aims to
provide valuable insights into the flexible composition of existing DPs,
facilitating the development of generalizable cross-modality, cross-domain, and
even cross-embodiment policies. Our code is open-sourced at
https://github.com/AndyCao1125/MCDP.
|
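Because diffusion policies expose a per-step noise (score) prediction, two modality-specific policies can be composed at inference time by mixing those predictions inside a shared denoising loop, which is the distribution-level composition the record above describes. The sketch below only illustrates that idea; names such as `eps_rgb_model` / `eps_pcd_model` and the convex mixing weight `w` are assumptions for illustration, not the exact MCDP formulation.

```python
import torch

@torch.no_grad()
def composed_denoise_step(a_t, eps_rgb_model, eps_pcd_model, obs_rgb, obs_pcd,
                          t, alpha_t, alpha_bar_t, w=0.5):
    """One DDPM-style reverse step where the noise estimate is a convex
    combination of two pre-trained, modality-specific diffusion policies."""
    eps_rgb = eps_rgb_model(a_t, obs_rgb, t)      # prediction from the RGB policy
    eps_pcd = eps_pcd_model(a_t, obs_pcd, t)      # prediction from the point-cloud policy
    eps = w * eps_rgb + (1.0 - w) * eps_pcd       # inference-time composition
    mean = (a_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    return mean + torch.sqrt(1 - alpha_t) * torch.randn_like(a_t) if t > 0 else mean

if __name__ == "__main__":
    # Dummy "policies": callables mapping (action, observation, step) -> noise.
    dummy = lambda a, o, t: torch.zeros_like(a)
    a = torch.randn(1, 16)                        # noisy action chunk
    out = composed_denoise_step(a, dummy, dummy, None, None,
                                t=1, alpha_t=torch.tensor(0.99),
                                alpha_bar_t=torch.tensor(0.5))
    print(out.shape)
```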
2503.12470 | Mei Han | Han Mei and Kunqian Li and Shuaixin Liu and Chengzhi Ma and Qianli
Jiang | DPF-Net: Physical Imaging Model Embedded Data-Driven Underwater Image
Enhancement | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to the complex interplay of light absorption and scattering in the
underwater environment, underwater images experience significant degradation.
This research presents a two-stage underwater image enhancement network called
the Data-Driven and Physical Parameters Fusion Network (DPF-Net), which
harnesses the robustness of physical imaging models alongside the generality
and efficiency of data-driven methods. We first train a physical parameter
estimate module using synthetic datasets to guarantee the trustworthiness of
the physical parameters, rather than solely learning the fitting relationship
between raw and reference images by the application of the imaging equation, as
is common in prior studies. This module is subsequently trained in conjunction
with an enhancement network, where the estimated physical parameters are
integrated into a data-driven model within the embedding space. To maintain the
uniformity of the restoration process amid underwater imaging degradation, we
propose a physics-based degradation consistency loss. Additionally, we suggest
an innovative weak reference loss term utilizing the entire dataset, which
alleviates our model's reliance on the quality of individual reference images.
Our proposed DPF-Net demonstrates superior performance compared to other
benchmark methods across multiple test sets, achieving state-of-the-art
results. The source code and pre-trained models are available on the project
home page: https://github.com/OUCVisionGroup/DPF-Net.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 11:53:18 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mei",
"Han",
""
],
[
"Li",
"Kunqian",
""
],
[
"Liu",
"Shuaixin",
""
],
[
"Ma",
"Chengzhi",
""
],
[
"Jiang",
"Qianli",
""
]
] | TITLE: DPF-Net: Physical Imaging Model Embedded Data-Driven Underwater Image
Enhancement
ABSTRACT: Due to the complex interplay of light absorption and scattering in the
underwater environment, underwater images experience significant degradation.
This research presents a two-stage underwater image enhancement network called
the Data-Driven and Physical Parameters Fusion Network (DPF-Net), which
harnesses the robustness of physical imaging models alongside the generality
and efficiency of data-driven methods. We first train a physical parameter
estimate module using synthetic datasets to guarantee the trustworthiness of
the physical parameters, rather than solely learning the fitting relationship
between raw and reference images by the application of the imaging equation, as
is common in prior studies. This module is subsequently trained in conjunction
with an enhancement network, where the estimated physical parameters are
integrated into a data-driven model within the embedding space. To maintain the
uniformity of the restoration process amid underwater imaging degradation, we
propose a physics-based degradation consistency loss. Additionally, we suggest
an innovative weak reference loss term utilizing the entire dataset, which
alleviates our model's reliance on the quality of individual reference images.
Our proposed DPF-Net demonstrates superior performance compared to other
benchmark methods across multiple test sets, achieving state-of-the-art
results. The source code and pre-trained models are available on the project
home page: https://github.com/OUCVisionGroup/DPF-Net.
|
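The physical imaging model referenced in the record above is commonly the simplified underwater scattering formation model, I(x) = J(x)·t(x) + B·(1 − t(x)), with transmission t and background (veiling) light B. The snippet below only shows that generic equation and its inversion under assumed, already-estimated parameters; which parameters DPF-Net actually estimates, and how they enter the network, is described in the paper, not here.

```python
import numpy as np

def degrade(J: np.ndarray, t: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Forward model: observed image I from clean scene J, transmission t,
    and background light B. Shapes: J (H, W, 3), t (H, W, 1), B (3,)."""
    return J * t + B * (1.0 - t)

def restore(I: np.ndarray, t: np.ndarray, B: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    """Inverse model, applicable once t and B have been estimated."""
    return (I - B * (1.0 - t)) / np.clip(t, t_min, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = rng.uniform(size=(4, 4, 3))
    t = np.full((4, 4, 1), 0.6)
    B = np.array([0.1, 0.4, 0.5])             # greenish-blue veiling light
    I = degrade(J, t, B)
    print(np.allclose(restore(I, t, B), J))   # exact recovery with the true t, B
```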
2503.12472 | Lijing Lu | Wenbo Dai, Lijing Lu, Zhihang Li | Diffusion-based Synthetic Data Generation for Visible-Infrared Person
Re-Identification | AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The performance of models is intricately linked to the abundance of training
data. In Visible-Infrared person Re-IDentification (VI-ReID) tasks, collecting
and annotating large-scale images of each individual under various cameras and
modalities is tedious, time-expensive, costly and must comply with data
protection laws, posing a severe challenge in meeting dataset requirements.
Current research investigates the generation of synthetic data as an efficient
and privacy-ensuring alternative to collecting real data in the field. However,
a specific data synthesis technique tailored for VI-ReID models has yet to be
explored. In this paper, we present a novel data generation framework, dubbed
Diffusion-based VI-ReID data Expansion (DiVE), which automatically obtains
massive RGB-IR paired images with identity preservation by decoupling identity
and modality to improve the performance of VI-ReID models. Specifically,
identity representation is acquired from a set of samples sharing the same ID,
whereas the modality of images is learned by fine-tuning the Stable Diffusion
(SD) on modality-specific data. DiVE extends text-driven image synthesis to
identity-preserving RGB-IR multimodal image synthesis. This approach
significantly reduces data collection and annotation costs by directly
incorporating synthetic data into ReID model training. Experiments have
demonstrated that VI-ReID models trained on synthetic data produced by DiVE
consistently exhibit notable enhancements. In particular, the state-of-the-art
method, CAJ, trained with synthetic images, achieves an improvement of about
$9\%$ in mAP over the baseline on the LLCM dataset. Code:
https://github.com/BorgDiven/DiVE
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 11:54:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Dai",
"Wenbo",
""
],
[
"Lu",
"Lijing",
""
],
[
"Li",
"Zhihang",
""
]
] | TITLE: Diffusion-based Synthetic Data Generation for Visible-Infrared Person
Re-Identification
ABSTRACT: The performance of models is intricately linked to the abundance of training
data. In Visible-Infrared person Re-IDentification (VI-ReID) tasks, collecting
and annotating large-scale images of each individual under various cameras and
modalities is tedious, time-expensive, costly and must comply with data
protection laws, posing a severe challenge in meeting dataset requirements.
Current research investigates the generation of synthetic data as an efficient
and privacy-ensuring alternative to collecting real data in the field. However,
a specific data synthesis technique tailored for VI-ReID models has yet to be
explored. In this paper, we present a novel data generation framework, dubbed
Diffusion-based VI-ReID data Expansion (DiVE), which automatically obtains
massive RGB-IR paired images with identity preservation by decoupling identity
and modality to improve the performance of VI-ReID models. Specifically,
identity representation is acquired from a set of samples sharing the same ID,
whereas the modality of images is learned by fine-tuning the Stable Diffusion
(SD) on modality-specific data. DiVE extends text-driven image synthesis to
identity-preserving RGB-IR multimodal image synthesis. This approach
significantly reduces data collection and annotation costs by directly
incorporating synthetic data into ReID model training. Experiments have
demonstrated that VI-ReID models trained on synthetic data produced by DiVE
consistently exhibit notable enhancements. In particular, the state-of-the-art
method, CAJ, trained with synthetic images, achieves an improvement of about
$9\%$ in mAP over the baseline on the LLCM dataset. Code:
https://github.com/BorgDiven/DiVE
|
2503.12483 | Ruwei Pan | Ruwei Pan, Hongyu Zhang | Modularization is Better: Effective Code Generation with Modular
Prompting | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models are transforming software development by automatically
generating code. Current prompting techniques such as Chain-of-Thought (CoT)
suggest solving tasks step by step, so the reasoning process follows a linear
structure, which hampers the understanding of complex programming problems,
particularly those requiring hierarchical solutions. Inspired by the principle
of modularization in software development, in this work, we propose a novel
prompting technique, called MoT, to enhance the code generation performance of
LLMs. First, MoT exploits modularization principles to decompose complex
programming problems into smaller, independent reasoning steps, enabling a more
structured and interpretable problem-solving process. This hierarchical
structure improves the LLM's ability to comprehend complex programming
problems. Then, it structures the reasoning process using an MLR Graph
(Multi-Level Reasoning Graph), which hierarchically organizes reasoning steps.
This approach enhances modular understanding and ensures better alignment
between reasoning steps and the generated code, significantly improving code
generation performance. Our experiments on two advanced LLMs (GPT-4o-mini and
DeepSeek-R1), comparing MoT to six baseline prompting techniques across six
widely used datasets, HumanEval, HumanEval-ET, HumanEval+, MBPP, MBPP-ET, and
MBPP+, demonstrate that MoT significantly outperforms existing baselines (e.g.,
CoT and SCoT), achieving Pass@1 scores ranging from 58.1% to 95.1%. The
experimental results confirm that MoT significantly enhances the performance of
LLM-based code generation.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 12:23:23 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Pan",
"Ruwei",
""
],
[
"Zhang",
"Hongyu",
""
]
] | TITLE: Modularization is Better: Effective Code Generation with Modular
Prompting
ABSTRACT: Large Language Models are transforming software development by automatically
generating code. Current prompting techniques such as Chain-of-Thought (CoT)
suggest solving tasks step by step, so the reasoning process follows a linear
structure, which hampers the understanding of complex programming problems,
particularly those requiring hierarchical solutions. Inspired by the principle
of modularization in software development, in this work, we propose a novel
prompting technique, called MoT, to enhance the code generation performance of
LLMs. First, MoT exploits modularization principles to decompose complex
programming problems into smaller, independent reasoning steps, enabling a more
structured and interpretable problem-solving process. This hierarchical
structure improves the LLM's ability to comprehend complex programming
problems. Then, it structures the reasoning process using an MLR Graph
(Multi-Level Reasoning Graph), which hierarchically organizes reasoning steps.
This approach enhances modular understanding and ensures better alignment
between reasoning steps and the generated code, significantly improving code
generation performance. Our experiments on two advanced LLMs (GPT-4o-mini and
DeepSeek-R1), comparing MoT to six baseline prompting techniques across six
widely used datasets, HumanEval, HumanEval-ET, HumanEval+, MBPP, MBPP-ET, and
MBPP+, demonstrate that MoT significantly outperforms existing baselines (e.g.,
CoT and SCoT), achieving Pass@1 scores ranging from 58.1% to 95.1%. The
experimental results confirm that MoT significantly enhances the performance of
LLM-based code generation.
|
2503.12495 | Kun Zhan | Xuan Ma, Zewen Lv, Chengcai Ma, Tao Zhang, Yuelan Xin, Kun Zhan | BS-Mamba for Black-Soil Area Detection On the Qinghai-Tibetan Plateau | Journal of Applied Remote Sensing, 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Extremely degraded grassland on the Qinghai-Tibetan Plateau (QTP) presents a
significant environmental challenge due to overgrazing, climate change, and
rodent activity, which degrade vegetation cover and soil quality. This
extremely degraded grassland on the QTP, commonly referred to as black-soil area,
requires accurate assessment to guide effective restoration efforts. In this
paper, we present a newly created QTP black-soil dataset, annotated under
expert guidance. We introduce a novel neural network model, BS-Mamba,
specifically designed for the black-soil area detection using UAV remote
sensing imagery. The BS-Mamba model demonstrates higher accuracy in identifying
black-soil area across two independent test datasets than the state-of-the-art
models. This research contributes to grassland restoration by providing an
efficient method for assessing the extent of black-soil area on the QTP.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 13:11:48 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ma",
"Xuan",
""
],
[
"Lv",
"Zewen",
""
],
[
"Ma",
"Chengcai",
""
],
[
"Zhang",
"Tao",
""
],
[
"Xin",
"Yuelan",
""
],
[
"Zhan",
"Kun",
""
]
] | TITLE: BS-Mamba for Black-Soil Area Detection On the Qinghai-Tibetan Plateau
ABSTRACT: Extremely degraded grassland on the Qinghai-Tibetan Plateau (QTP) presents a
significant environmental challenge due to overgrazing, climate change, and
rodent activity, which degrade vegetation cover and soil quality. This
extremely degraded grassland on the QTP, commonly referred to as black-soil area,
requires accurate assessment to guide effective restoration efforts. In this
paper, we present a newly created QTP black-soil dataset, annotated under
expert guidance. We introduce a novel neural network model, BS-Mamba,
specifically designed for the black-soil area detection using UAV remote
sensing imagery. The BS-Mamba model demonstrates higher accuracy in identifying
black-soil area across two independent test datasets than the state-of-the-art
models. This research contributes to grassland restoration by providing an
efficient method for assessing the extent of black-soil area on the QTP.
|
2503.12499 | Wen Gu | Wen Gu, Zhaoxing Li, Jan Buermann, Jim Dilkes, Dimitris Michailidis,
Shinobu Hasegawa, Vahid Yazdanpanah, Sebastian Stein | Facilitating Automated Online Consensus Building through Parallel
Thinking | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Consensus building is inherently challenging due to the diverse opinions held
by stakeholders. Effective facilitation is crucial to support the consensus
building process and enable efficient group decision making. However, the
effectiveness of facilitation is often constrained by human factors such as
limited experience and scalability. In this research, we propose a Parallel
Thinking-based Facilitation Agent (PTFA) that facilitates online, text-based
consensus building processes. The PTFA automatically collects textual posts and
leverages large language models (LLMs) to perform all of the six distinct roles
of the well-established Six Thinking Hats technique in parallel thinking. To
illustrate the potential of PTFA, a pilot study was carried out and PTFA's
ability in idea generation, emotional probing, and deeper analysis of ideas was
demonstrated. Furthermore, a comprehensive dataset that contains not only the
conversational content among the participants but also between the participants
and the agent is constructed for future study.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 13:32:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Gu",
"Wen",
""
],
[
"Li",
"Zhaoxing",
""
],
[
"Buermann",
"Jan",
""
],
[
"Dilkes",
"Jim",
""
],
[
"Michailidis",
"Dimitris",
""
],
[
"Hasegawa",
"Shinobu",
""
],
[
"Yazdanpanah",
"Vahid",
""
],
[
"Stein",
"Sebastian",
""
]
] | TITLE: Facilitating Automated Online Consensus Building through Parallel
Thinking
ABSTRACT: Consensus building is inherently challenging due to the diverse opinions held
by stakeholders. Effective facilitation is crucial to support the consensus
building process and enable efficient group decision making. However, the
effectiveness of facilitation is often constrained by human factors such as
limited experience and scalability. In this research, we propose a Parallel
Thinking-based Facilitation Agent (PTFA) that facilitates online, text-based
consensus building processes. The PTFA automatically collects textual posts and
leverages large language models (LLMs) to perform all of the six distinct roles
of the well-established Six Thinking Hats technique in parallel thinking. To
illustrate the potential of PTFA, a pilot study was carried out and PTFA's
ability in idea generation, emotional probing, and deeper analysis of ideas was
demonstrated. Furthermore, a comprehensive dataset that contains not only the
conversational content among the participants but also between the participants
and the agent is constructed for future study.
|
2503.12506 | Zhongju Yuan | Zhongju Yuan, Geraint Wiggins, Dick Botteldooren | A General Close-loop Predictive Coding Framework for Auditory Working
Memory | null | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Auditory working memory is essential for various daily activities, such as
language acquisition and conversation. It involves the temporary storage and
manipulation of information that is no longer present in the environment. While
extensively studied in neuroscience and cognitive science, research on its
modeling within neural networks remains limited. To address this gap, we
propose a general framework based on a close-loop predictive coding paradigm to
perform short auditory signal memory tasks. The framework is evaluated on two
widely used benchmark datasets for environmental sound and speech,
demonstrating high semantic similarity across both datasets.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 13:57:37 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yuan",
"Zhongju",
""
],
[
"Wiggins",
"Geraint",
""
],
[
"Botteldooren",
"Dick",
""
]
] | TITLE: A General Close-loop Predictive Coding Framework for Auditory Working
Memory
ABSTRACT: Auditory working memory is essential for various daily activities, such as
language acquisition and conversation. It involves the temporary storage and
manipulation of information that is no longer present in the environment. While
extensively studied in neuroscience and cognitive science, research on its
modeling within neural networks remains limited. To address this gap, we
propose a general framework based on a close-loop predictive coding paradigm to
perform short auditory signal memory tasks. The framework is evaluated on two
widely used benchmark datasets for environmental sound and speech,
demonstrating high semantic similarity across both datasets.
|
2503.12515 | Pan Du | Pan Du, Delin An, Chaoli Wang, Jian-Xun Wang | AI-Powered Automated Model Construction for Patient-Specific CFD
Simulations of Aortic Flows | 42 pages, 8 figures | null | null | null | cs.CV cs.LG physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Image-based modeling is essential for understanding cardiovascular
hemodynamics and advancing the diagnosis and treatment of cardiovascular
diseases. Constructing patient-specific vascular models remains
labor-intensive, error-prone, and time-consuming, limiting their clinical
applications. This study introduces a deep-learning framework that automates
the creation of simulation-ready vascular models from medical images. The
framework integrates a segmentation module for accurate voxel-based vessel
delineation with a surface deformation module that performs anatomically
consistent and unsupervised surface refinements guided by medical image data.
By unifying voxel segmentation and surface deformation into a single cohesive
pipeline, the framework addresses key limitations of existing methods,
enhancing geometric accuracy and computational efficiency. Evaluated on
publicly available datasets, the proposed approach demonstrates
state-of-the-art performance in segmentation and mesh quality while
significantly reducing manual effort and processing time. This work advances
the scalability and reliability of image-based computational modeling,
facilitating broader applications in clinical and research settings.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:18:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Du",
"Pan",
""
],
[
"An",
"Delin",
""
],
[
"Wang",
"Chaoli",
""
],
[
"Wang",
"Jian-Xun",
""
]
] | TITLE: AI-Powered Automated Model Construction for Patient-Specific CFD
Simulations of Aortic Flows
ABSTRACT: Image-based modeling is essential for understanding cardiovascular
hemodynamics and advancing the diagnosis and treatment of cardiovascular
diseases. Constructing patient-specific vascular models remains
labor-intensive, error-prone, and time-consuming, limiting their clinical
applications. This study introduces a deep-learning framework that automates
the creation of simulation-ready vascular models from medical images. The
framework integrates a segmentation module for accurate voxel-based vessel
delineation with a surface deformation module that performs anatomically
consistent and unsupervised surface refinements guided by medical image data.
By unifying voxel segmentation and surface deformation into a single cohesive
pipeline, the framework addresses key limitations of existing methods,
enhancing geometric accuracy and computational efficiency. Evaluated on
publicly available datasets, the proposed approach demonstrates
state-of-the-art performance in segmentation and mesh quality while
significantly reducing manual effort and processing time. This work advances
the scalability and reliability of image-based computational modeling,
facilitating broader applications in clinical and research settings.
|
2503.12519 | Taein Kwon | Taein Kwon, Zador Pataki, Mahdi Rad and Marc Pollefeys | Multi Activity Sequence Alignment via Implicit Clustering | 19 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised temporal sequence alignment can provide rich and effective
representations for a wide range of applications. However, existing methods for
achieving optimal performance are mostly limited to aligning sequences of the
same activity only and require separate models to be trained for each activity.
We propose a novel framework that overcomes these limitations using sequence
alignment via implicit clustering. Specifically, our key idea is to perform
implicit clip-level clustering while aligning frames in sequences. This coupled
with our proposed dual augmentation technique enhances the network's ability to
learn generalizable and discriminative representations. Our experiments show
that our proposed method outperforms state-of-the-art results and highlight the
generalization capability of our framework across multiple activities and different
modalities on three diverse datasets, H2O, PennAction, and IKEA ASM. We will
release our code upon acceptance.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:28:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kwon",
"Taein",
""
],
[
"Pataki",
"Zador",
""
],
[
"Rad",
"Mahdi",
""
],
[
"Pollefeys",
"Marc",
""
]
] | TITLE: Multi Activity Sequence Alignment via Implicit Clustering
ABSTRACT: Self-supervised temporal sequence alignment can provide rich and effective
representations for a wide range of applications. However, existing methods for
achieving optimal performance are mostly limited to aligning sequences of the
same activity only and require separate models to be trained for each activity.
We propose a novel framework that overcomes these limitations using sequence
alignment via implicit clustering. Specifically, our key idea is to perform
implicit clip-level clustering while aligning frames in sequences. This coupled
with our proposed dual augmentation technique enhances the network's ability to
learn generalizable and discriminative representations. Our experiments show
that our proposed method outperforms state-of-the-art results and highlight the
generalization capability of our framework across multiple activities and different
modalities on three diverse datasets, H2O, PennAction, and IKEA ASM. We will
release our code upon acceptance.
|
2503.12527 | Yang Yi | Yang Yi, Kunqing Wang, Jinpu Zhang, Zhen Tan, Xiangke Wang, Hui Shen,
Dewen Hu | A Plug-and-Play Learning-based IMU Bias Factor for Robust
Visual-Inertial Odometry | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | The bias of low-cost Inertial Measurement Units (IMU) is a critical factor
affecting the performance of Visual-Inertial Odometry (VIO). In particular,
when visual tracking encounters errors, the optimized bias results may deviate
significantly from the true values, adversely impacting the system's stability
and localization precision. In this paper, we propose a novel plug-and-play
framework featuring the Inertial Prior Network (IPNet), which is designed to
accurately estimate IMU bias. Recognizing the substantial impact of initial
bias errors in low-cost inertial devices on system performance, our network
directly leverages raw IMU data to estimate the mean bias, eliminating the
dependency on historical estimates in traditional recursive predictions and
effectively preventing error propagation. Furthermore, we introduce an
iterative approach to calculate the mean value of the bias for network
training, addressing the lack of bias labels in many visual-inertial datasets.
The framework is evaluated on two public datasets and one self-collected
dataset. Extensive experiments demonstrate that our method significantly
enhances both localization precision and robustness, with the ATE-RMSE metric
improving on average by 46\%. The source code and video will be available at
https://github.com/yiyscut/VIO-IPNet.git.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:45:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yi",
"Yang",
""
],
[
"Wang",
"Kunqing",
""
],
[
"Zhang",
"Jinpu",
""
],
[
"Tan",
"Zhen",
""
],
[
"Wang",
"Xiangke",
""
],
[
"Shen",
"Hui",
""
],
[
"Hu",
"Dewen",
""
]
] | TITLE: A Plug-and-Play Learning-based IMU Bias Factor for Robust
Visual-Inertial Odometry
ABSTRACT: The bias of low-cost Inertial Measurement Units (IMU) is a critical factor
affecting the performance of Visual-Inertial Odometry (VIO). In particular,
when visual tracking encounters errors, the optimized bias results may deviate
significantly from the true values, adversely impacting the system's stability
and localization precision. In this paper, we propose a novel plug-and-play
framework featuring the Inertial Prior Network (IPNet), which is designed to
accurately estimate IMU bias. Recognizing the substantial impact of initial
bias errors in low-cost inertial devices on system performance, our network
directly leverages raw IMU data to estimate the mean bias, eliminating the
dependency on historical estimates in traditional recursive predictions and
effectively preventing error propagation. Furthermore, we introduce an
iterative approach to calculate the mean value of the bias for network
training, addressing the lack of bias labels in many visual-inertial datasets.
The framework is evaluated on two public datasets and one self-collected
dataset. Extensive experiments demonstrate that our method significantly
enhances both localization precision and robustness, with the ATE-RMSE metric
improving on average by 46\%. The source code and video will be available at
https://github.com/yiyscut/VIO-IPNet.git.
|
2503.12531 | Mehmet Kerem Turkcan | Mehmet Kerem Turkcan, Mattia Ballo, Filippo Filicori, Zoran Kostic | Towards Suturing World Models: Learning Predictive Models for Robotic
Surgical Tasks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce specialized diffusion-based generative models that capture the
spatiotemporal dynamics of fine-grained robotic surgical sub-stitch actions
through supervised learning on annotated laparoscopic surgery footage. The
proposed models form a foundation for data-driven world models capable of
simulating the biomechanical interactions and procedural dynamics of surgical
suturing with high temporal fidelity. Annotating a dataset of $\sim2K$ clips
extracted from simulation videos, we categorize surgical actions into
fine-grained sub-stitch classes including ideal and non-ideal executions of
needle positioning, targeting, driving, and withdrawal. We fine-tune two
state-of-the-art video diffusion models, LTX-Video and HunyuanVideo, to
generate high-fidelity surgical action sequences at $\ge$768x512 resolution and
$\ge$49 frames. For training our models, we explore both Low-Rank Adaptation
(LoRA) and full-model fine-tuning approaches. Our experimental results
demonstrate that these world models can effectively capture the dynamics of
suturing, potentially enabling improved training simulators, surgical skill
assessment tools, and autonomous surgical systems. The models also display the
capability to differentiate between ideal and non-ideal technique execution,
providing a foundation for building surgical training and evaluation systems.
We release our models for testing and as a foundation for future research.
Project Page: https://mkturkcan.github.io/suturingmodels/
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:51:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Turkcan",
"Mehmet Kerem",
""
],
[
"Ballo",
"Mattia",
""
],
[
"Filicori",
"Filippo",
""
],
[
"Kostic",
"Zoran",
""
]
] | TITLE: Towards Suturing World Models: Learning Predictive Models for Robotic
Surgical Tasks
ABSTRACT: We introduce specialized diffusion-based generative models that capture the
spatiotemporal dynamics of fine-grained robotic surgical sub-stitch actions
through supervised learning on annotated laparoscopic surgery footage. The
proposed models form a foundation for data-driven world models capable of
simulating the biomechanical interactions and procedural dynamics of surgical
suturing with high temporal fidelity. Annotating a dataset of $\sim2K$ clips
extracted from simulation videos, we categorize surgical actions into
fine-grained sub-stitch classes including ideal and non-ideal executions of
needle positioning, targeting, driving, and withdrawal. We fine-tune two
state-of-the-art video diffusion models, LTX-Video and HunyuanVideo, to
generate high-fidelity surgical action sequences at $\ge$768x512 resolution and
$\ge$49 frames. For training our models, we explore both Low-Rank Adaptation
(LoRA) and full-model fine-tuning approaches. Our experimental results
demonstrate that these world models can effectively capture the dynamics of
suturing, potentially enabling improved training simulators, surgical skill
assessment tools, and autonomous surgical systems. The models also display the
capability to differentiate between ideal and non-ideal technique execution,
providing a foundation for building surgical training and evaluation systems.
We release our models for testing and as a foundation for future research.
Project Page: https://mkturkcan.github.io/suturingmodels/
|
2503.12534 | Chichun Zhou | Huajie Liang, Di Wang, Yuchao Lu, Mengke Song, Lei Liu, Ling An, Ying
Liang, Xingjie Ma, Zhenyu Zhang and Chichun Zhou | Time-EAPCR-T: A Universal Deep Learning Approach for Anomaly Detection
in Industrial Equipment | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advancement of Industry 4.0, intelligent manufacturing extensively
employs sensors for real-time multidimensional data collection, playing a
crucial role in equipment monitoring, process optimisation, and efficiency
enhancement. Industrial data exhibit characteristics such as multi-source
heterogeneity, nonlinearity, strong coupling, and temporal interactions, while
also being affected by noise interference. These complexities make it
challenging for traditional anomaly detection methods to extract key features,
impacting detection accuracy and stability. Traditional machine learning
approaches often struggle with such complex data due to limitations in
processing capacity and generalisation ability, making them inadequate for
practical applications. While deep learning feature extraction modules have
demonstrated remarkable performance in image and text processing, they remain
ineffective when applied to multi-source heterogeneous industrial data lacking
explicit correlations. Moreover, existing multi-source heterogeneous data
processing techniques still rely on dimensionality reduction and feature
selection, which can lead to information loss and difficulty in capturing
high-order interactions. To address these challenges, this study applies the
EAPCR and Time-EAPCR models proposed in previous research and introduces a new
model, Time-EAPCR-T, where Transformer replaces the LSTM module in the
time-series processing component of Time-EAPCR. This modification effectively
addresses multi-source data heterogeneity, facilitates efficient multi-source
feature fusion, and enhances the temporal feature extraction capabilities of
multi-source industrial data. Experimental results demonstrate that the proposed
method outperforms existing approaches across four industrial datasets,
highlighting its broad application potential.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 14:54:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liang",
"Huajie",
""
],
[
"Wang",
"Di",
""
],
[
"Lu",
"Yuchao",
""
],
[
"Song",
"Mengke",
""
],
[
"Liu",
"Lei",
""
],
[
"An",
"Ling",
""
],
[
"Liang",
"Ying",
""
],
[
"Ma",
"Xingjie",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Zhou",
"Chichun",
""
]
] | TITLE: Time-EAPCR-T: A Universal Deep Learning Approach for Anomaly Detection
in Industrial Equipment
ABSTRACT: With the advancement of Industry 4.0, intelligent manufacturing extensively
employs sensors for real-time multidimensional data collection, playing a
crucial role in equipment monitoring, process optimisation, and efficiency
enhancement. Industrial data exhibit characteristics such as multi-source
heterogeneity, nonlinearity, strong coupling, and temporal interactions, while
also being affected by noise interference. These complexities make it
challenging for traditional anomaly detection methods to extract key features,
impacting detection accuracy and stability. Traditional machine learning
approaches often struggle with such complex data due to limitations in
processing capacity and generalisation ability, making them inadequate for
practical applications. While deep learning feature extraction modules have
demonstrated remarkable performance in image and text processing, they remain
ineffective when applied to multi-source heterogeneous industrial data lacking
explicit correlations. Moreover, existing multi-source heterogeneous data
processing techniques still rely on dimensionality reduction and feature
selection, which can lead to information loss and difficulty in capturing
high-order interactions. To address these challenges, this study applies the
EAPCR and Time-EAPCR models proposed in previous research and introduces a new
model, Time-EAPCR-T, where Transformer replaces the LSTM module in the
time-series processing component of Time-EAPCR. This modification effectively
addresses multi-source data heterogeneity, facilitates efficient multi-source
feature fusion, and enhances the temporal feature extraction capabilities of
multi-source industrial data. Experimental results demonstrate that the proposed
method outperforms existing approaches across four industrial datasets,
highlighting its broad application potential.
|
2503.12536 | Lin-Chun Huang | Lin-Chun Huang, Ching Chieh Tsao, Fang-Yi Su, Jung-Hsien Chiang | Debiasing Diffusion Model: Enhancing Fairness through Latent
Representation Learning in Stable Diffusion Model | null | null | null | null | cs.LG cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image generative models, particularly diffusion-based models, have surged in
popularity due to their remarkable ability to synthesize highly realistic
images. However, since these models are data-driven, they inherit biases from
the training datasets, frequently leading to disproportionate group
representations that exacerbate societal inequities. Traditionally, efforts to
debias these models have relied on predefined sensitive attributes,
classifiers trained on such attributes, or large language models to steer
outputs toward fairness. However, these approaches face notable drawbacks:
predefined attributes do not adequately capture complex and continuous
variations among groups. To address these issues, we introduce the Debiasing
Diffusion Model (DDM), which leverages an indicator to learn latent
representations during training, promoting fairness through balanced
representations without requiring predefined sensitive attributes. This
approach not only demonstrates its effectiveness in scenarios previously
addressed by conventional techniques but also enhances fairness without relying
on predefined sensitive attributes as conditions. In this paper, we discuss the
limitations of prior bias mitigation techniques in diffusion-based models,
elaborate on the architecture of the DDM, and validate the effectiveness of our
approach through experiments.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:02:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Lin-Chun",
""
],
[
"Tsao",
"Ching Chieh",
""
],
[
"Su",
"Fang-Yi",
""
],
[
"Chiang",
"Jung-Hsien",
""
]
] | TITLE: Debiasing Diffusion Model: Enhancing Fairness through Latent
Representation Learning in Stable Diffusion Model
ABSTRACT: Image generative models, particularly diffusion-based models, have surged in
popularity due to their remarkable ability to synthesize highly realistic
images. However, since these models are data-driven, they inherit biases from
the training datasets, frequently leading to disproportionate group
representations that exacerbate societal inequities. Traditionally, efforts to
debias these models have relied on predefined sensitive attributes,
classifiers trained on such attributes, or large language models to steer
outputs toward fairness. However, these approaches face notable drawbacks:
predefined attributes do not adequately capture complex and continuous
variations among groups. To address these issues, we introduce the Debiasing
Diffusion Model (DDM), which leverages an indicator to learn latent
representations during training, promoting fairness through balanced
representations without requiring predefined sensitive attributes. This
approach not only demonstrates its effectiveness in scenarios previously
addressed by conventional techniques but also enhances fairness without relying
on predefined sensitive attributes as conditions. In this paper, we discuss the
limitations of prior bias mitigation techniques in diffusion-based models,
elaborate on the architecture of the DDM, and validate the effectiveness of our
approach through experiments.
|
2503.12541 | Jiadong Zhou | Jiadong Zhou, Yadan Zeng, Huixu Dong, and I-Ming Chen | Histogram Transporter: Learning Rotation-Equivariant Orientation
Histograms for High-Precision Robotic Kitting | This manuscript is currently under review | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic kitting is a critical task in industrial automation that requires the
precise arrangement of objects into kits to support downstream production
processes. However, when handling complex kitting tasks that involve
fine-grained orientation alignment, existing approaches often suffer from
limited accuracy and computational efficiency. To address these challenges, we
propose Histogram Transporter, a novel kitting framework that learns
high-precision pick-and-place actions from scratch using only a few
demonstrations. First, our method extracts rotation-equivariant orientation
histograms (EOHs) from visual observations using an efficient Fourier-based
discretization strategy. These EOHs serve a dual purpose: improving picking
efficiency by directly modeling action success probabilities over
high-resolution orientations and enhancing placing accuracy by serving as
local, discriminative feature descriptors for object-to-placement matching.
Second, we introduce a subgroup alignment strategy in the place model that
compresses the full spectrum of EOHs into a compact orientation representation,
enabling efficient feature matching while preserving accuracy. Finally, we
examine the proposed framework on the simulated Hand-Tool Kitting Dataset
(HTKD), where it outperforms competitive baselines in both success rates and
computational efficiency. Further experiments on five Raven-10 tasks exhibit
the remarkable adaptability of our approach, with real-robot trials confirming
its applicability for real-world deployment.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:21:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Jiadong",
""
],
[
"Zeng",
"Yadan",
""
],
[
"Dong",
"Huixu",
""
],
[
"Chen",
"I-Ming",
""
]
] | TITLE: Histogram Transporter: Learning Rotation-Equivariant Orientation
Histograms for High-Precision Robotic Kitting
ABSTRACT: Robotic kitting is a critical task in industrial automation that requires the
precise arrangement of objects into kits to support downstream production
processes. However, when handling complex kitting tasks that involve
fine-grained orientation alignment, existing approaches often suffer from
limited accuracy and computational efficiency. To address these challenges, we
propose Histogram Transporter, a novel kitting framework that learns
high-precision pick-and-place actions from scratch using only a few
demonstrations. First, our method extracts rotation-equivariant orientation
histograms (EOHs) from visual observations using an efficient Fourier-based
discretization strategy. These EOHs serve a dual purpose: improving picking
efficiency by directly modeling action success probabilities over
high-resolution orientations and enhancing placing accuracy by serving as
local, discriminative feature descriptors for object-to-placement matching.
Second, we introduce a subgroup alignment strategy in the place model that
compresses the full spectrum of EOHs into a compact orientation representation,
enabling efficient feature matching while preserving accuracy. Finally, we
examine the proposed framework on the simulated Hand-Tool Kitting Dataset
(HTKD), where it outperforms competitive baselines in both success rates and
computational efficiency. Further experiments on five Raven-10 tasks exhibit
the remarkable adaptability of our approach, with real-robot trials confirming
its applicability for real-world deployment.
|
2503.12543 | Andrea Longhin | Andrea Longhin | A quantitative analysis of Galilei's observations of Jupiter satellites
from the Sidereus Nuncius | null | null | null | null | physics.hist-ph astro-ph.EP | http://creativecommons.org/licenses/by/4.0/ | We analyse the observations of the satellites of Jupiter from the Sidereus
Nuncius (January 7 to March 1, 1610) and compare them to the predictions
obtained using a modern sky simulator, verifying them one by one. A sinusoidal
fit of the data obtained from the 64 available sketches allows measuring the
relative major semi-axes of the satellites' orbits and their periods with a
statistical precision of 2-4\% and 0.1-0.3\%, respectively. The periods are
basically unbiased while the orbits tend to be underestimated for Callisto by
about 12\%. The posterior fit error indicates that the positions of the
satellites are determined with a resolution of 0.4-0.6 Jupiter diameters in the
notation of Galilei, corresponding to about 40-70 arcsec, i.e. similar to the
true angular diameter of Jupiter in those days. We show that with this data
one can infer in a convincing way the third law of Kepler for the Jupiter
system. The 1:2 and 1:4 orbital resonances between the periods of Io and
Europa/Ganymede can be determined with \% precision. In order to obtain these
results it is important to separate the four datasets. This operation, which is
nowadays simple using a sky simulator, and is fully reported in this work, was
an extremely difficult task for Galilei as the analysis will evidence.
Nevertheless, we show how the four periods might have been extracted using the
modern Lomb-Scargle technique, without having to separate the four datasets,
using just these early observations. We also perform a critical
evaluation of the accuracy of the observation of the Pleiades and other
clusters and the Moon.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:24:46 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Longhin",
"Andrea",
""
]
] | TITLE: A quantitative analysis of Galilei's observations of Jupiter satellites
from the Sidereus Nuncius
ABSTRACT: We analyse the observations of the satellites of Jupiter from the Sidereus
Nuncius (January 7 to March 1, 1610) and compare them to the predictions
obtained using a modern sky simulator, verifying them one by one. A sinusoidal
fit of the data obtained from the 64 available sketches, allows measuring the
relative major semi-axes of the satellites' orbits and their periods with a
statistical precision of 2-4\% and 0.1-0.3\% respectively. The periods are
basically unbiased while the orbits tend to be underestimated for Callisto by
about 12\%. The posterior fit error indicates that the positions of the
satellites are determined with a resolution of 0.4-0.6 Jupiter diameters in the
notation of Galilei, corresponding to about 40-70 arcsec, i.e. similar to the
true angular diameter of Jupiter in those days. We show that with this data
one can infer in a convincing way the third law of Kepler for the Jupiter
system. The 1:2 and 1:4 orbital resonances between the periods of Io and
Europa/Ganymede can be determined with \% precision. In order to obtain these
results it is important to separate the four datasets. This operation, which is
nowadays simple using a sky simulator, and is fully reported in this work, was
an extremely difficult task for Galilei as the analysis will evidence.
Nevertheless, we show how the four periods might have been extracted using the
modern Lomb-Scargle technique, without having to separate the four datasets,
using just these early observations. We also perform a critical
evaluation of the accuracy of the observation of the Pleiades and other
clusters and the Moon.
|
2503.12545 | Zhaopan Xu | Zhaopan Xu, Pengfei Zhou, Weidong Tang, Jiaxin Ai, Wangbo Zhao,
Xiaojiang Peng, Kai Wang, Yang You, Wenqi Shao, Hongxun Yao, Kaipeng Zhang | PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for
Multimodal Large Language Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, Multimodal Large Language Models (MLLMs) have demonstrated
remarkable advancements in tasks such as visual question answering, visual
understanding, and reasoning. However, this impressive progress relies on vast
amounts of data collected from the internet, raising significant concerns about
privacy and security. To address these issues, machine unlearning (MU) has
emerged as a promising solution, enabling the removal of specific knowledge
from an already trained model without requiring retraining from scratch.
Although MU for MLLMs has gained attention, current evaluations of its efficacy
remain incomplete, and the underlying problem is often poorly defined, which
hinders the development of strategies for creating more secure and trustworthy
systems. To bridge this gap, we introduce a benchmark, named PEBench, which
includes a dataset of personal entities and corresponding general event scenes,
designed to comprehensively assess the performance of MU for MLLMs. Through
PEBench, we aim to provide a standardized and robust framework to advance
research in secure and privacy-preserving multimodal models. We benchmarked 6
MU methods, revealing their strengths and limitations, and shedding light on
key challenges and opportunities for MU in MLLMs.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:26:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xu",
"Zhaopan",
""
],
[
"Zhou",
"Pengfei",
""
],
[
"Tang",
"Weidong",
""
],
[
"Ai",
"Jiaxin",
""
],
[
"Zhao",
"Wangbo",
""
],
[
"Peng",
"Xiaojiang",
""
],
[
"Wang",
"Kai",
""
],
[
"You",
"Yang",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Yao",
"Hongxun",
""
],
[
"Zhang",
"Kaipeng",
""
]
] | TITLE: PEBench: A Fictitious Dataset to Benchmark Machine Unlearning for
Multimodal Large Language Models
ABSTRACT: In recent years, Multimodal Large Language Models (MLLMs) have demonstrated
remarkable advancements in tasks such as visual question answering, visual
understanding, and reasoning. However, this impressive progress relies on vast
amounts of data collected from the internet, raising significant concerns about
privacy and security. To address these issues, machine unlearning (MU) has
emerged as a promising solution, enabling the removal of specific knowledge
from an already trained model without requiring retraining from scratch.
Although MU for MLLMs has gained attention, current evaluations of its efficacy
remain incomplete, and the underlying problem is often poorly defined, which
hinders the development of strategies for creating more secure and trustworthy
systems. To bridge this gap, we introduce a benchmark, named PEBench, which
includes a dataset of personal entities and corresponding general event scenes,
designed to comprehensively assess the performance of MU for MLLMs. Through
PEBench, we aim to provide a standardized and robust framework to advance
research in secure and privacy-preserving multimodal models. We benchmarked 6
MU methods, revealing their strengths and limitations, and shedding light on
key challenges and opportunities for MU in MLLMs.
|
2503.12556 | Manas Gaur | Sarvesh Baskar, Tanmay Tulsidas Verelakar, Srinivasan Parthasarathy,
Manas Gaur | From Guessing to Asking: An Approach to Resolving the Persona Knowledge
Gap in LLMs during Multi-Turn Conversations | 12 pages, 1 Figure, Oral Presentation at NAACL 2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In multi-turn dialogues, large language models (LLM) face a critical
challenge of ensuring coherence while adapting to user-specific information.
This study introduces the persona knowledge gap, the discrepancy between a
model's internal understanding and the knowledge required for coherent,
personalized conversations. While prior research has recognized these gaps,
computational methods for their identification and resolution remain
underexplored. We propose Conversation Preference Elicitation and
Recommendation (CPER), a novel framework that dynamically detects and resolves
persona knowledge gaps using intrinsic uncertainty quantification and
feedback-driven refinement. CPER consists of three key modules: a Contextual
Understanding Module for preference extraction, a Dynamic Feedback Module for
measuring uncertainty and refining persona alignment, and a Persona-Driven
Response Generation module for adapting responses based on accumulated user
context. We evaluate CPER on two real-world datasets: CCPE-M for preferential
movie recommendations and ESConv for mental health support. Using A/B testing,
human evaluators preferred CPER's responses 42% more often than baseline models
in CCPE-M and 27% more often in ESConv. A qualitative human evaluation confirms
that CPER's responses are preferred for maintaining contextual relevance and
coherence, particularly in longer (12+ turn) conversations.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 15:55:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Baskar",
"Sarvesh",
""
],
[
"Verelakar",
"Tanmay Tulsidas",
""
],
[
"Parthasarathy",
"Srinivasan",
""
],
[
"Gaur",
"Manas",
""
]
] | TITLE: From Guessing to Asking: An Approach to Resolving the Persona Knowledge
Gap in LLMs during Multi-Turn Conversations
ABSTRACT: In multi-turn dialogues, large language models (LLM) face a critical
challenge of ensuring coherence while adapting to user-specific information.
This study introduces the persona knowledge gap, the discrepancy between a
model's internal understanding and the knowledge required for coherent,
personalized conversations. While prior research has recognized these gaps,
computational methods for their identification and resolution remain
underexplored. We propose Conversation Preference Elicitation and
Recommendation (CPER), a novel framework that dynamically detects and resolves
persona knowledge gaps using intrinsic uncertainty quantification and
feedback-driven refinement. CPER consists of three key modules: a Contextual
Understanding Module for preference extraction, a Dynamic Feedback Module for
measuring uncertainty and refining persona alignment, and a Persona-Driven
Response Generation module for adapting responses based on accumulated user
context. We evaluate CPER on two real-world datasets: CCPE-M for preferential
movie recommendations and ESConv for mental health support. Using A/B testing,
human evaluators preferred CPER's responses 42% more often than baseline models
in CCPE-M and 27% more often in ESConv. A qualitative human evaluation confirms
that CPER's responses are preferred for maintaining contextual relevance and
coherence, particularly in longer (12+ turn) conversations.
|
2503.12559 | Xiao Wang | Xiao Wang, Qingyi Si, Jianlong Wu, Shiyu Zhu, Li Cao, Liqiang Nie | AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for
Video-language Understanding | null | null | null | null | cs.CV cs.CL cs.MM | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) have revolutionized video
understanding, yet are still limited by context length when processing long
videos. Recent methods compress videos by leveraging visual redundancy
uniformly, yielding promising results. Nevertheless, our quantitative analysis
shows that redundancy varies significantly across time and model layers,
necessitating a more flexible compression strategy. We propose AdaReTaKe, a
training-free method that flexibly reduces visual redundancy by allocating
compression ratios among time and layers with theoretical guarantees.
Integrated into state-of-the-art MLLMs, AdaReTaKe improves processing capacity
from 256 to 2048 frames while preserving critical information. Experiments on
VideoMME, MLVU, LongVideoBench, and LVBench datasets demonstrate that AdaReTaKe
outperforms existing methods by 2.3% and 2.8% for 7B and 72B models,
respectively, with even greater improvements of 5.9% and 6.0% on the longest
LVBench. Our code is available at
https://github.com/SCZwangxiao/video-FlexReduc.git.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 16:14:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Xiao",
""
],
[
"Si",
"Qingyi",
""
],
[
"Wu",
"Jianlong",
""
],
[
"Zhu",
"Shiyu",
""
],
[
"Cao",
"Li",
""
],
[
"Nie",
"Liqiang",
""
]
] | TITLE: AdaReTaKe: Adaptive Redundancy Reduction to Perceive Longer for
Video-language Understanding
ABSTRACT: Multimodal Large Language Models (MLLMs) have revolutionized video
understanding, yet are still limited by context length when processing long
videos. Recent methods compress videos by leveraging visual redundancy
uniformly, yielding promising results. Nevertheless, our quantitative analysis
shows that redundancy varies significantly across time and model layers,
necessitating a more flexible compression strategy. We propose AdaReTaKe, a
training-free method that flexibly reduces visual redundancy by allocating
compression ratios among time and layers with theoretical guarantees.
Integrated into state-of-the-art MLLMs, AdaReTaKe improves processing capacity
from 256 to 2048 frames while preserving critical information. Experiments on
VideoMME, MLVU, LongVideoBench, and LVBench datasets demonstrate that AdaReTaKe
outperforms existing methods by 2.3% and 2.8% for 7B and 72B models,
respectively, with even greater improvements of 5.9% and 6.0% on the longest
LVBench. Our code is available at
https://github.com/SCZwangxiao/video-FlexReduc.git.
|
2503.12560 | Li Zheng | Li Zheng, Hao Fei, Ting Dai, Zuquan Peng, Fei Li, Huisheng Ma, Chong
Teng, Donghong Ji | Multi-Granular Multimodal Clue Fusion for Meme Understanding | Accepted by AAAI2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continuous emergence of various social media platforms frequently
used in daily life, the multimodal meme understanding (MMU) task has been
garnering increasing attention. MMU aims to explore and comprehend the meanings
of memes from various perspectives by performing tasks such as metaphor
recognition, sentiment analysis, intention detection, and offensiveness
detection. Despite making progress, limitations persist due to the loss of
fine-grained metaphorical visual clues and the neglect of the weak multimodal
text-image correlation. To overcome these limitations, we propose a multi-granular
multimodal clue fusion model (MGMCF) to advance MMU. Firstly, we design an
object-level semantic mining module to extract object-level image feature
clues, achieving fine-grained feature clue extraction and enhancing the model's
ability to capture metaphorical details and semantics. Secondly, we propose a
brand-new global-local cross-modal interaction model to address the weak
correlation between text and images. This model facilitates effective
interaction between global multimodal contextual clues and local unimodal
feature clues, strengthening their representations through a bidirectional
cross-modal attention mechanism. Finally, we devise a dual-semantic guided
training strategy to enhance the model's understanding and alignment of
multimodal representations in the semantic space. Experiments conducted on the
widely-used MET-MEME bilingual dataset demonstrate significant improvements
over state-of-the-art baselines. Specifically, there is an 8.14% increase in
precision for the offensiveness detection task, and respective accuracy
enhancements of 3.53%, 3.89%, and 3.52% for metaphor recognition, sentiment
analysis, and intention detection tasks. These results, underpinned by in-depth
analyses, underscore the effectiveness and potential of our approach for
advancing MMU.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 16:16:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zheng",
"Li",
""
],
[
"Fei",
"Hao",
""
],
[
"Dai",
"Ting",
""
],
[
"Peng",
"Zuquan",
""
],
[
"Li",
"Fei",
""
],
[
"Ma",
"Huisheng",
""
],
[
"Teng",
"Chong",
""
],
[
"Ji",
"Donghong",
""
]
] | TITLE: Multi-Granular Multimodal Clue Fusion for Meme Understanding
ABSTRACT: With the continuous emergence of various social media platforms frequently
used in daily life, the multimodal meme understanding (MMU) task has been
garnering increasing attention. MMU aims to explore and comprehend the meanings
of memes from various perspectives by performing tasks such as metaphor
recognition, sentiment analysis, intention detection, and offensiveness
detection. Despite making progress, limitations persist due to the loss of
fine-grained metaphorical visual clues and the neglect of the weak multimodal
text-image correlation. To overcome these limitations, we propose a multi-granular
multimodal clue fusion model (MGMCF) to advance MMU. Firstly, we design an
object-level semantic mining module to extract object-level image feature
clues, achieving fine-grained feature clue extraction and enhancing the model's
ability to capture metaphorical details and semantics. Secondly, we propose a
brand-new global-local cross-modal interaction model to address the weak
correlation between text and images. This model facilitates effective
interaction between global multimodal contextual clues and local unimodal
feature clues, strengthening their representations through a bidirectional
cross-modal attention mechanism. Finally, we devise a dual-semantic guided
training strategy to enhance the model's understanding and alignment of
multimodal representations in the semantic space. Experiments conducted on the
widely-used MET-MEME bilingual dataset demonstrate significant improvements
over state-of-the-art baselines. Specifically, there is an 8.14% increase in
precision for the offensiveness detection task, and respective accuracy
enhancements of 3.53%, 3.89%, and 3.52% for metaphor recognition, sentiment
analysis, and intention detection tasks. These results, underpinned by in-depth
analyses, underscore the effectiveness and potential of our approach for
advancing MMU.
|
2503.12563 | Yingzhen Yang | Yancheng Wang, Changyu Liu, Yingzhen Yang | Diffusion on Graph: Augmentation of Graph Structure for Node
Classification | Published in Transactions on Machine Learning Research (TMLR) 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph diffusion models have recently been proposed to synthesize entire
graphs, such as molecule graphs. Although existing methods have shown great
performance in generating entire graphs for graph-level learning tasks, no
graph diffusion models have been developed to generate synthetic graph
structures, that is, synthetic nodes and associated edges within a given graph,
for node-level learning tasks. Inspired by the research in the computer vision
literature using synthetic data for enhanced performance, we propose Diffusion
on Graph (DoG), which generates synthetic graph structures to boost the
performance of GNNs. The synthetic graph structures generated by DoG are
combined with the original graph to form an augmented graph for the training of
node-level learning tasks, such as node classification and graph contrastive
learning (GCL). To improve the efficiency of the generation process, a Bi-Level
Neighbor Map Decoder (BLND) is introduced in DoG. To mitigate the adverse
effect of the noise introduced by the synthetic graph structures, a low-rank
regularization method is proposed for the training of graph neural networks
(GNNs) on the augmented graphs. Extensive experiments on various graph datasets
for semi-supervised node classification and graph contrastive learning have
been conducted to demonstrate the effectiveness of DoG with low-rank
regularization. The code of DoG is available at
https://github.com/Statistical-Deep-Learning/DoG.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 16:39:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Yancheng",
""
],
[
"Liu",
"Changyu",
""
],
[
"Yang",
"Yingzhen",
""
]
] | TITLE: Diffusion on Graph: Augmentation of Graph Structure for Node
Classification
ABSTRACT: Graph diffusion models have recently been proposed to synthesize entire
graphs, such as molecule graphs. Although existing methods have shown great
performance in generating entire graphs for graph-level learning tasks, no
graph diffusion models have been developed to generate synthetic graph
structures, that is, synthetic nodes and associated edges within a given graph,
for node-level learning tasks. Inspired by the research in the computer vision
literature using synthetic data for enhanced performance, we propose Diffusion
on Graph (DoG), which generates synthetic graph structures to boost the
performance of GNNs. The synthetic graph structures generated by DoG are
combined with the original graph to form an augmented graph for the training of
node-level learning tasks, such as node classification and graph contrastive
learning (GCL). To improve the efficiency of the generation process, a Bi-Level
Neighbor Map Decoder (BLND) is introduced in DoG. To mitigate the adverse
effect of the noise introduced by the synthetic graph structures, a low-rank
regularization method is proposed for the training of graph neural networks
(GNNs) on the augmented graphs. Extensive experiments on various graph datasets
for semi-supervised node classification and graph contrastive learning have
been conducted to demonstrate the effectiveness of DoG with low-rank
regularization. The code of DoG is available at
https://github.com/Statistical-Deep-Learning/DoG.
|
2503.12575 | Dipesh Tamboli | Dipesh Tamboli, Souradip Chakraborty, Aditya Malusare, Biplab
Banerjee, Amrit Singh Bedi, Vaneet Aggarwal | BalancedDPO: Adaptive Multi-Metric Alignment | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-image (T2I) diffusion models have made remarkable advancements, yet
aligning them with diverse preferences remains a persistent challenge. Current
methods often optimize single metrics or depend on narrowly curated datasets,
leading to overfitting and limited generalization across key visual quality
metrics. We present BalancedDPO, a novel extension of Direct Preference
Optimization (DPO) that addresses these limitations by simultaneously aligning
T2I diffusion models with multiple metrics, including human preference, CLIP
score, and aesthetic quality. Our key novelty lies in aggregating consensus
labels from diverse metrics in the preference distribution space as compared to
existing reward mixing approaches, enabling robust and scalable multi-metric
alignment while maintaining the simplicity of the standard DPO pipeline that we
refer to as BalancedDPO. Our evaluations on the Pick-a-Pic, PartiPrompt and HPD
datasets show that BalancedDPO achieves state-of-the-art results, outperforming
existing approaches across all major metrics. BalancedDPO improves the average
win rates by 15%, 7.1%, and 10.3% on Pick-a-Pic, PartiPrompt and HPD,
respectively, over DiffusionDPO.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 17:06:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tamboli",
"Dipesh",
""
],
[
"Chakraborty",
"Souradip",
""
],
[
"Malusare",
"Aditya",
""
],
[
"Banerjee",
"Biplab",
""
],
[
"Bedi",
"Amrit Singh",
""
],
[
"Aggarwal",
"Vaneet",
""
]
] | TITLE: BalancedDPO: Adaptive Multi-Metric Alignment
ABSTRACT: Text-to-image (T2I) diffusion models have made remarkable advancements, yet
aligning them with diverse preferences remains a persistent challenge. Current
methods often optimize single metrics or depend on narrowly curated datasets,
leading to overfitting and limited generalization across key visual quality
metrics. We present BalancedDPO, a novel extension of Direct Preference
Optimization (DPO) that addresses these limitations by simultaneously aligning
T2I diffusion models with multiple metrics, including human preference, CLIP
score, and aesthetic quality. Our key novelty lies in aggregating consensus
labels from diverse metrics in the preference distribution space as compared to
existing reward mixing approaches, enabling robust and scalable multi-metric
alignment while maintaining the simplicity of the standard DPO pipeline that we
refer to as BalancedDPO. Our evaluations on the Pick-a-Pic, PartiPrompt and HPD
datasets show that BalancedDPO achieves state-of-the-art results, outperforming
existing approaches across all major metrics. BalancedDPO improves the average
win rates by 15%, 7.1%, and 10.3% on Pick-a-Pic, PartiPrompt and HPD,
respectively, over DiffusionDPO.
|
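The BalancedDPO abstract above aggregates judgments from several metrics into consensus preference labels before DPO training. The snippet below is only an illustration of one plausible aggregation rule (a simple majority vote across metrics); the metric names and the voting scheme are assumptions, not the paper's exact method.

```python
# Illustrative consensus-label sketch for a preference pair (image "a" vs "b"),
# assuming per-metric scores such as human preference, CLIP score, and an
# aesthetic score.
from typing import Dict, Tuple

def consensus_label(scores_a: Dict[str, float], scores_b: Dict[str, float]) -> Tuple[str, float]:
    votes_for_a = sum(1 for m in scores_a if scores_a[m] > scores_b[m])
    votes_for_b = len(scores_a) - votes_for_a
    winner = "a" if votes_for_a >= votes_for_b else "b"
    agreement = max(votes_for_a, votes_for_b) / len(scores_a)
    return winner, agreement

# Example: three metrics, two of which favor image "a".
print(consensus_label({"human": 0.7, "clip": 0.31, "aesthetic": 5.2},
                      {"human": 0.6, "clip": 0.33, "aesthetic": 4.9}))
```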
2503.12592 | Harshit Yadav | Harshit | MoECollab: Democratizing LLM Development Through Collaborative Mixture
of Experts | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Model (LLM) development has become increasingly centralized,
limiting participation to well-resourced organizations. This paper introduces
MoECollab, a novel framework leveraging Mixture of Experts (MoE) architecture
to enable distributed, collaborative LLM development. By decomposing monolithic
models into specialized expert modules coordinated by a trainable gating
network, our framework allows diverse contributors to participate regardless of
computational resources. We provide a complete technical implementation with
mathematical foundations for expert dynamics, gating mechanisms, and
integration strategies. Experiments on multiple datasets demonstrate that our
approach achieves accuracy improvements of 3-7% over baseline models while
reducing computational requirements by 34%. Expert specialization yields
significant domain-specific gains, with improvements from 51% to 88% F1 score
in general classification and from 23% to 44% accuracy in news categorization.
We formalize the routing entropy optimization problem and demonstrate how
proper regularization techniques lead to 14% higher expert utilization rates.
These results validate MoECollab as an effective approach for democratizing LLM
development through architecturally-supported collaboration.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 17:52:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Harshit",
"",
""
]
] | TITLE: MoECollab: Democratizing LLM Development Through Collaborative Mixture
of Experts
ABSTRACT: Large Language Model (LLM) development has become increasingly centralized,
limiting participation to well-resourced organizations. This paper introduces
MoECollab, a novel framework leveraging Mixture of Experts (MoE) architecture
to enable distributed, collaborative LLM development. By decomposing monolithic
models into specialized expert modules coordinated by a trainable gating
network, our framework allows diverse contributors to participate regardless of
computational resources. We provide a complete technical implementation with
mathematical foundations for expert dynamics, gating mechanisms, and
integration strategies. Experiments on multiple datasets demonstrate that our
approach achieves accuracy improvements of 3-7% over baseline models while
reducing computational requirements by 34%. Expert specialization yields
significant domain-specific gains, with improvements from 51% to 88% F1 score
in general classification and from 23% to 44% accuracy in news categorization.
We formalize the routing entropy optimization problem and demonstrate how
proper regularization techniques lead to 14% higher expert utilization rates.
These results validate MoECollab as an effective approach for democratizing LLM
development through architecturally-supported collaboration.
|
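The MoECollab abstract above centers on a trainable gating network that routes inputs to specialized expert modules and on routing-entropy regularization. The following is a minimal, generic top-k MoE layer sketch, not the paper's implementation; the dimensions, top-k routing, and entropy term are illustrative assumptions.

```python
# Generic Mixture-of-Experts layer sketch with a trainable gate and a
# routing-entropy signal that could be used for load balancing.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, dim=128, n_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                 # x: (batch, dim)
        probs = F.softmax(self.gate(x), dim=-1)           # routing distribution
        weights, idx = probs.topk(self.top_k, dim=-1)     # keep top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        # Higher routing entropy spreads load across experts, one way to raise
        # expert utilization as the abstract alludes to.
        routing_entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
        return out, routing_entropy
```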
2503.12595 | Dan Halperin | Dan Halperin, Niklas Eisl | Point Cloud Based Scene Segmentation: A Survey | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous driving is a safety-critical application, and it is therefore a
top priority that the accompanying assistance systems are able to provide
precise information about the surrounding environment of the vehicle. Tasks
such as 3D Object Detection deliver an insufficiently detailed understanding of
the surrounding scene because they only predict a bounding box for foreground
objects. In contrast, 3D Semantic Segmentation provides richer and denser
information about the environment by assigning a label to each individual
point, which is of paramount importance for autonomous driving tasks, such as
navigation or lane changes. To inspire future research, in this review paper,
we provide a comprehensive overview of the current state-of-the-art methods in
the field of Point Cloud Semantic Segmentation for autonomous driving. We
categorize the approaches into projection-based, 3D-based and hybrid methods.
Moreover, we discuss the most important and commonly used datasets for this
task and also emphasize the importance of synthetic data to support research
when real-world data is limited. We further present the results of the
different methods and compare them with respect to their segmentation accuracy
and efficiency.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 18:02:41 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Halperin",
"Dan",
""
],
[
"Eisl",
"Niklas",
""
]
] | TITLE: Point Cloud Based Scene Segmentation: A Survey
ABSTRACT: Autonomous driving is a safety-critical application, and it is therefore a
top priority that the accompanying assistance systems are able to provide
precise information about the surrounding environment of the vehicle. Tasks
such as 3D Object Detection deliver an insufficiently detailed understanding of
the surrounding scene because they only predict a bounding box for foreground
objects. In contrast, 3D Semantic Segmentation provides richer and denser
information about the environment by assigning a label to each individual
point, which is of paramount importance for autonomous driving tasks, such as
navigation or lane changes. To inspire future research, in this review paper,
we provide a comprehensive overview of the current state-of-the-art methods in
the field of Point Cloud Semantic Segmentation for autonomous driving. We
categorize the approaches into projection-based, 3D-based and hybrid methods.
Moreover, we discuss the most important and commonly used datasets for this
task and also emphasize the importance of synthetic data to support research
when real-world data is limited. We further present the results of the
different methods and compare them with respect to their segmentation accuracy
and efficiency.
|
2503.12600 | Tao Feng | Tao Feng, Yihang Sun, Jiaxuan You | GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The powerful capabilities of Large Language Models (LLMs) have led to their
growing use in evaluating human-generated content, particularly in evaluating
research ideas within academic settings. Existing solutions primarily rely on
prompt-based LLM methods or fine-tuned lightweight language models for idea
evaluation. However, these methods are often unstable and struggle to
comprehend the complex semantic information embedded in the ideas, impeding
their ability to perform high-quality evaluations. To address the above
challenges, we propose GraphEval, a lightweight graph-based LLM framework for
idea evaluation. Our insight is that a complex idea can be broken down into
comprehensible viewpoint nodes using prompts from small LLMs. These viewpoint
nodes can then be linked together through edges created from LLM-based relation
extraction and/or BERT similarity scores. The created viewpoint-graph can be
used to conveniently propagate scores across view-nodes to improve the
robustness of the idea evaluations. In particular, we propose two lightweight
graph-based methods for idea evaluation: (1) GraphEval-LP: a training-free
label propagation algorithm that propagates evaluation scores from known
view-nodes to unknown nodes; (2) GraphEval-GNN: a Graph Neural Network (GNN)
that is trained to predict the evaluation scores given the observed graph with
minimal computation resources. Moreover, to overcome LLM's limitation in
objectively assessing the novelty of ideas, we further add a novelty
detection model to GraphEval-GNN to enhance its capability in judging idea
novelty. Experiments on two datasets show GraphEval improves F1 scores by at
least 14% with low computation and API costs. Additionally, GraphEval can
effectively detect plagiarized ideas.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 18:24:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Feng",
"Tao",
""
],
[
"Sun",
"Yihang",
""
],
[
"You",
"Jiaxuan",
""
]
] | TITLE: GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation
ABSTRACT: The powerful capabilities of Large Language Models (LLMs) have led to their
growing use in evaluating human-generated content, particularly in evaluating
research ideas within academic settings. Existing solutions primarily rely on
prompt-based LLM methods or fine-tuned lightweight language models for idea
evaluation. However, these methods are often unstable and struggle to
comprehend the complex semantic information embedded in the ideas, impeding
their ability to perform high-quality evaluations. To address the above
challenges, we propose GraphEval, a lightweight graph-based LLM framework for
idea evaluation. Our insight is that a complex idea can be broken down into
comprehensible viewpoint nodes using prompts from small LLMs. These viewpoint
nodes can then be linked together through edges created from LLM-based relation
extraction and/or BERT similarity scores. The created viewpoint-graph can be
used to conveniently propagate scores across view-nodes to improve the
robustness of the idea evaluations. In particular, we propose two lightweight
graph-based methods for idea evaluation: (1) GraphEval-LP: a training-free
label propagation algorithm that propagates evaluation scores from known
view-nodes to unknown nodes; (2) GraphEval-GNN: a Graph Neural Network (GNN)
that is trained to predict the evaluation scores given the observed graph with
minimal computation resources. Moreover, to overcome LLM's limitation in
objectively assessing the novelty of ideas, we further add a novelty
detection model to GraphEval-GNN to enhance its capability in judging idea
novelty. Experiments on two datasets show GraphEval improves F1 scores by at
least 14% with low computation and API costs. Additionally, GraphEval can
effectively detect plagiarized ideas.
|
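GraphEval-LP, described above, propagates evaluation scores from known viewpoint nodes to unknown ones over a viewpoint graph. A generic, training-free label-propagation sketch under assumed row-normalization and damping (`alpha`) might look like the following; it is not the paper's exact algorithm.

```python
# Label propagation over a viewpoint graph: scores flow along weighted edges
# while observed nodes stay clamped to their known values.
import numpy as np

def propagate_scores(adj: np.ndarray, scores: np.ndarray, known: np.ndarray,
                     n_iter: int = 50, alpha: float = 0.9) -> np.ndarray:
    # adj:    (n, n) similarity/relation weights between viewpoint nodes
    # scores: (n,)   initial evaluation scores (arbitrary for unknown nodes)
    # known:  (n,)   boolean mask of nodes whose scores are observed
    deg = adj.sum(axis=1, keepdims=True).clip(min=1e-9)
    P = adj / deg                              # row-normalized transition matrix
    f = scores.astype(float)
    for _ in range(n_iter):
        f = alpha * (P @ f) + (1 - alpha) * scores
        f[known] = scores[known]               # clamp observed nodes
    return f
```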
2503.12616 | Myisha Ahmed Chowdhury | Myisha A. Chowdhury and Qiugang Lu | Equivalent-Circuit Thermal Model for Batteries with One-Shot Parameter
Identification | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Accurate state of temperature (SOT) estimation for batteries is crucial for
regulating their temperature within a desired range to ensure safe operation
and optimal performance. The existing measurement-based methods often generate
noisy signals and cannot scale up for large-scale battery packs. The
electrochemical model-based methods, on the contrary, offer high accuracy but
are computationally expensive. To tackle these issues, inspired by the
equivalent-circuit voltage model for batteries, this paper presents a novel
equivalent-circuit electro-thermal model (ECTM) for modeling battery surface
temperature. By approximating the complex heat generation inside batteries with
data-driven nonlinear (polynomial) functions of key measurable parameters such
as state-of-charge (SOC), current, and terminal voltage, our ECTM is simplified
into a linear form that admits rapid solutions. Such a simplified ECTM can be
readily identified from a single (one-shot) cycle of data. The proposed model is
extensively validated with benchmark NASA, MIT, and Oxford battery datasets.
Simulation results verify the accuracy of the model, despite being identified
with one-shot cycle data, in predicting battery temperatures robustly under
different battery degradation status and ambient conditions.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 19:12:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chowdhury",
"Myisha A.",
""
],
[
"Lu",
"Qiugang",
""
]
] | TITLE: Equivalent-Circuit Thermal Model for Batteries with One-Shot Parameter
Identification
ABSTRACT: Accurate state of temperature (SOT) estimation for batteries is crucial for
regulating their temperature within a desired range to ensure safe operation
and optimal performance. The existing measurement-based methods often generate
noisy signals and cannot scale up for large-scale battery packs. The
electrochemical model-based methods, on the contrary, offer high accuracy but
are computationally expensive. To tackle these issues, inspired by the
equivalent-circuit voltage model for batteries, this paper presents a novel
equivalent-circuit electro-thermal model (ECTM) for modeling battery surface
temperature. By approximating the complex heat generation inside batteries with
data-driven nonlinear (polynomial) functions of key measurable parameters such
as state-of-charge (SOC), current, and terminal voltage, our ECTM is simplified
into a linear form that admits rapid solutions. Such a simplified ECTM can be
readily identified from a single (one-shot) cycle of data. The proposed model is
extensively validated with benchmark NASA, MIT, and Oxford battery datasets.
Simulation results verify the accuracy of the model, despite being identified
with one-shot cycle data, in predicting battery temperatures robustly under
different battery degradation status and ambient conditions.
|
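The ECTM abstract above reduces the thermal model to a linear-in-parameters form that can be identified from a single cycle of data. A hedged sketch of that idea, with illustrative (assumed) polynomial features of SOC, current, and terminal voltage and an ordinary least-squares fit, is shown below; it mirrors the idea, not the paper's exact equations.

```python
# One-shot identification of a linear-in-parameters surface-temperature model
# via least squares on assumed polynomial features.
import numpy as np

def build_features(soc, current, voltage):
    # Simple polynomial/product features approximating heat-generation terms.
    return np.column_stack([np.ones_like(soc), soc, current, voltage,
                            current**2, current * voltage, soc * current])

def identify(soc, current, voltage, surface_temp):
    X = build_features(soc, current, voltage)
    theta, *_ = np.linalg.lstsq(X, surface_temp, rcond=None)  # one-shot fit
    return theta

def predict(theta, soc, current, voltage):
    return build_features(soc, current, voltage) @ theta
```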
2503.12617 | Anthony Lamelas | Anthony Lamelas and Harrison Muchnic | Scaling Semantic Categories: Investigating the Impact on Vision
Transformer Labeling Performance | 4 pages, 7 figures, submitted to CVPR (feedback pending) | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study explores the impact of scaling semantic categories on the image
classification performance of vision transformers (ViTs). In this specific
case, the CLIP server provided by Jina AI is used for experimentation. The
research hypothesizes that as the number of ground truth and artificially
introduced semantically equivalent categories increases, the labeling accuracy
of ViTs improves until a theoretical maximum or limit is reached. A wide
variety of image datasets were chosen to test this hypothesis. These datasets
were processed through a custom function in Python designed to evaluate the
model's accuracy, with adjustments being made to account for format differences
between datasets. By exponentially introducing new redundant categories, the
experiment assessed accuracy trends until they plateaued, decreased, or
fluctuated inconsistently. The findings show that while semantic scaling
initially increases model performance, the benefits diminish or reverse after
surpassing a critical threshold, providing insight into the limitations and
possible optimization of category labeling strategies for ViTs.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 19:14:21 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lamelas",
"Anthony",
""
],
[
"Muchnic",
"Harrison",
""
]
] | TITLE: Scaling Semantic Categories: Investigating the Impact on Vision
Transformer Labeling Performance
ABSTRACT: This study explores the impact of scaling semantic categories on the image
classification performance of vision transformers (ViTs). In this specific
case, the CLIP server provided by Jina AI is used for experimentation. The
research hypothesizes that as the number of ground truth and artificially
introduced semantically equivalent categories increases, the labeling accuracy
of ViTs improves until a theoretical maximum or limit is reached. A wide
variety of image datasets were chosen to test this hypothesis. These datasets
were processed through a custom function in Python designed to evaluate the
model's accuracy, with adjustments being made to account for format differences
between datasets. By exponentially introducing new redundant categories, the
experiment assessed accuracy trends until they plateaued, decreased, or
fluctuated inconsistently. The findings show that while semantic scaling
initially increases model performance, the benefits diminish or reverse after
surpassing a critical threshold, providing insight into the limitations and
possible optimization of category labeling strategies for ViTs.
|
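The study above scores images against a growing set of semantically equivalent category labels. The snippet below illustrates one plausible scoring rule (best-matching synonym per class over CLIP-style embeddings); it is not the authors' evaluation code, and the embedding source is left abstract.

```python
# Classify an image by the best cosine match among redundant labels per class,
# assuming unit-normalized CLIP-style embeddings are supplied externally.
import numpy as np

def classify(image_emb: np.ndarray, class_to_label_embs: dict) -> str:
    # image_emb:          (d,) unit-normalized image embedding
    # class_to_label_embs: class name -> (k, d) embeddings of k redundant labels
    best_class, best_score = None, -np.inf
    for cls, label_embs in class_to_label_embs.items():
        score = float(np.max(label_embs @ image_emb))  # best synonym wins
        if score > best_score:
            best_class, best_score = cls, score
    return best_class
```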
2503.12622 | Khayrul Islam | Khayrul Islam, Ryan F. Forelli, Jianzhong Han, Deven Bhadane, Jian
Huang, Joshua C. Agar, Nhan Tran, Seda Ogrenci, Yaling Liu | Real-Time Cell Sorting with Scalable In Situ FPGA-Accelerated Deep
Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Precise cell classification is essential in biomedical diagnostics and
therapeutic monitoring, particularly for identifying diverse cell types
involved in various diseases. Traditional cell classification methods such as
flow cytometry depend on molecular labeling which is often costly,
time-intensive, and can alter cell integrity. To overcome these limitations, we
present a label-free machine learning framework for cell classification,
designed for real-time sorting applications using bright-field microscopy
images. This approach leverages a teacher-student model architecture enhanced
by knowledge distillation, achieving high efficiency and scalability across
different cell types. Demonstrated through a use case of classifying lymphocyte
subsets, our framework accurately classifies T4, T8, and B cell types with a
dataset of 80,000 preprocessed images, accessible via an open-source Python
package for easy adaptation. Our teacher model attained 98\% accuracy in
differentiating T4 cells from B cells and 93\% accuracy in zero-shot
classification between T8 and B cells. Remarkably, our student model operates
with only 0.02\% of the teacher model's parameters, enabling field-programmable
gate array (FPGA) deployment. Our FPGA-accelerated student model achieves an
ultra-low inference latency of just 14.5~$\mu$s and a complete cell
detection-to-sorting trigger time of 24.7~$\mu$s, delivering 12x and 40x
improvements over the previous state-of-the-art real-time cell analysis
algorithm in inference and total latency, respectively, while preserving
accuracy comparable to the teacher model. This framework provides a scalable,
cost-effective solution for lymphocyte classification, as well as a new SOTA
real-time cell sorting implementation for rapid identification of subsets using
in situ deep learning on off-the-shelf computing hardware.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 19:32:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Islam",
"Khayrul",
""
],
[
"Forelli",
"Ryan F.",
""
],
[
"Han",
"Jianzhong",
""
],
[
"Bhadane",
"Deven",
""
],
[
"Huang",
"Jian",
""
],
[
"Agar",
"Joshua C.",
""
],
[
"Tran",
"Nhan",
""
],
[
"Ogrenci",
"Seda",
""
],
[
"Liu",
"Yaling",
""
]
] | TITLE: Real-Time Cell Sorting with Scalable In Situ FPGA-Accelerated Deep
Learning
ABSTRACT: Precise cell classification is essential in biomedical diagnostics and
therapeutic monitoring, particularly for identifying diverse cell types
involved in various diseases. Traditional cell classification methods such as
flow cytometry depend on molecular labeling which is often costly,
time-intensive, and can alter cell integrity. To overcome these limitations, we
present a label-free machine learning framework for cell classification,
designed for real-time sorting applications using bright-field microscopy
images. This approach leverages a teacher-student model architecture enhanced
by knowledge distillation, achieving high efficiency and scalability across
different cell types. Demonstrated through a use case of classifying lymphocyte
subsets, our framework accurately classifies T4, T8, and B cell types with a
dataset of 80,000 preprocessed images, accessible via an open-source Python
package for easy adaptation. Our teacher model attained 98\% accuracy in
differentiating T4 cells from B cells and 93\% accuracy in zero-shot
classification between T8 and B cells. Remarkably, our student model operates
with only 0.02\% of the teacher model's parameters, enabling field-programmable
gate array (FPGA) deployment. Our FPGA-accelerated student model achieves an
ultra-low inference latency of just 14.5~$\mu$s and a complete cell
detection-to-sorting trigger time of 24.7~$\mu$s, delivering 12x and 40x
improvements over the previous state-of-the-art real-time cell analysis
algorithm in inference and total latency, respectively, while preserving
accuracy comparable to the teacher model. This framework provides a scalable,
cost-effective solution for lymphocyte classification, as well as a new SOTA
real-time cell sorting implementation for rapid identification of subsets using
in situ deep learning on off-the-shelf computing hardware.
|
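The teacher-student setup above relies on knowledge distillation to shrink the model for FPGA deployment. As general background rather than the paper's training recipe, a standard temperature-scaled distillation loss can be written as follows; the temperature and mixing weight are assumptions.

```python
# Temperature-scaled knowledge-distillation loss: KL between softened teacher
# and student distributions blended with the usual hard-label loss.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```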
2503.12623 | Vrushank Ahire | Vrushank Ahire, Kunal Shah, Mudasir Nazir Khan, Nikhil Pakhale,
Lownish Rai Sookha, M. A. Ganaie, Abhinav Dhall | MAVEN: Multi-modal Attention for Valence-Arousal Emotion Network | null | null | null | null | cs.LG cs.AI cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | This paper introduces MAVEN (Multi-modal Attention for Valence-Arousal
Emotion Network), a novel architecture for dynamic emotion recognition through
dimensional modeling of affect. The model uniquely integrates visual, audio,
and textual modalities via a bi-directional cross-modal attention mechanism
with six distinct attention pathways, enabling comprehensive interactions
between all modality pairs. Our proposed approach employs modality-specific
encoders to extract rich feature representations from synchronized video
frames, audio segments, and transcripts. The architecture's novelty lies in its
cross-modal enhancement strategy, where each modality representation is refined
through weighted attention from other modalities, followed by self-attention
refinement through modality-specific encoders. Rather than directly predicting
valence-arousal values, MAVEN predicts emotions in a polar coordinate form,
aligning with psychological models of the emotion circumplex. Experimental
evaluation on the Aff-Wild2 dataset demonstrates the effectiveness of our
approach, with performance measured using Concordance Correlation Coefficient
(CCC). The multi-stage architecture demonstrates superior ability to capture
the complex, nuanced nature of emotional expressions in conversational videos,
advancing the state-of-the-art (SOTA) in continuous emotion recognition
in-the-wild. Code can be found at:
https://github.com/Vrushank-Ahire/MAVEN_8th_ABAW.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 19:32:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ahire",
"Vrushank",
""
],
[
"Shah",
"Kunal",
""
],
[
"Khan",
"Mudasir Nazir",
""
],
[
"Pakhale",
"Nikhil",
""
],
[
"Sookha",
"Lownish Rai",
""
],
[
"Ganaie",
"M. A.",
""
],
[
"Dhall",
"Abhinav",
""
]
] | TITLE: MAVEN: Multi-modal Attention for Valence-Arousal Emotion Network
ABSTRACT: This paper introduces MAVEN (Multi-modal Attention for Valence-Arousal
Emotion Network), a novel architecture for dynamic emotion recognition through
dimensional modeling of affect. The model uniquely integrates visual, audio,
and textual modalities via a bi-directional cross-modal attention mechanism
with six distinct attention pathways, enabling comprehensive interactions
between all modality pairs. Our proposed approach employs modality-specific
encoders to extract rich feature representations from synchronized video
frames, audio segments, and transcripts. The architecture's novelty lies in its
cross-modal enhancement strategy, where each modality representation is refined
through weighted attention from other modalities, followed by self-attention
refinement through modality-specific encoders. Rather than directly predicting
valence-arousal values, MAVEN predicts emotions in a polar coordinate form,
aligning with psychological models of the emotion circumplex. Experimental
evaluation on the Aff-Wild2 dataset demonstrates the effectiveness of our
approach, with performance measured using Concordance Correlation Coefficient
(CCC). The multi-stage architecture demonstrates superior ability to capture
the complex, nuanced nature of emotional expressions in conversational videos,
advancing the state-of-the-art (SOTA) in continuous emotion recognition
in-the-wild. Code can be found at:
https://github.com/Vrushank-Ahire/MAVEN_8th_ABAW.
|
2503.12653 | Francesco Calcagno | Francesco Calcagno, Luca Serfilippi, Giorgio Franceschelli, Marco
Garavelli, Mirco Musolesi, Ivan Rivalta | Quantum Chemistry Driven Molecular Inverse Design with Data-free
Reinforcement Learning | 47 pages including references and supporting material | null | null | null | physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | The inverse design of molecules has challenged chemists for decades. In the
past years, machine learning and artificial intelligence have emerged as new
tools to generate molecules tailoring desired properties, but with the limit of
relying on models that are pretrained on large datasets. Here, we present a
data-free generative model based on reinforcement learning and quantum
mechanics calculations. To improve the generation, our software is based on a
five-model reinforcement learning algorithm designed to mimic the syntactic
rules of an original ASCII encoding, reported here, that is based on SMILES.
The reinforcement learning generator is rewarded by on-the-fly quantum
mechanics calculations within a computational routine addressing conformational
sampling. We demonstrate that our software successfully generates new molecules
with desired properties, finding optimal solutions for problems with known
solutions and (sub)optimal molecules for unexplored chemical (sub)spaces, while
showing a significant speed-up over a reference baseline.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 21:12:15 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Calcagno",
"Francesco",
""
],
[
"Serfilippi",
"Luca",
""
],
[
"Franceschelli",
"Giorgio",
""
],
[
"Garavelli",
"Marco",
""
],
[
"Musolesi",
"Mirco",
""
],
[
"Rivalta",
"Ivan",
""
]
] | TITLE: Quantum Chemistry Driven Molecular Inverse Design with Data-free
Reinforcement Learning
ABSTRACT: The inverse design of molecules has challenged chemists for decades. In the
past years, machine learning and artificial intelligence have emerged as new
tools to generate molecules tailoring desired properties, but with the limit of
relying on models that are pretrained on large datasets. Here, we present a
data-free generative model based on reinforcement learning and quantum
mechanics calculations. To improve the generation, our software is based on a
five-model reinforcement learning algorithm designed to mimic the syntactic
rules of an original ASCII encoding, reported here, that is based on SMILES.
The reinforcement learning generator is rewarded by on-the-fly quantum
mechanics calculations within a computational routine addressing conformational
sampling. We demonstrate that our software successfully generates new molecules
with desired properties, finding optimal solutions for problems with known
solutions and (sub)optimal molecules for unexplored chemical (sub)spaces, while
showing a significant speed-up over a reference baseline.
|
2503.12660 | Tiziano Guadagnino Dr. | Tiziano Guadagnino, Benedikt Mersch, Saurabh Gupta, Ignacio Vizzo,
Giorgio Grisetti, Cyrill Stachniss | KISS-SLAM: A Simple, Robust, and Accurate 3D LiDAR SLAM System With
Enhanced Generalization Capabilities | 8 pages | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust and accurate localization and mapping of an environment using laser
scanners, so-called LiDAR SLAM, is essential to many robotic applications.
Early 3D LiDAR SLAM methods often exploited additional information from IMU or
GNSS sensors to enhance localization accuracy and mitigate drift. Later,
advanced systems further improved the estimation at the cost of a higher
runtime and complexity. This paper explores the limits of what can be achieved
with a LiDAR-only SLAM approach while following the "Keep It Small and Simple"
(KISS) principle. By leveraging this minimalistic design principle, our system,
KISS-SLAM, achieves state-of-the-art performance in pose accuracy while
requiring little to no parameter tuning for deployment across diverse
environments, sensors, and motion profiles. We follow best practices in
graph-based SLAM and build upon LiDAR odometry to compute the relative motion
between scans and construct local maps of the environment. To correct drift, we
match local maps and optimize the trajectory in a pose graph optimization step.
The experimental results demonstrate that this design achieves competitive
performance while reducing complexity and reliance on additional sensor
modalities. By prioritizing simplicity, this work provides a new strong
baseline for LiDAR-only SLAM and a high-performing starting point for future
research. Further, our pipeline builds consistent maps that can be used
directly for further downstream tasks like navigation. Our open-source system
operates faster than the sensor frame rate in all presented datasets and is
designed for real-world scenarios.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 21:30:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guadagnino",
"Tiziano",
""
],
[
"Mersch",
"Benedikt",
""
],
[
"Gupta",
"Saurabh",
""
],
[
"Vizzo",
"Ignacio",
""
],
[
"Grisetti",
"Giorgio",
""
],
[
"Stachniss",
"Cyrill",
""
]
] | TITLE: KISS-SLAM: A Simple, Robust, and Accurate 3D LiDAR SLAM System With
Enhanced Generalization Capabilities
ABSTRACT: Robust and accurate localization and mapping of an environment using laser
scanners, so-called LiDAR SLAM, is essential to many robotic applications.
Early 3D LiDAR SLAM methods often exploited additional information from IMU or
GNSS sensors to enhance localization accuracy and mitigate drift. Later,
advanced systems further improved the estimation at the cost of a higher
runtime and complexity. This paper explores the limits of what can be achieved
with a LiDAR-only SLAM approach while following the "Keep It Small and Simple"
(KISS) principle. By leveraging this minimalistic design principle, our system,
KISS-SLAM, achieves state-of-the-art performance in pose accuracy while
requiring little to no parameter tuning for deployment across diverse
environments, sensors, and motion profiles. We follow best practices in
graph-based SLAM and build upon LiDAR odometry to compute the relative motion
between scans and construct local maps of the environment. To correct drift, we
match local maps and optimize the trajectory in a pose graph optimization step.
The experimental results demonstrate that this design achieves competitive
performance while reducing complexity and reliance on additional sensor
modalities. By prioritizing simplicity, this work provides a new strong
baseline for LiDAR-only SLAM and a high-performing starting point for future
research. Further, our pipeline builds consistent maps that can be used
directly for further downstream tasks like navigation. Our open-source system
operates faster than the sensor frame rate in all presented datasets and is
designed for real-world scenarios.
|
2503.12667 | Jacob Chmura | Jacob Chmura, Jonah Dauvet, Sebastian Sabry | Plausibility Vaccine: Injecting LLM Knowledge for Event Plausibility | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Despite advances in language modelling, distributional methods that build
semantic representations from co-occurrences fail to discriminate between
plausible and implausible events. In this work, we investigate how plausibility
prediction can be improved by injecting latent knowledge prompted from large
language models using parameter-efficient fine-tuning. We train 12 task
adapters to learn various physical properties and association measures and
perform adapter fusion to compose latent semantic knowledge from each task on
top of pre-trained ALBERT embeddings. We automate auxiliary task data
generation, which enables us to scale our approach and fine-tune our learned
representations across two plausibility datasets. Our code is available at
https://github.com/Jacob-Chmura/plausibility-vaccine.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 21:55:17 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Chmura",
"Jacob",
""
],
[
"Dauvet",
"Jonah",
""
],
[
"Sabry",
"Sebastian",
""
]
] | TITLE: Plausibility Vaccine: Injecting LLM Knowledge for Event Plausibility
ABSTRACT: Despite advances in language modelling, distributional methods that build
semantic representations from co-occurrences fail to discriminate between
plausible and implausible events. In this work, we investigate how plausibility
prediction can be improved by injecting latent knowledge prompted from large
language models using parameter-efficient fine-tuning. We train 12 task
adapters to learn various physical properties and association measures and
perform adapter fusion to compose latent semantic knowledge from each task on
top of pre-trained ALBERT embeddings. We automate auxiliary task data
generation, which enables us to scale our approach and fine-tune our learned
representations across two plausibility datasets. Our code is available at
https://github.com/Jacob-Chmura/plausibility-vaccine.
|
2503.12683 | Lachlan Simpson | Lachlan Simpson, Federico Costanza, Kyle Millar, Adriel Cheng,
Cheng-Chew Lim, Hong Gunn Chew | Algebraic Adversarial Attacks on Explainability Models | null | null | null | null | cs.LG math.GR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Classical adversarial attacks are phrased as a constrained optimisation
problem. Despite the efficacy of a constrained optimisation approach to
adversarial attacks, one cannot trace how an adversarial point was generated.
In this work, we propose an algebraic approach to adversarial attacks and study
the conditions under which one can generate adversarial examples for post-hoc
explainability models. Phrasing neural networks in the framework of geometric
deep learning, algebraic adversarial attacks are constructed through analysis
of the symmetry groups of neural networks. Algebraic adversarial examples
provide a mathematically tractable approach to adversarial examples. We
validate our approach of algebraic adversarial examples on two well-known and
one real-world dataset.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 22:55:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Simpson",
"Lachlan",
""
],
[
"Costanza",
"Federico",
""
],
[
"Millar",
"Kyle",
""
],
[
"Cheng",
"Adriel",
""
],
[
"Lim",
"Cheng-Chew",
""
],
[
"Chew",
"Hong Gunn",
""
]
] | TITLE: Algebraic Adversarial Attacks on Explainability Models
ABSTRACT: Classical adversarial attacks are phrased as a constrained optimisation
problem. Despite the efficacy of a constrained optimisation approach to
adversarial attacks, one cannot trace how an adversarial point was generated.
In this work, we propose an algebraic approach to adversarial attacks and study
the conditions under which one can generate adversarial examples for post-hoc
explainability models. Phrasing neural networks in the framework of geometric
deep learning, algebraic adversarial attacks are constructed through analysis
of the symmetry groups of neural networks. Algebraic adversarial examples
provide a mathematically tractable approach to adversarial examples. We
validate our approach of algebraic adversarial examples on two well-known and
one real-world dataset.
|
2503.12686 | Jacqueline Mitchell | Jacqueline L. Mitchell, Brian Hyeongseok Kim, Chenyu Zhou, Chao Wang | Can LLMs Formally Reason as Abstract Interpreters for Program Analysis? | null | null | null | null | cs.LG cs.PL cs.SE | http://creativecommons.org/licenses/by/4.0/ | LLMs have demonstrated impressive capabilities in code generation and
comprehension, but their potential to perform program analysis in
a formal, automatic manner remains under-explored. To that end, we
systematically investigate whether LLMs can reason about programs using a
program analysis framework called abstract interpretation. We prompt LLMs to
follow two different strategies, denoted as Compositional and Fixed Point
Equation, to formally reason in the style of abstract interpretation, which has
never been done before to the best of our knowledge. We validate our approach
using state-of-the-art LLMs on 22 challenging benchmark programs from the
Software Verification Competition (SV-COMP) 2019 dataset, widely used in
program analysis. Our results show that our strategies are able to elicit
abstract interpretation-based reasoning in the tested models, but LLMs are
susceptible to logical errors, especially while interpreting complex program
structures, as well as general hallucinations. This highlights key areas for
improvement in the formal reasoning capabilities of LLMs.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 23:05:52 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Mitchell",
"Jacqueline L.",
""
],
[
"Kim",
"Brian Hyeongseok",
""
],
[
"Zhou",
"Chenyu",
""
],
[
"Wang",
"Chao",
""
]
] | TITLE: Can LLMs Formally Reason as Abstract Interpreters for Program Analysis?
ABSTRACT: LLMs have demonstrated impressive capabilities in code generation and
comprehension, but their potential to perform program analysis in
a formal, automatic manner remains under-explored. To that end, we
systematically investigate whether LLMs can reason about programs using a
program analysis framework called abstract interpretation. We prompt LLMs to
follow two different strategies, denoted as Compositional and Fixed Point
Equation, to formally reason in the style of abstract interpretation, which has
never been done before to the best of our knowledge. We validate our approach
using state-of-the-art LLMs on 22 challenging benchmark programs from the
Software Verification Competition (SV-COMP) 2019 dataset, widely used in
program analysis. Our results show that our strategies are able to elicit
abstract interpretation-based reasoning in the tested models, but LLMs are
susceptible to logical errors, especially while interpreting complex program
structures, as well as general hallucinations. This highlights key areas for
improvement in the formal reasoning capabilities of LLMs.
|
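For readers unfamiliar with the abstract-interpretation framework referenced above, here is a tiny interval-domain example showing the kind of formal reasoning the prompts ask the LLMs to imitate; it is background intuition only, not the paper's Compositional or Fixed Point Equation prompting strategy.

```python
# Interval-domain abstract interpretation in miniature: track a variable's
# possible range through an addition and a branch join.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_join(a, b):
    # Join (least upper bound) of two intervals is their convex hull.
    return (min(a[0], b[0]), max(a[1], b[1]))

# x in [0, 5]; after "if cond: x = x + 1 else: x = x + 3", x lies in the join.
x = (0, 5)
print(interval_join(interval_add(x, (1, 1)), interval_add(x, (3, 3))))  # (1, 8)
```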
2503.12695 | Meng Li | Yuansheng Lian, Ke Zhang, Meng Li | CDKFormer: Contextual Deviation Knowledge-Based Transformer for
Long-Tail Trajectory Prediction | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future movements of surrounding vehicles is essential for
ensuring the safe operation and efficient navigation of autonomous vehicles
(AVs) in urban traffic environments. Existing vehicle trajectory prediction
methods primarily focus on improving overall performance, yet they struggle to
address long-tail scenarios effectively. This limitation often leads to poor
predictions in rare cases, significantly increasing the risk of safety
incidents. Taking Argoverse 2 motion forecasting dataset as an example, we
first investigate the long-tail characteristics in trajectory samples from two
perspectives, individual motion and group interaction, and deriving deviation
features to distinguish abnormal from regular scenarios. On this basis, we
propose CDKFormer, a Contextual Deviation Knowledge-based Transformer model for
long-tail trajectory prediction. CDKFormer integrates an attention-based scene
context fusion module to encode spatiotemporal interaction and road topology.
An additional deviation feature fusion module is proposed to capture the
dynamic deviations in the target vehicle status. We further introduce a dual
query-based decoder, supported by a multi-stream decoder block, to sequentially
decode heterogeneous scene deviation features and generate multimodal
trajectory predictions. Extensive experiments demonstrate that CDKFormer
achieves state-of-the-art performance, significantly enhancing prediction
accuracy and robustness for long-tailed trajectories compared to existing
methods, thus advancing the reliability of AVs in complex real-world
environments.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 23:48:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lian",
"Yuansheng",
""
],
[
"Zhang",
"Ke",
""
],
[
"Li",
"Meng",
""
]
] | TITLE: CDKFormer: Contextual Deviation Knowledge-Based Transformer for
Long-Tail Trajectory Prediction
ABSTRACT: Predicting the future movements of surrounding vehicles is essential for
ensuring the safe operation and efficient navigation of autonomous vehicles
(AVs) in urban traffic environments. Existing vehicle trajectory prediction
methods primarily focus on improving overall performance, yet they struggle to
address long-tail scenarios effectively. This limitation often leads to poor
predictions in rare cases, significantly increasing the risk of safety
incidents. Taking Argoverse 2 motion forecasting dataset as an example, we
first investigate the long-tail characteristics in trajectory samples from two
perspectives, individual motion and group interaction, and derive deviation
features to distinguish abnormal from regular scenarios. On this basis, we
propose CDKFormer, a Contextual Deviation Knowledge-based Transformer model for
long-tail trajectory prediction. CDKFormer integrates an attention-based scene
context fusion module to encode spatiotemporal interaction and road topology.
An additional deviation feature fusion module is proposed to capture the
dynamic deviations in the target vehicle status. We further introduce a dual
query-based decoder, supported by a multi-stream decoder block, to sequentially
decode heterogeneous scene deviation features and generate multimodal
trajectory predictions. Extensive experiments demonstrate that CDKFormer
achieves state-of-the-art performance, significantly enhancing prediction
accuracy and robustness for long-tailed trajectories compared to existing
methods, thus advancing the reliability of AVs in complex real-world
environments.
|
2503.12698 | Dazhou Guo | Dazhou Guo, Zhanghexuan Ji, Yanzhou Su, Dandan Zheng, Heng Guo, Puyang
Wang, Ke Yan, Yirui Wang, Qinji Yu, Zi Li, Minfeng Xu, Jianfeng Zhang,
Haoshen Li, Jia Ge, Tsung-Ying Ho, Bing-Shen Huang, Tashan Ai, Kuaile Zhao,
Na Shen, Qifeng Wang, Yun Bian, Tingyu Wu, Peng Du, Hua Zhang, Feng-Ming
Kong, Alan L. Yuille, Cher Heng Tan, Chunyan Miao, Perry J. Pickhardt,
Senxiang Yan, Ronald M. Summers, Le Lu, Dakai Jin, Xianghua Ye | A Continual Learning-driven Model for Accurate and Generalizable
Segmentation of Clinically Comprehensive and Fine-grained Whole-body
Anatomies in CT | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Precision medicine in the quantitative management of chronic diseases and
oncology would be greatly improved if the Computed Tomography (CT) scan of any
patient could be segmented, parsed and analyzed in a precise and detailed way.
However, there is no such fully annotated CT dataset with all anatomies
delineated for training because of the exceptionally high manual cost, the need
for specialized clinical expertise, and the time required to finish the task.
To this end, we proposed a novel continual learning-driven CT model that can
segment complete anatomies presented using dozens of previously partially
labeled datasets, dynamically expanding its capacity to segment new ones
without compromising previously learned organ knowledge. Existing multi-dataset
approaches are not able to dynamically segment new anatomies without
catastrophic forgetting and would encounter optimization difficulty or
infeasibility when segmenting hundreds of anatomies across the whole range of
body regions. Our single unified CT segmentation model, CL-Net, can highly
accurately segment a clinically comprehensive set of 235 fine-grained
whole-body anatomies. Composed of a universal encoder, multiple optimized and
pruned decoders, CL-Net is developed using 13,952 CT scans from 20 public and
16 private high-quality partially labeled CT datasets of various vendors,
different contrast phases, and pathologies. Extensive evaluation demonstrates
that CL-Net consistently outperforms the upper limit of an ensemble of 36
specialist nnUNets trained per dataset with only 5% of the model size and
significantly surpasses the segmentation accuracy of recent leading Segment
Anything-style medical image foundation models by large margins. Our continual
learning-driven CL-Net model would lay a solid foundation to facilitate many
downstream tasks of oncology and chronic diseases using the most widely adopted
CT imaging.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 23:55:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Dazhou",
""
],
[
"Ji",
"Zhanghexuan",
""
],
[
"Su",
"Yanzhou",
""
],
[
"Zheng",
"Dandan",
""
],
[
"Guo",
"Heng",
""
],
[
"Wang",
"Puyang",
""
],
[
"Yan",
"Ke",
""
],
[
"Wang",
"Yirui",
""
],
[
"Yu",
"Qinji",
""
],
[
"Li",
"Zi",
""
],
[
"Xu",
"Minfeng",
""
],
[
"Zhang",
"Jianfeng",
""
],
[
"Li",
"Haoshen",
""
],
[
"Ge",
"Jia",
""
],
[
"Ho",
"Tsung-Ying",
""
],
[
"Huang",
"Bing-Shen",
""
],
[
"Ai",
"Tashan",
""
],
[
"Zhao",
"Kuaile",
""
],
[
"Shen",
"Na",
""
],
[
"Wang",
"Qifeng",
""
],
[
"Bian",
"Yun",
""
],
[
"Wu",
"Tingyu",
""
],
[
"Du",
"Peng",
""
],
[
"Zhang",
"Hua",
""
],
[
"Kong",
"Feng-Ming",
""
],
[
"Yuille",
"Alan L.",
""
],
[
"Tan",
"Cher Heng",
""
],
[
"Miao",
"Chunyan",
""
],
[
"Pickhardt",
"Perry J.",
""
],
[
"Yan",
"Senxiang",
""
],
[
"Summers",
"Ronald M.",
""
],
[
"Lu",
"Le",
""
],
[
"Jin",
"Dakai",
""
],
[
"Ye",
"Xianghua",
""
]
] | TITLE: A Continual Learning-driven Model for Accurate and Generalizable
Segmentation of Clinically Comprehensive and Fine-grained Whole-body
Anatomies in CT
ABSTRACT: Precision medicine in the quantitative management of chronic diseases and
oncology would be greatly improved if the Computed Tomography (CT) scan of any
patient could be segmented, parsed and analyzed in a precise and detailed way.
However, there is no such fully annotated CT dataset with all anatomies
delineated for training because of the exceptionally high manual cost, the need
for specialized clinical expertise, and the time required to finish the task.
To this end, we proposed a novel continual learning-driven CT model that can
segment complete anatomies presented using dozens of previously partially
labeled datasets, dynamically expanding its capacity to segment new ones
without compromising previously learned organ knowledge. Existing multi-dataset
approaches are not able to dynamically segment new anatomies without
catastrophic forgetting and would encounter optimization difficulty or
infeasibility when segmenting hundreds of anatomies across the whole range of
body regions. Our single unified CT segmentation model, CL-Net, can highly
accurately segment a clinically comprehensive set of 235 fine-grained
whole-body anatomies. Composed of a universal encoder, multiple optimized and
pruned decoders, CL-Net is developed using 13,952 CT scans from 20 public and
16 private high-quality partially labeled CT datasets of various vendors,
different contrast phases, and pathologies. Extensive evaluation demonstrates
that CL-Net consistently outperforms the upper limit of an ensemble of 36
specialist nnUNets trained per dataset with only 5% of the model size and
significantly surpasses the segmentation accuracy of recent leading Segment
Anything-style medical image foundation models by large margins. Our continual
learning-driven CL-Net model would lay a solid foundation to facilitate many
downstream tasks of oncology and chronic diseases using the most widely adopted
CT imaging.
|
2503.12706 | Rahul Deshmukh | Rahul Deshmukh and Avinash Kak | SatDepth: A Novel Dataset for Satellite Image Matching | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in deep-learning based methods for image matching have
demonstrated their superiority over traditional algorithms, enabling
correspondence estimation in challenging scenes with significant differences in
viewing angles, illumination and weather conditions. However, the existing
datasets, learning frameworks, and evaluation metrics for the deep-learning
based methods are limited to ground-based images recorded with pinhole cameras
and have not been explored for satellite images. In this paper, we present
``SatDepth'', a novel dataset that provides dense ground-truth correspondences
for training image matching frameworks meant specifically for satellite images.
Satellites capture images from various viewing angles and tracks through
multiple revisits over a region. To manage this variability, we propose a
dataset balancing strategy through a novel image rotation augmentation
procedure. This procedure allows for the discovery of corresponding pixels even
in the presence of large rotational differences between the images. We
benchmark four existing image matching frameworks using our dataset and carry
out an ablation study that confirms that the models trained with our dataset
with rotation augmentation outperform (up to 40% increase in precision) the
models trained with other datasets, especially when there exist large
rotational differences between the images.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 00:14:13 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Deshmukh",
"Rahul",
""
],
[
"Kak",
"Avinash",
""
]
] | TITLE: SatDepth: A Novel Dataset for Satellite Image Matching
ABSTRACT: Recent advances in deep-learning based methods for image matching have
demonstrated their superiority over traditional algorithms, enabling
correspondence estimation in challenging scenes with significant differences in
viewing angles, illumination and weather conditions. However, the existing
datasets, learning frameworks, and evaluation metrics for the deep-learning
based methods are limited to ground-based images recorded with pinhole cameras
and have not been explored for satellite images. In this paper, we present
``SatDepth'', a novel dataset that provides dense ground-truth correspondences
for training image matching frameworks meant specifically for satellite images.
Satellites capture images from various viewing angles and tracks through
multiple revisits over a region. To manage this variability, we propose a
dataset balancing strategy through a novel image rotation augmentation
procedure. This procedure allows for the discovery of corresponding pixels even
in the presence of large rotational differences between the images. We
benchmark four existing image matching frameworks using our dataset and carry
out an ablation study that confirms that the models trained with our dataset
with rotation augmentation outperform (up to 40% increase in precision) the
models trained with other datasets, especially when there exist large
rotational differences between the images.
|
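SatDepth's rotation augmentation, summarized above, rotates an image while carrying its ground-truth correspondences through the same transform. A simplified OpenCV-based sketch of that bookkeeping follows; the border handling and implementation details are assumptions, not the dataset's actual pipeline.

```python
# Rotate an image and map its ground-truth correspondence pixels through the
# same affine rotation, keeping only points that remain inside the frame.
import cv2
import numpy as np

def rotate_with_correspondences(img: np.ndarray, pts: np.ndarray, angle_deg: float):
    # pts: (N, 2) array of (x, y) pixel coordinates in the original image.
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)   # 2x3 affine
    rotated = cv2.warpAffine(img, M, (w, h))
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])              # homogeneous
    new_pts = (M @ pts_h.T).T                                     # rotated pixels
    inside = (new_pts[:, 0] >= 0) & (new_pts[:, 0] < w) & \
             (new_pts[:, 1] >= 0) & (new_pts[:, 1] < h)
    return rotated, new_pts[inside], inside
```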
2503.12720 | Feng Qiao | Feng Qiao, Zhexiao Xiong, Eric Xing, Nathan Jacobs | GenStereo: Towards Open-World Generation of Stereo Images and
Unsupervised Matching | Project page is available at https://qjizhi.github.io/genstereo | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stereo images are fundamental to numerous applications, including extended
reality (XR) devices, autonomous driving, and robotics. Unfortunately,
acquiring high-quality stereo images remains challenging due to the precise
calibration requirements of dual-camera setups and the complexity of obtaining
accurate, dense disparity maps. Existing stereo image generation methods
typically focus on either visual quality for viewing or geometric accuracy for
matching, but not both. We introduce GenStereo, a diffusion-based approach, to
bridge this gap. The method includes two primary innovations: (1) conditioning
the diffusion process on a disparity-aware coordinate embedding and a warped
input image, allowing for more precise stereo alignment than previous methods,
and (2) an adaptive fusion mechanism that intelligently combines the
diffusion-generated image with a warped image, improving both realism and
disparity consistency. Through extensive training on 11 diverse stereo
datasets, GenStereo demonstrates strong generalization ability. GenStereo
achieves state-of-the-art performance in both stereo image generation and
unsupervised stereo matching tasks. Our framework eliminates the need for
complex hardware setups while enabling high-quality stereo image generation,
making it valuable for both real-world applications and unsupervised learning
scenarios. Project page is available at https://qjizhi.github.io/genstereo
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 01:19:28 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Qiao",
"Feng",
""
],
[
"Xiong",
"Zhexiao",
""
],
[
"Xing",
"Eric",
""
],
[
"Jacobs",
"Nathan",
""
]
] | TITLE: GenStereo: Towards Open-World Generation of Stereo Images and
Unsupervised Matching
ABSTRACT: Stereo images are fundamental to numerous applications, including extended
reality (XR) devices, autonomous driving, and robotics. Unfortunately,
acquiring high-quality stereo images remains challenging due to the precise
calibration requirements of dual-camera setups and the complexity of obtaining
accurate, dense disparity maps. Existing stereo image generation methods
typically focus on either visual quality for viewing or geometric accuracy for
matching, but not both. We introduce GenStereo, a diffusion-based approach, to
bridge this gap. The method includes two primary innovations: (1) conditioning
the diffusion process on a disparity-aware coordinate embedding and a warped
input image, allowing for more precise stereo alignment than previous methods,
and (2) an adaptive fusion mechanism that intelligently combines the
diffusion-generated image with a warped image, improving both realism and
disparity consistency. Through extensive training on 11 diverse stereo
datasets, GenStereo demonstrates strong generalization ability. GenStereo
achieves state-of-the-art performance in both stereo image generation and
unsupervised stereo matching tasks. Our framework eliminates the need for
complex hardware setups while enabling high-quality stereo image generation,
making it valuable for both real-world applications and unsupervised learning
scenarios. Project page is available at https://qjizhi.github.io/genstereo
|
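As an aside for readers of this record: the GenStereo entry above hinges on two generic operations, warping an input view by a disparity map and adaptively fusing the warped image with a diffusion-generated one. The sketch below illustrates only those generic operations in plain numpy; it is not the paper's implementation, and the function names and the confidence-mask fusion rule are assumptions.

```python
import numpy as np

def warp_by_disparity(left: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Naively warp a left image to the right view by shifting pixels by their disparity.

    left:      (H, W, C) float array
    disparity: (H, W) non-negative horizontal disparities (x_left - x_right)
    Unfilled pixels (occlusions/holes) stay zero; overlapping writes simply overwrite.
    """
    H, W, _ = left.shape
    warped = np.zeros_like(left)
    xs = np.arange(W)[None, :].repeat(H, axis=0)      # source x-coordinates
    target_x = np.round(xs - disparity).astype(int)   # shifted x in the right view
    valid = (target_x >= 0) & (target_x < W)
    ys = np.arange(H)[:, None].repeat(W, axis=1)
    warped[ys[valid], target_x[valid]] = left[valid]
    return warped

def adaptive_fusion(generated: np.ndarray, warped: np.ndarray,
                    confidence: np.ndarray) -> np.ndarray:
    """Per-pixel convex combination of a generated image and a disparity-warped image.

    confidence in [0, 1] favours the warped (geometrically grounded) pixels where high.
    """
    m = confidence[..., None]                         # broadcast over channels
    return m * warped + (1.0 - m) * generated
```

In practice a learned per-pixel confidence would replace the hand-provided mask; the convex combination simply keeps the output within the range of its two inputs.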
2503.12730 | Philip Quirke | Philip Quirke, Clement Neo, Abir Harrasse, Dhruv Nathawani and Amir
Abdullah | TinySQL: A Progressive Text-to-SQL Dataset for Mechanistic
Interpretability Research | 9 pages, 19 figures, 7 tables, 18 trained models | null | null | null | cs.LG cs.AI cs.DB | http://creativecommons.org/licenses/by-sa/4.0/ | Mechanistic interpretability research faces a gap between analyzing simple
circuits in toy tasks and discovering features in large models. To bridge this
gap, we propose text-to-SQL generation as an ideal task to study, as it
combines the formal structure of toy tasks with real-world complexity. We
introduce TinySQL, a synthetic dataset progressing from basic to advanced SQL
operations, and train models ranging from 33M to 1B parameters to establish a
comprehensive testbed for interpretability. We apply multiple complementary
interpretability techniques, including edge attribution patching and sparse
autoencoders, to identify minimal circuits and components supporting SQL
generation. Our analysis reveals both the potential and limitations of current
interpretability methods, showing how circuits can vary even across similar
queries. Lastly, we demonstrate how mechanistic interpretability can identify
flawed heuristics in models and improve synthetic dataset design. Our work
provides a comprehensive framework for evaluating and advancing
interpretability techniques while establishing clear boundaries for their
reliable application.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 01:47:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Quirke",
"Philip",
""
],
[
"Neo",
"Clement",
""
],
[
"Harrasse",
"Abir",
""
],
[
"Nathawani",
"Dhruv",
""
],
[
"Abdullah",
"Amir",
""
]
] | TITLE: TinySQL: A Progressive Text-to-SQL Dataset for Mechanistic
Interpretability Research
ABSTRACT: Mechanistic interpretability research faces a gap between analyzing simple
circuits in toy tasks and discovering features in large models. To bridge this
gap, we propose text-to-SQL generation as an ideal task to study, as it
combines the formal structure of toy tasks with real-world complexity. We
introduce TinySQL, a synthetic dataset progressing from basic to advanced SQL
operations, and train models ranging from 33M to 1B parameters to establish a
comprehensive testbed for interpretability. We apply multiple complementary
interpretability techniques, including edge attribution patching and sparse
autoencoders, to identify minimal circuits and components supporting SQL
generation. Our analysis reveals both the potential and limitations of current
interpretability methods, showing how circuits can vary even across similar
queries. Lastly, we demonstrate how mechanistic interpretability can identify
flawed heuristics in models and improve synthetic dataset design. Our work
provides a comprehensive framework for evaluating and advancing
interpretability techniques while establishing clear boundaries for their
reliable application.
|
2503.12732 | Zibin Liu | Zibin Liu, Banglei Guan, Yang Shang, Yifei Bian, Pengju Sun, Qifeng Yu | Stereo Event-based, 6-DOF Pose Tracking for Uncooperative Spacecraft | Accepted by IEEE Transactions on Geoscience and Remote Sensing | null | 10.1109/TGRS.2025.3530915 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pose tracking of uncooperative spacecraft is an essential technology for
space exploration and on-orbit servicing, which remains an open problem. Event
cameras possess numerous advantages, such as high dynamic range, high temporal
resolution, and low power consumption. These attributes hold the promise of
overcoming challenges encountered by conventional cameras, including motion
blur and extreme illumination, among others. To address the standard on-orbit
observation missions, we propose a line-based pose tracking method for
uncooperative spacecraft utilizing a stereo event camera. To begin with, we
estimate the wireframe model of uncooperative spacecraft, leveraging the
spatio-temporal consistency of stereo event streams for line-based
reconstruction. Then, we develop an effective strategy to establish
correspondences between events and projected lines of uncooperative spacecraft.
Using these correspondences, we formulate the pose tracking as a continuous
optimization process over 6-DOF motion parameters, achieved by minimizing
event-line distances. Moreover, we construct a stereo event-based uncooperative
spacecraft motion dataset, encompassing both simulated and real events. The
proposed method is quantitatively evaluated through experiments conducted on
our self-collected dataset, demonstrating an improvement in terms of
effectiveness and accuracy over competing methods. The code will be
open-sourced at https://github.com/Zibin6/SE6PT.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 01:51:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Zibin",
""
],
[
"Guan",
"Banglei",
""
],
[
"Shang",
"Yang",
""
],
[
"Bian",
"Yifei",
""
],
[
"Sun",
"Pengju",
""
],
[
"Yu",
"Qifeng",
""
]
] | TITLE: Stereo Event-based, 6-DOF Pose Tracking for Uncooperative Spacecraft
ABSTRACT: Pose tracking of uncooperative spacecraft is an essential technology for
space exploration and on-orbit servicing, which remains an open problem. Event
cameras possess numerous advantages, such as high dynamic range, high temporal
resolution, and low power consumption. These attributes hold the promise of
overcoming challenges encountered by conventional cameras, including motion
blur and extreme illumination, among others. To address the standard on-orbit
observation missions, we propose a line-based pose tracking method for
uncooperative spacecraft utilizing a stereo event camera. To begin with, we
estimate the wireframe model of uncooperative spacecraft, leveraging the
spatio-temporal consistency of stereo event streams for line-based
reconstruction. Then, we develop an effective strategy to establish
correspondences between events and projected lines of uncooperative spacecraft.
Using these correspondences, we formulate the pose tracking as a continuous
optimization process over 6-DOF motion parameters, achieved by minimizing
event-line distances. Moreover, we construct a stereo event-based uncooperative
spacecraft motion dataset, encompassing both simulated and real events. The
proposed method is quantitatively evaluated through experiments conducted on
our self-collected dataset, demonstrating an improvement in terms of
effectiveness and accuracy over competing methods. The code will be
open-sourced at https://github.com/Zibin6/SE6PT.
|
2503.12745 | Patrick Rim | Patrick Rim, Hyoungseob Park, S. Gangopadhyay, Ziyao Zeng, Younjoon
Chung, Alex Wong | ProtoDepth: Unsupervised Continual Depth Completion with Prototypes | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present ProtoDepth, a novel prototype-based approach for continual
learning of unsupervised depth completion, the multimodal 3D reconstruction
task of predicting dense depth maps from RGB images and sparse point clouds.
The unsupervised learning paradigm is well-suited for continual learning, as
ground truth is not needed. However, when training on new non-stationary
distributions, depth completion models will catastrophically forget previously
learned information. We address forgetting by learning prototype sets that
adapt the latent features of a frozen pretrained model to new domains. Since
the original weights are not modified, ProtoDepth does not forget when
test-time domain identity is known. To extend ProtoDepth to the challenging
setting where the test-time domain identity is withheld, we propose to learn
domain descriptors that enable the model to select the appropriate prototype
set for inference. We evaluate ProtoDepth on benchmark dataset sequences, where
we reduce forgetting compared to baselines by 52.2% for indoor and 53.2% for
outdoor to achieve the state of the art.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 02:25:49 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Rim",
"Patrick",
""
],
[
"Park",
"Hyoungseob",
""
],
[
"Gangopadhyay",
"S.",
""
],
[
"Zeng",
"Ziyao",
""
],
[
"Chung",
"Younjoon",
""
],
[
"Wong",
"Alex",
""
]
] | TITLE: ProtoDepth: Unsupervised Continual Depth Completion with Prototypes
ABSTRACT: We present ProtoDepth, a novel prototype-based approach for continual
learning of unsupervised depth completion, the multimodal 3D reconstruction
task of predicting dense depth maps from RGB images and sparse point clouds.
The unsupervised learning paradigm is well-suited for continual learning, as
ground truth is not needed. However, when training on new non-stationary
distributions, depth completion models will catastrophically forget previously
learned information. We address forgetting by learning prototype sets that
adapt the latent features of a frozen pretrained model to new domains. Since
the original weights are not modified, ProtoDepth does not forget when
test-time domain identity is known. To extend ProtoDepth to the challenging
setting where the test-time domain identity is withheld, we propose to learn
domain descriptors that enable the model to select the appropriate prototype
set for inference. We evaluate ProtoDepth on benchmark dataset sequences, where
we reduce forgetting compared to baselines by 52.2% for indoor and 53.2% for
outdoor to achieve the state of the art.
|
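The ProtoDepth record above describes prototype sets that adapt a frozen model's latent features, with domain descriptors used to pick the right set when the test-time domain is unknown. The toy sketch below shows one plausible reading of that selection-then-adaptation flow; the additive adaptation, the nearest-descriptor rule, and all names are assumptions, not the paper's method.

```python
import numpy as np

class PrototypeBank:
    """Illustrative store of per-domain prototypes and domain descriptors.

    Each domain keeps a prototype vector (used here as a simple additive shift on
    frozen latent features) and a descriptor (used to identify the domain at test
    time when its identity is withheld). This is a toy sketch, not ProtoDepth itself.
    """

    def __init__(self):
        self.prototypes = {}   # domain name -> (D,) prototype vector
        self.descriptors = {}  # domain name -> (D,) descriptor vector

    def add_domain(self, name, prototype, descriptor):
        self.prototypes[name] = prototype
        self.descriptors[name] = descriptor

    def select_domain(self, features):
        """Pick the domain whose descriptor is closest to the mean test feature."""
        summary = features.mean(axis=0)
        return min(self.descriptors,
                   key=lambda d: np.linalg.norm(self.descriptors[d] - summary))

    def adapt(self, features, domain=None):
        """Shift frozen features with the selected domain's prototype."""
        if domain is None:                 # domain identity withheld at test time
            domain = self.select_domain(features)
        return features + self.prototypes[domain]
```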
2503.12758 | Zhifeng Wang | Zhifeng Wang, Renjiao Yi, Xin Wen, Chenyang Zhu, Kai Xu | VasTSD: Learning 3D Vascular Tree-state Space Diffusion Model for
Angiography Synthesis | null | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Angiography imaging is a medical imaging technique that enhances the
visibility of blood vessels within the body by using contrast agents.
Angiographic images can effectively assist in the diagnosis of vascular
diseases. However, contrast agents may bring extra radiation exposure, which
poses health risks to patients. To mitigate these concerns, in this
paper, we aim to automatically generate angiography from non-angiographic
inputs, by leveraging and enhancing the inherent physical properties of
vascular structures. Previous methods relying on 2D slice-based angiography
synthesis struggle with maintaining continuity in 3D vascular structures and
exhibit limited effectiveness across different imaging modalities. We propose
VasTSD, a 3D vascular tree-state space diffusion model to synthesize
angiography from 3D non-angiographic volumes, with a novel state space
serialization approach that dynamically constructs vascular tree topologies,
integrating these with a diffusion-based generative model to ensure the
generation of anatomically continuous vasculature in 3D volumes. A pre-trained
vision embedder is employed to construct vascular state space representations,
enabling consistent modeling of vascular structures across multiple modalities.
Extensive experiments on various angiographic datasets demonstrate the
superiority of VasTSD over prior works, achieving enhanced continuity of blood
vessels in synthesized angiography across multiple modalities and
anatomical regions.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 02:53:38 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wang",
"Zhifeng",
""
],
[
"Yi",
"Renjiao",
""
],
[
"Wen",
"Xin",
""
],
[
"Zhu",
"Chenyang",
""
],
[
"Xu",
"Kai",
""
]
] | TITLE: VasTSD: Learning 3D Vascular Tree-state Space Diffusion Model for
Angiography Synthesis
ABSTRACT: Angiography imaging is a medical imaging technique that enhances the
visibility of blood vessels within the body by using contrast agents.
Angiographic images can effectively assist in the diagnosis of vascular
diseases. However, contrast agents may bring extra radiation exposure, which
poses health risks to patients. To mitigate these concerns, in this
paper, we aim to automatically generate angiography from non-angiographic
inputs, by leveraging and enhancing the inherent physical properties of
vascular structures. Previous methods relying on 2D slice-based angiography
synthesis struggle with maintaining continuity in 3D vascular structures and
exhibit limited effectiveness across different imaging modalities. We propose
VasTSD, a 3D vascular tree-state space diffusion model to synthesize
angiography from 3D non-angiographic volumes, with a novel state space
serialization approach that dynamically constructs vascular tree topologies,
integrating these with a diffusion-based generative model to ensure the
generation of anatomically continuous vasculature in 3D volumes. A pre-trained
vision embedder is employed to construct vascular state space representations,
enabling consistent modeling of vascular structures across multiple modalities.
Extensive experiments on various angiographic datasets demonstrate the
superiority of VasTSD over prior works, achieving enhanced continuity of blood
vessels in synthesized angiography across multiple modalities and
anatomical regions.
|
2503.12759 | Jerry Huang | Jerry Huang, Siddarth Madala, Risham Sidhu, Cheng Niu, Julia
Hockenmaier, Tong Zhang | RAG-RL: Advancing Retrieval-Augmented Generation via RL and Curriculum
Learning | 11 Pages, 3 Figures, Preprint | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research highlights the challenges retrieval models face in retrieving
useful contexts and the limitations of generation models in effectively
utilizing those contexts in retrieval-augmented generation (RAG) settings. To
address these challenges, we introduce RAG-RL, the first reasoning language
model (RLM) specifically trained for RAG. RAG-RL demonstrates that stronger
answer generation models can identify relevant contexts within larger sets of
retrieved information -- thereby alleviating the burden on retrievers -- while
also being able to utilize those contexts more effectively. Moreover, we show
that curriculum design in the reinforcement learning (RL) post-training process
is a powerful approach to enhancing model performance. We benchmark our method
on two open-domain question-answering datasets and achieve state-of-the-art
results, surpassing previous SOTA generative reader models. In addition, we
offer empirical insights into various curriculum learning strategies,
providing a deeper understanding of their impact on model performance.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 02:53:42 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Jerry",
""
],
[
"Madala",
"Siddarth",
""
],
[
"Sidhu",
"Risham",
""
],
[
"Niu",
"Cheng",
""
],
[
"Hockenmaier",
"Julia",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: RAG-RL: Advancing Retrieval-Augmented Generation via RL and Curriculum
Learning
ABSTRACT: Recent research highlights the challenges retrieval models face in retrieving
useful contexts and the limitations of generation models in effectively
utilizing those contexts in retrieval-augmented generation (RAG) settings. To
address these challenges, we introduce RAG-RL, the first reasoning language
model (RLM) specifically trained for RAG. RAG-RL demonstrates that stronger
answer generation models can identify relevant contexts within larger sets of
retrieved information -- thereby alleviating the burden on retrievers -- while
also being able to utilize those contexts more effectively. Moreover, we show
that curriculum design in the reinforcement learning (RL) post-training process
is a powerful approach to enhancing model performance. We benchmark our method
on two open-domain question-answering datasets and achieve state-of-the-art
results, surpassing previous SOTA generative reader models. In addition, we
offer empirical insights into various curriculum learning strategies,
providing a deeper understanding of their impact on model performance.
|
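The RAG-RL record above credits much of its gain to curriculum design during RL post-training. As a generic illustration only, the snippet below builds a simple easy-to-hard curriculum by sorting examples on a difficulty score; the actual RAG-RL curriculum is defined over the RL training process and is not specified here, so every name and the staging rule are assumptions.

```python
def build_curriculum(examples, difficulty, num_stages=3):
    """Order training examples from easy to hard and split them into stages.

    examples:   list of training items
    difficulty: parallel list of scalar difficulty scores (lower = easier)
    Returns a list of stages; training would proceed stage by stage.
    Generic curriculum-construction sketch only, not the RAG-RL recipe.
    """
    order = sorted(range(len(examples)), key=lambda i: difficulty[i])
    stage_size = max(1, len(order) // num_stages)
    return [[examples[i] for i in order[s:s + stage_size]]
            for s in range(0, len(order), stage_size)]

# Example: questions keyed by the number of supporting contexts they require.
data = ["q1", "q2", "q3", "q4", "q5", "q6"]
hops = [1, 3, 2, 1, 4, 2]
print(build_curriculum(data, hops, num_stages=3))
# [['q1', 'q4'], ['q3', 'q6'], ['q2', 'q5']]
```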
2503.12769 | Shenghao Fu | Shenghao Fu, Qize Yang, Yuan-Ming Li, Yi-Xing Peng, Kun-Yu Lin, Xihan
Wei, Jian-Fang Hu, Xiaohua Xie, Wei-Shi Zheng | ViSpeak: Visual Instruction Feedback in Streaming Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in Large Multi-modal Models (LMMs) are primarily focused on
offline video understanding. Instead, streaming video understanding poses great
challenges to recent models due to its time-sensitive, omni-modal and
interactive characteristics. In this work, we aim to extend streaming video
understanding from a new perspective and propose a novel task named Visual
Instruction Feedback in which models should be aware of visual contents and
learn to extract instructions from them. For example, when users wave their
hands to agents, agents should recognize the gesture and start conversations
with welcome information. Thus, following instructions in visual modality
greatly enhances user-agent interactions. To facilitate research, we define
seven key subtasks highly relevant to visual modality and collect the
ViSpeak-Instruct dataset for training and the ViSpeak-Bench for evaluation.
Further, we propose the ViSpeak model, which is a SOTA streaming video
understanding LMM with GPT-4o-level performance on various streaming video
understanding benchmarks. After finetuning on our ViSpeak-Instruct dataset,
ViSpeak is equipped with basic visual instruction feedback ability, serving as
a solid baseline for future research.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:05:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Fu",
"Shenghao",
""
],
[
"Yang",
"Qize",
""
],
[
"Li",
"Yuan-Ming",
""
],
[
"Peng",
"Yi-Xing",
""
],
[
"Lin",
"Kun-Yu",
""
],
[
"Wei",
"Xihan",
""
],
[
"Hu",
"Jian-Fang",
""
],
[
"Xie",
"Xiaohua",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: ViSpeak: Visual Instruction Feedback in Streaming Videos
ABSTRACT: Recent advances in Large Multi-modal Models (LMMs) are primarily focused on
offline video understanding. Instead, streaming video understanding poses great
challenges to recent models due to its time-sensitive, omni-modal and
interactive characteristics. In this work, we aim to extend streaming video
understanding from a new perspective and propose a novel task named Visual
Instruction Feedback in which models should be aware of visual contents and
learn to extract instructions from them. For example, when users wave their
hands to agents, agents should recognize the gesture and start conversations
with welcome information. Thus, following instructions in visual modality
greatly enhances user-agent interactions. To facilitate research, we define
seven key subtasks highly relevant to visual modality and collect the
ViSpeak-Instruct dataset for training and the ViSpeak-Bench for evaluation.
Further, we propose the ViSpeak model, which is a SOTA streaming video
understanding LMM with GPT-4o-level performance on various streaming video
understanding benchmarks. After finetuning on our ViSpeak-Instruct dataset,
ViSpeak is equipped with basic visual instruction feedback ability, serving as
a solid baseline for future research.
|
2503.12772 | Sung-Yeon Park | Sung-Yeon Park, Can Cui, Yunsheng Ma, Ahmadreza Moradipari, Rohit
Gupta, Kyungtae Han, Ziran Wang | NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving
Scene Understanding in Multi-Modal Large Language Models | null | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in multi-modal large language models (MLLMs) have
demonstrated strong performance across various domains; however, their ability
to comprehend driving scenes remains less proven. The complexity of driving
scenarios, which includes multi-view information, poses significant challenges
for existing MLLMs. In this paper, we introduce NuPlanQA-Eval, a multi-view,
multi-modal evaluation benchmark for driving scene understanding. To further
support generalization to multi-view driving scenarios, we also propose
NuPlanQA-1M, a large-scale dataset comprising 1M real-world visual
question-answering (VQA) pairs. For context-aware analysis of traffic scenes,
we categorize our dataset into nine subtasks across three core skills: Road
Environment Perception, Spatial Relations Recognition, and Ego-Centric
Reasoning. Furthermore, we present BEV-LLM, integrating Bird's-Eye-View (BEV)
features from multi-view images into MLLMs. Our evaluation results reveal key
challenges that existing MLLMs face in driving scene-specific perception and
spatial reasoning from ego-centric perspectives. In contrast, BEV-LLM
demonstrates remarkable adaptability to this domain, outperforming other models
in six of the nine subtasks. These findings highlight how BEV integration
enhances multi-view MLLMs while also identifying key areas that require further
refinement for effective adaptation to driving scenes. To facilitate further
research, we publicly release NuPlanQA at
https://github.com/sungyeonparkk/NuPlanQA.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:12:39 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Park",
"Sung-Yeon",
""
],
[
"Cui",
"Can",
""
],
[
"Ma",
"Yunsheng",
""
],
[
"Moradipari",
"Ahmadreza",
""
],
[
"Gupta",
"Rohit",
""
],
[
"Han",
"Kyungtae",
""
],
[
"Wang",
"Ziran",
""
]
] | TITLE: NuPlanQA: A Large-Scale Dataset and Benchmark for Multi-View Driving
Scene Understanding in Multi-Modal Large Language Models
ABSTRACT: Recent advances in multi-modal large language models (MLLMs) have
demonstrated strong performance across various domains; however, their ability
to comprehend driving scenes remains less proven. The complexity of driving
scenarios, which includes multi-view information, poses significant challenges
for existing MLLMs. In this paper, we introduce NuPlanQA-Eval, a multi-view,
multi-modal evaluation benchmark for driving scene understanding. To further
support generalization to multi-view driving scenarios, we also propose
NuPlanQA-1M, a large-scale dataset comprising 1M real-world visual
question-answering (VQA) pairs. For context-aware analysis of traffic scenes,
we categorize our dataset into nine subtasks across three core skills: Road
Environment Perception, Spatial Relations Recognition, and Ego-Centric
Reasoning. Furthermore, we present BEV-LLM, integrating Bird's-Eye-View (BEV)
features from multi-view images into MLLMs. Our evaluation results reveal key
challenges that existing MLLMs face in driving scene-specific perception and
spatial reasoning from ego-centric perspectives. In contrast, BEV-LLM
demonstrates remarkable adaptability to this domain, outperforming other models
in six of the nine subtasks. These findings highlight how BEV integration
enhances multi-view MLLMs while also identifying key areas that require further
refinement for effective adaptation to driving scenes. To facilitate further
research, we publicly release NuPlanQA at
https://github.com/sungyeonparkk/NuPlanQA.
|
2503.12778 | Sheeraz Gul | Gul Sheeraz, Qun Chen, Liu Feiyu, Zhou Fengjin MD | Adaptive Deep Learning for Multiclass Breast Cancer Classification via
Misprediction Risk Analysis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Breast cancer remains one of the leading causes of cancer-related deaths
worldwide. Early detection is crucial for improving patient outcomes, yet the
diagnostic process is often complex and prone to inconsistencies among
pathologists. Computer-aided diagnostic approaches have significantly enhanced
breast cancer detection, particularly in binary classification (benign vs.
malignant). However, these methods face challenges in multiclass
classification, leading to frequent mispredictions. In this work, we propose a
novel adaptive learning approach for multiclass breast cancer classification
using H&E-stained histopathology images. First, we introduce a misprediction
risk analysis framework that quantifies and ranks the likelihood of an image
being mislabeled by a classifier. This framework leverages an interpretable
risk model that requires only a small number of labeled samples for training.
Next, we present an adaptive learning strategy that fine-tunes classifiers
based on the specific characteristics of a given dataset. This approach
minimizes misprediction risk, allowing the classifier to adapt effectively to
the target workload. We evaluate our proposed solutions on real benchmark
datasets, demonstrating that our risk analysis framework more accurately
identifies mispredictions compared to existing methods. Furthermore, our
adaptive learning approach significantly improves the performance of
state-of-the-art deep neural network classifiers.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:25:28 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Sheeraz",
"Gul",
""
],
[
"Chen",
"Qun",
""
],
[
"Feiyu",
"Liu",
""
],
[
"MD",
"Zhou Fengjin",
""
]
] | TITLE: Adaptive Deep Learning for Multiclass Breast Cancer Classification via
Misprediction Risk Analysis
ABSTRACT: Breast cancer remains one of the leading causes of cancer-related deaths
worldwide. Early detection is crucial for improving patient outcomes, yet the
diagnostic process is often complex and prone to inconsistencies among
pathologists. Computer-aided diagnostic approaches have significantly enhanced
breast cancer detection, particularly in binary classification (benign vs.
malignant). However, these methods face challenges in multiclass
classification, leading to frequent mispredictions. In this work, we propose a
novel adaptive learning approach for multiclass breast cancer classification
using H&E-stained histopathology images. First, we introduce a misprediction
risk analysis framework that quantifies and ranks the likelihood of an image
being mislabeled by a classifier. This framework leverages an interpretable
risk model that requires only a small number of labeled samples for training.
Next, we present an adaptive learning strategy that fine-tunes classifiers
based on the specific characteristics of a given dataset. This approach
minimizes misprediction risk, allowing the classifier to adapt effectively to
the target workload. We evaluate our proposed solutions on real benchmark
datasets, demonstrating that our risk analysis framework more accurately
identifies mispredictions compared to existing methods. Furthermore, our
adaptive learning approach significantly improves the performance of
state-of-the-art deep neural network classifiers.
|
2503.12784 | Jingzhou Huang | Jingzhou Huang, Jiuyao Lu, Alexander Williams Tolbert | Causal Feature Learning in the Social Sciences | null | null | null | null | stat.ME cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variable selection poses a significant challenge in causal modeling,
particularly within the social sciences, where constructs often rely on
inter-related factors such as age, socioeconomic status, gender, and race.
Indeed, it has been argued that such attributes must be modeled as macro-level
abstractions of lower-level manipulable features, in order to preserve the
modularity assumption essential to causal inference. This paper accordingly
extends the theoretical framework of Causal Feature Learning (CFL).
Empirically, we apply the CFL algorithm to diverse social science datasets,
evaluating how CFL-derived macrostates compare with traditional microstates in
downstream modeling tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:43:00 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Jingzhou",
""
],
[
"Lu",
"Jiuyao",
""
],
[
"Tolbert",
"Alexander Williams",
""
]
] | TITLE: Causal Feature Learning in the Social Sciences
ABSTRACT: Variable selection poses a significant challenge in causal modeling,
particularly within the social sciences, where constructs often rely on
inter-related factors such as age, socioeconomic status, gender, and race.
Indeed, it has been argued that such attributes must be modeled as macro-level
abstractions of lower-level manipulable features, in order to preserve the
modularity assumption essential to causal inference. This paper accordingly
extends the theoretical framework of Causal Feature Learning (CFL).
Empirically, we apply the CFL algorithm to diverse social science datasets,
evaluating how CFL-derived macrostates compare with traditional microstates in
downstream modeling tasks.
|
2503.12785 | Zhiyan Liu | Zhiyan Liu, Kaibin Huang | Semantic-Relevance Based Sensor Selection for Edge-AI Empowered Sensing
Systems | Submitted to IEEE for possible publications | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sixth-generation (6G) mobile network is envisioned to incorporate sensing
and edge artificial intelligence (AI) as two key functions. Their natural
convergence leads to the emergence of Integrated Sensing and Edge AI (ISEA), a
novel paradigm enabling real-time acquisition and understanding of sensory
information at the network edge. However, ISEA faces a communication bottleneck
due to the large number of sensors and the high dimensionality of sensory
features. Traditional approaches to communication-efficient ISEA lack awareness
of semantic relevance, i.e., the level of relevance between sensor observations
and the downstream task. To fill this gap, this paper presents a novel
framework for semantic-relevance-aware sensor selection to achieve optimal
end-to-end (E2E) task performance under heterogeneous sensor relevance and
channel states. E2E sensing accuracy analysis is provided to characterize the
sensing task performance in terms of selected sensors' relevance scores and
channel states. Building on the results, the sensor-selection problem for
accuracy maximization is formulated as an integer program and solved through a
tight approximation of the objective. The optimal solution exhibits a
priority-based structure, which ranks sensors based on a priority indicator
combining relevance scores and channel states and selects top-ranked sensors.
Low-complexity algorithms are then developed to determine the optimal numbers
of selected sensors and features. Experimental results on both synthetic and
real datasets show substantial accuracy gain achieved by the proposed selection
scheme compared to existing benchmarks.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:47:19 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Zhiyan",
""
],
[
"Huang",
"Kaibin",
""
]
] | TITLE: Semantic-Relevance Based Sensor Selection for Edge-AI Empowered Sensing
Systems
ABSTRACT: The sixth-generation (6G) mobile network is envisioned to incorporate sensing
and edge artificial intelligence (AI) as two key functions. Their natural
convergence leads to the emergence of Integrated Sensing and Edge AI (ISEA), a
novel paradigm enabling real-time acquisition and understanding of sensory
information at the network edge. However, ISEA faces a communication bottleneck
due to the large number of sensors and the high dimensionality of sensory
features. Traditional approaches to communication-efficient ISEA lack awareness
of semantic relevance, i.e., the level of relevance between sensor observations
and the downstream task. To fill this gap, this paper presents a novel
framework for semantic-relevance-aware sensor selection to achieve optimal
end-to-end (E2E) task performance under heterogeneous sensor relevance and
channel states. E2E sensing accuracy analysis is provided to characterize the
sensing task performance in terms of selected sensors' relevance scores and
channel states. Building on the results, the sensor-selection problem for
accuracy maximization is formulated as an integer program and solved through a
tight approximation of the objective. The optimal solution exhibits a
priority-based structure, which ranks sensors based on a priority indicator
combining relevance scores and channel states and selects top-ranked sensors.
Low-complexity algorithms are then developed to determine the optimal numbers
of selected sensors and features. Experimental results on both synthetic and
real datasets show substantial accuracy gain achieved by the proposed selection
scheme compared to existing benchmarks.
|
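The sensor-selection record above reports that the optimal solution ranks sensors by a priority indicator combining semantic-relevance scores with channel states and keeps the top-ranked ones. The snippet below sketches that top-k selection pattern with a placeholder indicator; the paper derives its own indicator from its end-to-end accuracy analysis, so the formula, the trade-off weight, and the names here are assumptions.

```python
import numpy as np

def select_sensors(relevance: np.ndarray, channel_gain: np.ndarray, k: int,
                   alpha: float = 1.0) -> np.ndarray:
    """Rank sensors by a priority indicator and keep the top-k.

    relevance:    (N,) semantic-relevance scores of the N sensors
    channel_gain: (N,) non-negative channel-state values (e.g., SNR)
    alpha:        trade-off weight between relevance and channel quality
    Returns the indices of the k selected sensors. The indicator below
    (relevance + alpha * log-channel term) is only a placeholder.
    """
    priority = relevance + alpha * np.log1p(channel_gain)
    return np.argsort(priority)[::-1][:k]

# Example: 6 sensors, keep the 3 with the highest priority.
rel = np.array([0.9, 0.2, 0.7, 0.4, 0.95, 0.1])
snr = np.array([2.0, 8.0, 1.0, 5.0, 0.5, 9.0])
print(select_sensors(rel, snr, k=3))
```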
2503.12786 | Peirong Zhang | Peirong Zhang, Yuliang Liu, Songxuan Lai, Hongliang Li, Lianwen Jin | Privacy-Preserving Biometric Verification with Handwritten Random Digit
String | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Handwriting verification has stood as a steadfast identity authentication
method for decades. However, this technique risks potential privacy breaches
due to the inclusion of personal information in handwritten biometrics such as
signatures. To address this concern, we propose using the Random Digit String
(RDS) for privacy-preserving handwriting verification. This approach allows
users to authenticate themselves by writing an arbitrary digit sequence,
effectively ensuring privacy protection. To evaluate the effectiveness of RDS,
we construct a new HRDS4BV dataset composed of online naturally handwritten
RDS. Unlike conventional handwriting, RDS encompasses unconstrained and
variable content, posing significant challenges for modeling consistent
personal writing style. To surmount this, we propose the Pattern Attentive
VErification Network (PAVENet), along with a Discriminative Pattern Mining
(DPM) module. DPM adaptively enhances the recognition of consistent and
discriminative writing patterns, thus refining handwriting style
representation. Through comprehensive evaluations, we scrutinize the
applicability of online RDS verification and showcase a pronounced
outperformance of our model over existing methods. Furthermore, we discover a
noteworthy forgery phenomenon that deviates from prior findings and discuss its
positive impact in countering malicious impostor attacks. Overall, our
work underscores the feasibility of privacy-preserving biometric verification
and propels the prospects of its broader acceptance and application.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:47:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhang",
"Peirong",
""
],
[
"Liu",
"Yuliang",
""
],
[
"Lai",
"Songxuan",
""
],
[
"Li",
"Hongliang",
""
],
[
"Jin",
"Lianwen",
""
]
] | TITLE: Privacy-Preserving Biometric Verification with Handwritten Random Digit
String
ABSTRACT: Handwriting verification has stood as a steadfast identity authentication
method for decades. However, this technique risks potential privacy breaches
due to the inclusion of personal information in handwritten biometrics such as
signatures. To address this concern, we propose using the Random Digit String
(RDS) for privacy-preserving handwriting verification. This approach allows
users to authenticate themselves by writing an arbitrary digit sequence,
effectively ensuring privacy protection. To evaluate the effectiveness of RDS,
we construct a new HRDS4BV dataset composed of online naturally handwritten
RDS. Unlike conventional handwriting, RDS encompasses unconstrained and
variable content, posing significant challenges for modeling consistent
personal writing style. To surmount this, we propose the Pattern Attentive
VErification Network (PAVENet), along with a Discriminative Pattern Mining
(DPM) module. DPM adaptively enhances the recognition of consistent and
discriminative writing patterns, thus refining handwriting style
representation. Through comprehensive evaluations, we scrutinize the
applicability of online RDS verification and showcase a pronounced
outperformance of our model over existing methods. Furthermore, we discover a
noteworthy forgery phenomenon that deviates from prior findings and discuss its
positive impact in countering malicious impostor attacks. Overall, our
work underscores the feasibility of privacy-preserving biometric verification
and propels the prospects of its broader acceptance and application.
|
2503.12790 | Lei Li | Xiaofei Kong, Lei Li, Menghan Dou, Zhaoyun Chen, Yuchun Wu and Guoping
Guo | Quantum-Enhanced LLM Efficient Fine Tuning | null | null | null | null | quant-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | Low-Rank Adaptation (LoRA) enables efficient fine-tuning of pre-trained
language models via low-rank matrix approximation, which is effective in many
scenarios. However, its low-rank representation capacity is constrained in
complex tasks or high-rank dependency settings, potentially limiting model
adaptability. Addressing the expressive bottleneck of classical low-rank
approximation in fine-tuning large language models, this paper proposes a
parameter-efficient fine-tuning method based on a Quantum Weighted Tensor
Hybrid Network (QWTHN), which leverages a Quantum Neural Network (QNN). The study
investigates quantum-classical hybrid parameter-efficient fine-tuning in
low-rank spaces. QWTHN decomposes pre-trained weights into quantum neural
network and tensor network representations, utilizing quantum state
superposition and other methods to break through classical rank limitations.
Experiments show that the proposed quantum fine-tuning technique for large
models approaches or even surpasses the parameter efficiency of LoRA. On the
CPsyCounD and R1-Distill-SFT datasets, QWTHN, compared to classical LoRA,
reduces training loss by up to 15% while using 76% fewer parameters, and
achieves an 8.4% performance improvement on the CPsyCounD test set. This
research not only realizes lightweight and efficient adaptation of quantum
resources to billion-parameter models but also validates the practical path of
quantum hardware driven by large model tasks, laying the first
engineering-ready technical foundation for future quantum-enhanced AGI systems.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 03:59:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kong",
"Xiaofei",
""
],
[
"Li",
"Lei",
""
],
[
"Dou",
"Menghan",
""
],
[
"Chen",
"Zhaoyun",
""
],
[
"Wu",
"Yuchun",
""
],
[
"Guo",
"Guoping",
""
]
] | TITLE: Quantum-Enhanced LLM Efficient Fine Tuning
ABSTRACT: Low-Rank Adaptation (LoRA) enables efficient fine-tuning of pre-trained
language models via low-rank matrix approximation, which is effective in many
scenarios. However, its low-rank representation capacity is constrained in
complex tasks or high-rank dependency settings, potentially limiting model
adaptability. Addressing the expressive bottleneck of classical low-rank
approximation in fine-tuning large language models, this paper proposes a
parameter-efficient fine-tuning method based on a Quantum Weighted Tensor
Hybrid Network (QWTHN), which leverages a Quantum Neural Network (QNN). The study
investigates quantum-classical hybrid parameter-efficient fine-tuning in
low-rank spaces. QWTHN decomposes pre-trained weights into quantum neural
network and tensor network representations, utilizing quantum state
superposition and other methods to break through classical rank limitations.
Experiments show that the proposed quantum fine-tuning technique for large
models approaches or even surpasses the parameter efficiency of LoRA. On the
CPsyCounD and R1-Distill-SFT datasets, QWTHN, compared to classical LoRA,
reduces training loss by up to 15% while using 76% fewer parameters, and
achieves an 8.4% performance improvement on the CPsyCounD test set. This
research not only realizes lightweight and efficient adaptation of quantum
resources to billion-parameter models but also validates the practical path of
quantum hardware driven by large model tasks, laying the first
engineering-ready technical foundation for future quantum-enhanced AGI systems.
|
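The QWTHN record above positions itself against classical LoRA, whose low-rank update it aims to surpass. The quantum-classical hybrid itself cannot be reproduced from the abstract, but the classical LoRA baseline it references is easy to sketch; the code below shows that baseline only, with hypothetical shapes and an initialization that follows common LoRA practice.

```python
import numpy as np

def lora_forward(x: np.ndarray, W0: np.ndarray, A: np.ndarray, B: np.ndarray,
                 scale: float = 1.0) -> np.ndarray:
    """Classical LoRA forward pass: y = x @ (W0 + scale * A @ B).

    W0: (d_in, d_out) frozen pre-trained weight
    A:  (d_in, r) and B: (r, d_out) trainable low-rank factors, r << min(d_in, d_out)
    Only A and B are updated during fine-tuning, so the trainable parameter
    count is r * (d_in + d_out) instead of d_in * d_out.
    """
    return x @ W0 + scale * (x @ A) @ B

d_in, d_out, r = 512, 512, 8
rng = np.random.default_rng(0)
W0 = rng.normal(size=(d_in, d_out))
A = rng.normal(scale=0.01, size=(d_in, r))
B = np.zeros((r, d_out))            # common LoRA init: B starts at zero
x = rng.normal(size=(4, d_in))
print(lora_forward(x, W0, A, B).shape)   # (4, 512)
```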
2503.12796 | Chen Li | Chen Li, Huidong Tang, Ye Zhu, Yoshihiro Yamanishi | A Reinforcement Learning-Driven Transformer GAN for Molecular Generation | null | null | null | null | cs.LG cs.CL physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating molecules with desired chemical properties presents a critical
challenge in fields such as chemical synthesis and drug discovery. Recent
advancements in artificial intelligence (AI) and deep learning have
significantly contributed to data-driven molecular generation. However,
challenges persist due to the inherent sensitivity of simplified molecular
input line entry system (SMILES) representations and the difficulties in
applying generative adversarial networks (GANs) to discrete data. This study
introduces RL-MolGAN, a novel Transformer-based discrete GAN framework designed
to address these challenges. Unlike traditional Transformer architectures,
RL-MolGAN utilizes a first-decoder-then-encoder structure, facilitating the
generation of drug-like molecules from both $de~novo$ and scaffold-based
designs. In addition, RL-MolGAN integrates reinforcement learning (RL) and
Monte Carlo tree search (MCTS) techniques to enhance the stability of GAN
training and optimize the chemical properties of the generated molecules. To
further improve the model's performance, RL-MolWGAN, an extension of RL-MolGAN,
incorporates Wasserstein distance and mini-batch discrimination, which together
enhance the stability of the GAN. Experimental results on two widely used
molecular datasets, QM9 and ZINC, validate the effectiveness of our models in
generating high-quality molecular structures with diverse and desirable
chemical properties.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:06:10 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Chen",
""
],
[
"Tang",
"Huidong",
""
],
[
"Zhu",
"Ye",
""
],
[
"Yamanishi",
"Yoshihiro",
""
]
] | TITLE: A Reinforcement Learning-Driven Transformer GAN for Molecular Generation
ABSTRACT: Generating molecules with desired chemical properties presents a critical
challenge in fields such as chemical synthesis and drug discovery. Recent
advancements in artificial intelligence (AI) and deep learning have
significantly contributed to data-driven molecular generation. However,
challenges persist due to the inherent sensitivity of simplified molecular
input line entry system (SMILES) representations and the difficulties in
applying generative adversarial networks (GANs) to discrete data. This study
introduces RL-MolGAN, a novel Transformer-based discrete GAN framework designed
to address these challenges. Unlike traditional Transformer architectures,
RL-MolGAN utilizes a first-decoder-then-encoder structure, facilitating the
generation of drug-like molecules from both $de~novo$ and scaffold-based
designs. In addition, RL-MolGAN integrates reinforcement learning (RL) and
Monte Carlo tree search (MCTS) techniques to enhance the stability of GAN
training and optimize the chemical properties of the generated molecules. To
further improve the model's performance, RL-MolWGAN, an extension of RL-MolGAN,
incorporates Wasserstein distance and mini-batch discrimination, which together
enhance the stability of the GAN. Experimental results on two widely used
molecular datasets, QM9 and ZINC, validate the effectiveness of our models in
generating high-quality molecular structures with diverse and desirable
chemical properties.
|
2503.12800 | Jialu Zhou | Jialu Zhou, Dianxi Shi, Shaowu Yang, Chunping Qiu, Luoxi Jing, Mengzhu
Wang | Pairwise Similarity Regularization for Semi-supervised Graph Medical
Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By fully leveraging the value of unlabeled data, semi-supervised medical
image segmentation algorithms significantly reduce the limitation of limited
labeled data, achieving a significant improvement in accuracy. However, the
distributional shift between labeled and unlabeled data weakens the utilization
of information from the labeled data. To alleviate the problem, we propose a
graph network feature alignment method based on pairwise similarity
regularization (PaSR) for semi-supervised medical image segmentation. PaSR
aligns the graph structure of images in different domains by maintaining
consistency in the pairwise structural similarity of feature graphs between the
target domain and the source domain, reducing distribution shift issues in
medical images. Meanwhile, the accuracy of pseudo-labels in the teacher network
is further improved by aligning graph clustering information, enhancing the
semi-supervised efficiency of the model. The method was verified on
three medical image segmentation benchmark datasets, with results showing
improvements over advanced methods in various metrics. On the ACDC dataset, it
achieved an average improvement of more than 10.66%.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:14:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhou",
"Jialu",
""
],
[
"Shi",
"Dianxi",
""
],
[
"Yang",
"Shaowu",
""
],
[
"Qiu",
"Chunping",
""
],
[
"Jing",
"Luoxi",
""
],
[
"Wang",
"Mengzhu",
""
]
] | TITLE: Pairwise Similarity Regularization for Semi-supervised Graph Medical
Image Segmentation
ABSTRACT: By fully leveraging the value of unlabeled data, semi-supervised medical
image segmentation algorithms significantly reduce the limitation of limited
labeled data, achieving a significant improvement in accuracy. However, the
distributional shift between labeled and unlabeled data weakens the utilization
of information from the labeled data. To alleviate the problem, we propose a
graph network feature alignment method based on pairwise similarity
regularization (PaSR) for semi-supervised medical image segmentation. PaSR
aligns the graph structure of images in different domains by maintaining
consistency in the pairwise structural similarity of feature graphs between the
target domain and the source domain, reducing distribution shift issues in
medical images. Meanwhile, the accuracy of pseudo-labels in the teacher network
is further improved by aligning graph clustering information, enhancing the
semi-supervised efficiency of the model. The method was verified on
three medical image segmentation benchmark datasets, with results showing
improvements over advanced methods in various metrics. On the ACDC dataset, it
achieved an average improvement of more than 10.66%.
|
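The PaSR record above aligns source- and target-domain feature graphs by keeping their pairwise structural similarities consistent. The snippet below is a toy stand-in for such a term: it compares the pairwise cosine-similarity matrices of two equally sized feature batches with a mean-squared penalty. The actual PaSR loss, its graph construction, and the names used here are assumptions.

```python
import numpy as np

def pairwise_cosine(features: np.ndarray) -> np.ndarray:
    """(N, D) node features -> (N, N) pairwise cosine-similarity matrix."""
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return normed @ normed.T

def pairwise_similarity_regularizer(src_feats: np.ndarray,
                                    tgt_feats: np.ndarray) -> float:
    """Penalize differences between the pairwise-similarity structures of two
    equally sized feature batches (a toy stand-in for the PaSR alignment term)."""
    S_src = pairwise_cosine(src_feats)
    S_tgt = pairwise_cosine(tgt_feats)
    return float(np.mean((S_src - S_tgt) ** 2))
```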
2503.12803 | Chen Li | Chen Li, Debo Cheng, Yasuhiko Morimoto | Leveraging Deep Neural Networks for Aspect-Based Sentiment
Classification | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aspect-based sentiment analysis seeks to determine sentiment with a high
level of detail. While graph convolutional networks (GCNs) are commonly used
for extracting sentiment features, their straightforward use in syntactic
feature extraction can lead to a loss of crucial information. This paper
presents a novel edge-enhanced GCN, called EEGCN, which improves performance by
preserving feature integrity as it processes syntactic graphs. We incorporate a
bidirectional long short-term memory (Bi-LSTM) network alongside a
self-attention-based transformer for effective text encoding, ensuring the
retention of long-range dependencies. A bidirectional GCN (Bi-GCN) with message
passing then captures the relationships between entities, while an
aspect-specific masking technique removes extraneous information. Extensive
evaluations and ablation studies on four benchmark datasets show that EEGCN
significantly enhances aspect-based sentiment analysis, overcoming issues with
syntactic feature extraction and advancing the field's methodologies.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:19:20 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Li",
"Chen",
""
],
[
"Cheng",
"Debo",
""
],
[
"Morimoto",
"Yasuhiko",
""
]
] | TITLE: Leveraging Deep Neural Networks for Aspect-Based Sentiment
Classification
ABSTRACT: Aspect-based sentiment analysis seeks to determine sentiment with a high
level of detail. While graph convolutional networks (GCNs) are commonly used
for extracting sentiment features, their straightforward use in syntactic
feature extraction can lead to a loss of crucial information. This paper
presents a novel edge-enhanced GCN, called EEGCN, which improves performance by
preserving feature integrity as it processes syntactic graphs. We incorporate a
bidirectional long short-term memory (Bi-LSTM) network alongside a
self-attention-based transformer for effective text encoding, ensuring the
retention of long-range dependencies. A bidirectional GCN (Bi-GCN) with message
passing then captures the relationships between entities, while an
aspect-specific masking technique removes extraneous information. Extensive
evaluations and ablation studies on four benchmark datasets show that EEGCN
significantly enhances aspect-based sentiment analysis, overcoming issues with
syntactic feature extraction and advancing the field's methodologies.
|
2503.12806 | Hadam Baek | Hadam Baek, Hannie Shin, Jiyoung Seo, Chanwoo Kim, Saerom Kim,
Hyeongbok Kim, Sangpil Kim | AV-Surf: Surface-Enhanced Geometry-Aware Novel-View Acoustic Synthesis | null | null | null | null | cs.MM cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurately modeling sound propagation in complex real-world environments is
essential for Novel View Acoustic Synthesis (NVAS). While previous studies have
leveraged visual perception to estimate spatial acoustics, the combined use of
surface normals and structural details from 3D representations in acoustic
modeling has been underexplored. Given their direct impact on sound wave
reflections and propagation, surface normals should be jointly modeled with
structural details to achieve accurate spatial acoustics. In this paper, we
propose a surface-enhanced geometry-aware approach for NVAS to improve spatial
acoustic modeling. To achieve this, we exploit geometric priors, such as image,
depth map, surface normals, and point clouds obtained using a 3D Gaussian
Splatting (3DGS) based framework. We introduce a dual cross-attention-based
transformer integrating geometrical constraints into frequency query to
understand the surroundings of the emitter. Additionally, we design a
ConvNeXt-based spectral features processing network called Spectral Refinement
Network (SRN) to synthesize realistic binaural audio. Experimental results on
the RWAVS and SoundSpace datasets highlight the necessity of our approach, as
it surpasses existing methods in novel view acoustic synthesis.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 04:22:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Baek",
"Hadam",
""
],
[
"Shin",
"Hannie",
""
],
[
"Seo",
"Jiyoung",
""
],
[
"Kim",
"Chanwoo",
""
],
[
"Kim",
"Saerom",
""
],
[
"Kim",
"Hyeongbok",
""
],
[
"Kim",
"Sangpil",
""
]
] | TITLE: AV-Surf: Surface-Enhanced Geometry-Aware Novel-View Acoustic Synthesis
ABSTRACT: Accurately modeling sound propagation in complex real-world environments is
essential for Novel View Acoustic Synthesis (NVAS). While previous studies have
leveraged visual perception to estimate spatial acoustics, the combined use of
surface normals and structural details from 3D representations in acoustic
modeling has been underexplored. Given their direct impact on sound wave
reflections and propagation, surface normals should be jointly modeled with
structural details to achieve accurate spatial acoustics. In this paper, we
propose a surface-enhanced geometry-aware approach for NVAS to improve spatial
acoustic modeling. To achieve this, we exploit geometric priors, such as image,
depth map, surface normals, and point clouds obtained using a 3D Gaussian
Splatting (3DGS) based framework. We introduce a dual cross-attention-based
transformer integrating geometrical constraints into frequency query to
understand the surroundings of the emitter. Additionally, we design a
ConvNeXt-based spectral features processing network called Spectral Refinement
Network (SRN) to synthesize realistic binaural audio. Experimental results on
the RWAVS and SoundSpace datasets highlight the necessity of our approach, as
it surpasses existing methods in novel view acoustic synthesis.
|
2503.12822 | Mehdi Makni | Mehdi Makni, Kayhan Behdin, Gabriel Afriat, Zheng Xu, Sergei
Vassilvitskii, Natalia Ponomareva, Hussein Hazimeh, Rahul Mazumder | An Optimization Framework for Differentially Private Sparse Fine-Tuning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentially private stochastic gradient descent (DP-SGD) is broadly
considered to be the gold standard for training and fine-tuning neural networks
under differential privacy (DP). With the increasing availability of
high-quality pre-trained model checkpoints (e.g., vision and language models),
fine-tuning has become a popular strategy. However, despite recent progress in
understanding and applying DP-SGD for private transfer learning tasks,
significant challenges remain -- most notably, the performance gap between
models fine-tuned with DP-SGD and their non-private counterparts. Sparse
fine-tuning on private data has emerged as an alternative to full-model
fine-tuning; recent work has shown that privately fine-tuning only a small
subset of model weights and keeping the rest of the weights fixed can lead to
better performance. In this work, we propose a new approach for sparse
fine-tuning of neural networks under DP. Existing work on private sparse
fine-tuning often used a fixed choice of trainable weights (e.g., updating only
the last layer), or relied on a public model's weights to choose the subset of
weights to modify. Such a choice of weights remains suboptimal. In contrast, we
explore an optimization-based approach, where our selection method makes use of
the private gradient information, while using off-the-shelf privacy accounting
techniques. Our numerical experiments on several computer vision models and
datasets show that our selection method leads to better prediction accuracy,
compared to full-model private fine-tuning or existing private sparse
fine-tuning approaches.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:05:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Makni",
"Mehdi",
""
],
[
"Behdin",
"Kayhan",
""
],
[
"Afriat",
"Gabriel",
""
],
[
"Xu",
"Zheng",
""
],
[
"Vassilvitskii",
"Sergei",
""
],
[
"Ponomareva",
"Natalia",
""
],
[
"Hazimeh",
"Hussein",
""
],
[
"Mazumder",
"Rahul",
""
]
] | TITLE: An Optimization Framework for Differentially Private Sparse Fine-Tuning
ABSTRACT: Differentially private stochastic gradient descent (DP-SGD) is broadly
considered to be the gold standard for training and fine-tuning neural networks
under differential privacy (DP). With the increasing availability of
high-quality pre-trained model checkpoints (e.g., vision and language models),
fine-tuning has become a popular strategy. However, despite recent progress in
understanding and applying DP-SGD for private transfer learning tasks,
significant challenges remain -- most notably, the performance gap between
models fine-tuned with DP-SGD and their non-private counterparts. Sparse
fine-tuning on private data has emerged as an alternative to full-model
fine-tuning; recent work has shown that privately fine-tuning only a small
subset of model weights and keeping the rest of the weights fixed can lead to
better performance. In this work, we propose a new approach for sparse
fine-tuning of neural networks under DP. Existing work on private sparse
fine-tuning often used a fixed choice of trainable weights (e.g., updating only
the last layer), or relied on a public model's weights to choose the subset of
weights to modify. Such a choice of weights remains suboptimal. In contrast, we
explore an optimization-based approach, where our selection method makes use of
the private gradient information, while using off-the-shelf privacy accounting
techniques. Our numerical experiments on several computer vision models and
datasets show that our selection method leads to better prediction accuracy,
compared to full-model private fine-tuning or existing private sparse
fine-tuning approaches.
|
2503.12833 | Yilong Wu | Yilong Wu, Yifan Duan, Yuxi Chen, Xinran Zhang, Yedong Shen, Jianmin
Ji, Yanyong Zhang, Lu Zhang | MT-PCR: Leveraging Modality Transformation for Large-Scale Point Cloud
Registration with Limited Overlap | 8 pages, 5 figures, ICRA2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale scene point cloud registration with limited overlap is a
challenging task due to computational load and constrained data acquisition. To
tackle these issues, we propose a point cloud registration method, MT-PCR,
based on Modality Transformation. MT-PCR leverages a BEV capturing the maximal
overlap information to improve accuracy and utilizes images to provide
complementary spatial features. Specifically, MT-PCR converts 3D point clouds
to BEV images and estimates correspondences by 2D image keypoint extraction
and matching. The 2D correspondence estimates are then
transformed back to 3D point clouds using inverse mapping. We have applied
MT-PCR to Terrestrial Laser Scanning and Aerial Laser Scanning point cloud
registration on the GrAco dataset, involving 8 low-overlap, square-kilometer
scale registration scenarios. Experiments and comparisons with commonly used
methods demonstrate that MT-PCR can achieve superior accuracy and robustness in
large-scale scenes with limited overlap.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:25:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wu",
"Yilong",
""
],
[
"Duan",
"Yifan",
""
],
[
"Chen",
"Yuxi",
""
],
[
"Zhang",
"Xinran",
""
],
[
"Shen",
"Yedong",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Zhang",
"Yanyong",
""
],
[
"Zhang",
"Lu",
""
]
] | TITLE: MT-PCR: Leveraging Modality Transformation for Large-Scale Point Cloud
Registration with Limited Overlap
ABSTRACT: Large-scale scene point cloud registration with limited overlap is a
challenging task due to computational load and constrained data acquisition. To
tackle these issues, we propose a point cloud registration method, MT-PCR,
based on Modality Transformation. MT-PCR leverages a bird's-eye-view (BEV)
representation that captures the maximal overlap information to improve accuracy,
and utilizes images to provide complementary spatial features. Specifically,
MT-PCR converts 3D point clouds to BEV images and estimates correspondences via
2D image keypoint extraction and matching. The 2D correspondence estimates are
then transformed back to 3D point clouds using inverse mapping. We have applied
MT-PCR to Terrestrial Laser Scanning and Aerial Laser Scanning point cloud
registration on the GrAco dataset, involving 8 low-overlap, square-kilometer
scale registration scenarios. Experiments and comparisons with commonly used
methods demonstrate that MT-PCR can achieve superior accuracy and robustness in
large-scale scenes with limited overlap.
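For illustration, a small NumPy sketch of projecting a point cloud onto a BEV max-height image, the kind of modality transformation the abstract describes; the grid ranges and resolution are assumed values, not the paper's settings:

```python
# Illustrative sketch (assumed parameters, not the authors' code): project a
# point cloud to a BEV height image on which 2D keypoints can be matched.
import numpy as np

def points_to_bev(points, x_range=(-100, 100), y_range=(-100, 100), res=0.5):
    """points: (N, 3) array of x, y, z; returns a 2D max-height image."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.full((h, w), -np.inf, dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    np.maximum.at(bev, (ix[keep], iy[keep]), points[keep, 2])  # max z per cell
    bev[np.isinf(bev)] = 0.0  # empty cells
    return bev
```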
|
2503.12838 | Guanbin Li | Junjia Huang, Pengxiang Yan, Jinhang Cai, Jiyang Liu, Zhao Wang,
Yitong Wang, Xinglong Wu, Guanbin Li | DreamLayer: Simultaneous Multi-Layer Generation via Diffusion Mode | Under submission | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-driven image generation using diffusion models has recently gained
significant attention. To enable more flexible image manipulation and editing,
recent research has expanded from single image generation to transparent layer
generation and multi-layer compositions. However, existing approaches often
fail to provide a thorough exploration of multi-layer structures, leading to
inconsistent inter-layer interactions, such as occlusion relationships, spatial
layout, and shadowing. In this paper, we introduce DreamLayer, a novel
framework that enables coherent text-driven generation of multiple image
layers, by explicitly modeling the relationship between transparent foreground
and background layers. DreamLayer incorporates three key components, i.e.,
Context-Aware Cross-Attention (CACA) for global-local information exchange,
Layer-Shared Self-Attention (LSSA) for establishing robust inter-layer
connections, and Information Retained Harmonization (IRH) for refining fusion
details at the latent level. By leveraging a coherent full-image context,
DreamLayer builds inter-layer connections through attention mechanisms and
applies a harmonization step to achieve seamless layer fusion. To facilitate
research in multi-layer generation, we construct a high-quality, diverse
multi-layer dataset including 400k samples. Extensive experiments and user
studies demonstrate that DreamLayer generates more coherent and well-aligned
layers, with broad applicability, including latent-space image editing and
image-to-layer decomposition.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:34:11 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Junjia",
""
],
[
"Yan",
"Pengxiang",
""
],
[
"Cai",
"Jinhang",
""
],
[
"Liu",
"Jiyang",
""
],
[
"Wang",
"Zhao",
""
],
[
"Wang",
"Yitong",
""
],
[
"Wu",
"Xinglong",
""
],
[
"Li",
"Guanbin",
""
]
] | TITLE: DreamLayer: Simultaneous Multi-Layer Generation via Diffusion Mode
ABSTRACT: Text-driven image generation using diffusion models has recently gained
significant attention. To enable more flexible image manipulation and editing,
recent research has expanded from single image generation to transparent layer
generation and multi-layer compositions. However, existing approaches often
fail to provide a thorough exploration of multi-layer structures, leading to
inconsistent inter-layer interactions, such as occlusion relationships, spatial
layout, and shadowing. In this paper, we introduce DreamLayer, a novel
framework that enables coherent text-driven generation of multiple image
layers, by explicitly modeling the relationship between transparent foreground
and background layers. DreamLayer incorporates three key components, i.e.,
Context-Aware Cross-Attention (CACA) for global-local information exchange,
Layer-Shared Self-Attention (LSSA) for establishing robust inter-layer
connections, and Information Retained Harmonization (IRH) for refining fusion
details at the latent level. By leveraging a coherent full-image context,
DreamLayer builds inter-layer connections through attention mechanisms and
applies a harmonization step to achieve seamless layer fusion. To facilitate
research in multi-layer generation, we construct a high-quality, diverse
multi-layer dataset including 400k samples. Extensive experiments and user
studies demonstrate that DreamLayer generates more coherent and well-aligned
layers, with broad applicability, including latent-space image editing and
image-to-layer decomposition.
|
2503.12840 | Chen Liu | Chen Liu, Liying Yang, Peike Li, Dadong Wang, Lincheng Li, Xin Yu | Dynamic Derivation and Elimination: Audio Visual Segmentation with
Enhanced Audio Semantics | Accepted by CVPR2025 | null | null | null | cs.SD cs.CV eess.AS | http://creativecommons.org/licenses/by/4.0/ | Sound-guided object segmentation has drawn considerable attention for its
potential to enhance multimodal perception. Previous methods primarily focus on
developing advanced architectures to facilitate effective audio-visual
interactions, without fully addressing the inherent challenges posed by audio
natures, \emph{\ie}, (1) feature confusion due to the overlapping nature of
audio signals, and (2) audio-visual matching difficulty from the varied sounds
produced by the same object. To address these challenges, we propose Dynamic
Derivation and Elimination (DDESeg): a novel audio-visual segmentation
framework. Specifically, to mitigate feature confusion, DDESeg reconstructs the
semantic content of the mixed audio signal by enriching the distinct semantic
information of each individual source, deriving representations that preserve
the unique characteristics of each sound. To reduce the matching difficulty, we
introduce a discriminative feature learning module, which enhances the semantic
distinctiveness of generated audio representations. Considering that not all
derived audio representations directly correspond to visual features (e.g.,
off-screen sounds), we propose a dynamic elimination module to filter out
non-matching elements. This module facilitates targeted interaction between
sounding regions and relevant audio semantics. By scoring the interacted
features, we identify and filter out irrelevant audio information, ensuring
accurate audio-visual alignment. Comprehensive experiments demonstrate that our
framework achieves superior performance on AVS datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:38:05 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Chen",
""
],
[
"Yang",
"Liying",
""
],
[
"Li",
"Peike",
""
],
[
"Wang",
"Dadong",
""
],
[
"Li",
"Lincheng",
""
],
[
"Yu",
"Xin",
""
]
] | TITLE: Dynamic Derivation and Elimination: Audio Visual Segmentation with
Enhanced Audio Semantics
ABSTRACT: Sound-guided object segmentation has drawn considerable attention for its
potential to enhance multimodal perception. Previous methods primarily focus on
developing advanced architectures to facilitate effective audio-visual
interactions, without fully addressing the inherent challenges posed by the
characteristics of audio, i.e., (1) feature confusion due to the overlapping nature of
audio signals, and (2) audio-visual matching difficulty from the varied sounds
produced by the same object. To address these challenges, we propose Dynamic
Derivation and Elimination (DDESeg): a novel audio-visual segmentation
framework. Specifically, to mitigate feature confusion, DDESeg reconstructs the
semantic content of the mixed audio signal by enriching the distinct semantic
information of each individual source, deriving representations that preserve
the unique characteristics of each sound. To reduce the matching difficulty, we
introduce a discriminative feature learning module, which enhances the semantic
distinctiveness of generated audio representations. Considering that not all
derived audio representations directly correspond to visual features (e.g.,
off-screen sounds), we propose a dynamic elimination module to filter out
non-matching elements. This module facilitates targeted interaction between
sounding regions and relevant audio semantics. By scoring the interacted
features, we identify and filter out irrelevant audio information, ensuring
accurate audio-visual alignment. Comprehensive experiments demonstrate that our
framework achieves superior performance on AVS datasets.
|
2503.12844 | Junhyeok Kim | Junhyeok Kim, Jaewoo Park, Junhee Park, Sangeyl Lee, Jiwan Chung,
Jisung Kim, Ji Hoon Joung, Youngjae Yu | GuideDog: A Real-World Egocentric Multimodal Dataset for Blind and
Low-Vision Accessibility-Aware Guidance | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Mobility remains a significant challenge for the 2.2 billion people worldwide
affected by blindness and low vision (BLV), with 7% of visually impaired
individuals experiencing falls at least once a month. While recent advances in
Multimodal Large Language Models (MLLMs) offer promising opportunities for BLV
assistance, their development has been hindered by limited datasets. This
limitation stems from the fact that BLV-aware annotation requires specialized
domain knowledge and intensive labor. To address this gap, we introduce
GuideDog, a novel accessibility-aware guide dataset containing 22K
image-description pairs (including 2K human-annotated pairs) that capture
diverse real-world scenes from a pedestrian's viewpoint. Our approach shifts
the annotation burden from generation to verification through a collaborative
human-AI framework grounded in established accessibility standards,
significantly improving efficiency while maintaining high-quality annotations.
We also develop GuideDogQA, a subset of 818 samples featuring multiple-choice
questions designed to evaluate fine-grained visual perception capabilities,
specifically object recognition and relative depth perception. Our experimental
results highlight the importance of accurate spatial understanding for
effective BLV guidance. GuideDog and GuideDogQA will advance research in
MLLM-based assistive technologies for BLV individuals while contributing to
broader applications in understanding egocentric scenes for robotics and
augmented reality. The code and dataset will be publicly available.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 05:43:40 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Kim",
"Junhyeok",
""
],
[
"Park",
"Jaewoo",
""
],
[
"Park",
"Junhee",
""
],
[
"Lee",
"Sangeyl",
""
],
[
"Chung",
"Jiwan",
""
],
[
"Kim",
"Jisung",
""
],
[
"Joung",
"Ji Hoon",
""
],
[
"Yu",
"Youngjae",
""
]
] | TITLE: GuideDog: A Real-World Egocentric Multimodal Dataset for Blind and
Low-Vision Accessibility-Aware Guidance
ABSTRACT: Mobility remains a significant challenge for the 2.2 billion people worldwide
affected by blindness and low vision (BLV), with 7% of visually impaired
individuals experiencing falls at least once a month. While recent advances in
Multimodal Large Language Models (MLLMs) offer promising opportunities for BLV
assistance, their development has been hindered by limited datasets. This
limitation stems from the fact that BLV-aware annotation requires specialized
domain knowledge and intensive labor. To address this gap, we introduce
GuideDog, a novel accessibility-aware guide dataset containing 22K
image-description pairs (including 2K human-annotated pairs) that capture
diverse real-world scenes from a pedestrian's viewpoint. Our approach shifts
the annotation burden from generation to verification through a collaborative
human-AI framework grounded in established accessibility standards,
significantly improving efficiency while maintaining high-quality annotations.
We also develop GuideDogQA, a subset of 818 samples featuring multiple-choice
questions designed to evaluate fine-grained visual perception capabilities,
specifically object recognition and relative depth perception. Our experimental
results highlight the importance of accurate spatial understanding for
effective BLV guidance. GuideDog and GuideDogQA will advance research in
MLLM-based assistive technologies for BLV individuals while contributing to
broader applications in understanding egocentric scenes for robotics and
augmented reality. The code and dataset will be publicly available.
|
2503.12852 | Aditi Tiwari | Aditi Tiwari and Klara Nahrstedt | ACT360: An Efficient 360-Degree Action Detection and Summarization
Framework for Mission-Critical Training and Debriefing | 9 pages, 8 figures | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | Effective training and debriefing are critical in high-stakes,
mission-critical environments such as disaster response, military simulations,
and industrial safety, where precision and minimizing errors are paramount. The
traditional post-training analysis relies on manually reviewing 2D videos, a
time-consuming process that lacks comprehensive situational awareness. To
address these limitations, we introduce ACT360, a system that leverages
360-degree videos and machine learning for automated action detection and
structured debriefing. ACT360 integrates 360YOWO, an enhanced You Only Watch
Once (YOWO) model with spatial attention and equirectangular-aware convolution
(EAC) to mitigate panoramic video distortions. To enable deployment in
resource-constrained environments, we apply quantization and model pruning,
reducing the model size by 74% while maintaining robust accuracy (mAP drop of
only 1.5%, from 0.865 to 0.850) and improving inference speed. We validate our
approach on a publicly available dataset of 55 labeled 360-degree videos
covering seven key operational actions, recorded across various real-world
training sessions and environmental conditions. Additionally, ACT360 integrates
360AIE (Action Insight Explorer), a web-based interface for automatic action
detection, retrieval, and textual summarization using large language models
(LLMs), significantly enhancing post-incident analysis efficiency. ACT360
serves as a generalized framework for mission-critical debriefing,
incorporating EAC, spatial attention, summarization, and model optimization.
These innovations apply to any training environment requiring lightweight
action detection and structured post-exercise analysis.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 06:12:36 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tiwari",
"Aditi",
""
],
[
"Nahrstedt",
"Klara",
""
]
] | TITLE: ACT360: An Efficient 360-Degree Action Detection and Summarization
Framework for Mission-Critical Training and Debriefing
ABSTRACT: Effective training and debriefing are critical in high-stakes,
mission-critical environments such as disaster response, military simulations,
and industrial safety, where precision and minimizing errors are paramount. The
traditional post-training analysis relies on manually reviewing 2D videos, a
time-consuming process that lacks comprehensive situational awareness. To
address these limitations, we introduce ACT360, a system that leverages
360-degree videos and machine learning for automated action detection and
structured debriefing. ACT360 integrates 360YOWO, an enhanced You Only Watch
Once (YOWO) model with spatial attention and equirectangular-aware convolution
(EAC) to mitigate panoramic video distortions. To enable deployment in
resource-constrained environments, we apply quantization and model pruning,
reducing the model size by 74% while maintaining robust accuracy (mAP drop of
only 1.5%, from 0.865 to 0.850) and improving inference speed. We validate our
approach on a publicly available dataset of 55 labeled 360-degree videos
covering seven key operational actions, recorded across various real-world
training sessions and environmental conditions. Additionally, ACT360 integrates
360AIE (Action Insight Explorer), a web-based interface for automatic action
detection, retrieval, and textual summarization using large language models
(LLMs), significantly enhancing post-incident analysis efficiency. ACT360
serves as a generalized framework for mission-critical debriefing,
incorporating EAC, spatial attention, summarization, and model optimization.
These innovations apply to any training environment requiring lightweight
action detection and structured post-exercise analysis.
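A hedged sketch of the compression steps mentioned (pruning and quantization) using standard PyTorch utilities; the toy model, sparsity amount, and quantization choice are placeholders, not the 360YOWO recipe:

```python
# Hedged sketch of pruning + post-training quantization; the exact ratios and
# layers compressed in ACT360 may differ.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 8))  # stand-in for the detector

# 1) Unstructured L1 pruning of conv/linear weights.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# 2) Post-training dynamic quantization of linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)
```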
|
2503.12855 | Yujie Lu | Yujie Lu, Yale Song, William Wang, Lorenzo Torresani, Tushar Nagarajan | VITED: Video Temporal Evidence Distillation | null | CVPR 2025 | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | We investigate complex video question answering via chain-of-evidence
reasoning -- identifying sequences of temporal spans from multiple relevant
parts of the video, together with visual evidence within them. Existing models
struggle with multi-step reasoning as they uniformly sample a fixed number of
frames, which can miss critical evidence distributed nonuniformly throughout
the video. Moreover, they lack the ability to temporally localize such evidence
in the broader context of the full video, which is required for answering
complex questions. We propose a framework to enhance existing VideoQA datasets
with evidence reasoning chains, automatically constructed by searching for
optimal intervals of interest in the video with supporting evidence that
maximize the likelihood of answering a given question. We train our model
(VITED) to generate these evidence chains directly, enabling it to both
localize evidence windows as well as perform multi-step reasoning across them
in long-form video content. We show the value of our evidence-distilled models
on a suite of long video QA benchmarks where we outperform state-of-the-art
approaches that lack evidence reasoning capabilities.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 06:30:02 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lu",
"Yujie",
""
],
[
"Song",
"Yale",
""
],
[
"Wang",
"William",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Nagarajan",
"Tushar",
""
]
] | TITLE: VITED: Video Temporal Evidence Distillation
ABSTRACT: We investigate complex video question answering via chain-of-evidence
reasoning -- identifying sequences of temporal spans from multiple relevant
parts of the video, together with visual evidence within them. Existing models
struggle with multi-step reasoning as they uniformly sample a fixed number of
frames, which can miss critical evidence distributed nonuniformly throughout
the video. Moreover, they lack the ability to temporally localize such evidence
in the broader context of the full video, which is required for answering
complex questions. We propose a framework to enhance existing VideoQA datasets
with evidence reasoning chains, automatically constructed by searching for
optimal intervals of interest in the video with supporting evidence that
maximize the likelihood of answering a given question. We train our model
(VITED) to generate these evidence chains directly, enabling it to both
localize evidence windows as well as perform multi-step reasoning across them
in long-form video content. We show the value of our evidence-distilled models
on a suite of long video QA benchmarks where we outperform state-of-the-art
approaches that lack evidence reasoning capabilities.
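As a toy illustration of searching for an evidence interval, the sketch below scans fixed-size windows with an assumed answer-likelihood scorer; the actual pipeline searches over intervals and chains multiple spans as described above:

```python
# Toy sketch (assumed scoring function, not the authors' pipeline): pick the
# video window whose frames best support answering the question.
def best_interval(num_frames, window, answer_loglik):
    """answer_loglik(start, end) -> log-likelihood of the correct answer given
    only frames [start, end); returns the highest-scoring window of that size."""
    window = min(window, num_frames)
    best, best_score = (0, window), float("-inf")
    for start in range(num_frames - window + 1):
        score = answer_loglik(start, start + window)
        if score > best_score:
            best, best_score = (start, start + window), score
    return best
```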
|
2503.12858 | Duke Nguyen | Duke Nguyen, Aditya Joshi, Flora Salim | Harnessing Test-time Adaptation for NLU tasks Involving Dialects of
English | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Test-time adaptation (TTA) is an excellent method that helps generalize
models across domains, tasks, and distributions without the use of labeled
datasets. Thus, TTA is very useful in natural language processing (NLP) in the
dialectal setting, since models are often trained on Standard American English
(SAE) but evaluated on Indian English or Nigerian English, whose distributions
differ significantly from the former. This is especially useful
since dialectal datasets are scarce. In this paper, we explore one of the most
famous TTA techniques, SHOT, in dialectal NLP. We finetune and evaluate SHOT on
different combinations of dialectal GLUE. Our findings show that SHOT is a
viable technique when labeled datasets are unavailable. We also theoretically
propose the concept of dialectal gap and show that it has a positive
correlation with the effectiveness of SHOT. We also find that in many cases,
finetuning on SAE yields higher performance than finetuning on dialectal data.
Our code is available at https://github.com/dukenguyenxyz/dialect-adaptation
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 06:40:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Nguyen",
"Duke",
""
],
[
"Joshi",
"Aditya",
""
],
[
"Salim",
"Flora",
""
]
] | TITLE: Harnessing Test-time Adaptation for NLU tasks Involving Dialects of
English
ABSTRACT: Test-time adaptation (TTA) is an excellent method that helps generalize
models across domains, tasks, and distributions without the use of labeled
datasets. Thus, TTA is very useful in natural language processing (NLP) in the
dialectal setting, since models are often trained on Standard American English
(SAE) but evaluated on Indian English or Nigerian English, whose distributions
differ significantly from the former. This is especially useful
since dialectal datasets are scarce. In this paper, we explore one of the most
famous TTA techniques, SHOT, in dialectal NLP. We finetune and evaluate SHOT on
different combinations of dialectal GLUE. Our findings show that SHOT is a
viable technique when labeled datasets are unavailable. We also theoretically
propose the concept of dialectal gap and show that it has a positive
correlation with the effectiveness of SHOT. We also find that in many cases,
finetuning on SAE yields higher performance than finetuning on dialectal data.
Our code is available at https://github.com/dukenguyenxyz/dialect-adaptation
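For readers unfamiliar with SHOT, a minimal PyTorch sketch of its information-maximization objective (source hypothesis frozen, feature extractor adapted on unlabeled target batches); SHOT's clustering-based pseudo-label term is omitted, and the module names are placeholders:

```python
# Minimal sketch of SHOT-style test-time adaptation (information maximization
# only). Module names and hyperparameters are placeholders.
import torch
import torch.nn.functional as F

def information_maximization_loss(logits, eps=1e-6):
    probs = F.softmax(logits, dim=1)
    cond_entropy = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
    marginal = probs.mean(dim=0)
    neg_marginal_entropy = (marginal * torch.log(marginal + eps)).sum()
    return cond_entropy + neg_marginal_entropy  # confident yet diverse predictions

def adapt(feature_extractor, classifier, unlabeled_loader, lr=1e-4, epochs=1):
    classifier.requires_grad_(False)              # source hypothesis stays frozen
    optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x in unlabeled_loader:                # unlabeled target-dialect batches
            logits = classifier(feature_extractor(x))
            loss = information_maximization_loss(logits)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```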
|
2503.12873 | Dehai Zhao | Dehai Zhao, Zhenchang Xing, Qinghua Lu, Xiwei Xu, Liming Zhu | SeeAction: Towards Reverse Engineering How-What-Where of HCI Actions
from Screencasts for UI Automation | Accepted by IEEE/ACM International Conference on Software Engineering
2025 (ICSE 2025, Distinguished paper award) | ICSE 2025 | null | null | cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | UI automation is a useful technique for UI testing, bug reproduction, and
robotic process automation. Recording user actions with an application assists
rapid development of UI automation scripts, but existing recording techniques
are intrusive, rely on OS or GUI framework accessibility support, or assume
specific app implementations. Reverse engineering user actions from screencasts
is non-intrusive, but a key reverse-engineering step is currently missing -
recognizing human-understandable structured user actions ([command] [widget]
[location]) from action screencasts. To fill the gap, we propose a deep
learning-based computer vision model that can recognize 11 commands and 11
widgets, and generate location phrases from action screencasts, through joint
learning and multi-task learning. We label a large dataset with 7260
video-action pairs, which record user interactions with Word, Zoom, Firefox,
Photoshop, and Windows 10 Settings. Through extensive experiments, we confirm
the effectiveness and generality of our model, and demonstrate the usefulness
of a screencast-to-action-script tool built upon our model for bug
reproduction.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 07:07:38 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Zhao",
"Dehai",
""
],
[
"Xing",
"Zhenchang",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Zhu",
"Liming",
""
]
] | TITLE: SeeAction: Towards Reverse Engineering How-What-Where of HCI Actions
from Screencasts for UI Automation
ABSTRACT: UI automation is a useful technique for UI testing, bug reproduction, and
robotic process automation. Recording user actions with an application assists
rapid development of UI automation scripts, but existing recording techniques
are intrusive, rely on OS or GUI framework accessibility support, or assume
specific app implementations. Reverse engineering user actions from screencasts
is non-intrusive, but a key reverse-engineering step is currently missing -
recognizing human-understandable structured user actions ([command] [widget]
[location]) from action screencasts. To fill the gap, we propose a deep
learning-based computer vision model that can recognize 11 commands and 11
widgets, and generate location phrases from action screencasts, through joint
learning and multi-task learning. We label a large dataset with 7260
video-action pairs, which record user interactions with Word, Zoom, Firefox,
Photoshop, and Windows 10 Settings. Through extensive experiments, we confirm
the effectiveness and generality of our model, and demonstrate the usefulness
of a screencast-to-action-script tool built upon our model for bug
reproduction.
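A speculative sketch of a joint prediction head of the kind described (11 commands, 11 widgets, plus a location-phrase decoder); the backbone, decoder design, and dimensions are assumptions, not the paper's architecture:

```python
# Speculative multi-task head over a pooled clip feature; not the authors' model.
import torch
import torch.nn as nn

class ActionHeads(nn.Module):
    def __init__(self, feat_dim=512, n_commands=11, n_widgets=11, vocab=1000):
        super().__init__()
        self.command_head = nn.Linear(feat_dim, n_commands)
        self.widget_head = nn.Linear(feat_dim, n_widgets)
        # Stand-in for a location-phrase decoder (here a small GRU).
        self.location_decoder = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.location_vocab = nn.Linear(feat_dim, vocab)

    def forward(self, clip_feat):                      # clip_feat: (B, feat_dim)
        cmd = self.command_head(clip_feat)             # (B, 11) command logits
        widget = self.widget_head(clip_feat)           # (B, 11) widget logits
        seq, _ = self.location_decoder(clip_feat.unsqueeze(1).repeat(1, 8, 1))
        loc_tokens = self.location_vocab(seq)          # (B, 8, vocab) phrase logits
        return cmd, widget, loc_tokens
```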
|
2503.12882 | Hyeonsu Cho | Cho Hyeonsu, Dooyoung Kim, Youngjoong Ko | DAPI: Domain Adaptive Toxicity Probe Vector Intervention for
Fine-Grained Detoxification | 10 pages, 3 figures | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There have been attempts to utilize linear probes for detoxification, with
existing studies relying on a single toxicity probe vector to reduce toxicity.
However, toxicity can be fine-grained into various subcategories, making it
difficult to remove certain types of toxicity by using a single toxicity probe
vector. To address this limitation, we propose a category-specific toxicity
probe vector approach. First, we train multiple toxicity probe vectors for
different toxicity categories. During generation, we dynamically select the
most relevant toxicity probe vector based on the current context. Finally, the
selected vector is dynamically scaled and subtracted from the model's hidden states. Our method
successfully mitigated toxicity from categories that the single probe vector
approach failed to detoxify. Experiments demonstrate that our approach achieves
up to a 78.52% reduction in toxicity on the evaluation dataset, while fluency
remains nearly unchanged, with only a 0.052% drop compared to the unsteered
model.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 07:25:32 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Hyeonsu",
"Cho",
""
],
[
"Kim",
"Dooyoung",
""
],
[
"Ko",
"Youngjoong",
""
]
] | TITLE: DAPI: Domain Adaptive Toxicity Probe Vector Intervention for
Fine-Grained Detoxification
ABSTRACT: There have been attempts to utilize linear probes for detoxification, with
existing studies relying on a single toxicity probe vector to reduce toxicity.
However, toxicity can be fine-grained into various subcategories, making it
difficult to remove certain types of toxicity by using a single toxicity probe
vector. To address this limitation, we propose a category-specific toxicity
probe vector approach. First, we train multiple toxicity probe vectors for
different toxicity categories. During generation, we dynamically select the
most relevant toxicity probe vector based on the current context. Finally, the
selected vector is dynamically scaled and subtracted from the model's hidden states. Our method
successfully mitigated toxicity from categories that the single probe vector
approach failed to detoxify. Experiments demonstrate that our approach achieves
up to a 78.52% reduction in toxicity on the evaluation dataset, while fluency
remains nearly unchanged, with only a 0.052% drop compared to the unsteered
model.
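A minimal sketch of the intervention idea, assuming the probe vectors live in the decoder's hidden space; the cosine-based selection and clamped scaling are illustrative choices, not necessarily the paper's exact scoring:

```python
# Hedged sketch: pick the toxicity probe vector most aligned with the current
# hidden state, scale it by that alignment, and subtract it before decoding.
import torch
import torch.nn.functional as F

def detox_hidden(hidden, probe_vectors, alpha=1.0):
    """hidden: (d,) last-layer state; probe_vectors: (num_categories, d)."""
    sims = F.cosine_similarity(hidden.unsqueeze(0), probe_vectors, dim=1)
    k = torch.argmax(sims)                         # most relevant toxicity category
    scale = alpha * torch.clamp(sims[k], min=0.0)  # dynamic, context-dependent scaling
    direction = probe_vectors[k] / probe_vectors[k].norm()
    return hidden - scale * direction
```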
|
2503.12897 | Haiyang Guo | Haiyang Guo, Fanhu Zeng, Fei Zhu, Wenzhuo Liu, Da-Han Wang, Jian Xu,
Xu-Yao Zhang, Cheng-Lin Liu | Federated Continual Instruction Tuning | Preprint | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A vast amount of instruction tuning data is crucial for the impressive
performance of Large Multimodal Models (LMMs), but the associated computational
costs and data collection demands during supervised fine-tuning make it
impractical for most researchers. Federated learning (FL) has the potential to
leverage all distributed data and training resources to reduce the overhead of
joint training. However, most existing methods assume a fixed number of tasks,
while in real-world scenarios, clients continuously encounter new knowledge and
often struggle to retain old tasks due to memory constraints. In this work, we
introduce the Federated Continual Instruction Tuning (FCIT) benchmark to model
this real-world challenge. Our benchmark includes two realistic scenarios,
encompassing four different settings and twelve carefully curated instruction
tuning datasets. To address the challenges posed by FCIT, we propose dynamic
knowledge organization to effectively integrate updates from different tasks
during training and subspace selective activation to allocate task-specific
output during inference. Extensive experimental results demonstrate that our
proposed method significantly enhances model performance across varying levels
of data heterogeneity and catastrophic forgetting. Our source code and dataset
will be made publicly available.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 07:58:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Haiyang",
""
],
[
"Zeng",
"Fanhu",
""
],
[
"Zhu",
"Fei",
""
],
[
"Liu",
"Wenzhuo",
""
],
[
"Wang",
"Da-Han",
""
],
[
"Xu",
"Jian",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: Federated Continual Instruction Tuning
ABSTRACT: A vast amount of instruction tuning data is crucial for the impressive
performance of Large Multimodal Models (LMMs), but the associated computational
costs and data collection demands during supervised fine-tuning make it
impractical for most researchers. Federated learning (FL) has the potential to
leverage all distributed data and training resources to reduce the overhead of
joint training. However, most existing methods assume a fixed number of tasks,
while in real-world scenarios, clients continuously encounter new knowledge and
often struggle to retain old tasks due to memory constraints. In this work, we
introduce the Federated Continual Instruction Tuning (FCIT) benchmark to model
this real-world challenge. Our benchmark includes two realistic scenarios,
encompassing four different settings and twelve carefully curated instruction
tuning datasets. To address the challenges posed by FCIT, we propose dynamic
knowledge organization to effectively integrate updates from different tasks
during training and subspace selective activation to allocate task-specific
output during inference. Extensive experimental results demonstrate that our
proposed method significantly enhances model performance across varying levels
of data heterogeneity and catastrophic forgetting. Our source code and dataset
will be made publicly available.
|
2503.12912 | Bin Tang | Bin Tang, Keqi Pan, Miao Zheng, Ning Zhou, Jialu Sui, Dandan Zhu,
Cheng-Long Deng and Shu-Guang Kuai | Pose as a Modality: A Psychology-Inspired Network for Personality
Recognition with a New Multimodal Dataset | 9 pages, 6 figures, AAAI 2025 Oral | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, predicting Big Five personality traits from multimodal data
has received significant attention in artificial intelligence (AI). However,
existing computational models often fail to achieve satisfactory performance.
Psychological research has shown a strong correlation between pose and
personality traits, yet previous research has largely ignored pose data in
computational models. To address this gap, we develop a novel multimodal
dataset that incorporates full-body pose data. The dataset includes video
recordings of 287 participants completing a virtual interview with 36
questions, along with self-reported Big Five personality scores as labels. To
effectively utilize this multimodal data, we introduce the Psychology-Inspired
Network (PINet), which consists of three key modules: Multimodal Feature
Awareness (MFA), Multimodal Feature Interaction (MFI), and Psychology-Informed
Modality Correlation Loss (PIMC Loss). The MFA module leverages the Vision
Mamba Block to capture comprehensive visual features related to personality,
while the MFI module efficiently fuses the multimodal features. The PIMC Loss,
grounded in psychological theory, guides the model to emphasize different
modalities for different personality dimensions. Experimental results show that
the PINet outperforms several state-of-the-art baseline models. Furthermore,
the three modules of PINet contribute almost equally to the model's overall
performance. Incorporating pose data significantly enhances the model's
performance, with the pose modality ranking mid-level in importance among the
five modalities. These findings address the existing gap in personality-related
datasets that lack full-body pose data and provide a new approach for improving
the accuracy of personality prediction models, highlighting the importance of
integrating psychological insights into AI frameworks.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:21:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Tang",
"Bin",
""
],
[
"Pan",
"Keqi",
""
],
[
"Zheng",
"Miao",
""
],
[
"Zhou",
"Ning",
""
],
[
"Sui",
"Jialu",
""
],
[
"Zhu",
"Dandan",
""
],
[
"Deng",
"Cheng-Long",
""
],
[
"Kuai",
"Shu-Guang",
""
]
] | TITLE: Pose as a Modality: A Psychology-Inspired Network for Personality
Recognition with a New Multimodal Dataset
ABSTRACT: In recent years, predicting Big Five personality traits from multimodal data
has received significant attention in artificial intelligence (AI). However,
existing computational models often fail to achieve satisfactory performance.
Psychological research has shown a strong correlation between pose and
personality traits, yet previous research has largely ignored pose data in
computational models. To address this gap, we develop a novel multimodal
dataset that incorporates full-body pose data. The dataset includes video
recordings of 287 participants completing a virtual interview with 36
questions, along with self-reported Big Five personality scores as labels. To
effectively utilize this multimodal data, we introduce the Psychology-Inspired
Network (PINet), which consists of three key modules: Multimodal Feature
Awareness (MFA), Multimodal Feature Interaction (MFI), and Psychology-Informed
Modality Correlation Loss (PIMC Loss). The MFA module leverages the Vision
Mamba Block to capture comprehensive visual features related to personality,
while the MFI module efficiently fuses the multimodal features. The PIMC Loss,
grounded in psychological theory, guides the model to emphasize different
modalities for different personality dimensions. Experimental results show that
the PINet outperforms several state-of-the-art baseline models. Furthermore,
the three modules of PINet contribute almost equally to the model's overall
performance. Incorporating pose data significantly enhances the model's
performance, with the pose modality ranking mid-level in importance among the
five modalities. These findings address the existing gap in personality-related
datasets that lack full-body pose data and provide a new approach for improving
the accuracy of personality prediction models, highlighting the importance of
integrating psychological insights into AI frameworks.
|
2503.12918 | Pengcheng Wen | Pengcheng Wen, Jiaming Ji, Chi-Min Chan, Juntao Dai, Donghai Hong,
Yaodong Yang, Sirui Han and Yike Guo | ThinkPatterns-21k: A Systematic Study on the Impact of Thinking Patterns
in LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated enhanced performance through
the \textit{Thinking then Responding} paradigm, where models generate internal
thoughts before final responses (aka, System 2 thinking). However, existing
research lacks a systematic understanding of the mechanisms underlying how
thinking patterns affect performance across model sizes. In this work, we
conduct a comprehensive analysis of the impact of various thinking types on
model performance and introduce ThinkPatterns-21k, a curated dataset comprising
21k instruction-response pairs (QA) collected from existing
instruction-following datasets with five thinking types. For each pair, we
augment it with five distinct internal thinking patterns: one unstructured
thinking (monologue) and four structured variants (decomposition, self-ask,
self-debate and self-critic), while maintaining the same instruction and
response. Through extensive evaluation across different model sizes (3B-32B
parameters), we have two key findings: (1) smaller models (<30B parameters) can
benefit from most of structured thinking patterns, while larger models (32B)
with structured thinking like decomposition would degrade performance and (2)
unstructured monologue demonstrates broad effectiveness across different model
sizes. Finally, we release all of our datasets, checkpoints, and training logs of
diverse thinking patterns for reproducibility, aiming to facilitate further
research in this direction.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:29:04 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Wen",
"Pengcheng",
""
],
[
"Ji",
"Jiaming",
""
],
[
"Chan",
"Chi-Min",
""
],
[
"Dai",
"Juntao",
""
],
[
"Hong",
"Donghai",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Han",
"Sirui",
""
],
[
"Guo",
"Yike",
""
]
] | TITLE: ThinkPatterns-21k: A Systematic Study on the Impact of Thinking Patterns
in LLMs
ABSTRACT: Large language models (LLMs) have demonstrated enhanced performance through
the \textit{Thinking then Responding} paradigm, where models generate internal
thoughts before final responses (aka, System 2 thinking). However, existing
research lacks a systematic understanding of the mechanisms underlying how
thinking patterns affect performance across model sizes. In this work, we
conduct a comprehensive analysis of the impact of various thinking types on
model performance and introduce ThinkPatterns-21k, a curated dataset comprising
21k instruction-response pairs (QA) collected from existing
instruction-following datasets with five thinking types. For each pair, we
augment it with five distinct internal thinking patterns: one unstructured
thinking (monologue) and four structured variants (decomposition, self-ask,
self-debate and self-critic), while maintaining the same instruction and
response. Through extensive evaluation across different model sizes (3B-32B
parameters), we have two key findings: (1) smaller models (<30B parameters) can
benefit from most of structured thinking patterns, while larger models (32B)
with structured thinking like decomposition would degrade performance and (2)
unstructured monologue demonstrates broad effectiveness across different model
sizes. Finally, we release all of our datasets, checkpoints, and training logs of
diverse thinking patterns for reproducibility, aiming to facilitate further
research in this direction.
|
2503.12919 | Aref Einizade | Aref Einizade, Dorina Thanou, Fragkiskos D. Malliaros, Jhony H.
Giraldo | COSMOS: Continuous Simplicial Neural Networks | 17 pages, 6 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simplicial complexes provide a powerful framework for modeling high-order
interactions in structured data, making them particularly suitable for
applications such as trajectory prediction and mesh processing. However,
existing simplicial neural networks (SNNs), whether convolutional or
attention-based, rely primarily on discrete filtering techniques, which can be
restrictive. In contrast, partial differential equations (PDEs) on simplicial
complexes offer a principled approach to capture continuous dynamics in such
structures. In this work, we introduce COntinuous SiMplicial neural netwOrkS
(COSMOS), a novel SNN architecture derived from PDEs on simplicial complexes.
We provide theoretical and experimental justifications of COSMOS's stability
under simplicial perturbations. Furthermore, we investigate the over-smoothing
phenomenon, a common issue in geometric deep learning, demonstrating that
COSMOS offers better control over this effect than discrete SNNs. Our
experiments on real-world datasets of ocean trajectory prediction and
regression on partial deformable shapes demonstrate that COSMOS achieves
competitive performance compared to state-of-the-art SNNs in complex and noisy
environments.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:31:25 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Einizade",
"Aref",
""
],
[
"Thanou",
"Dorina",
""
],
[
"Malliaros",
"Fragkiskos D.",
""
],
[
"Giraldo",
"Jhony H.",
""
]
] | TITLE: COSMOS: Continuous Simplicial Neural Networks
ABSTRACT: Simplicial complexes provide a powerful framework for modeling high-order
interactions in structured data, making them particularly suitable for
applications such as trajectory prediction and mesh processing. However,
existing simplicial neural networks (SNNs), whether convolutional or
attention-based, rely primarily on discrete filtering techniques, which can be
restrictive. In contrast, partial differential equations (PDEs) on simplicial
complexes offer a principled approach to capture continuous dynamics in such
structures. In this work, we introduce COntinuous SiMplicial neural netwOrkS
(COSMOS), a novel SNN architecture derived from PDEs on simplicial complexes.
We provide theoretical and experimental justifications of COSMOS's stability
under simplicial perturbations. Furthermore, we investigate the over-smoothing
phenomenon, a common issue in geometric deep learning, demonstrating that
COSMOS offers better control over this effect than discrete SNNs. Our
experiments on real-world datasets of ocean trajectory prediction and
regression on partial deformable shapes demonstrate that COSMOS achieves
competitive performance compared to state-of-the-art SNNs in complex and noisy
environments.
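For concreteness, a standard heat-type PDE on a simplicial complex, written with the k-th Hodge Laplacian built from incidence matrices; this is the textbook form such continuous dynamics usually take, not necessarily COSMOS's exact equation:

```latex
% Diffusion of k-simplex signals X_k driven by the Hodge Laplacian L_k, built
% from incidence matrices B_k; closed-form solution via the matrix exponential.
% (Illustrative form only.)
\frac{\partial \mathbf{X}_k(t)}{\partial t} = -\mathbf{L}_k \mathbf{X}_k(t),
\qquad
\mathbf{L}_k = \mathbf{B}_k^{\top}\mathbf{B}_k + \mathbf{B}_{k+1}\mathbf{B}_{k+1}^{\top},
\qquad
\mathbf{X}_k(t) = e^{-t\mathbf{L}_k}\,\mathbf{X}_k(0).
```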
|
2503.12931 | Rui Pu | Rui Pu, Chaozhuo Li, Rui Ha, Litian Zhang, Lirong Qiu, Xi Zhang | MirrorGuard: Adaptive Defense Against Jailbreaks via Entropy-Guided
Mirror Crafting | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Defending large language models (LLMs) against jailbreak attacks is crucial
for ensuring their safe deployment. Existing defense strategies generally rely
on predefined static criteria to differentiate between harmful and benign
prompts. However, such rigid rules are incapable of accommodating the inherent
complexity and dynamic nature of real jailbreak attacks. In this paper, we
propose a novel concept of "mirror" to enable dynamic and adaptive defense. A
mirror refers to a dynamically generated prompt that mirrors the syntactic
structure of the input while ensuring semantic safety. The personalized
discrepancies between the input prompts and their corresponding mirrors serve
as the guiding principles for defense. A new defense paradigm, MirrorGuard, is
further proposed to detect and calibrate risky inputs based on such mirrors. An
entropy-based detection metric, Relative Input Uncertainty (RIU), is integrated
into MirrorGuard to quantify the discrepancies between input prompts and
mirrors. MirrorGuard is evaluated on several popular datasets, demonstrating
state-of-the-art defense performance while maintaining general effectiveness.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:41:29 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Pu",
"Rui",
""
],
[
"Li",
"Chaozhuo",
""
],
[
"Ha",
"Rui",
""
],
[
"Zhang",
"Litian",
""
],
[
"Qiu",
"Lirong",
""
],
[
"Zhang",
"Xi",
""
]
] | TITLE: MirrorGuard: Adaptive Defense Against Jailbreaks via Entropy-Guided
Mirror Crafting
ABSTRACT: Defending large language models (LLMs) against jailbreak attacks is crucial
for ensuring their safe deployment. Existing defense strategies generally rely
on predefined static criteria to differentiate between harmful and benign
prompts. However, such rigid rules are incapable of accommodating the inherent
complexity and dynamic nature of real jailbreak attacks. In this paper, we
propose a novel concept of "mirror" to enable dynamic and adaptive defense. A
mirror refers to a dynamically generated prompt that mirrors the syntactic
structure of the input while ensuring semantic safety. The personalized
discrepancies between the input prompts and their corresponding mirrors serve
as the guiding principles for defense. A new defense paradigm, MirrorGuard, is
further proposed to detect and calibrate risky inputs based on such mirrors. An
entropy-based detection metric, Relative Input Uncertainty (RIU), is integrated
into MirrorGuard to quantify the discrepancies between input prompts and
mirrors. MirrorGuard is evaluated on several popular datasets, demonstrating
state-of-the-art defense performance while maintaining general effectiveness.
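A speculative sketch of an entropy-based comparison between an input and its mirror, in the spirit of the described RIU metric; the exact formula is an assumption, and the code assumes a causal LM whose forward pass returns `.logits`:

```python
# Speculative sketch (not the paper's exact RIU definition): compare an LM's
# average next-token entropy on the input prompt vs. its safe "mirror".
import torch
import torch.nn.functional as F

@torch.no_grad()
def avg_token_entropy(model, token_ids):
    """token_ids: (1, T) tensor; assumes model(token_ids).logits is (1, T, V)."""
    logp = F.log_softmax(model(token_ids).logits, dim=-1)
    ent = -(logp.exp() * logp).sum(dim=-1)        # (1, T) per-position entropy
    return ent.mean().item()

def relative_input_uncertainty(model, input_ids, mirror_ids, eps=1e-6):
    h_in = avg_token_entropy(model, input_ids)
    h_mir = avg_token_entropy(model, mirror_ids)
    return (h_in - h_mir) / (h_mir + eps)         # larger values flag riskier inputs
```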
|
2503.12935 | Guoliang Xu | Guoliang Xu, Jianqin Yin, Ren Zhang, Yonghao Dang, Feng Zhou, Bo Yu | L2HCount:Generalizing Crowd Counting from Low to High Crowd Density via
Density Simulation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since COVID-19, crowd-counting tasks have gained wide applications. While
supervised methods are reliable, annotation is more challenging in high-density
scenes due to small head sizes and severe occlusion, whereas it's simpler in
low-density scenes. This raises a question: can we train the model in low-density
scenes and generalize it to high-density scenes? To this end, we propose a low- to
high-density generalization framework (L2HCount) that learns the pattern
related to high-density scenes from low-density ones, enabling it to generalize
well to high-density scenes. Specifically, we first introduce a High-Density
Simulation Module and a Ground-Truth Generation Module to construct fake
high-density images along with their corresponding ground-truth crowd
annotations, respectively, via an image-shifting technique, effectively simulating
high-density crowd patterns. However, the simulated images have two issues:
image blurring and loss of low-density image characteristics. Therefore, we
second propose a Head Feature Enhancement Module to extract clear features in
the simulated high-density scene. Third, we propose a Dual-Density Memory
Encoding Module that uses two crowd memories to learn scene-specific patterns
from low- and simulated high-density scenes, respectively. Extensive
experiments on four challenging datasets have shown the promising performance
of L2HCount.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:49:09 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Xu",
"Guoliang",
""
],
[
"Yin",
"Jianqin",
""
],
[
"Zhang",
"Ren",
""
],
[
"Dang",
"Yonghao",
""
],
[
"Zhou",
"Feng",
""
],
[
"Yu",
"Bo",
""
]
] | TITLE: L2HCount:Generalizing Crowd Counting from Low to High Crowd Density via
Density Simulation
ABSTRACT: Since COVID-19, crowd-counting tasks have gained wide applications. While
supervised methods are reliable, annotation is more challenging in high-density
scenes due to small head sizes and severe occlusion, whereas it's simpler in
low-density scenes. This raises a question: can we train the model in low-density
scenes and generalize it to high-density scenes? To this end, we propose a low- to
high-density generalization framework (L2HCount) that learns the pattern
related to high-density scenes from low-density ones, enabling it to generalize
well to high-density scenes. Specifically, we first introduce a High-Density
Simulation Module and a Ground-Truth Generation Module to construct fake
high-density images along with their corresponding ground-truth crowd
annotations, respectively, via an image-shifting technique, effectively simulating
high-density crowd patterns. However, the simulated images have two issues:
image blurring and loss of low-density image characteristics. Therefore, we
second propose a Head Feature Enhancement Module to extract clear features in
the simulated high-density scene. Third, we propose a Dual-Density Memory
Encoding Module that uses two crowd memories to learn scene-specific patterns
from low- and simulated high-density scenes, respectively. Extensive
experiments on four challenging datasets have shown the promising performance
of L2HCount.
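An illustrative NumPy sketch of simulating a higher-density image and its ground truth by shifting and overlaying a low-density sample; the blending rule and shift offsets are assumptions, not the module's actual design:

```python
# Hedged sketch of a density-simulation step: overlay shifted copies of a
# low-density crowd image and sum the correspondingly shifted density maps.
import numpy as np

def simulate_high_density(img, density, shifts=((0, 0), (0, 40), (40, 0))):
    """img: (H, W, 3) array; density: (H, W) ground-truth density map."""
    fake_img = np.zeros_like(img, dtype=np.float32)
    fake_den = np.zeros_like(density, dtype=np.float32)
    for dy, dx in shifts:
        shifted_img = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        shifted_den = np.roll(np.roll(density, dy, axis=0), dx, axis=1)
        fake_img += shifted_img / len(shifts)   # simple blend of shifted copies
        fake_den += shifted_den                 # head counts accumulate
    return fake_img.astype(img.dtype), fake_den
```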
|
2503.12941 | Haiyang Guo | Haiyang Guo, Fanhu Zeng, Ziwei Xiang, Fei Zhu, Da-Han Wang, Xu-Yao
Zhang, Cheng-Lin Liu | HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of
Multimodal Large Language Model | Preprint | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction tuning is widely used to improve a pre-trained Multimodal Large
Language Model (MLLM) by training it on curated task-specific datasets,
enabling better comprehension of human instructions. However, it is infeasible
to collect all possible instruction datasets simultaneously in real-world
scenarios. Thus, enabling MLLM with continual instruction tuning is essential
for maintaining their adaptability. However, existing methods often trade off
memory efficiency for performance gains, significantly compromising overall
efficiency. In this paper, we propose a task-specific expansion and
task-general fusion framework based on the variations in Centered Kernel
Alignment (CKA) similarity across different model layers when trained on
diverse datasets. Furthermore, we analyze the information leakage present in
the existing benchmark and propose a new and more challenging benchmark to
rationally evaluate the performance of different methods. Comprehensive
experiments showcase a significant performance improvement of our method
compared to existing state-of-the-art methods. Our code will be publicly
available.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:56:03 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Guo",
"Haiyang",
""
],
[
"Zeng",
"Fanhu",
""
],
[
"Xiang",
"Ziwei",
""
],
[
"Zhu",
"Fei",
""
],
[
"Wang",
"Da-Han",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: HiDe-LLaVA: Hierarchical Decoupling for Continual Instruction Tuning of
Multimodal Large Language Model
ABSTRACT: Instruction tuning is widely used to improve a pre-trained Multimodal Large
Language Model (MLLM) by training it on curated task-specific datasets,
enabling better comprehension of human instructions. However, it is infeasible
to collect all possible instruction datasets simultaneously in real-world
scenarios. Thus, enabling MLLM with continual instruction tuning is essential
for maintaining their adaptability. However, existing methods often trade off
memory efficiency for performance gains, significantly compromising overall
efficiency. In this paper, we propose a task-specific expansion and
task-general fusion framework based on the variations in Centered Kernel
Alignment (CKA) similarity across different model layers when trained on
diverse datasets. Furthermore, we analyze the information leakage present in
the existing benchmark and propose a new and more challenging benchmark to
rationally evaluate the performance of different methods. Comprehensive
experiments showcase a significant performance improvement of our method
compared to existing state-of-the-art methods. Our code will be public
available.
|
2503.12944 | Jianzheng Huang | Jianzheng Huang, Xianyu Mo, Ziling Liu, Jinyu Yang, Feng Zheng | GIFT: Generated Indoor video frames for Texture-less point tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point tracking is becoming a powerful solver for motion estimation and video
editing. Compared to classical feature matching, point tracking methods have
the key advantage of robustly tracking points under complex camera motion
trajectories and over extended periods. However, despite certain improvements
in methodologies, current point tracking methods still struggle to track any
position in video frames, especially in areas that are texture-less or weakly
textured. In this work, we first introduce metrics for evaluating the texture
intensity of a 3D object. Using these metrics, we classify the 3D models in
ShapeNet into three levels of texture intensity and create GIFT, a challenging
synthetic benchmark comprising 1800 indoor video sequences with rich
annotations. Unlike existing datasets that assign ground truth points
arbitrarily, GIFT precisely anchors ground truth on classified target objects,
ensuring that each video corresponds to a specific texture intensity level.
Furthermore, we comprehensively evaluate current methods on GIFT to assess
their performance across different texture intensity levels and analyze the
impact of texture on point tracking.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:58:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Huang",
"Jianzheng",
""
],
[
"Mo",
"Xianyu",
""
],
[
"Liu",
"Ziling",
""
],
[
"Yang",
"Jinyu",
""
],
[
"Zheng",
"Feng",
""
]
] | TITLE: GIFT: Generated Indoor video frames for Texture-less point tracking
ABSTRACT: Point tracking is becoming a powerful solver for motion estimation and video
editing. Compared to classical feature matching, point tracking methods have
the key advantage of robustly tracking points under complex camera motion
trajectories and over extended periods. However, despite certain improvements
in methodologies, current point tracking methods still struggle to track any
position in video frames, especially in areas that are texture-less or weakly
textured. In this work, we first introduce metrics for evaluating the texture
intensity of a 3D object. Using these metrics, we classify the 3D models in
ShapeNet into three levels of texture intensity and create GIFT, a challenging
synthetic benchmark comprising 1800 indoor video sequences with rich
annotations. Unlike existing datasets that assign ground truth points
arbitrarily, GIFT precisely anchors ground truth on classified target objects,
ensuring that each video corresponds to a specific texture intensity level.
Furthermore, we comprehensively evaluate current methods on GIFT to assess
their performance across different texture intensity levels and analyze the
impact of texture on point tracking.
|
2503.12947 | Ingyun Lee | Ingyun Lee, Jae Won Jang, Seunghyeon Seo, Nojun Kwak | DivCon-NeRF: Generating Augmented Rays with Diversity and Consistency
for Few-shot View Synthesis | 11 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Radiance Field (NeRF) has shown remarkable performance in novel view
synthesis but requires many multiview images, making it impractical for
few-shot scenarios. Ray augmentation was proposed to prevent overfitting for
sparse training data by generating additional rays. However, existing methods,
which generate augmented rays only near the original rays, produce severe
floaters and appearance distortion due to limited viewpoints and inconsistent
rays obstructed by nearby obstacles and complex surfaces. To address these
problems, we propose DivCon-NeRF, which significantly enhances both diversity
and consistency. It employs surface-sphere augmentation, which preserves the
distance between the original camera and the predicted surface point. This
allows the model to compare the order of high-probability surface points and
filter out inconsistent rays easily without requiring the exact depth. By
introducing inner-sphere augmentation, DivCon-NeRF randomizes angles and
distances for diverse viewpoints, further increasing diversity. Consequently,
our method significantly reduces floaters and visual distortions, achieving
state-of-the-art performance on the Blender, LLFF, and DTU datasets. Our code
will be publicly available.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 08:59:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Lee",
"Ingyun",
""
],
[
"Jang",
"Jae Won",
""
],
[
"Seo",
"Seunghyeon",
""
],
[
"Kwak",
"Nojun",
""
]
] | TITLE: DivCon-NeRF: Generating Augmented Rays with Diversity and Consistency
for Few-shot View Synthesis
ABSTRACT: Neural Radiance Field (NeRF) has shown remarkable performance in novel view
synthesis but requires many multiview images, making it impractical for
few-shot scenarios. Ray augmentation was proposed to prevent overfitting for
sparse training data by generating additional rays. However, existing methods,
which generate augmented rays only near the original rays, produce severe
floaters and appearance distortion due to limited viewpoints and inconsistent
rays obstructed by nearby obstacles and complex surfaces. To address these
problems, we propose DivCon-NeRF, which significantly enhances both diversity
and consistency. It employs surface-sphere augmentation, which preserves the
distance between the original camera and the predicted surface point. This
allows the model to compare the order of high-probability surface points and
filter out inconsistent rays easily without requiring the exact depth. By
introducing inner-sphere augmentation, DivCon-NeRF randomizes angles and
distances for diverse viewpoints, further increasing diversity. Consequently,
our method significantly reduces floaters and visual distortions, achieving
state-of-the-art performance on the Blender, LLFF, and DTU datasets. Our code
will be publicly available.
|
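Note: a minimal geometric sketch of the surface-sphere augmentation described in the DivCon-NeRF abstract above, assuming the simplest reading: new ray origins are sampled on a sphere centred at the predicted surface point whose radius equals the original camera-to-surface distance, and each augmented ray is re-aimed at that point. This is an illustration, not the authors' implementation.

```python
# Surface-sphere augmentation sketch: preserve the camera-to-surface distance while
# sampling new viewpoints on the sphere around the predicted surface point.
import numpy as np

def surface_sphere_augment(cam_origin, surface_point, n_rays=8, rng=None):
    rng = rng or np.random.default_rng()
    cam_origin = np.asarray(cam_origin, dtype=np.float64)
    surface_point = np.asarray(surface_point, dtype=np.float64)
    radius = np.linalg.norm(cam_origin - surface_point)     # preserved distance

    # Uniform directions on the unit sphere via normalised Gaussians.
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    origins = surface_point + radius * dirs                  # points on the sphere
    ray_dirs = surface_point - origins                       # aim back at the surface
    ray_dirs /= np.linalg.norm(ray_dirs, axis=1, keepdims=True)
    return origins, ray_dirs

origins, ray_dirs = surface_sphere_augment([0, 0, 4], [0, 0, 0], n_rays=4)
print(np.linalg.norm(origins, axis=1))   # all ~4.0: camera-to-surface distance preserved
```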
2503.12956 | Animesh Choudhury | Animesh Choudhury and Jagabandhu Panda | Development of a Data-driven weather forecasting system over India with
Pangu-Weather architecture and IMDAA reanalysis Data | null | null | null | null | physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | Numerical Weather Prediction (NWP) has advanced significantly in recent
decades but still faces challenges in accuracy, computational efficiency, and
scalability. Data-driven weather models have shown great promise, sometimes
surpassing operational NWP systems. However, training these models on massive
datasets incurs high computational costs. A regional data-driven approach
offers a cost-effective alternative for localized forecasts. This study
develops a regional weather forecasting model for India by efficiently
modifying the Pangu-Weather (PW) architecture. The model is trained using the
Indian Monsoon Data Assimilation and Analysis (IMDAA) reanalysis dataset with
limited computational resources. Prediction results are evaluated using Root
Mean Square Error (RMSE), Anomaly Correlation Coefficient (ACC), Mean Absolute
Percentage Error (MAPE), and Fractional Skill Score (FSS). At a 6-hour lead
time, MAPE remains below 5%, FSS exceeds 0.86, and ACC stays above 0.94,
demonstrating robustness. Three forecasting approaches, static, autoregressive,
and hierarchical, are compared. Errors increase with lead time in all cases.
The static approach exhibits periodic fluctuations in error metrics, which are
absent in the autoregressive method. The hierarchical approach also shows
fluctuations, though with reduced intensity after three days. Among these, the
hierarchical approach performs best while maintaining computational efficiency.
Furthermore, the model effectively predicts cyclone tracks using the
hierarchical approach, achieving results comparable to observational and
reanalysis datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:11:44 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Choudhury",
"Animesh",
""
],
[
"Panda",
"Jagabandhu",
""
]
] | TITLE: Development of a Data-driven weather forecasting system over India with
Pangu-Weather architecture and IMDAA reanalysis Data
ABSTRACT: Numerical Weather Prediction (NWP) has advanced significantly in recent
decades but still faces challenges in accuracy, computational efficiency, and
scalability. Data-driven weather models have shown great promise, sometimes
surpassing operational NWP systems. However, training these models on massive
datasets incurs high computational costs. A regional data-driven approach
offers a cost-effective alternative for localized forecasts. This study
develops a regional weather forecasting model for India by efficiently
modifying the Pangu-Weather (PW) architecture. The model is trained using the
Indian Monsoon Data Assimilation and Analysis (IMDAA) reanalysis dataset with
limited computational resources. Prediction results are evaluated using Root
Mean Square Error (RMSE), Anomaly Correlation Coefficient (ACC), Mean Absolute
Percentage Error (MAPE), and Fractional Skill Score (FSS). At a 6-hour lead
time, MAPE remains below 5%, FSS exceeds 0.86, and ACC stays above 0.94,
demonstrating robustness. Three forecasting approaches, static, autoregressive,
and hierarchical, are compared. Errors increase with lead time in all cases.
The static approach exhibits periodic fluctuations in error metrics, which are
absent in the autoregressive method. The hierarchical approach also shows
fluctuations, though with reduced intensity after three days. Among these, the
hierarchical approach performs best while maintaining computational efficiency.
Furthermore, the model effectively predicts cyclone tracks using the
hierarchical approach, achieving results comparable to observational and
reanalysis datasets.
|
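Note: the abstract above compares static, autoregressive, and hierarchical forecasting. As a rough sketch of the latter two (the static approach, a direct prediction per lead time, is omitted), the snippet below assumes hypothetical callables `model_6h` and `model_24h` that each map an atmospheric state to the state one fixed lead time ahead; the greedy hierarchical scheme follows the original Pangu-Weather idea, and the details are assumptions.

```python
# Two iterative rollout strategies over fixed-lead-time models (hypothetical callables).
def autoregressive_forecast(state, model_6h, steps):
    """Repeatedly apply the 6-hour model to its own output."""
    outputs = []
    for _ in range(steps):
        state = model_6h(state)
        outputs.append(state)
    return outputs

def hierarchical_forecast(state, model_24h, model_6h, lead_hours):
    """Greedily cover the horizon with the longest model first, then fill with 6 h steps."""
    remaining = lead_hours
    while remaining >= 24:
        state = model_24h(state)
        remaining -= 24
    while remaining >= 6:
        state = model_6h(state)
        remaining -= 6
    return state

# e.g. a 30-hour forecast becomes one 24 h step plus one 6 h step instead of five
# 6 h steps, reducing the number of model applications and hence accumulated error.
```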
2503.12963 | Chaolong Yang | Chaolong Yang, Kai Yao, Yuyao Yan, Chenru Jiang, Weiguang Zhao, Jie
Sun, Guangliang Cheng, Yifei Zhang, Bin Dong, Kaizhu Huang | Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based
Spatiotemporal Diffusion for Audio-driven Talking Portrait | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Audio-driven single-image talking portrait generation plays a crucial role in
virtual reality, digital human creation, and filmmaking. Existing approaches
are generally categorized into keypoint-based and image-based methods.
Keypoint-based methods effectively preserve character identity but struggle to
capture fine facial details due to the fixed points limitation of the 3D
Morphable Model. Moreover, traditional generative networks face challenges in
establishing causality between audio and keypoints on limited datasets,
resulting in low pose diversity. In contrast, image-based approaches produce
high-quality portraits with diverse details using the diffusion network but
incur identity distortion and expensive computational costs. In this work, we
propose KDTalker, the first framework to combine unsupervised implicit 3D
keypoint with a spatiotemporal diffusion model. Leveraging unsupervised
implicit 3D keypoints, KDTalker adapts facial information densities, allowing
the diffusion process to model diverse head poses and capture fine facial
details flexibly. The custom-designed spatiotemporal attention mechanism
ensures accurate lip synchronization, producing temporally consistent,
high-quality animations while enhancing computational efficiency. Experimental
results demonstrate that KDTalker achieves state-of-the-art performance
regarding lip synchronization accuracy, head pose diversity, and execution
efficiency. Our code is available at https://github.com/chaolongy/KDTalker.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:18:31 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"Chaolong",
""
],
[
"Yao",
"Kai",
""
],
[
"Yan",
"Yuyao",
""
],
[
"Jiang",
"Chenru",
""
],
[
"Zhao",
"Weiguang",
""
],
[
"Sun",
"Jie",
""
],
[
"Cheng",
"Guangliang",
""
],
[
"Zhang",
"Yifei",
""
],
[
"Dong",
"Bin",
""
],
[
"Huang",
"Kaizhu",
""
]
] | TITLE: Unlock Pose Diversity: Accurate and Efficient Implicit Keypoint-based
Spatiotemporal Diffusion for Audio-driven Talking Portrait
ABSTRACT: Audio-driven single-image talking portrait generation plays a crucial role in
virtual reality, digital human creation, and filmmaking. Existing approaches
are generally categorized into keypoint-based and image-based methods.
Keypoint-based methods effectively preserve character identity but struggle to
capture fine facial details due to the fixed points limitation of the 3D
Morphable Model. Moreover, traditional generative networks face challenges in
establishing causality between audio and keypoints on limited datasets,
resulting in low pose diversity. In contrast, image-based approaches produce
high-quality portraits with diverse details using the diffusion network but
incur identity distortion and expensive computational costs. In this work, we
propose KDTalker, the first framework to combine unsupervised implicit 3D
keypoint with a spatiotemporal diffusion model. Leveraging unsupervised
implicit 3D keypoints, KDTalker adapts facial information densities, allowing
the diffusion process to model diverse head poses and capture fine facial
details flexibly. The custom-designed spatiotemporal attention mechanism
ensures accurate lip synchronization, producing temporally consistent,
high-quality animations while enhancing computational efficiency. Experimental
results demonstrate that KDTalker achieves state-of-the-art performance
regarding lip synchronization accuracy, head pose diversity, and execution
efficiency. Our code is available at https://github.com/chaolongy/KDTalker.
|
2503.12964 | Zeeshan Patel | Zeeshan Patel, Ethan He, Parth Mannan, Xiaowei Ren, Ryan Wolf, Niket
Agarwal, Jacob Huffman, Zhuoyao Wang, Carl Wang, Jack Chang, Yan Bai, Tommy
Huang, Linnan Wang, Sahil Jain, Shanmugam Ramasamy, Joseph Jennings,
Ekaterina Sirazitdinova, Oleg Sudakov, Mingyuan Ma, Bobby Chen, Forrest Lin,
Hao Wang, Vasanth Rao Naik Sabavat, Sriharsha Niverty, Rong Ou, Pallab
Bhattacharya, David Page, Nima Tajbakhsh, Ashwath Aithal | Training Video Foundation Models with NVIDIA NeMo | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Video Foundation Models (VFMs) have recently been used to simulate the real
world to train physical AI systems and develop creative visual experiences.
However, there are significant challenges in training large-scale, high-quality
VFMs that can generate high-quality videos. We present a scalable, open-source
VFM training pipeline with NVIDIA NeMo, providing accelerated video dataset
curation, multimodal data loading, and parallelized video diffusion model
training and inference. We also provide a comprehensive performance analysis
highlighting best practices for efficient VFM training and inference.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:19:12 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Patel",
"Zeeshan",
""
],
[
"He",
"Ethan",
""
],
[
"Mannan",
"Parth",
""
],
[
"Ren",
"Xiaowei",
""
],
[
"Wolf",
"Ryan",
""
],
[
"Agarwal",
"Niket",
""
],
[
"Huffman",
"Jacob",
""
],
[
"Wang",
"Zhuoyao",
""
],
[
"Wang",
"Carl",
""
],
[
"Chang",
"Jack",
""
],
[
"Bai",
"Yan",
""
],
[
"Huang",
"Tommy",
""
],
[
"Wang",
"Linnan",
""
],
[
"Jain",
"Sahil",
""
],
[
"Ramasamy",
"Shanmugam",
""
],
[
"Jennings",
"Joseph",
""
],
[
"Sirazitdinova",
"Ekaterina",
""
],
[
"Sudakov",
"Oleg",
""
],
[
"Ma",
"Mingyuan",
""
],
[
"Chen",
"Bobby",
""
],
[
"Lin",
"Forrest",
""
],
[
"Wang",
"Hao",
""
],
[
"Sabavat",
"Vasanth Rao Naik",
""
],
[
"Niverty",
"Sriharsha",
""
],
[
"Ou",
"Rong",
""
],
[
"Bhattacharya",
"Pallab",
""
],
[
"Page",
"David",
""
],
[
"Tajbakhsh",
"Nima",
""
],
[
"Aithal",
"Ashwath",
""
]
] | TITLE: Training Video Foundation Models with NVIDIA NeMo
ABSTRACT: Video Foundation Models (VFMs) have recently been used to simulate the real
world to train physical AI systems and develop creative visual experiences.
However, there are significant challenges in training large-scale, high-quality
VFMs that can generate high-quality videos. We present a scalable, open-source
VFM training pipeline with NVIDIA NeMo, providing accelerated video dataset
curation, multimodal data loading, and parallelized video diffusion model
training and inference. We also provide a comprehensive performance analysis
highlighting best practices for efficient VFM training and inference.
|
2503.12968 | Guanhua Ding | Guanhua Ding, Yuxuan Xia, Runwei Guan, Qinchen Wu, Tao Huang, Weiping
Ding, Jinping Sun, and Guoqiang Mao | OptiPMB: Enhancing 3D Multi-Object Tracking with Optimized Poisson
Multi-Bernoulli Filtering | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate 3D multi-object tracking (MOT) is crucial for autonomous driving, as
it enables robust perception, navigation, and planning in complex environments.
While deep learning-based solutions have demonstrated impressive 3D MOT
performance, model-based approaches remain appealing for their simplicity,
interpretability, and data efficiency. Conventional model-based trackers
typically rely on random vector-based Bayesian filters within the
tracking-by-detection (TBD) framework but face limitations due to heuristic
data association and track management schemes. In contrast, random finite set
(RFS)-based Bayesian filtering handles object birth, survival, and death in a
theoretically sound manner, facilitating interpretability and parameter tuning.
In this paper, we present OptiPMB, a novel RFS-based 3D MOT method that employs
an optimized Poisson multi-Bernoulli (PMB) filter while incorporating several
key innovative designs within the TBD framework. Specifically, we propose a
measurement-driven hybrid adaptive birth model for improved track
initialization, employ adaptive detection probability parameters to effectively
maintain tracks for occluded objects, and optimize density pruning and track
extraction modules to further enhance overall tracking performance. Extensive
evaluations on nuScenes and KITTI datasets show that OptiPMB achieves superior
tracking accuracy compared with state-of-the-art methods, thereby establishing
a new benchmark for model-based 3D MOT and offering valuable insights for
future research on RFS-based trackers in autonomous driving.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:24:26 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Ding",
"Guanhua",
""
],
[
"Xia",
"Yuxuan",
""
],
[
"Guan",
"Runwei",
""
],
[
"Wu",
"Qinchen",
""
],
[
"Huang",
"Tao",
""
],
[
"Ding",
"Weiping",
""
],
[
"Sun",
"Jinping",
""
],
[
"Mao",
"Guoqiang",
""
]
] | TITLE: OptiPMB: Enhancing 3D Multi-Object Tracking with Optimized Poisson
Multi-Bernoulli Filtering
ABSTRACT: Accurate 3D multi-object tracking (MOT) is crucial for autonomous driving, as
it enables robust perception, navigation, and planning in complex environments.
While deep learning-based solutions have demonstrated impressive 3D MOT
performance, model-based approaches remain appealing for their simplicity,
interpretability, and data efficiency. Conventional model-based trackers
typically rely on random vector-based Bayesian filters within the
tracking-by-detection (TBD) framework but face limitations due to heuristic
data association and track management schemes. In contrast, random finite set
(RFS)-based Bayesian filtering handles object birth, survival, and death in a
theoretically sound manner, facilitating interpretability and parameter tuning.
In this paper, we present OptiPMB, a novel RFS-based 3D MOT method that employs
an optimized Poisson multi-Bernoulli (PMB) filter while incorporating several
key innovative designs within the TBD framework. Specifically, we propose a
measurement-driven hybrid adaptive birth model for improved track
initialization, employ adaptive detection probability parameters to effectively
maintain tracks for occluded objects, and optimize density pruning and track
extraction modules to further enhance overall tracking performance. Extensive
evaluations on nuScenes and KITTI datasets show that OptiPMB achieves superior
tracking accuracy compared with state-of-the-art methods, thereby establishing
a new benchmark for model-based 3D MOT and offering valuable insights for
future research on RFS-based trackers in autonomous driving.
|
2503.12969 | Toru Tamaki | Kazuki Omi, Jion Oshima, Toru Tamaki | Action tube generation by person query matching for spatio-temporal
action detection | extended version of VISAPP2025 | null | 10.5220/0013089500003912 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a method for spatio-temporal action detection (STAD) that
directly generates action tubes from the original video without relying on
post-processing steps such as IoU-based linking and clip splitting. Our
approach applies query-based detection (DETR) to each frame and matches DETR
queries to link the same person across frames. We introduce the Query Matching
Module (QMM), which uses metric learning to bring queries for the same person
closer together across frames compared to queries for different people. Action
classes are predicted using the sequence of queries obtained from QMM matching,
allowing for variable-length inputs from videos longer than a single clip.
Experimental results on JHMDB, UCF101-24, and AVA datasets demonstrate that our
method performs well for large position changes of people while offering
superior computational efficiency and lower resource requirements.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:26:06 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Omi",
"Kazuki",
""
],
[
"Oshima",
"Jion",
""
],
[
"Tamaki",
"Toru",
""
]
] | TITLE: Action tube generation by person query matching for spatio-temporal
action detection
ABSTRACT: This paper proposes a method for spatio-temporal action detection (STAD) that
directly generates action tubes from the original video without relying on
post-processing steps such as IoU-based linking and clip splitting. Our
approach applies query-based detection (DETR) to each frame and matches DETR
queries to link the same person across frames. We introduce the Query Matching
Module (QMM), which uses metric learning to bring queries for the same person
closer together across frames compared to queries for different people. Action
classes are predicted using the sequence of queries obtained from QMM matching,
allowing for variable-length inputs from videos longer than a single clip.
Experimental results on JHMDB, UCF101-24, and AVA datasets demonstrate that our
method performs well for large position changes of people while offering
superior computational efficiency and lower resource requirements.
|
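Note: the Query Matching Module described above uses metric learning to pull together the queries of the same person across frames. The sketch below shows one standard way to express that idea with an InfoNCE-style loss over DETR query embeddings; the paper's exact objective, dimensions, and temperature are not given here, so everything below is an assumption.

```python
# Contrastive matching of person queries across two frames (illustrative sketch).
import torch
import torch.nn.functional as F

def query_matching_loss(queries_t, queries_t1, temperature=0.07):
    """queries_t, queries_t1: (P, D) embeddings of the same P people in frames t and t+1,
    with row i in both tensors corresponding to the same person."""
    q1 = F.normalize(queries_t, dim=1)
    q2 = F.normalize(queries_t1, dim=1)
    logits = q1 @ q2.t() / temperature           # (P, P) cosine similarities
    targets = torch.arange(q1.size(0), device=q1.device)
    return F.cross_entropy(logits, targets)      # diagonal entries are the positive pairs

# At inference, the same similarity matrix can link queries across frames, e.g. by taking
# the argmax per row (or a Hungarian assignment for strict one-to-one links).
```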
2503.12974 | Xueying Jiang | Xueying Jiang, Wenhao Li, Xiaoqin Zhang, Ling Shao, Shijian Lu | Exploring 3D Activity Reasoning and Planning: From Implicit Human
Intentions to Route-Aware Planning | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D activity reasoning and planning has attracted increasing attention in
human-robot interaction and embodied AI thanks to the recent advance in
multimodal learning. However, most existing works share two constraints: 1)
heavy reliance on explicit instructions with little reasoning on implicit user
intention; 2) negligence of inter-step route planning on robot moves. To bridge
the gaps, we propose 3D activity reasoning and planning, a novel 3D task that
reasons the intended activities from implicit instructions and decomposes them
into steps with inter-step routes and planning under the guidance of
fine-grained 3D object shapes and locations from scene segmentation. We tackle
the new 3D task from two perspectives. First, we construct ReasonPlan3D, a
large-scale benchmark that covers diverse 3D scenes with rich implicit
instructions and detailed annotations for multi-step task planning, inter-step
route planning, and fine-grained segmentation. Second, we design a novel
framework that introduces progressive plan generation with contextual
consistency across multiple steps, as well as a scene graph that is updated
dynamically for capturing critical objects and their spatial relations.
Extensive experiments demonstrate the effectiveness of our benchmark and
framework in reasoning activities from implicit human instructions, producing
accurate stepwise task plans, and seamlessly integrating route planning for
multi-step moves. The dataset and code will be released.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:33:58 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Jiang",
"Xueying",
""
],
[
"Li",
"Wenhao",
""
],
[
"Zhang",
"Xiaoqin",
""
],
[
"Shao",
"Ling",
""
],
[
"Lu",
"Shijian",
""
]
] | TITLE: Exploring 3D Activity Reasoning and Planning: From Implicit Human
Intentions to Route-Aware Planning
ABSTRACT: 3D activity reasoning and planning has attracted increasing attention in
human-robot interaction and embodied AI thanks to the recent advance in
multimodal learning. However, most existing works share two constraints: 1)
heavy reliance on explicit instructions with little reasoning on implicit user
intention; 2) negligence of inter-step route planning on robot moves. To bridge
the gaps, we propose 3D activity reasoning and planning, a novel 3D task that
reasons the intended activities from implicit instructions and decomposes them
into steps with inter-step routes and planning under the guidance of
fine-grained 3D object shapes and locations from scene segmentation. We tackle
the new 3D task from two perspectives. First, we construct ReasonPlan3D, a
large-scale benchmark that covers diverse 3D scenes with rich implicit
instructions and detailed annotations for multi-step task planning, inter-step
route planning, and fine-grained segmentation. Second, we design a novel
framework that introduces progressive plan generation with contextual
consistency across multiple steps, as well as a scene graph that is updated
dynamically for capturing critical objects and their spatial relations.
Extensive experiments demonstrate the effectiveness of our benchmark and
framework in reasoning activities from implicit human instructions, producing
accurate stepwise task plans, and seamlessly integrating route planning for
multi-step moves. The dataset and code will be released.
|
2503.12982 | Yunshuang Yuan | Yunshuang Yuan, Yan Xia, Daniel Cremers, Monika Sester | SparseAlign: A Fully Sparse Framework for Cooperative Object Detection | null | CVPR2025 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative perception can increase the view field and decrease the occlusion
of an ego vehicle, hence improving the perception performance and safety of
autonomous driving. Despite the success of previous works on cooperative object
detection, they mostly operate on dense Bird's Eye View (BEV) feature maps,
which are computationally demanding and can hardly be extended to long-range
detection problems. More efficient fully sparse frameworks are rarely explored.
In this work, we design a fully sparse framework, SparseAlign, with three key
features: an enhanced sparse 3D backbone, a query-based temporal context
learning module, and a robust detection head specially tailored for sparse
features. Extensive experimental results on both OPV2V and DairV2X datasets
show that our framework, despite its sparsity, outperforms the state of the art
with less communication bandwidth requirements. In addition, experiments on the
OPV2Vt and DairV2Xt datasets for time-aligned cooperative object detection also
show a significant performance gain compared to the baseline works.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:38:53 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yuan",
"Yunshuang",
""
],
[
"Xia",
"Yan",
""
],
[
"Cremers",
"Daniel",
""
],
[
"Sester",
"Monika",
""
]
] | TITLE: SparseAlign: A Fully Sparse Framework for Cooperative Object Detection
ABSTRACT: Cooperative perception can increase the view field and decrease the occlusion
of an ego vehicle, hence improving the perception performance and safety of
autonomous driving. Despite the success of previous works on cooperative object
detection, they mostly operate on dense Bird's Eye View (BEV) feature maps,
which are computationally demanding and can hardly be extended to long-range
detection problems. More efficient fully sparse frameworks are rarely explored.
In this work, we design a fully sparse framework, SparseAlign, with three key
features: an enhanced sparse 3D backbone, a query-based temporal context
learning module, and a robust detection head specially tailored for sparse
features. Extensive experimental results on both OPV2V and DairV2X datasets
show that our framework, despite its sparsity, outperforms the state of the art
with less communication bandwidth requirements. In addition, experiments on the
OPV2Vt and DairV2Xt datasets for time-aligned cooperative object detection also
show a significant performance gain compared to the baseline works.
|
2503.12989 | Palakorn Achananuparp | Palakorn Achananuparp, Ee-Peng Lim | A Multi-Stage Framework with Taxonomy-Guided Reasoning for Occupation
Classification Using Large Language Models | null | null | null | null | cs.CL cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically annotating job data with standardized occupations from
taxonomies, known as occupation classification, is crucial for labor market
analysis. However, this task is often hindered by data scarcity and the
challenges of manual annotations. While large language models (LLMs) hold
promise due to their extensive world knowledge and in-context learning
capabilities, their effectiveness depends on their knowledge of occupational
taxonomies, which remains unclear. In this study, we assess the ability of LLMs
to generate precise taxonomic entities from taxonomy, highlighting their
limitations. To address these challenges, we propose a multi-stage framework
consisting of inference, retrieval, and reranking stages, which integrates
taxonomy-guided reasoning examples to enhance performance by aligning outputs
with taxonomic knowledge. Evaluations on a large-scale dataset show significant
improvements in classification accuracy. Furthermore, we demonstrate the
framework's adaptability for multi-label skill classification. Our results
indicate that the framework outperforms existing LLM-based methods, offering a
practical and scalable solution for occupation classification and related tasks
across LLMs.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:44:50 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Achananuparp",
"Palakorn",
""
],
[
"Lim",
"Ee-Peng",
""
]
] | TITLE: A Multi-Stage Framework with Taxonomy-Guided Reasoning for Occupation
Classification Using Large Language Models
ABSTRACT: Automatically annotating job data with standardized occupations from
taxonomies, known as occupation classification, is crucial for labor market
analysis. However, this task is often hindered by data scarcity and the
challenges of manual annotations. While large language models (LLMs) hold
promise due to their extensive world knowledge and in-context learning
capabilities, their effectiveness depends on their knowledge of occupational
taxonomies, which remains unclear. In this study, we assess the ability of LLMs
to generate precise taxonomic entities from taxonomy, highlighting their
limitations. To address these challenges, we propose a multi-stage framework
consisting of inference, retrieval, and reranking stages, which integrates
taxonomy-guided reasoning examples to enhance performance by aligning outputs
with taxonomic knowledge. Evaluations on a large-scale dataset show significant
improvements in classification accuracy. Furthermore, we demonstrate the
framework's adaptability for multi-label skill classification. Our results
indicate that the framework outperforms existing LLM-based methods, offering a
practical and scalable solution for occupation classification and related tasks
across LLMs.
|
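Note: a high-level sketch of the inference, retrieval, and reranking stages described in the abstract above. `call_llm` and `Retriever` are hypothetical placeholders rather than a real API, and the prompt wording and taxonomy format are assumptions for illustration only.

```python
# Three-stage occupation classification pipeline (sketch with placeholder components).
from typing import List

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")  # placeholder

class Retriever:
    def __init__(self, taxonomy_titles: List[str]):
        self.titles = taxonomy_titles
    def top_k(self, query: str, k: int = 10) -> List[str]:
        # Placeholder lexical overlap; a real system would use embedding search.
        q = set(query.lower().split())
        scored = sorted(self.titles, key=lambda t: -len(q & set(t.lower().split())))
        return scored[:k]

def classify_occupation(job_posting: str, retriever: Retriever) -> str:
    # 1) Inference: let the LLM summarise the posting into a free-text occupation guess.
    guess = call_llm(f"Summarise the occupation for this job posting:\n{job_posting}")
    # 2) Retrieval: ground the guess against actual taxonomy entries.
    candidates = retriever.top_k(guess, k=10)
    # 3) Reranking: ask the LLM to pick the best candidate, optionally prepending
    #    taxonomy-guided reasoning examples to the prompt.
    return call_llm(
        "Pick the single best matching occupation title from this list:\n"
        + "\n".join(candidates)
        + f"\n\nJob posting:\n{job_posting}"
    )
```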
2503.12994 | Vincent Labatut | No\'e Cecillon (LIA), Vincent Labatut (LIA), Richard Dufour (LS2N -
\'equipe TALN) | Conversation-Based Multimodal Abuse Detection Through Text and Graph
Embeddings | null | Computing, 2025 | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abusive behavior is common on online social networks, and forces the hosts of
such platforms to find new solutions to address this problem. Various methods
have been proposed to automate this task in the past decade. Most of them rely
on the exchanged content, but ignore the structure and dynamics of the
conversation, which could provide some relevant information. In this article,
we propose to use representation learning methods to automatically produce
embeddings of this textual content and of the conversational graphs depicting
message exchanges. While the latter could be enhanced by including additional
information on top of the raw conversational structure, no method currently
exists to learn whole-graph representations simultaneously using edge
directions, weights, signs, and vertex attributes. We propose two such methods
to fill this gap in the literature. We experiment with 5 textual and 13 graph
embedding methods, and apply them to a dataset of online messages annotated for
abuse detection. Our best results achieve an F-measure of 81.02 using text
alone and 80.61 using graphs alone. We also combine both modalities of
information (text and graphs) through three fusion strategies, and show that
this strongly improves abuse detection performance, increasing the F-measure
to 87.06. Finally, we identify which specific engineered features are captured
by the embedding methods under consideration. These features have clear
interpretations and help explain what information the representation learning
methods deem discriminative.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 09:51:17 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Cecillon",
"Noé",
"",
"LIA"
],
[
"Labatut",
"Vincent",
"",
"LIA"
],
[
"Dufour",
"Richard",
"",
"LS2N -\n équipe TALN"
]
] | TITLE: Conversation-Based Multimodal Abuse Detection Through Text and Graph
Embeddings
ABSTRACT: Abusive behavior is common on online social networks, and forces the hosts of
such platforms to find new solutions to address this problem. Various methods
have been proposed to automate this task in the past decade. Most of them rely
on the exchanged content, but ignore the structure and dynamics of the
conversation, which could provide some relevant information. In this article,
we propose to use representation learning methods to automatically produce
embeddings of this textual content and of the conversational graphs depicting
message exchanges. While the latter could be enhanced by including additional
information on top of the raw conversational structure, no method currently
exists to learn whole-graph representations simultaneously using edge
directions, weights, signs, and vertex attributes. We propose two such methods
to fill this gap in the literature. We experiment with 5 textual and 13 graph
embedding methods, and apply them to a dataset of online messages annotated for
abuse detection. Our best results achieve an F-measure of 81.02 using text
alone and 80.61 using graphs alone. We also combine both modalities of
information (text and graphs) through three fusion strategies, and show that
this strongly improves abuse detection performance, increasing the F-measure
to 87.06. Finally, we identify which specific engineered features are captured
by the embedding methods under consideration. These features have clear
interpretations and help explain what information the representation learning
methods deem discriminative.
|
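Note: the abstract above combines text and graph embeddings through three fusion strategies. The sketch below shows the simplest of these, early fusion by concatenation followed by a linear classifier; the embedding dimensions, the classifier choice, and the random stand-in data are assumptions for illustration.

```python
# Early fusion of text and whole-graph embeddings for abuse detection (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 200
text_emb = rng.normal(size=(n, 64))    # stand-in for learned text embeddings
graph_emb = rng.normal(size=(n, 32))   # stand-in for learned whole-graph embeddings
labels = rng.integers(0, 2, size=n)    # 1 = abusive conversation

fused = np.concatenate([text_emb, graph_emb], axis=1)        # early fusion
clf = LogisticRegression(max_iter=1000).fit(fused[:150], labels[:150])
pred = clf.predict(fused[150:])
print("F-measure:", f1_score(labels[150:], pred))
```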
2503.13004 | Jiaxu Liu | Jiaxu Liu, Li Li, Hubert P. H. Shum, Toby P. Breckon | TFDM: Time-Variant Frequency-Based Point Cloud Diffusion with Mamba | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models currently demonstrate impressive performance over various
generative tasks. Recent work on image diffusion highlights the strong
capabilities of Mamba (state space models) due to its efficient handling of
long-range dependencies and sequential data modeling. Unfortunately, joint
consideration of state space models with 3D point cloud generation remains
limited. To harness the powerful capabilities of the Mamba model for 3D point
cloud generation, we propose a novel diffusion framework containing dual latent
Mamba block (DM-Block) and a time-variant frequency encoder (TF-Encoder). The
DM-Block apply a space-filling curve to reorder points into sequences suitable
for Mamba state-space modeling, while operating in a latent space to mitigate
the computational overhead that arises from direct 3D data processing.
Meanwhile, the TF-Encoder takes advantage of the ability of the diffusion model
to refine fine details in later recovery stages by prioritizing key points
within the U-Net architecture. This frequency-based mechanism ensures enhanced
detail quality in the final stages of generation. Experimental results on the
ShapeNet-v2 dataset demonstrate that our method achieves state-of-the-art
performance (ShapeNet-v2: 0.14\% on 1-NNA-Abs50 EMD and 57.90\% on COV EMD) on
certain metrics for specific categories while reducing computational parameters
and inference time by up to 10$\times$ and 9$\times$, respectively. Source code
is available in Supplementary Materials and will be released upon accpetance.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:00:14 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Liu",
"Jiaxu",
""
],
[
"Li",
"Li",
""
],
[
"Shum",
"Hubert P. H.",
""
],
[
"Breckon",
"Toby P.",
""
]
] | TITLE: TFDM: Time-Variant Frequency-Based Point Cloud Diffusion with Mamba
ABSTRACT: Diffusion models currently demonstrate impressive performance over various
generative tasks. Recent work on image diffusion highlights the strong
capabilities of Mamba (state space models) due to its efficient handling of
long-range dependencies and sequential data modeling. Unfortunately, joint
consideration of state space models with 3D point cloud generation remains
limited. To harness the powerful capabilities of the Mamba model for 3D point
cloud generation, we propose a novel diffusion framework containing dual latent
Mamba block (DM-Block) and a time-variant frequency encoder (TF-Encoder). The
DM-Block apply a space-filling curve to reorder points into sequences suitable
for Mamba state-space modeling, while operating in a latent space to mitigate
the computational overhead that arises from direct 3D data processing.
Meanwhile, the TF-Encoder takes advantage of the ability of the diffusion model
to refine fine details in later recovery stages by prioritizing key points
within the U-Net architecture. This frequency-based mechanism ensures enhanced
detail quality in the final stages of generation. Experimental results on the
ShapeNet-v2 dataset demonstrate that our method achieves state-of-the-art
performance (ShapeNet-v2: 0.14\% on 1-NNA-Abs50 EMD and 57.90\% on COV EMD) on
certain metrics for specific categories while reducing computational parameters
and inference time by up to 10$\times$ and 9$\times$, respectively. Source code
is available in Supplementary Materials and will be released upon accpetance.
|
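Note: the TFDM abstract above says points are reordered along a space-filling curve before Mamba processes them as a sequence, but does not name the curve. The sketch below assumes a Morton (Z-order) curve: quantise each point to a coarse grid, interleave the bits of its x/y/z indices, and sort by the resulting code.

```python
# Z-order (Morton) reordering of a point cloud into a 1D sequence (illustrative sketch).
import numpy as np

def morton_code(ix: int, iy: int, iz: int, bits: int = 10) -> int:
    """Interleave the low `bits` bits of three grid indices into one code."""
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (3 * b)
        code |= ((iy >> b) & 1) << (3 * b + 1)
        code |= ((iz >> b) & 1) << (3 * b + 2)
    return code

def zorder_sort(points: np.ndarray, bits: int = 10) -> np.ndarray:
    """points: (N, 3) float array -> the same points reordered along a Z-order curve."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    grid = ((points - lo) / np.maximum(hi - lo, 1e-9) * (2**bits - 1)).astype(np.int64)
    codes = np.array([morton_code(x, y, z, bits) for x, y, z in grid])
    return points[np.argsort(codes)]

pts = np.random.default_rng(0).random((2048, 3))
seq = zorder_sort(pts)   # spatially close points now tend to be adjacent in the sequence
```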
2503.13021 | Omri Suissa | Omri Suissa, Muhiim Ali, Ariana Azarbal, Hui Shen, Shekhar Pradhan | Dynamic Relation Inference via Verb Embeddings | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | CLIP has demonstrated exceptional image-text matching capabilities due to its
training on contrastive learning tasks. Past research has suggested that
whereas CLIP effectively matches text to images when the matching can be
achieved just by matching the text with the objects in the image, CLIP
struggles when the matching depends on representing the relationship among the
objects in the images (i.e., inferring relations). Previous attempts to address
this limitation by training CLIP on relation detection datasets with only
linguistic supervision have met with limited success. In this paper, we offer
insights and practical methods to advance the field of relation inference from
images. This paper approaches the task of creating a model that effectively
detects relations among the objects in images by producing text and image
embeddings that capture relationships through linguistic supervision. To this
end, we propose Dynamic Relation Inference via Verb Embeddings (DRIVE), which
augments the COCO dataset, fine-tunes CLIP with hard negative
subject-relation-object triples and corresponding images, and introduces a
novel loss function to improve relation detection. Evaluated on multiple
CLIP-based models, our method significantly improves zero-shot relation
inference accuracy in both frozen and fine-tuned settings, significantly
outperforming CLIP and state-of-the-art models while generalizing well on
unseen data.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:24:27 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Suissa",
"Omri",
""
],
[
"Ali",
"Muhiim",
""
],
[
"Azarbal",
"Ariana",
""
],
[
"Shen",
"Hui",
""
],
[
"Pradhan",
"Shekhar",
""
]
] | TITLE: Dynamic Relation Inference via Verb Embeddings
ABSTRACT: CLIP has demonstrated exceptional image-text matching capabilities due to its
training on contrastive learning tasks. Past research has suggested that
whereas CLIP effectively matches text to images when the matching can be
achieved just by matching the text with the objects in the image, CLIP
struggles when the matching depends on representing the relationship among the
objects in the images (i.e., inferring relations). Previous attempts to address
this limitation by training CLIP on relation detection datasets with only
linguistic supervision have met with limited success. In this paper, we offer
insights and practical methods to advance the field of relation inference from
images. This paper approaches the task of creating a model that effectively
detects relations among the objects in images by producing text and image
embeddings that capture relationships through linguistic supervision. To this
end, we propose Dynamic Relation Inference via Verb Embeddings (DRIVE), which
augments the COCO dataset, fine-tunes CLIP with hard negative
subject-relation-object triples and corresponding images, and introduces a
novel loss function to improve relation detection. Evaluated on multiple
CLIP-based models, our method significantly improves zero-shot relation
inference accuracy in both frozen and fine-tuned settings, significantly
outperforming CLIP and state-of-the-art models while generalizing well on
unseen data.
|
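Note: one simple way to build the hard-negative captions mentioned in the DRIVE abstract above is to swap subject and object in a subject-relation-object triple, so the negative caption contains the same words but describes the wrong relation direction. The caption template below, and the idea that swaps are the only negatives used, are assumptions for illustration.

```python
# Generating role-swapped hard negatives for relation-aware contrastive training (sketch).
from typing import List, Tuple

def caption(triple: Tuple[str, str, str]) -> str:
    s, r, o = triple
    return f"a photo of a {s} {r} a {o}"

def hard_negatives(triple: Tuple[str, str, str]) -> List[str]:
    s, r, o = triple
    return [caption((o, r, s))]        # swapped roles: the hardest lexical negative

pos = ("person", "riding", "horse")
print(caption(pos))                    # a photo of a person riding a horse
print(hard_negatives(pos))             # ['a photo of a horse riding a person']
# During fine-tuning, the image of the positive triple is pulled towards the positive
# caption's embedding and pushed away from the swapped caption's embedding.
```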
2503.13023 | Tomasz Kryjak | Michal Danilowicz and Tomasz Kryjak | Real-Time Multi-Object Tracking using YOLOv8 and SORT on a SoC FPGA | Accepted for the 21st International Symposium on Applied
Reconfigurable Computing ARC 2025, Sevilla, Spain, April 9-11, 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-object tracking (MOT) is one of the most important problems in computer
vision and a key component of any vision-based perception system used in
advanced autonomous mobile robotics. Therefore, its implementation on low-power
and real-time embedded platforms is highly desirable. Modern MOT algorithms
should be able to track objects of a given class (e.g. people or vehicles). In
addition, the number of objects to be tracked is not known in advance, and they
may appear and disappear at any time, as well as be obscured. For these
reasons, the most popular and successful approaches have recently been based on
the tracking paradigm. Therefore, the presence of a high quality object
detector is essential, which in practice accounts for the vast majority of the
computational and memory complexity of the whole MOT system. In this paper, we
propose an FPGA (Field-Programmable Gate Array) implementation of an embedded
MOT system based on a quantized YOLOv8 detector and the SORT (Simple Online
Realtime Tracker) tracker. We use a modified version of the FINN framework to
utilize external memory for model parameters and to support operations
required by YOLOv8. We discuss the evaluation of detection and
tracking performance using the COCO and MOT15 datasets, where we achieve 0.21
mAP and 38.9 MOTA respectively. As the computational platform, we use an MPSoC
system (Zynq UltraScale+ device from AMD/Xilinx) where the detector is deployed
in reprogrammable logic and the tracking algorithm is implemented in the
processor system.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:25:33 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Danilowicz",
"Michal",
""
],
[
"Kryjak",
"Tomasz",
""
]
] | TITLE: Real-Time Multi-Object Tracking using YOLOv8 and SORT on a SoC FPGA
ABSTRACT: Multi-object tracking (MOT) is one of the most important problems in computer
vision and a key component of any vision-based perception system used in
advanced autonomous mobile robotics. Therefore, its implementation on low-power
and real-time embedded platforms is highly desirable. Modern MOT algorithms
should be able to track objects of a given class (e.g. people or vehicles). In
addition, the number of objects to be tracked is not known in advance, and they
may appear and disappear at any time, as well as be obscured. For these
reasons, the most popular and successful approaches have recently been based on
the tracking paradigm. Therefore, the presence of a high quality object
detector is essential, which in practice accounts for the vast majority of the
computational and memory complexity of the whole MOT system. In this paper, we
propose an FPGA (Field-Programmable Gate Array) implementation of an embedded
MOT system based on a quantized YOLOv8 detector and the SORT (Simple Online
Realtime Tracker) tracker. We use a modified version of the FINN framework to
utilize external memory for model parameters and to support operations
required by YOLOv8. We discuss the evaluation of detection and
tracking performance using the COCO and MOT15 datasets, where we achieve 0.21
mAP and 38.9 MOTA respectively. As the computational platform, we use an MPSoC
system (Zynq UltraScale+ device from AMD/Xilinx) where the detector is deployed
in reprogrammable logic and the tracking algorithm is implemented in the
processor system.
|
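Note: a compact sketch of the data-association step at the heart of the SORT tracker used above: compute an IoU matrix between the tracks' Kalman-predicted boxes and the current detections, then solve a one-to-one assignment with the Hungarian algorithm. The IoU gate of 0.3 is the value commonly used by SORT and is assumed here; the Kalman prediction itself is omitted.

```python
# IoU-based Hungarian association between predicted track boxes and detections.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Boxes as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_boxes, detections, iou_threshold=0.3):
    if len(predicted_boxes) == 0 or len(detections) == 0:
        return []
    cost = np.array([[1.0 - iou(p, d) for d in detections] for p in predicted_boxes])
    rows, cols = linear_sum_assignment(cost)           # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_threshold]

tracks = [[0, 0, 10, 10], [50, 50, 60, 60]]
dets = [[52, 48, 61, 59], [1, 1, 11, 11]]
print(associate(tracks, dets))                         # [(0, 1), (1, 0)]
```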
2503.13025 | Changhee Yang | ChangHee Yang, Hyeonseop Song, Seokhun Choi, Seungwoo Lee, Jaechul
Kim, Hoseok Do | PoseSyn: Synthesizing Diverse 3D Pose Data from In-the-Wild 2D Data | The first three authors contributed equally to this work | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite considerable efforts to enhance the generalization of 3D pose
estimators without costly 3D annotations, existing data augmentation methods
struggle in real world scenarios with diverse human appearances and complex
poses. We propose PoseSyn, a novel data synthesis framework that transforms
abundant in the wild 2D pose dataset into diverse 3D pose image pairs. PoseSyn
comprises two key components: Error Extraction Module (EEM), which identifies
challenging poses from the 2D pose datasets, and Motion Synthesis Module (MSM),
which synthesizes motion sequences around the challenging poses. Then, by
generating realistic 3D training data via a human animation model aligned with
challenging poses and appearances, PoseSyn boosts the accuracy of various 3D
pose estimators by up to 14% across real world benchmarks including various
backgrounds and occlusions, challenging poses, and multi view scenarios.
Extensive experiments further confirm that PoseSyn is a scalable and effective
approach for improving generalization without relying on expensive 3D
annotations, regardless of the pose estimator's model size or design.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:28:35 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Yang",
"ChangHee",
""
],
[
"Song",
"Hyeonseop",
""
],
[
"Choi",
"Seokhun",
""
],
[
"Lee",
"Seungwoo",
""
],
[
"Kim",
"Jaechul",
""
],
[
"Do",
"Hoseok",
""
]
] | TITLE: PoseSyn: Synthesizing Diverse 3D Pose Data from In-the-Wild 2D Data
ABSTRACT: Despite considerable efforts to enhance the generalization of 3D pose
estimators without costly 3D annotations, existing data augmentation methods
struggle in real world scenarios with diverse human appearances and complex
poses. We propose PoseSyn, a novel data synthesis framework that transforms
abundant in-the-wild 2D pose datasets into diverse 3D pose-image pairs. PoseSyn
comprises two key components: Error Extraction Module (EEM), which identifies
challenging poses from the 2D pose datasets, and Motion Synthesis Module (MSM),
which synthesizes motion sequences around the challenging poses. Then, by
generating realistic 3D training data via a human animation model aligned with
challenging poses and appearances, PoseSyn boosts the accuracy of various 3D
pose estimators by up to 14% across real world benchmarks including various
backgrounds and occlusions, challenging poses, and multi view scenarios.
Extensive experiments further confirm that PoseSyn is a scalable and effective
approach for improving generalization without relying on expensive 3D
annotations, regardless of the pose estimator's model size or design.
|
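Note: the Error Extraction Module above selects challenging poses from 2D pose data, but the precise criterion is not given in the abstract. The sketch below assumes the simplest version: rank samples by the current estimator's mean per-joint 2D error and keep the hardest fraction.

```python
# Mining "challenging" poses by per-sample 2D joint error (illustrative assumption).
import numpy as np

def mine_challenging_poses(pred_2d: np.ndarray, gt_2d: np.ndarray, top_fraction=0.1):
    """pred_2d, gt_2d: (N, J, 2) predicted and annotated 2D joints for N samples."""
    per_sample_error = np.linalg.norm(pred_2d - gt_2d, axis=-1).mean(axis=-1)  # (N,)
    k = max(1, int(top_fraction * len(per_sample_error)))
    return np.argsort(per_sample_error)[-k:]          # indices of the hardest samples

rng = np.random.default_rng(0)
gt = rng.random((1000, 17, 2))
pred = gt + rng.normal(scale=0.05, size=gt.shape)
hard_ids = mine_challenging_poses(pred, gt)
# These hard samples would then seed the motion synthesis stage, which renders
# 3D training pairs around them.
```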
2503.13045 | Gabriele Berton | Gabriele Berton, Kevin Musgrave, Carlo Masone | All You Need to Know About Training Image Retrieval Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image retrieval is the task of finding images in a database that are most
similar to a given query image. The performance of an image retrieval pipeline
depends on many training-time factors, including the embedding model
architecture, loss function, data sampler, mining function, learning rate(s),
and batch size. In this work, we run tens of thousands of training runs to
understand the effect each of these factors has on retrieval accuracy. We also
discover best practices that hold across multiple datasets. The code is
available at https://github.com/gmberton/image-retrieval
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:50:34 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Berton",
"Gabriele",
""
],
[
"Musgrave",
"Kevin",
""
],
[
"Masone",
"Carlo",
""
]
] | TITLE: All You Need to Know About Training Image Retrieval Models
ABSTRACT: Image retrieval is the task of finding images in a database that are most
similar to a given query image. The performance of an image retrieval pipeline
depends on many training-time factors, including the embedding model
architecture, loss function, data sampler, mining function, learning rate(s),
and batch size. In this work, we run tens of thousands of training runs to
understand the effect each of these factors has on retrieval accuracy. We also
discover best practices that hold across multiple datasets. The code is
available at https://github.com/gmberton/image-retrieval
|
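Note: as a small, self-contained companion to the retrieval experiments described above, the sketch below shows how retrieval accuracy is typically measured: embed queries and database images, take cosine nearest neighbours, and report recall@k. The embeddings here are random stand-ins.

```python
# Recall@k evaluation for an image-retrieval model (sketch with random embeddings).
import numpy as np

def recall_at_k(query_emb, db_emb, db_labels, query_labels, k=1):
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = db_emb / np.linalg.norm(db_emb, axis=1, keepdims=True)
    sims = q @ d.T                                   # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]          # k nearest database items per query
    hits = [(db_labels[idx] == ql).any() for idx, ql in zip(topk, query_labels)]
    return float(np.mean(hits))

rng = np.random.default_rng(0)
db_emb, q_emb = rng.normal(size=(500, 128)), rng.normal(size=(50, 128))
db_labels, q_labels = rng.integers(0, 20, 500), rng.integers(0, 20, 50)
print("recall@1:", recall_at_k(q_emb, db_emb, db_labels, q_labels, k=1))
```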
2503.13051 | Peter Eisert | Kai Uwe Barthel, Florian Barthel, Peter Eisert | Permutation Learning with Only N Parameters: From SoftSort to
Self-Organizing Gaussians | null | null | null | null | cs.LG cs.CV stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sorting and permutation learning are key concepts in optimization and machine
learning, especially when organizing high-dimensional data into meaningful
spatial layouts. The Gumbel-Sinkhorn method, while effective, requires N*N
parameters to determine a full permutation matrix, making it computationally
expensive for large datasets. Low-rank matrix factorization approximations
reduce memory requirements to 2MN (with M << N), but they still struggle with
very large problems. SoftSort, by providing a continuous relaxation of the
argsort operator, allows differentiable 1D sorting, but it faces challenges
with multidimensional data and complex permutations. In this paper, we present
a novel method for learning permutations using only N parameters, which
dramatically reduces storage costs. Our approach builds on SoftSort, but
extends it by iteratively shuffling the N indices of the elements to be sorted
through a separable learning process. This modification significantly improves
sorting quality, especially for multidimensional data and complex optimization
criteria, and outperforms pure SoftSort. Our method offers improved memory
efficiency and scalability compared to existing approaches, while maintaining
high-quality permutation learning. Its dramatically reduced memory requirements
make it particularly well-suited for large-scale optimization tasks, such as
"Self-Organizing Gaussians", where efficient and scalable permutation learning
is critical.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 10:55:55 GMT"
}
] | 2025-03-18T00:00:00 | [
[
"Barthel",
"Kai Uwe",
""
],
[
"Barthel",
"Florian",
""
],
[
"Eisert",
"Peter",
""
]
] | TITLE: Permutation Learning with Only N Parameters: From SoftSort to
Self-Organizing Gaussians
ABSTRACT: Sorting and permutation learning are key concepts in optimization and machine
learning, especially when organizing high-dimensional data into meaningful
spatial layouts. The Gumbel-Sinkhorn method, while effective, requires N*N
parameters to determine a full permutation matrix, making it computationally
expensive for large datasets. Low-rank matrix factorization approximations
reduce memory requirements to 2MN (with M << N), but they still struggle with
very large problems. SoftSort, by providing a continuous relaxation of the
argsort operator, allows differentiable 1D sorting, but it faces challenges
with multidimensional data and complex permutations. In this paper, we present
a novel method for learning permutations using only N parameters, which
dramatically reduces storage costs. Our approach builds on SoftSort, but
extends it by iteratively shuffling the N indices of the elements to be sorted
through a separable learning process. This modification significantly improves
sorting quality, especially for multidimensional data and complex optimization
criteria, and outperforms pure SoftSort. Our method offers improved memory
efficiency and scalability compared to existing approaches, while maintaining
high-quality permutation learning. Its dramatically reduced memory requirements
make it particularly well-suited for large-scale optimization tasks, such as
"Self-Organizing Gaussians", where efficient and scalable permutation learning
is critical.
|
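Note: for reference, a compact NumPy version of the SoftSort relaxation that the permutation-learning method above builds on: each row of the returned matrix is a softmax over how close an input value is to the k-th largest value, giving a differentiable stand-in for the permutation matrix. The temperature value is an arbitrary choice for illustration.

```python
# SoftSort: a continuous relaxation of the sorting permutation (reference sketch).
import numpy as np

def softsort(scores: np.ndarray, tau: float = 0.1) -> np.ndarray:
    """scores: (N,) -> (N, N) row-stochastic relaxation of the sorting permutation."""
    sorted_desc = np.sort(scores)[::-1]                        # target order (descending)
    logits = -np.abs(sorted_desc[:, None] - scores[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)                # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

s = np.array([0.3, 2.0, -1.0, 0.7])
P = softsort(s, tau=0.01)
print(P.argmax(axis=1))        # [1 3 0 2]: indices of s in descending order
print(P @ s)                   # ~[2.0, 0.7, 0.3, -1.0]: softly sorted values
```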