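The rows below carry the standard arXiv metadata fields plus a derived `prompt` column that simply concatenates each paper's title and abstract with `TITLE:` and `ABSTRACT:` markers. As a minimal sketch of how one record maps onto these fields, the following Python snippet parses records and rebuilds the prompt; the file name `arxiv_metadata.jsonl` and the JSON Lines storage format are assumptions for illustration, not part of this listing.

```python
import json

def build_prompt(record: dict) -> str:
    # The "prompt" column visible in the rows below is just the title and
    # abstract joined with TITLE:/ABSTRACT: markers.
    return f"TITLE: {record['title'].strip()}\nABSTRACT: {record['abstract'].strip()}"

# Hypothetical file: one JSON object per line, with the fields named in the header row.
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # authors_parsed is a sequence of [last name, first name, suffix, ...] entries.
        last, first = record["authors_parsed"][0][:2]
        prompt = record.get("prompt") or build_prompt(record)
        print(record["id"], f"{first} {last}", record["update_date"], sep=" | ")
```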
id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.11070 | Nuo Xu | Kelu Yao, Nuo Xu, Rong Yang, Yingying Xu, Zhuoyan Gao, Titinunt
Kitrungrotsakul, Yi Ren, Pu Zhang, Jin Wang, Ning Wei, Chao Li | Falcon: A Remote Sensing Vision-Language Foundation Model | Under Review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a holistic vision-language foundation model tailored
for remote sensing, named Falcon. Falcon offers a unified, prompt-based
paradigm that effectively executes comprehensive and complex remote sensing
tasks. Falcon demonstrates powerful understanding and reasoning abilities at
the image, region, and pixel levels. Specifically, given simple natural
language instructions and remote sensing images, Falcon can produce impressive
results in text form across 14 distinct tasks, e.g., image classification,
object detection, segmentation, and image captioning. To facilitate
Falcon's training and empower its representation capacity to encode rich
spatial and semantic information, we developed Falcon_SFT, a large-scale,
multi-task, instruction-tuning dataset in the field of remote sensing. The
Falcon_SFT dataset consists of approximately 78 million high-quality data
samples, covering 5.6 million multi-spatial resolution and multi-view remote
sensing images with diverse instructions. It features hierarchical annotations
and undergoes manual sampling verification to ensure high data quality and
reliability. Extensive comparative experiments are conducted, which verify that
Falcon achieves remarkable performance over 67 datasets and 14 tasks, despite
having only 0.7B parameters. We release the complete dataset, code, and model
weights at https://github.com/TianHuiLab/Falcon, hoping to help further develop
the open-source community.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:27:01 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yao",
"Kelu",
""
],
[
"Xu",
"Nuo",
""
],
[
"Yang",
"Rong",
""
],
[
"Xu",
"Yingying",
""
],
[
"Gao",
"Zhuoyan",
""
],
[
"Kitrungrotsakul",
"Titinunt",
""
],
[
"Ren",
"Yi",
""
],
[
"Zhang",
"Pu",
""
],
[
"Wang",
"Jin",
""
],
[
"Wei",
"Ning",
""
],
[
"Li",
"Chao",
""
]
] | TITLE: Falcon: A Remote Sensing Vision-Language Foundation Model
ABSTRACT: This paper introduces a holistic vision-language foundation model tailored
for remote sensing, named Falcon. Falcon offers a unified, prompt-based
paradigm that effectively executes comprehensive and complex remote sensing
tasks. Falcon demonstrates powerful understanding and reasoning abilities at
the image, region, and pixel levels. Specifically, given simple natural
language instructions and remote sensing images, Falcon can produce impressive
results in text form across 14 distinct tasks, e.g., image classification,
object detection, segmentation, and image captioning. To facilitate
Falcon's training and empower its representation capacity to encode rich
spatial and semantic information, we developed Falcon_SFT, a large-scale,
multi-task, instruction-tuning dataset in the field of remote sensing. The
Falcon_SFT dataset consists of approximately 78 million high-quality data
samples, covering 5.6 million multi-spatial resolution and multi-view remote
sensing images with diverse instructions. It features hierarchical annotations
and undergoes manual sampling verification to ensure high data quality and
reliability. Extensive comparative experiments are conducted, which verify that
Falcon achieves remarkable performance over 67 datasets and 14 tasks, despite
having only 0.7B parameters. We release the complete dataset, code, and model
weights at https://github.com/TianHuiLab/Falcon, hoping to help further develop
the open-source community.
|
2503.11080 | Renren Jin | Wuwei Huang, Renren Jin, Wen Zhang, Jian Luan, Bin Wang, Deyi Xiong | Joint Training And Decoding for Multilingual End-to-End Simultaneous
Speech Translation | ICASSP 2023 | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies on end-to-end speech translation (ST) have facilitated the
exploration of multilingual end-to-end ST and end-to-end simultaneous ST. In
this paper, we investigate end-to-end simultaneous speech translation in a
one-to-many multilingual setting which is closer to applications in real
scenarios. We explore a separate decoder architecture and a unified
architecture for joint synchronous training in this scenario. To further
explore knowledge transfer across languages, we propose an asynchronous
training strategy on the proposed unified decoder architecture. A multi-way
aligned multilingual end-to-end ST dataset was curated as a benchmark testbed
to evaluate our methods. Experimental results demonstrate the effectiveness of
our models on the collected dataset. Our codes and data are available at:
https://github.com/XiaoMi/TED-MMST.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:45:46 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Huang",
"Wuwei",
""
],
[
"Jin",
"Renren",
""
],
[
"Zhang",
"Wen",
""
],
[
"Luan",
"Jian",
""
],
[
"Wang",
"Bin",
""
],
[
"Xiong",
"Deyi",
""
]
] | TITLE: Joint Training And Decoding for Multilingual End-to-End Simultaneous
Speech Translation
ABSTRACT: Recent studies on end-to-end speech translation (ST) have facilitated the
exploration of multilingual end-to-end ST and end-to-end simultaneous ST. In
this paper, we investigate end-to-end simultaneous speech translation in a
one-to-many multilingual setting which is closer to applications in real
scenarios. We explore a separate decoder architecture and a unified
architecture for joint synchronous training in this scenario. To further
explore knowledge transfer across languages, we propose an asynchronous
training strategy on the proposed unified decoder architecture. A multi-way
aligned multilingual end-to-end ST dataset was curated as a benchmark testbed
to evaluate our methods. Experimental results demonstrate the effectiveness of
our models on the collected dataset. Our codes and data are available at:
https://github.com/XiaoMi/TED-MMST.
|
2503.11081 | Pingrui Zhang | Pingrui Zhang, Xianqiang Gao, Yuhan Wu, Kehui Liu, Dong Wang, Zhigang
Wang, Bin Zhao, Yan Ding, Xuelong Li | MoMa-Kitchen: A 100K+ Benchmark for Affordance-Grounded Last-Mile
Navigation in Mobile Manipulation | null | null | null | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In mobile manipulation, navigation and manipulation are often treated as
separate problems, resulting in a significant gap between merely approaching an
object and engaging with it effectively. Many navigation approaches primarily
define success by proximity to the target, often overlooking the necessity for
optimal positioning that facilitates subsequent manipulation. To address this,
we introduce MoMa-Kitchen, a benchmark dataset comprising over 100k samples
that provide training data for models to learn optimal final navigation
positions for seamless transition to manipulation. Our dataset includes
affordance-grounded floor labels collected from diverse kitchen environments,
in which robotic mobile manipulators of different models attempt to grasp
target objects amidst clutter. Using a fully automated pipeline, we simulate
diverse real-world scenarios and generate affordance labels for optimal
manipulation positions. Visual data are collected from RGB-D inputs captured by
a first-person view camera mounted on the robotic arm, ensuring consistency in
viewpoint during data collection. We also develop a lightweight baseline model,
NavAff, for navigation affordance grounding that demonstrates promising
performance on the MoMa-Kitchen benchmark. Our approach enables models to learn
affordance-based final positioning that accommodates different arm types and
platform heights, thereby paving the way for more robust and generalizable
integration of navigation and manipulation in embodied AI. Project page:
\href{https://momakitchen.github.io/}{https://momakitchen.github.io/}.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:47:38 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Pingrui",
""
],
[
"Gao",
"Xianqiang",
""
],
[
"Wu",
"Yuhan",
""
],
[
"Liu",
"Kehui",
""
],
[
"Wang",
"Dong",
""
],
[
"Wang",
"Zhigang",
""
],
[
"Zhao",
"Bin",
""
],
[
"Ding",
"Yan",
""
],
[
"Li",
"Xuelong",
""
]
] | TITLE: MoMa-Kitchen: A 100K+ Benchmark for Affordance-Grounded Last-Mile
Navigation in Mobile Manipulation
ABSTRACT: In mobile manipulation, navigation and manipulation are often treated as
separate problems, resulting in a significant gap between merely approaching an
object and engaging with it effectively. Many navigation approaches primarily
define success by proximity to the target, often overlooking the necessity for
optimal positioning that facilitates subsequent manipulation. To address this,
we introduce MoMa-Kitchen, a benchmark dataset comprising over 100k samples
that provide training data for models to learn optimal final navigation
positions for seamless transition to manipulation. Our dataset includes
affordance-grounded floor labels collected from diverse kitchen environments,
in which robotic mobile manipulators of different models attempt to grasp
target objects amidst clutter. Using a fully automated pipeline, we simulate
diverse real-world scenarios and generate affordance labels for optimal
manipulation positions. Visual data are collected from RGB-D inputs captured by
a first-person view camera mounted on the robotic arm, ensuring consistency in
viewpoint during data collection. We also develop a lightweight baseline model,
NavAff, for navigation affordance grounding that demonstrates promising
performance on the MoMa-Kitchen benchmark. Our approach enables models to learn
affordance-based final positioning that accommodates different arm types and
platform heights, thereby paving the way for more robust and generalizable
integration of navigation and manipulation in embodied AI. Project page:
\href{https://momakitchen.github.io/}{https://momakitchen.github.io/}.
|
2503.11082 | Liwei Guo | Liwei Guo, Sixiang Ye, Zeyu Sun, Xiang Chen, Yuxia Zhang, Bo Wang, Jie
M. Zhang, Zheng Li and Yong Liu | LLMs are Bug Replicators: An Empirical Study on LLMs' Capability in
Completing Bug-prone Code | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated remarkable performance in code
completion. However, the training data used to develop these models often
contain a significant amount of buggy code. Yet, it remains unclear to what
extent these buggy instances influence LLMs' performance when tackling
bug-prone code completion tasks. To fill this gap, this paper presents the
first empirical study evaluating the performance of LLMs in completing
bug-prone code. Through extensive experiments on 7 LLMs and the Defects4J
dataset, we analyze LLMs' accuracy, robustness, and limitations in this
challenging context. Our experimental results show that completing bug-prone
code is significantly more challenging for LLMs than completing normal code.
Notably, in bug-prone tasks, the likelihood of LLMs generating correct code is
nearly the same as generating buggy code, and it is substantially lower than in
normal code completion tasks (e.g., 12.27% vs. 29.85% for GPT-4). To our
surprise, 44.44% of the bugs LLMs make are completely identical to the pre-fix
version, indicating that LLMs have been seriously biased by historical bugs
when completing code. Additionally, we investigate the effectiveness of
existing post-processing techniques and find that while they can improve
consistency, they do not significantly reduce error rates in bug-prone code
scenarios. Our research highlights the limitations of current LLMs in handling
bug-prone code and underscores the need for improved models and post-processing
strategies to enhance code completion accuracy in real-world development
environments.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:48:38 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Guo",
"Liwei",
""
],
[
"Ye",
"Sixiang",
""
],
[
"Sun",
"Zeyu",
""
],
[
"Chen",
"Xiang",
""
],
[
"Zhang",
"Yuxia",
""
],
[
"Wang",
"Bo",
""
],
[
"Zhang",
"Jie M.",
""
],
[
"Li",
"Zheng",
""
],
[
"Liu",
"Yong",
""
]
] | TITLE: LLMs are Bug Replicators: An Empirical Study on LLMs' Capability in
Completing Bug-prone Code
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable performance in code
completion. However, the training data used to develop these models often
contain a significant amount of buggy code. Yet, it remains unclear to what
extent these buggy instances influence LLMs' performance when tackling
bug-prone code completion tasks. To fill this gap, this paper presents the
first empirical study evaluating the performance of LLMs in completing
bug-prone code. Through extensive experiments on 7 LLMs and the Defects4J
dataset, we analyze LLMs' accuracy, robustness, and limitations in this
challenging context. Our experimental results show that completing bug-prone
code is significantly more challenging for LLMs than completing normal code.
Notably, in bug-prone tasks, the likelihood of LLMs generating correct code is
nearly the same as generating buggy code, and it is substantially lower than in
normal code completion tasks (e.g., 12.27% vs. 29.85% for GPT-4). To our
surprise, 44.44% of the bugs LLMs make are completely identical to the pre-fix
version, indicating that LLMs have been seriously biased by historical bugs
when completing code. Additionally, we investigate the effectiveness of
existing post-processing techniques and find that while they can improve
consistency, they do not significantly reduce error rates in bug-prone code
scenarios. Our research highlights the limitations of current LLMs in handling
bug-prone code and underscores the need for improved models and post-processing
strategies to enhance code completion accuracy in real-world development
environments.
|
2503.11084 | Zhen Qi | Zhou Fang, Hanlu Zhang, Jacky He, Zhen Qi, Hongye Zheng | Semantic and Contextual Modeling for Malicious Comment Detection with
BERT-BiLSTM | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aims to develop an efficient and accurate model for detecting
malicious comments, addressing the increasingly severe issue of false and
harmful content on social media platforms. We propose a deep learning model
that combines BERT and BiLSTM. The BERT model, through pre-training, captures
deep semantic features of text, while the BiLSTM network excels at processing
sequential data and can further model the contextual dependencies of text.
Experimental results on the Jigsaw Unintended Bias in Toxicity Classification
dataset demonstrate that the BERT+BiLSTM model achieves superior performance in
malicious comment detection tasks, with a precision of 0.94, recall of 0.93,
and accuracy of 0.94. This surpasses other models, including standalone BERT,
TextCNN, TextRNN, and traditional machine learning algorithms using TF-IDF
features. These results confirm the superiority of the BERT+BiLSTM model in
handling imbalanced data and capturing deep semantic features of malicious
comments, providing an effective technical means for social media content
moderation and online environment purification.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 04:51:36 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Fang",
"Zhou",
""
],
[
"Zhang",
"Hanlu",
""
],
[
"He",
"Jacky",
""
],
[
"Qi",
"Zhen",
""
],
[
"Zheng",
"Hongye",
""
]
] | TITLE: Semantic and Contextual Modeling for Malicious Comment Detection with
BERT-BiLSTM
ABSTRACT: This study aims to develop an efficient and accurate model for detecting
malicious comments, addressing the increasingly severe issue of false and
harmful content on social media platforms. We propose a deep learning model
that combines BERT and BiLSTM. The BERT model, through pre-training, captures
deep semantic features of text, while the BiLSTM network excels at processing
sequential data and can further model the contextual dependencies of text.
Experimental results on the Jigsaw Unintended Bias in Toxicity Classification
dataset demonstrate that the BERT+BiLSTM model achieves superior performance in
malicious comment detection tasks, with a precision of 0.94, recall of 0.93,
and accuracy of 0.94. This surpasses other models, including standalone BERT,
TextCNN, TextRNN, and traditional machine learning algorithms using TF-IDF
features. These results confirm the superiority of the BERT+BiLSTM model in
handling imbalanced data and capturing deep semantic features of malicious
comments, providing an effective technical means for social media content
moderation and online environment purification.
|
2503.11088 | Yifan Liu | Yifan Liu, Xun Xu, Shijie Li, Jingyi Liao, Xulei Yang | Multi-View Industrial Anomaly Detection with Epipolar Constrained
Cross-View Fusion | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-camera systems provide richer contextual information for industrial
anomaly detection. However, traditional methods process each view
independently, disregarding the complementary information across viewpoints.
Existing multi-view anomaly detection approaches typically employ data-driven
cross-view attention for feature fusion but fail to leverage the unique
geometric properties of multi-camera setups. In this work, we introduce an
epipolar geometry-constrained attention module to guide cross-view fusion,
ensuring more effective information aggregation. To further enhance the
potential of cross-view attention, we propose a pretraining strategy inspired
by memory bank-based anomaly detection. This approach encourages normal feature
representations to form multiple local clusters and incorporates multi-view
aware negative sample synthesis to regularize pretraining. We demonstrate that
our epipolar guided multi-view anomaly detection framework outperforms existing
methods on the state-of-the-art multi-view anomaly detection dataset.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 05:02:54 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Yifan",
""
],
[
"Xu",
"Xun",
""
],
[
"Li",
"Shijie",
""
],
[
"Liao",
"Jingyi",
""
],
[
"Yang",
"Xulei",
""
]
] | TITLE: Multi-View Industrial Anomaly Detection with Epipolar Constrained
Cross-View Fusion
ABSTRACT: Multi-camera systems provide richer contextual information for industrial
anomaly detection. However, traditional methods process each view
independently, disregarding the complementary information across viewpoints.
Existing multi-view anomaly detection approaches typically employ data-driven
cross-view attention for feature fusion but fail to leverage the unique
geometric properties of multi-camera setups. In this work, we introduce an
epipolar geometry-constrained attention module to guide cross-view fusion,
ensuring more effective information aggregation. To further enhance the
potential of cross-view attention, we propose a pretraining strategy inspired
by memory bank-based anomaly detection. This approach encourages normal feature
representations to form multiple local clusters and incorporate multi-view
aware negative sample synthesis to regularize pretraining. We demonstrate that
our epipolar guided multi-view anomaly detection framework outperforms existing
methods on the state-of-the-art multi-view anomaly detection dataset.
|
2503.11089 | Qiang Zhang | Yi Zhang, Qiang Zhang, Xiaozhu Ju, Zhaoyang Liu, Jilei Mao, Jingkai
Sun, Jintao Wu, Shixiong Gao, Shihan Cai, Zhiyuan Qin, Linkai Liang, Jiaxu
Wang, Yiqun Duan, Jiahang Cao, Renjing Xu, Jian Tang | EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for
Visual Spatial Tasks | technical report | null | null | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While multimodal large language models (MLLMs) have made groundbreaking
progress in embodied intelligence, they still face significant challenges in
spatial reasoning for complex long-horizon tasks. To address this gap, we
propose EmbodiedVSR (Embodied Visual Spatial Reasoning), a novel framework that
integrates dynamic scene graph-guided Chain-of-Thought (CoT) reasoning to
enhance spatial understanding for embodied agents. By explicitly constructing
structured knowledge representations through dynamic scene graphs, our method
enables zero-shot spatial reasoning without task-specific fine-tuning. This
approach not only disentangles intricate spatial relationships but also aligns
reasoning steps with actionable environmental dynamics. To rigorously evaluate
performance, we introduce the eSpatial-Benchmark, a comprehensive dataset
including real-world embodied scenarios with fine-grained spatial annotations
and adaptive task difficulty levels. Experiments demonstrate that our framework
significantly outperforms existing MLLM-based methods in accuracy and reasoning
coherence, particularly in long-horizon tasks requiring iterative environment
interaction. The results reveal the untapped potential of MLLMs for embodied
intelligence when equipped with structured, explainable reasoning mechanisms,
paving the way for more reliable deployment in real-world spatial applications.
The codes and datasets will be released soon.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 05:06:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Yi",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Ju",
"Xiaozhu",
""
],
[
"Liu",
"Zhaoyang",
""
],
[
"Mao",
"Jilei",
""
],
[
"Sun",
"Jingkai",
""
],
[
"Wu",
"Jintao",
""
],
[
"Gao",
"Shixiong",
""
],
[
"Cai",
"Shihan",
""
],
[
"Qin",
"Zhiyuan",
""
],
[
"Liang",
"Linkai",
""
],
[
"Wang",
"Jiaxu",
""
],
[
"Duan",
"Yiqun",
""
],
[
"Cao",
"Jiahang",
""
],
[
"Xu",
"Renjing",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: EmbodiedVSR: Dynamic Scene Graph-Guided Chain-of-Thought Reasoning for
Visual Spatial Tasks
ABSTRACT: While multimodal large language models (MLLMs) have made groundbreaking
progress in embodied intelligence, they still face significant challenges in
spatial reasoning for complex long-horizon tasks. To address this gap, we
propose EmbodiedVSR (Embodied Visual Spatial Reasoning), a novel framework that
integrates dynamic scene graph-guided Chain-of-Thought (CoT) reasoning to
enhance spatial understanding for embodied agents. By explicitly constructing
structured knowledge representations through dynamic scene graphs, our method
enables zero-shot spatial reasoning without task-specific fine-tuning. This
approach not only disentangles intricate spatial relationships but also aligns
reasoning steps with actionable environmental dynamics. To rigorously evaluate
performance, we introduce the eSpatial-Benchmark, a comprehensive dataset
including real-world embodied scenarios with fine-grained spatial annotations
and adaptive task difficulty levels. Experiments demonstrate that our framework
significantly outperforms existing MLLM-based methods in accuracy and reasoning
coherence, particularly in long-horizon tasks requiring iterative environment
interaction. The results reveal the untapped potential of MLLMs for embodied
intelligence when equipped with structured, explainable reasoning mechanisms,
paving the way for more reliable deployment in real-world spatial applications.
The codes and datasets will be released soon.
|
2503.11093 | Yuan Liu | Yuan Liu, Saihui Hou, Saijie Hou, Jiabao Du, Shibei Meng, Yongzhen
Huang | OmniDiff: A Comprehensive Benchmark for Fine-grained Image Difference
Captioning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image Difference Captioning (IDC) aims to generate natural language
descriptions of subtle differences between image pairs, requiring both precise
visual change localization and coherent semantic expression. Despite recent
advancements, existing datasets often lack breadth and depth, limiting their
applicability in complex and dynamic environments: (1) from a breadth
perspective, current datasets are constrained to limited variations of objects
in specific scenes, and (2) from a depth perspective, prior benchmarks often
provide overly simplistic descriptions. To address these challenges, we
introduce OmniDiff, a comprehensive dataset comprising 324 diverse
scenarios, spanning real-world complex environments and 3D synthetic
settings, with fine-grained human annotations averaging 60 words in length and
covering 12 distinct change types. Building on this foundation, we propose
M$^3$Diff, a MultiModal large language model enhanced by a plug-and-play
Multi-scale Differential Perception (MDP) module. This module improves the
model's ability to accurately identify and describe inter-image differences
while maintaining the foundational model's generalization capabilities. With
the addition of the OmniDiff dataset, M$^3$Diff achieves state-of-the-art
performance across multiple benchmarks, including Spot-the-Diff, IEdit,
CLEVR-Change, CLEVR-DC, and OmniDiff, demonstrating significant improvements in
cross-scenario difference recognition accuracy compared to existing methods.
The dataset, code, and models will be made publicly available to support
further research.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 05:34:16 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Yuan",
""
],
[
"Hou",
"Saihui",
""
],
[
"Hou",
"Saijie",
""
],
[
"Du",
"Jiabao",
""
],
[
"Meng",
"Shibei",
""
],
[
"Huang",
"Yongzhen",
""
]
] | TITLE: OmniDiff: A Comprehensive Benchmark for Fine-grained Image Difference
Captioning
ABSTRACT: Image Difference Captioning (IDC) aims to generate natural language
descriptions of subtle differences between image pairs, requiring both precise
visual change localization and coherent semantic expression. Despite recent
advancements, existing datasets often lack breadth and depth, limiting their
applicability in complex and dynamic environments: (1) from a breadth
perspective, current datasets are constrained to limited variations of objects
in specific scenes, and (2) from a depth perspective, prior benchmarks often
provide overly simplistic descriptions. To address these challenges, we
introduce OmniDiff, a comprehensive dataset comprising 324 diverse
scenarios, spanning real-world complex environments and 3D synthetic
settings, with fine-grained human annotations averaging 60 words in length and
covering 12 distinct change types. Building on this foundation, we propose
M$^3$Diff, a MultiModal large language model enhanced by a plug-and-play
Multi-scale Differential Perception (MDP) module. This module improves the
model's ability to accurately identify and describe inter-image differences
while maintaining the foundational model's generalization capabilities. With
the addition of the OmniDiff dataset, M$^3$Diff achieves state-of-the-art
performance across multiple benchmarks, including Spot-the-Diff, IEdit,
CLEVR-Change, CLEVR-DC, and OmniDiff, demonstrating significant improvements in
cross-scenario difference recognition accuracy compared to existing methods.
The dataset, code, and models will be made publicly available to support
further research.
|
2503.11097 | Wenbang Deng | Wenbang Deng, Xieyuanli Chen, Qinghua Yu, Yunze He, Junhao Xiao,
Huimin Lu | A Novel Decomposed Feature-Oriented Framework for Open-Set Semantic
Segmentation on LiDAR Data | This paper has been accepted by 2025 ICRA | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semantic segmentation is a key technique that enables mobile robots to
understand and navigate surrounding environments autonomously. However, most
existing works focus on segmenting known objects, overlooking the
identification of unknown classes, which is common in real-world applications.
In this paper, we propose a feature-oriented framework for open-set semantic
segmentation on LiDAR data, capable of identifying unknown objects while
retaining the ability to classify known ones. We design a decomposed
dual-decoder network to simultaneously perform closed-set semantic segmentation
and generate distinctive features for unknown objects. The network is trained
with multi-objective loss functions to capture the characteristics of known and
unknown objects. Using the extracted features, we introduce an anomaly
detection mechanism to identify unknown objects. By integrating the results of
closed-set semantic segmentation and anomaly detection, we achieve effective
feature-driven LiDAR open-set semantic segmentation. Evaluations on both
SemanticKITTI and nuScenes datasets demonstrate that our proposed framework
significantly outperforms state-of-the-art methods. The source code will be
made publicly available at https://github.com/nubot-nudt/DOSS.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 05:40:05 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Deng",
"Wenbang",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Yu",
"Qinghua",
""
],
[
"He",
"Yunze",
""
],
[
"Xiao",
"Junhao",
""
],
[
"Lu",
"Huimin",
""
]
] | TITLE: A Novel Decomposed Feature-Oriented Framework for Open-Set Semantic
Segmentation on LiDAR Data
ABSTRACT: Semantic segmentation is a key technique that enables mobile robots to
understand and navigate surrounding environments autonomously. However, most
existing works focus on segmenting known objects, overlooking the
identification of unknown classes, which is common in real-world applications.
In this paper, we propose a feature-oriented framework for open-set semantic
segmentation on LiDAR data, capable of identifying unknown objects while
retaining the ability to classify known ones. We design a decomposed
dual-decoder network to simultaneously perform closed-set semantic segmentation
and generate distinctive features for unknown objects. The network is trained
with multi-objective loss functions to capture the characteristics of known and
unknown objects. Using the extracted features, we introduce an anomaly
detection mechanism to identify unknown objects. By integrating the results of
closed-set semantic segmentation and anomaly detection, we achieve effective
feature-driven LiDAR open-set semantic segmentation. Evaluations on both
SemanticKITTI and nuScenes datasets demonstrate that our proposed framework
significantly outperforms state-of-the-art methods. The source code will be
made publicly available at https://github.com/nubot-nudt/DOSS.
|
2503.11115 | Yunxiang Zhang | Jun Yu, Yunxiang Zhang, Xilong Lu, Yang Zheng, Yongqi Wang, Lingsi Zhu | Solution for 8th Competition on Affective & Behavior Analysis
in-the-wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we present our solution for the Action Unit (AU) Detection
Challenge in the 8th Competition on Affective Behavior Analysis in-the-wild. In
order to achieve robust and accurate classification of facial action units in
the wild environment, we introduce an innovative method that leverages
audio-visual multimodal data. Our method employs ConvNeXt as the image encoder
and uses Whisper to extract Mel spectrogram features. For these features, we
utilize a Transformer encoder-based feature fusion module to integrate the
affective information embedded in audio and image features. This ensures the
provision of rich high-dimensional feature representations for the subsequent
multilayer perceptron (MLP) trained on the Aff-Wild2 dataset, enhancing the
accuracy of AU detection.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 06:26:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yu",
"Jun",
""
],
[
"Zhang",
"Yunxiang",
""
],
[
"Lu",
"Xilong",
""
],
[
"Zheng",
"Yang",
""
],
[
"Wang",
"Yongqi",
""
],
[
"Zhu",
"Lingsi",
""
]
] | TITLE: Solution for 8th Competition on Affective & Behavior Analysis
in-the-wild
ABSTRACT: In this report, we present our solution for the Action Unit (AU) Detection
Challenge in the 8th Competition on Affective Behavior Analysis in-the-wild. In
order to achieve robust and accurate classification of facial action units in
the wild environment, we introduce an innovative method that leverages
audio-visual multimodal data. Our method employs ConvNeXt as the image encoder
and uses Whisper to extract Mel spectrogram features. For these features, we
utilize a Transformer encoder-based feature fusion module to integrate the
affective information embedded in audio and image features. This ensures the
provision of rich high-dimensional feature representations for the subsequent
multilayer perceptron (MLP) trained on the Aff-Wild2 dataset, enhancing the
accuracy of AU detection.
|
2503.11120 | G\"okhan \"Ozbulak | G\"okhan \"Ozbulak and Oscar Jimenez-del-Toro and Ma\'ira Fatoretto
and Lilian Berton and Andr\'e Anjos | A Multi-Objective Evaluation Framework for Analyzing Utility-Fairness
Trade-Offs in Machine Learning Systems | 11 pages, 13 figures | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | The evaluation of fairness models in Machine Learning involves complex
challenges, such as defining appropriate metrics and balancing trade-offs
between utility and fairness, and there are still gaps at this stage. This work
presents a novel multi-objective evaluation framework that enables the analysis
of utility-fairness trade-offs in Machine Learning systems. The framework was
developed using criteria from Multi-Objective Optimization that collect
comprehensive information regarding this complex evaluation task. The
assessment of multiple Machine Learning systems is summarized, both
quantitatively and qualitatively, in a straightforward manner through a radar
chart and a measurement table encompassing various aspects such as convergence,
system capacity, and diversity. The framework's compact representation of
performance facilitates the comparative analysis of different Machine Learning
strategies for decision-makers, in real-world applications, with single or
multiple fairness requirements. The framework is model-agnostic and flexible to
be adapted to any kind of Machine Learning systems, that is, black- or
white-box, any kind and quantity of evaluation metrics, including
multidimensional fairness criteria. The functionality and effectiveness of the
proposed framework are shown with different simulations, and an empirical study
conducted on a real-world dataset with various Machine Learning systems.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 06:32:42 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Özbulak",
"Gökhan",
""
],
[
"Jimenez-del-Toro",
"Oscar",
""
],
[
"Fatoretto",
"Maíra",
""
],
[
"Berton",
"Lilian",
""
],
[
"Anjos",
"André",
""
]
] | TITLE: A Multi-Objective Evaluation Framework for Analyzing Utility-Fairness
Trade-Offs in Machine Learning Systems
ABSTRACT: The evaluation of fairness models in Machine Learning involves complex
challenges, such as defining appropriate metrics and balancing trade-offs
between utility and fairness, and there are still gaps at this stage. This work
presents a novel multi-objective evaluation framework that enables the analysis
of utility-fairness trade-offs in Machine Learning systems. The framework was
developed using criteria from Multi-Objective Optimization that collect
comprehensive information regarding this complex evaluation task. The
assessment of multiple Machine Learning systems is summarized, both
quantitatively and qualitatively, in a straightforward manner through a radar
chart and a measurement table encompassing various aspects such as convergence,
system capacity, and diversity. The framework's compact representation of
performance facilitates the comparative analysis of different Machine Learning
strategies for decision-makers, in real-world applications, with single or
multiple fairness requirements. The framework is model-agnostic and flexible to
be adapted to any kind of Machine Learning systems, that is, black- or
white-box, any kind and quantity of evaluation metrics, including
multidimensional fairness criteria. The functionality and effectiveness of the
proposed framework are shown with different simulations, and an empirical study
conducted on a real-world dataset with various Machine Learning systems.
|
2503.11127 | Matthew Khoriaty | Matthew Khoriaty (1), Andrii Shportko (1), Gustavo Mercier (1), Zach
Wood-Doughty (1) ((1) Northwestern University) | Don't Forget It! Conditional Sparse Autoencoder Clamping Works for
Unlearning | 6 pages, 6 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in Large Language Model (LLM) capabilities have brought
great potential but also posed new risks. For example, LLMs with knowledge of
bioweapons, advanced chemistry, or cyberattacks could cause violence if placed
in the wrong hands or during malfunctions. Because of their nature as
near-black boxes, intuitive interpretation of LLM internals remains an open
research question, preventing developers from easily controlling model behavior
and capabilities. The use of Sparse Autoencoders (SAEs) has recently emerged as
a potential method of unraveling representations of concepts in LLM internals,
and has allowed developers to steer model outputs by directly modifying the
hidden activations. In this paper, we use SAEs to identify unwanted concepts
from the Weapons of Mass Destruction Proxy (WMDP) dataset within gemma-2-2b
internals and use feature steering to reduce the model's ability to answer
harmful questions while retaining its performance on harmless queries. Our
results bring back optimism to the viability of SAE-based explicit knowledge
unlearning techniques.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 06:43:19 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Khoriaty",
"Matthew",
"",
"Northwestern University"
],
[
"Shportko",
"Andrii",
"",
"Northwestern University"
],
[
"Mercier",
"Gustavo",
"",
"Northwestern University"
],
[
"Wood-Doughty",
"Zach",
"",
"Northwestern University"
]
] | TITLE: Don't Forget It! Conditional Sparse Autoencoder Clamping Works for
Unlearning
ABSTRACT: Recent developments in Large Language Model (LLM) capabilities have brought
great potential but also posed new risks. For example, LLMs with knowledge of
bioweapons, advanced chemistry, or cyberattacks could cause violence if placed
in the wrong hands or during malfunctions. Because of their nature as
near-black boxes, intuitive interpretation of LLM internals remains an open
research question, preventing developers from easily controlling model behavior
and capabilities. The use of Sparse Autoencoders (SAEs) has recently emerged as
a potential method of unraveling representations of concepts in LLM internals,
and has allowed developers to steer model outputs by directly modifying the
hidden activations. In this paper, we use SAEs to identify unwanted concepts
from the Weapons of Mass Destruction Proxy (WMDP) dataset within gemma-2-2b
internals and use feature steering to reduce the model's ability to answer
harmful questions while retaining its performance on harmless queries. Our
results bring back optimism to the viability of SAE-based explicit knowledge
unlearning techniques.
|
2503.11133 | Hao Liu | Hao Liu, Pengyu Guo, Siyuan Yang, Zeqing Jiang, Qinglei Hu and Dongyu
Li | SpaceSeg: A High-Precision Intelligent Perception Segmentation Method
for Multi-Spacecraft On-Orbit Targets | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continuous advancement of human exploration into deep space,
intelligent perception and high-precision segmentation technology for on-orbit
multi-spacecraft targets have become critical factors for ensuring the success
of modern space missions. However, the complex deep space environment, diverse
imaging conditions, and high variability in spacecraft morphology pose
significant challenges to traditional segmentation methods. This paper proposes
SpaceSeg, an innovative vision foundation model-based segmentation framework
with four core technical innovations: First, the Multi-Scale Hierarchical
Attention Refinement Decoder (MSHARD) achieves high-precision feature decoding
through cross-resolution feature fusion via hierarchical attention. Second, the
Multi-spacecraft Connected Component Analysis (MS-CCA) effectively resolves
topological structure confusion in dense targets. Third, the Spatial Domain
Adaptation Transform framework (SDAT) eliminates cross-domain disparities and
resists spatial sensor perturbations through composite enhancement strategies.
Finally, a custom Multi-Spacecraft Segmentation Task Loss Function is created
to significantly improve segmentation robustness in deep space scenarios. To
support algorithm validation, we construct the first multi-scale on-orbit
multi-spacecraft semantic segmentation dataset SpaceES, which covers four types
of spatial backgrounds and 17 typical spacecraft targets. In testing, SpaceSeg
achieves state-of-the-art performance with 89.87$\%$ mIoU and 99.98$\%$ mAcc,
surpassing existing best methods by 5.71 percentage points. The dataset and
code are open-sourced at https://github.com/Akibaru/SpaceSeg to provide
critical technical support for next-generation space situational awareness
systems.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 06:50:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Hao",
""
],
[
"Guo",
"Pengyu",
""
],
[
"Yang",
"Siyuan",
""
],
[
"Jiang",
"Zeqing",
""
],
[
"Hu",
"Qinglei",
""
],
[
"Li",
"Dongyu",
""
]
] | TITLE: SpaceSeg: A High-Precision Intelligent Perception Segmentation Method
for Multi-Spacecraft On-Orbit Targets
ABSTRACT: With the continuous advancement of human exploration into deep space,
intelligent perception and high-precision segmentation technology for on-orbit
multi-spacecraft targets have become critical factors for ensuring the success
of modern space missions. However, the complex deep space environment, diverse
imaging conditions, and high variability in spacecraft morphology pose
significant challenges to traditional segmentation methods. This paper proposes
SpaceSeg, an innovative vision foundation model-based segmentation framework
with four core technical innovations: First, the Multi-Scale Hierarchical
Attention Refinement Decoder (MSHARD) achieves high-precision feature decoding
through cross-resolution feature fusion via hierarchical attention. Second, the
Multi-spacecraft Connected Component Analysis (MS-CCA) effectively resolves
topological structure confusion in dense targets. Third, the Spatial Domain
Adaptation Transform framework (SDAT) eliminates cross-domain disparities and
resists spatial sensor perturbations through composite enhancement strategies.
Finally, a custom Multi-Spacecraft Segmentation Task Loss Function is created
to significantly improve segmentation robustness in deep space scenarios. To
support algorithm validation, we construct the first multi-scale on-orbit
multi-spacecraft semantic segmentation dataset SpaceES, which covers four types
of spatial backgrounds and 17 typical spacecraft targets. In testing, SpaceSeg
achieves state-of-the-art performance with 89.87$\%$ mIoU and 99.98$\%$ mAcc,
surpassing existing best methods by 5.71 percentage points. The dataset and
code are open-sourced at https://github.com/Akibaru/SpaceSeg to provide
critical technical support for next-generation space situational awareness
systems.
|
2503.11145 | Neng Wang | Neng Wang and Huimin Lu and Zhiqiang Zheng and Hesheng Wang and
Yun-Hui Liu and Xieyuanli Chen | Leveraging Semantic Graphs for Efficient and Robust LiDAR SLAM | 8 pages, 4 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and robust simultaneous localization and mapping (SLAM) is crucial
for autonomous mobile systems, typically achieved by leveraging the geometric
features of the environment. Incorporating semantics provides a richer scene
representation that not only enhances localization accuracy in SLAM but also
enables advanced cognitive functionalities for downstream navigation and
planning tasks. Existing point-wise semantic LiDAR SLAM methods often suffer
from poor efficiency and generalization, making them less robust in diverse
real-world scenarios. In this paper, we propose a semantic graph-enhanced SLAM
framework, named SG-SLAM, which effectively leverages the geometric, semantic,
and topological characteristics inherent in environmental structures. The
semantic graph serves as a fundamental component that facilitates critical
functionalities of SLAM, including robust relocalization during odometry
failures, accurate loop closing, and semantic graph map construction. Our
method employs a dual-threaded architecture, with one thread dedicated to
online odometry and relocalization, while the other handles loop closure, pose
graph optimization, and map update. This design enables our method to operate
in real time and generate globally consistent semantic graph maps and point
cloud maps. We extensively evaluate our method across the KITTI, MulRAN, and
Apollo datasets, and the results demonstrate its superiority compared to
state-of-the-art methods. Our method has been released at
https://github.com/nubot-nudt/SG-SLAM.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 07:25:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Neng",
""
],
[
"Lu",
"Huimin",
""
],
[
"Zheng",
"Zhiqiang",
""
],
[
"Wang",
"Hesheng",
""
],
[
"Liu",
"Yun-Hui",
""
],
[
"Chen",
"Xieyuanli",
""
]
] | TITLE: Leveraging Semantic Graphs for Efficient and Robust LiDAR SLAM
ABSTRACT: Accurate and robust simultaneous localization and mapping (SLAM) is crucial
for autonomous mobile systems, typically achieved by leveraging the geometric
features of the environment. Incorporating semantics provides a richer scene
representation that not only enhances localization accuracy in SLAM but also
enables advanced cognitive functionalities for downstream navigation and
planning tasks. Existing point-wise semantic LiDAR SLAM methods often suffer
from poor efficiency and generalization, making them less robust in diverse
real-world scenarios. In this paper, we propose a semantic graph-enhanced SLAM
framework, named SG-SLAM, which effectively leverages the geometric, semantic,
and topological characteristics inherent in environmental structures. The
semantic graph serves as a fundamental component that facilitates critical
functionalities of SLAM, including robust relocalization during odometry
failures, accurate loop closing, and semantic graph map construction. Our
method employs a dual-threaded architecture, with one thread dedicated to
online odometry and relocalization, while the other handles loop closure, pose
graph optimization, and map update. This design enables our method to operate
in real time and generate globally consistent semantic graph maps and point
cloud maps. We extensively evaluate our method across the KITTI, MulRAN, and
Apollo datasets, and the results demonstrate its superiority compared to
state-of-the-art methods. Our method has been released at
https://github.com/nubot-nudt/SG-SLAM.
|
2503.11154 | Shaotian Yan | Shaotian Yan, Chen Shen, Wenxiao Wang, Liang Xie, Junjie Liu, Jieping
Ye | Don't Take Things Out of Context: Attention Intervention for Enhancing
Chain-of-Thought Reasoning in Large Language Models | Accepted by ICLR2025 | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Few-shot Chain-of-Thought (CoT) significantly enhances the reasoning
capabilities of large language models (LLMs), functioning as a whole to guide
these models in generating reasoning steps toward final answers. However, we
observe that isolated segments, words, or tokens within CoT demonstrations can
unexpectedly disrupt the generation process of LLMs. The model may overly
concentrate on certain local information present in the demonstration,
introducing irrelevant noise into the reasoning process and potentially leading
to incorrect answers. In this paper, we investigate the underlying mechanism of
CoT through dynamically tracing and manipulating the inner workings of LLMs at
each output step, which demonstrates that tokens exhibiting specific attention
characteristics are more likely to induce the model to take things out of
context; these tokens directly attend to the hidden states tied with
prediction, without substantial integration of non-local information. Building
upon these insights, we propose a Few-shot Attention Intervention method (FAI)
that dynamically analyzes the attention patterns of demonstrations to
accurately identify these tokens and subsequently make targeted adjustments to
the attention weights to effectively suppress their distracting effect on LLMs.
Comprehensive experiments across multiple benchmarks demonstrate consistent
improvements over baseline methods, with a remarkable 5.91% improvement on the
AQuA dataset, further highlighting the effectiveness of FAI.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 07:46:33 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yan",
"Shaotian",
""
],
[
"Shen",
"Chen",
""
],
[
"Wang",
"Wenxiao",
""
],
[
"Xie",
"Liang",
""
],
[
"Liu",
"Junjie",
""
],
[
"Ye",
"Jieping",
""
]
] | TITLE: Don't Take Things Out of Context: Attention Intervention for Enhancing
Chain-of-Thought Reasoning in Large Language Models
ABSTRACT: Few-shot Chain-of-Thought (CoT) significantly enhances the reasoning
capabilities of large language models (LLMs), functioning as a whole to guide
these models in generating reasoning steps toward final answers. However, we
observe that isolated segments, words, or tokens within CoT demonstrations can
unexpectedly disrupt the generation process of LLMs. The model may overly
concentrate on certain local information present in the demonstration,
introducing irrelevant noise into the reasoning process and potentially leading
to incorrect answers. In this paper, we investigate the underlying mechanism of
CoT through dynamically tracing and manipulating the inner workings of LLMs at
each output step, which demonstrates that tokens exhibiting specific attention
characteristics are more likely to induce the model to take things out of
context; these tokens directly attend to the hidden states tied with
prediction, without substantial integration of non-local information. Building
upon these insights, we propose a Few-shot Attention Intervention method (FAI)
that dynamically analyzes the attention patterns of demonstrations to
accurately identify these tokens and subsequently make targeted adjustments to
the attention weights to effectively suppress their distracting effect on LLMs.
Comprehensive experiments across multiple benchmarks demonstrate consistent
improvements over baseline methods, with a remarkable 5.91% improvement on the
AQuA dataset, further highlighting the effectiveness of FAI.
|
2503.11170 | Yaohua Tang | Yibin Xu and Liang Yang and Hao Chen and Hua Wang and Zhi Chen and
Yaohua Tang | DeskVision: Large Scale Desktop Region Captioning for Advanced GUI
Agents | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The limitation of graphical user interface (GUI) data has been a significant
barrier to the development of GUI agents today, especially for the desktop /
computer use scenarios. To address this, we propose an automated GUI data
generation pipeline, AutoCaptioner, which generates data with rich descriptions
while minimizing human effort. Using AutoCaptioner, we created a novel
large-scale desktop GUI dataset, DeskVision, along with the largest desktop
test benchmark, DeskVision-Eval, which reflects daily usage and covers diverse
systems and UI elements, each with rich descriptions. With DeskVision, we train
a new GUI understanding model, GUIExplorer. Results show that GUIExplorer
achieves state-of-the-art (SOTA) performance in understanding/grounding visual
elements without the need for complex architectural designs. We further
validated the effectiveness of the DeskVision dataset through ablation studies
on various large visual language models (LVLMs). We believe that AutoCaptioner
and DeskVision will significantly advance the development of GUI agents, and we
will open-source them for the community.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:16:02 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xu",
"Yibin",
""
],
[
"Yang",
"Liang",
""
],
[
"Chen",
"Hao",
""
],
[
"Wang",
"Hua",
""
],
[
"Chen",
"Zhi",
""
],
[
"Tang",
"Yaohua",
""
]
] | TITLE: DeskVision: Large Scale Desktop Region Captioning for Advanced GUI
Agents
ABSTRACT: The limitation of graphical user interface (GUI) data has been a significant
barrier to the development of GUI agents today, especially for the desktop /
computer use scenarios. To address this, we propose an automated GUI data
generation pipeline, AutoCaptioner, which generates data with rich descriptions
while minimizing human effort. Using AutoCaptioner, we created a novel
large-scale desktop GUI dataset, DeskVision, along with the largest desktop
test benchmark, DeskVision-Eval, which reflects daily usage and covers diverse
systems and UI elements, each with rich descriptions. With DeskVision, we train
a new GUI understanding model, GUIExplorer. Results show that GUIExplorer
achieves state-of-the-art (SOTA) performance in understanding/grounding visual
elements without the need for complex architectural designs. We further
validated the effectiveness of the DeskVision dataset through ablation studies
on various large visual language models (LVLMs). We believe that AutoCaptioner
and DeskVision will significantly advance the development of GUI agents, and we
will open-source them for the community.
|
2503.11181 | Luca Martini Dr. | Luca Martini, Daniele Zolezzi, Saverio Iacono, Gianni Viardo Vercelli | Multi-Stage Generative Upscaler: Reconstructing Football Broadcast
Images via Diffusion Models | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The reconstruction of low-resolution football broadcast images presents a
significant challenge in sports broadcasting, where detailed visuals are
essential for analysis and audience engagement. This study introduces a
multi-stage generative upscaling framework leveraging Diffusion Models to
enhance degraded images, transforming inputs as small as $64 \times 64$ pixels
into high-fidelity $1024 \times 1024$ outputs. By integrating an image-to-image
pipeline, ControlNet conditioning, and LoRA fine-tuning, our approach surpasses
traditional upscaling methods in restoring intricate textures and
domain-specific elements such as player details and jersey logos. The custom
LoRA is trained on a custom football dataset, ensuring adaptability to sports
broadcast needs. Experimental results demonstrate substantial improvements over
conventional models, with ControlNet refining fine details and LoRA enhancing
task-specific elements. These findings highlight the potential of
diffusion-based image reconstruction in sports media, paving the way for future
applications in automated video enhancement and real-time sports analytics.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:28:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Martini",
"Luca",
""
],
[
"Zolezzi",
"Daniele",
""
],
[
"Iacono",
"Saverio",
""
],
[
"Vercelli",
"Gianni Viardo",
""
]
] | TITLE: Multi-Stage Generative Upscaler: Reconstructing Football Broadcast
Images via Diffusion Models
ABSTRACT: The reconstruction of low-resolution football broadcast images presents a
significant challenge in sports broadcasting, where detailed visuals are
essential for analysis and audience engagement. This study introduces a
multi-stage generative upscaling framework leveraging Diffusion Models to
enhance degraded images, transforming inputs as small as $64 \times 64$ pixels
into high-fidelity $1024 \times 1024$ outputs. By integrating an image-to-image
pipeline, ControlNet conditioning, and LoRA fine-tuning, our approach surpasses
traditional upscaling methods in restoring intricate textures and
domain-specific elements such as player details and jersey logos. The custom
LoRA is trained on a custom football dataset, ensuring adaptability to sports
broadcast needs. Experimental results demonstrate substantial improvements over
conventional models, with ControlNet refining fine details and LoRA enhancing
task-specific elements. These findings highlight the potential of
diffusion-based image reconstruction in sports media, paving the way for future
applications in automated video enhancement and real-time sports analytics.
|
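The abstract above describes a staged image-to-image pipeline (ControlNet conditioning plus a football-specific LoRA) that lifts 64x64 inputs to 1024x1024 outputs. A minimal sketch of such a progressive upscaling loop is shown below; the `run_stage` helper, the resolution schedule, and the prompt are hypothetical placeholders standing in for the diffusion stage, not the authors' implementation.

```python
# Sketch of a multi-stage generative upscaling loop (64x64 -> 1024x1024).
# The diffusion img2img call is replaced by a placeholder; in the paper's setting
# it would be a ControlNet-conditioned, LoRA-fine-tuned diffusion pipeline.
from PIL import Image

def run_stage(image: Image.Image, target_size: int, prompt: str) -> Image.Image:
    """Placeholder for one img2img refinement stage (hypothetical).

    A real stage would condition a diffusion model on `image` (e.g. via ControlNet)
    and a football-specific LoRA; here we only resize so the sketch stays runnable.
    """
    return image.resize((target_size, target_size))

def multi_stage_upscale(low_res: Image.Image, prompt: str) -> Image.Image:
    image = low_res
    for target in (256, 512, 1024):   # progressive resolution schedule (assumed)
        image = run_stage(image, target, prompt)
    return image

if __name__ == "__main__":
    lr = Image.new("RGB", (64, 64))   # stand-in for a degraded broadcast crop
    hr = multi_stage_upscale(lr, "football broadcast frame, jersey logos, sharp details")
    print(hr.size)                    # (1024, 1024)
```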
2503.11183 | Shi Leideng | Leideng Shi, Juan Zhang | Multimodal-Aware Fusion Network for Referring Remote Sensing Image
Segmentation | 5 pages, 5 figures, accepted in IEEE Geoscience and Remote Sensing
Letters (GRSL) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring remote sensing image segmentation (RRSIS) is a novel visual task in
remote sensing image segmentation, which aims to segment objects based on a
given text description, with great significance in practical applications.
Previous studies fuse visual and linguistic modalities by explicit feature
interaction but fail to effectively excavate useful multimodal information
from the dual-branch encoder. In this letter, we design a multimodal-aware fusion
network (MAFN) to achieve fine-grained alignment and fusion between the two
modalities. We propose a correlation fusion module (CFM) that enhances multi-scale
visual features by adaptively introducing noise in the transformer and integrates
cross-modal aware features. In addition, MAFN employs multi-scale refinement
convolution (MSRC) to adapt to the various orientations of objects at different
scales, boosting their representation ability and enhancing segmentation accuracy.
Extensive experiments show that MAFN is significantly more effective than
the state of the art on the RRSIS-D dataset. The source code is available at
https://github.com/Roaxy/MAFN.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:31:21 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shi",
"Leideng",
""
],
[
"Zhang",
"Juan",
""
]
] | TITLE: Multimodal-Aware Fusion Network for Referring Remote Sensing Image
Segmentation
ABSTRACT: Referring remote sensing image segmentation (RRSIS) is a novel visual task in
remote sensing image segmentation, which aims to segment objects based on a
given text description, with great significance in practical applications.
Previous studies fuse visual and linguistic modalities by explicit feature
interaction but fail to effectively excavate useful multimodal information
from the dual-branch encoder. In this letter, we design a multimodal-aware fusion
network (MAFN) to achieve fine-grained alignment and fusion between the two
modalities. We propose a correlation fusion module (CFM) that enhances multi-scale
visual features by adaptively introducing noise in the transformer and integrates
cross-modal aware features. In addition, MAFN employs multi-scale refinement
convolution (MSRC) to adapt to the various orientations of objects at different
scales, boosting their representation ability and enhancing segmentation accuracy.
Extensive experiments show that MAFN is significantly more effective than
the state of the art on the RRSIS-D dataset. The source code is available at
https://github.com/Roaxy/MAFN.
|
2503.11186 | Maxence Grand | Maxence Grand (Marvin), Damien Pellier (Marvin), Francis Jambon
(MeTAH, M-PSI, LIG) | GAIPAT -Dataset on Human Gaze and Actions for Intent Prediction in
Assembly Tasks | null | ACM/IEEE International Conference on Human-Robot Interaction, Mar
2025, Melbourne (AUS), Australia | null | null | cs.RO cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary objective of the dataset is to provide a better understanding of
the coupling between human actions and gaze in a shared working environment
with a cobot, with the aim of significantly enhancing the efficiency and safety
of human-cobot interactions. More broadly, by linking gaze patterns with
physical actions, the dataset offers valuable insights into cognitive processes
and attention dynamics in the context of assembly tasks. The proposed dataset
contains gaze and action data from approximately 80 participants, recorded
during simulated industrial assembly tasks. The tasks were simulated using
controlled scenarios in which participants manipulated educational building
blocks. Gaze data was collected using two different eye-tracking setups
(head-mounted and remote) while participants worked in two positions: sitting
and standing.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:32:52 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Grand",
"Maxence",
"",
"Marvin"
],
[
"Pellier",
"Damien",
"",
"Marvin"
],
[
"Jambon",
"Francis",
"",
"MeTAH, M-PSI, LIG"
]
] | TITLE: GAIPAT -Dataset on Human Gaze and Actions for Intent Prediction in
Assembly Tasks
ABSTRACT: The primary objective of the dataset is to provide a better understanding of
the coupling between human actions and gaze in a shared working environment
with a cobot, with the aim of significantly enhancing the efficiency and safety
of human-cobot interactions. More broadly, by linking gaze patterns with
physical actions, the dataset offers valuable insights into cognitive processes
and attention dynamics in the context of assembly tasks. The proposed dataset
contains gaze and action data from approximately 80 participants, recorded
during simulated industrial assembly tasks. The tasks were simulated using
controlled scenarios in which participants manipulated educational building
blocks. Gaze data was collected using two different eye-tracking setups
(head-mounted and remote) while participants worked in two positions: sitting
and standing.
|
2503.11190 | Zhuoyuan Mao | Zhuoyuan Mao, Mengjie Zhao, Qiyu Wu, Zhi Zhong, Wei-Hsiang Liao,
Hiromi Wakaki, Yuki Mitsufuji | Cross-Modal Learning for Music-to-Music-Video Description Generation | Accepted by RepL4NLP 2025 @ NAACL 2025 | null | null | null | cs.SD cs.AI cs.CL cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music-to-music-video generation is a challenging task due to the intrinsic
differences between the music and video modalities. The advent of powerful
text-to-video diffusion models has opened a promising pathway for music-video
(MV) generation by first addressing the music-to-MV description task and
subsequently leveraging these models for video generation. In this study, we
focus on the MV description generation task and propose a comprehensive
pipeline encompassing training data construction and multimodal model
fine-tuning. We fine-tune existing pre-trained multimodal models on our newly
constructed music-to-MV description dataset based on the Music4All dataset,
which integrates both musical and visual information. Our experimental results
demonstrate that music representations can be effectively mapped to textual
domains, enabling the generation of meaningful MV description directly from
music inputs. We also identify key components in the dataset construction
pipeline that critically impact the quality of MV description and highlight
specific musical attributes that warrant greater focus for improved MV
description generation.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:34:28 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Mao",
"Zhuoyuan",
""
],
[
"Zhao",
"Mengjie",
""
],
[
"Wu",
"Qiyu",
""
],
[
"Zhong",
"Zhi",
""
],
[
"Liao",
"Wei-Hsiang",
""
],
[
"Wakaki",
"Hiromi",
""
],
[
"Mitsufuji",
"Yuki",
""
]
] | TITLE: Cross-Modal Learning for Music-to-Music-Video Description Generation
ABSTRACT: Music-to-music-video generation is a challenging task due to the intrinsic
differences between the music and video modalities. The advent of powerful
text-to-video diffusion models has opened a promising pathway for music-video
(MV) generation by first addressing the music-to-MV description task and
subsequently leveraging these models for video generation. In this study, we
focus on the MV description generation task and propose a comprehensive
pipeline encompassing training data construction and multimodal model
fine-tuning. We fine-tune existing pre-trained multimodal models on our newly
constructed music-to-MV description dataset based on the Music4All dataset,
which integrates both musical and visual information. Our experimental results
demonstrate that music representations can be effectively mapped to textual
domains, enabling the generation of meaningful MV description directly from
music inputs. We also identify key components in the dataset construction
pipeline that critically impact the quality of MV description and highlight
specific musical attributes that warrant greater focus for improved MV
description generation.
|
2503.11196 | Anas Jnini | Anas Jnini, Harshinee Goordoyal, Sujal Dave, Flavio Vella, Katharine
H. Fraser, and Artem Korobenko | Physics-constrained DeepONet for Surrogate CFD models: a curved
backward-facing step case | null | null | null | null | physics.flu-dyn cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Physics-Constrained DeepONet (PC-DeepONet), an architecture that
incorporates fundamental physics knowledge into the data-driven DeepONet model,
is presented in this study. This methodology is exemplified through surrogate
modeling of fluid dynamics over a curved backward-facing step, a benchmark
problem in computational fluid dynamics. The model was trained on computational
fluid dynamics data generated for a range of parameterized geometries. The
PC-DeepONet was able to learn the mapping from the parameters describing the
geometry to the velocity and pressure fields. While the DeepONet is solely
data-driven, the PC-DeepONet imposes the divergence constraint from the
continuity equation onto the network. The PC-DeepONet demonstrates higher
accuracy than the data-driven baseline, especially when trained on sparse data.
Both models attain convergence with a small dataset of 50 samples and require
only 50 iterations for convergence, highlighting the efficiency of neural
operators in learning the dynamics governed by partial differential equations.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:43:36 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jnini",
"Anas",
""
],
[
"Goordoyal",
"Harshinee",
""
],
[
"Dave",
"Sujal",
""
],
[
"Vella",
"Flavio",
""
],
[
"Fraser",
"Katharine H.",
""
],
[
"Korobenko",
"Artem",
""
]
] | TITLE: Physics-constrained DeepONet for Surrogate CFD models: a curved
backward-facing step case
ABSTRACT: The Physics-Constrained DeepONet (PC-DeepONet), an architecture that
incorporates fundamental physics knowledge into the data-driven DeepONet model,
is presented in this study. This methodology is exemplified through surrogate
modeling of fluid dynamics over a curved backward-facing step, a benchmark
problem in computational fluid dynamics. The model was trained on computational
fluid dynamics data generated for a range of parameterized geometries. The
PC-DeepONet was able to learn the mapping from the parameters describing the
geometry to the velocity and pressure fields. While the DeepONet is solely
data-driven, the PC-DeepONet imposes the divergence constraint from the
continuity equation onto the network. The PC-DeepONet demonstrates higher
accuracy than the data-driven baseline, especially when trained on sparse data.
Both models attain convergence with a small dataset of 50 samples and require
only 50 iterations for convergence, highlighting the efficiency of neural
operators in learning the dynamics governed by partial differential equations.
|
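The abstract above states that PC-DeepONet imposes the divergence (continuity) constraint on the network output. A minimal sketch of how such a penalty can be added to a data loss with automatic differentiation is given below; the toy surrogate network, geometry encoding, and loss weighting are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch: adding a continuity (div u = 0) penalty to a data-driven loss.
import torch
import torch.nn as nn

class ToySurrogate(nn.Module):
    """Stand-in for a DeepONet-style surrogate: (geometry params, query xy) -> (u, v, p)."""
    def __init__(self, d_geom=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_geom + 2, 64), nn.Tanh(), nn.Linear(64, 3))

    def forward(self, geom, xy):
        return self.net(torch.cat([geom.expand(xy.shape[0], -1), xy], dim=-1))

def divergence_penalty(model, geom, xy):
    """Penalize du/dx + dv/dy of the predicted 2D velocity field (continuity constraint)."""
    xy = xy.clone().requires_grad_(True)
    uvp = model(geom, xy)
    u, v = uvp[:, 0], uvp[:, 1]
    du = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), xy, create_graph=True)[0]
    return ((du[:, 0] + dv[:, 1]) ** 2).mean()

model = ToySurrogate()
geom = torch.randn(1, 4)        # parameterized step geometry (toy)
xy = torch.rand(128, 2)         # query/collocation points
target = torch.randn(128, 3)    # CFD training data (toy)
loss = torch.nn.functional.mse_loss(model(geom, xy), target) + 1.0 * divergence_penalty(model, geom, xy)
loss.backward()
print(float(loss))
```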
2503.11207 | Michael Hersche | Giacomo Camposampiero and Michael Hersche and Roger Wattenhofer and
Abu Sebastian and Abbas Rahimi | Can Large Reasoning Models do Analogical Reasoning under Perceptual
Uncertainty? | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work presents a first evaluation of two state-of-the-art Large Reasoning
Models (LRMs), OpenAI's o3-mini and DeepSeek R1, on analogical reasoning,
focusing on well-established nonverbal human IQ tests based on Raven's
progressive matrices. We benchmark with the I-RAVEN dataset and its more
difficult extension, I-RAVEN-X, which tests the ability to generalize to longer
reasoning rules and ranges of the attribute values. To assess the influence of
visual uncertainties on these nonverbal analogical reasoning tests, we extend
the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a
two-fold strategy to simulate this imperfect visual perception: 1) we introduce
confounding attributes which, being sampled at random, do not contribute to the
prediction of the correct answer of the puzzles and 2) smoothen the
distributions of the input attributes' values. We observe a sharp decline in
OpenAI's o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to
just 17.0% -- approaching random chance -- on the more challenging I-RAVEN-X,
which increases input length and range and emulates perceptual uncertainty.
This drop occurred despite spending 3.4x more reasoning tokens. A similar trend
is also observed for DeepSeek R1: from 80.6% to 23.2%. On the other hand, a
neuro-symbolic probabilistic abductive model, ARLC, that achieves
state-of-the-art performances on I-RAVEN, can robustly reason under all these
out-of-distribution tests, maintaining strong accuracy with only a modest
reduction from 98.6% to 88.0%. Our code is available at
https://github.com/IBM/raven-large-language-models.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 08:52:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Camposampiero",
"Giacomo",
""
],
[
"Hersche",
"Michael",
""
],
[
"Wattenhofer",
"Roger",
""
],
[
"Sebastian",
"Abu",
""
],
[
"Rahimi",
"Abbas",
""
]
] | TITLE: Can Large Reasoning Models do Analogical Reasoning under Perceptual
Uncertainty?
ABSTRACT: This work presents a first evaluation of two state-of-the-art Large Reasoning
Models (LRMs), OpenAI's o3-mini and DeepSeek R1, on analogical reasoning,
focusing on well-established nonverbal human IQ tests based on Raven's
progressive matrices. We benchmark with the I-RAVEN dataset and its more
difficult extension, I-RAVEN-X, which tests the ability to generalize to longer
reasoning rules and ranges of the attribute values. To assess the influence of
visual uncertainties on these nonverbal analogical reasoning tests, we extend
the I-RAVEN-X dataset, which otherwise assumes an oracle perception. We adopt a
two-fold strategy to simulate this imperfect visual perception: 1) we introduce
confounding attributes which, being sampled at random, do not contribute to the
prediction of the correct answer of the puzzles and 2) smoothen the
distributions of the input attributes' values. We observe a sharp decline in
OpenAI's o3-mini task accuracy, dropping from 86.6% on the original I-RAVEN to
just 17.0% -- approaching random chance -- on the more challenging I-RAVEN-X,
which increases input length and range and emulates perceptual uncertainty.
This drop occurred despite spending 3.4x more reasoning tokens. A similar trend
is also observed for DeepSeek R1: from 80.6% to 23.2%. On the other hand, a
neuro-symbolic probabilistic abductive model, ARLC, that achieves
state-of-the-art performances on I-RAVEN, can robustly reason under all these
out-of-distribution tests, maintaining strong accuracy with only a modest
reduction from 98.6% to 88.0%. Our code is available at
https://github.com/IBM/raven-large-language-models.
|
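The abstract above injects perceptual uncertainty in two ways: randomly sampled confounding attributes and smoothed attribute-value distributions. The snippet below illustrates one possible form of each perturbation; the uniform-mixing smoothing, the number of confounders, and the attribute names are assumptions, not the benchmark's exact construction.

```python
# Illustration of the two uncertainty injections described above (assumed forms):
# 1) smooth a one-hot attribute distribution, 2) append random confounding attributes.
import numpy as np

rng = np.random.default_rng(0)

def smooth_onehot(value_idx: int, num_values: int, eps: float = 0.3) -> np.ndarray:
    """Mix a one-hot attribute distribution with uniform noise (one possible smoothing)."""
    dist = np.full(num_values, eps / num_values)
    dist[value_idx] += 1.0 - eps
    return dist

def add_confounders(panel_attrs: dict, num_confounders: int = 3, num_values: int = 10) -> dict:
    """Append attributes sampled at random; they carry no signal about the correct answer."""
    out = dict(panel_attrs)
    for i in range(num_confounders):
        out[f"confounder_{i}"] = int(rng.integers(num_values))
    return out

panel = {"size": 4, "color": 7}        # toy panel attributes
print(smooth_onehot(panel["size"], num_values=10))
print(add_confounders(panel))
```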
2503.11213 | Fengchen He | Fengchen He, Dayang Zhao, Hao Xu, Tingwei Quan, Shaoqun Zeng | Simulating Dual-Pixel Images From Ray Tracing For Depth Estimation | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies utilize dual-pixel (DP) sensor phase characteristics for various
applications, such as depth estimation and deblurring. However, since the DP
image features are entirely determined by the camera hardware, DP-depth paired
datasets are very scarce, especially when performing depth estimation on
customized cameras. To overcome this, studies simulate DP images using ideal
optical system models. However, these simulations often violate real optical
propagation laws, leading to poor generalization to real DP data. To address
this, we investigate the domain gap between simulated and real DP data, and
propose solutions using the Simulating DP images from ray tracing (Sdirt)
scheme. The Sdirt generates realistic DP images via ray tracing and integrates
them into the depth estimation training pipeline. Experimental results show
that models trained with Sdirt-simulated images generalize better to real DP
data.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:03:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"He",
"Fengchen",
""
],
[
"Zhao",
"Dayang",
""
],
[
"Xu",
"Hao",
""
],
[
"Quan",
"Tingwei",
""
],
[
"Zeng",
"Shaoqun",
""
]
] | TITLE: Simulating Dual-Pixel Images From Ray Tracing For Depth Estimation
ABSTRACT: Many studies utilize dual-pixel (DP) sensor phase characteristics for various
applications, such as depth estimation and deblurring. However, since the DP
image features are entirely determined by the camera hardware, DP-depth paired
datasets are very scarce, especially when performing depth estimation on
customized cameras. To overcome this, studies simulate DP images using ideal
optical system models. However, these simulations often violate real optical
propagation laws, leading to poor generalization to real DP data. To address
this, we investigate the domain gap between simulated and real DP data, and
propose solutions using the Simulating DP images from ray tracing (Sdirt)
scheme. The Sdirt generates realistic DP images via ray tracing and integrates
them into the depth estimation training pipeline. Experimental results show
that models trained with Sdirt-simulated images generalize better to real DP
data.
|
2503.11218 | Andong Lu | Andong Lu, Mai Wen, Jinhu Wang, Yuanzhi Guo, Chenglong Li, Jin Tang
and Bin Luo | Towards General Multimodal Visual Tracking | In peer review | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Existing multimodal tracking studies focus on bi-modal scenarios such as
RGB-Thermal, RGB-Event, and RGB-Language. Although promising tracking
performance is achieved through leveraging complementary cues from different
sources, it remains challenging in complex scenes due to the limitations of
bi-modal scenarios. In this work, we introduce a general multimodal visual
tracking task that fully exploits the advantages of four modalities, including
RGB, thermal infrared, event, and language, for robust tracking under
challenging conditions. To provide a comprehensive evaluation platform for
general multimodal visual tracking, we construct QuadTrack600, a large-scale,
high-quality benchmark comprising 600 video sequences (totaling 384.7K
high-resolution (640x480) frame groups). In each frame group, all four
modalities are spatially aligned and meticulously annotated with bounding
boxes, while 21 sequence-level challenge attributes are provided for detailed
performance analysis. Despite quad-modal data provides richer information, the
differences in information quantity among modalities and the computational
burden from four modalities are two challenging issues in fusing four
modalities. To handle these issues, we propose a novel approach called
QuadFusion, which incorporates an efficient Multiscale Fusion Mamba with four
different scanning scales to achieve sufficient interactions of the four
modalities while overcoming the exponential computational burden, for general
multimodal visual tracking. Extensive experiments on the QuadTrack600 dataset
and three bi-modal tracking datasets, including LasHeR, VisEvent, and TNL2K,
validate the effectiveness of our QuadFusion.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:09:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Lu",
"Andong",
""
],
[
"Wen",
"Mai",
""
],
[
"Wang",
"Jinhu",
""
],
[
"Guo",
"Yuanzhi",
""
],
[
"Li",
"Chenglong",
""
],
[
"Tang",
"Jin",
""
],
[
"Luo",
"Bin",
""
]
] | TITLE: Towards General Multimodal Visual Tracking
ABSTRACT: Existing multimodal tracking studies focus on bi-modal scenarios such as
RGB-Thermal, RGB-Event, and RGB-Language. Although promising tracking
performance is achieved through leveraging complementary cues from different
sources, it remains challenging in complex scenes due to the limitations of
bi-modal scenarios. In this work, we introduce a general multimodal visual
tracking task that fully exploits the advantages of four modalities, including
RGB, thermal infrared, event, and language, for robust tracking under
challenging conditions. To provide a comprehensive evaluation platform for
general multimodal visual tracking, we construct QuadTrack600, a large-scale,
high-quality benchmark comprising 600 video sequences (totaling 384.7K
high-resolution (640x480) frame groups). In each frame group, all four
modalities are spatially aligned and meticulously annotated with bounding
boxes, while 21 sequence-level challenge attributes are provided for detailed
performance analysis. Although quad-modal data provides richer information, the
differences in information quantity among modalities and the computational
burden of processing four modalities are two challenging issues in fusing
them. To handle these issues, we propose a novel approach called
QuadFusion, which incorporates an efficient Multiscale Fusion Mamba with four
different scanning scales to achieve sufficient interactions of the four
modalities while overcoming the exponential computational burden, for general
multimodal visual tracking. Extensive experiments on the QuadTrack600 dataset
and three bi-modal tracking datasets, including LasHeR, VisEvent, and TNL2K,
validate the effectiveness of our QuadFusion.
|
2503.11219 | Yuning Wu | Yansheng Li, Yuning Wu, Gong Cheng, Chao Tao, Bo Dang, Yu Wang, Jiahao
Zhang, Chuge Zhang, Yiting Liu, Xu Tang, Jiayi Ma and Yongjun Zhang | MEET: A Million-Scale Dataset for Fine-Grained Geospatial Scene
Classification with Zoom-Free Remote Sensing Imagery | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate fine-grained geospatial scene classification using remote sensing
imagery is essential for a wide range of applications. However, existing
approaches often rely on manually zooming remote sensing images at different
scales to create typical scene samples. This approach fails to adequately
support the fixed-resolution image interpretation requirements in real-world
scenarios. To address this limitation, we introduce the Million-scale
finE-grained geospatial scEne classification dataseT (MEET), which contains
over 1.03 million zoom-free remote sensing scene samples, manually annotated
into 80 fine-grained categories. In MEET, each scene sample follows a
scene-in-scene layout, where the central scene serves as the reference, and
auxiliary scenes provide crucial spatial context for fine-grained
classification. Moreover, to tackle the emerging challenge of scene-in-scene
classification, we present the Context-Aware Transformer (CAT), a model
specifically designed for this task. CAT adaptively fuses spatial context to
accurately classify the scene samples by learning attentional features that
capture the relationships between the center and auxiliary scenes. Based on
MEET, we establish a comprehensive benchmark for fine-grained geospatial scene
classification, evaluating CAT against 11 competitive baselines. The results
demonstrate that CAT significantly outperforms these baselines, achieving a
1.88% higher balanced accuracy (BA) with the Swin-Large backbone, and a notable
7.87% improvement with the Swin-Huge backbone. Further experiments validate the
effectiveness of each module in CAT and show the practical applicability of CAT
in urban functional zone mapping. The source code and dataset will be
publicly available at https://jerrywyn.github.io/project/MEET.html.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:10:45 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Yansheng",
""
],
[
"Wu",
"Yuning",
""
],
[
"Cheng",
"Gong",
""
],
[
"Tao",
"Chao",
""
],
[
"Dang",
"Bo",
""
],
[
"Wang",
"Yu",
""
],
[
"Zhang",
"Jiahao",
""
],
[
"Zhang",
"Chuge",
""
],
[
"Liu",
"Yiting",
""
],
[
"Tang",
"Xu",
""
],
[
"Ma",
"Jiayi",
""
],
[
"Zhang",
"Yongjun",
""
]
] | TITLE: MEET: A Million-Scale Dataset for Fine-Grained Geospatial Scene
Classification with Zoom-Free Remote Sensing Imagery
ABSTRACT: Accurate fine-grained geospatial scene classification using remote sensing
imagery is essential for a wide range of applications. However, existing
approaches often rely on manually zooming remote sensing images at different
scales to create typical scene samples. This approach fails to adequately
support the fixed-resolution image interpretation requirements in real-world
scenarios. To address this limitation, we introduce the Million-scale
finE-grained geospatial scEne classification dataseT (MEET), which contains
over 1.03 million zoom-free remote sensing scene samples, manually annotated
into 80 fine-grained categories. In MEET, each scene sample follows a
scene-in-scene layout, where the central scene serves as the reference, and
auxiliary scenes provide crucial spatial context for fine-grained
classification. Moreover, to tackle the emerging challenge of scene-in-scene
classification, we present the Context-Aware Transformer (CAT), a model
specifically designed for this task. CAT adaptively fuses spatial context to
accurately classify the scene samples by learning attentional features that
capture the relationships between the center and auxiliary scenes. Based on
MEET, we establish a comprehensive benchmark for fine-grained geospatial scene
classification, evaluating CAT against 11 competitive baselines. The results
demonstrate that CAT significantly outperforms these baselines, achieving a
1.88% higher balanced accuracy (BA) with the Swin-Large backbone, and a notable
7.87% improvement with the Swin-Huge backbone. Further experiments validate the
effectiveness of each module in CAT and show the practical applicability of CAT
in urban functional zone mapping. The source code and dataset will be
publicly available at https://jerrywyn.github.io/project/MEET.html.
|
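The abstract above reports balanced accuracy (BA), i.e., the mean of per-class recalls, which keeps frequent scene categories from dominating rare fine-grained ones. A small check with toy labels (not MEET data) using scikit-learn:

```python
# Balanced accuracy (BA): the mean of per-class recalls.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, recall_score

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2])   # toy labels, not MEET data
y_pred = np.array([0, 0, 1, 0, 1, 1, 2, 0, 2, 2])

per_class_recall = recall_score(y_true, y_pred, average=None)
print(per_class_recall.mean())                        # manual BA
print(balanced_accuracy_score(y_true, y_pred))        # same value from sklearn
```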
2503.11229 | Ke Wang | Ke Wang, Lei He, Kun Liu, Yan Deng, Wenning Wei, Sheng Zhao | Exploring the Potential of Large Multimodal Models as Effective
Alternatives for Pronunciation Assessment | 7 pages | null | null | null | cs.SD cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Multimodal Models (LMMs) have demonstrated exceptional performance
across a wide range of domains. This paper explores their potential in
pronunciation assessment tasks, with a particular focus on evaluating the
capabilities of the Generative Pre-trained Transformer (GPT) model,
specifically GPT-4o. Our study investigates its ability to process speech and
audio for pronunciation assessment across multiple levels of granularity and
dimensions, with an emphasis on feedback generation and scoring. For our
experiments, we use the publicly available Speechocean762 dataset. The
evaluation focuses on two key aspects: multi-level scoring and the practicality
of the generated feedback. Scoring results are compared against the manual
scores provided in the Speechocean762 dataset, while feedback quality is
assessed using Large Language Models (LLMs). The findings highlight the
effectiveness of integrating LMMs with traditional methods for pronunciation
assessment, offering insights into the model's strengths and identifying areas
for further improvement.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:26:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Ke",
""
],
[
"He",
"Lei",
""
],
[
"Liu",
"Kun",
""
],
[
"Deng",
"Yan",
""
],
[
"Wei",
"Wenning",
""
],
[
"Zhao",
"Sheng",
""
]
] | TITLE: Exploring the Potential of Large Multimodal Models as Effective
Alternatives for Pronunciation Assessment
ABSTRACT: Large Multimodal Models (LMMs) have demonstrated exceptional performance
across a wide range of domains. This paper explores their potential in
pronunciation assessment tasks, with a particular focus on evaluating the
capabilities of the Generative Pre-trained Transformer (GPT) model,
specifically GPT-4o. Our study investigates its ability to process speech and
audio for pronunciation assessment across multiple levels of granularity and
dimensions, with an emphasis on feedback generation and scoring. For our
experiments, we use the publicly available Speechocean762 dataset. The
evaluation focuses on two key aspects: multi-level scoring and the practicality
of the generated feedback. Scoring results are compared against the manual
scores provided in the Speechocean762 dataset, while feedback quality is
assessed using Large Language Models (LLMs). The findings highlight the
effectiveness of integrating LMMs with traditional methods for pronunciation
assessment, offering insights into the model's strengths and identifying areas
for further improvement.
|
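The abstract above compares model-generated scores against the manual scores shipped with Speechocean762. One common way to quantify such agreement is a correlation coefficient; the sketch below uses Pearson correlation on toy scores and is only an illustration, since the paper's exact evaluation protocol is not spelled out here.

```python
# Sketch of comparing model-predicted pronunciation scores with manual scores.
from scipy.stats import pearsonr

manual_scores = [8.0, 6.5, 9.0, 4.0, 7.5, 5.0]   # toy sentence-level manual scores
model_scores  = [7.5, 6.0, 8.5, 5.0, 7.0, 5.5]   # toy GPT-4o outputs

r, p_value = pearsonr(manual_scores, model_scores)
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```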
2503.11231 | Tiantian Li | Tiantian Li, Qunbing Xia, Yue Li, Ruixiao Guo, Gaobo Yang | Deep Lossless Image Compression via Masked Sampling and Coarse-to-Fine
Auto-Regression | 8 pages | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning-based lossless image compression employs pixel-based or
subimage-based auto-regression for probability estimation, which achieves
desirable performances. However, the existing works only consider context
dependencies in one direction, namely, those symbols that appear before the
current symbol in raster order. We believe that the dependencies between the
current and future symbols should be further considered. In this work, we
propose a deep lossless image compression method via masked sampling and
coarse-to-fine auto-regression. It combines lossy reconstruction and
progressive residual compression, which fuses contexts from various directions
and is more consistent with human perception. Specifically,
the residuals are decomposed via $T$ iterative masked sampling, and each
sampling consists of three steps: 1) probability estimation, 2) mask
computation, and 3) arithmetic coding. The iterative process progressively
refines our prediction and gradually presents a real image. Extensive
experimental results show that compared with the existing traditional and
learned lossless compression, our method achieves comparable compression
performance on extensive datasets with competitive coding speed and more
flexibility.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:29:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Tiantian",
""
],
[
"Xia",
"Qunbing",
""
],
[
"Li",
"Yue",
""
],
[
"Guo",
"Ruixiao",
""
],
[
"Yang",
"Gaobo",
""
]
] | TITLE: Deep Lossless Image Compression via Masked Sampling and Coarse-to-Fine
Auto-Regression
ABSTRACT: Learning-based lossless image compression employs pixel-based or
subimage-based auto-regression for probability estimation, which achieves
desirable performances. However, the existing works only consider context
dependencies in one direction, namely, those symbols that appear before the
current symbol in raster order. We believe that the dependencies between the
current and future symbols should be further considered. In this work, we
propose a deep lossless image compression method via masked sampling and
coarse-to-fine auto-regression. It combines lossy reconstruction and
progressive residual compression, which fuses contexts from various directions
and is more consistent with human perception. Specifically,
the residuals are decomposed via $T$ iterative masked sampling, and each
sampling consists of three steps: 1) probability estimation, 2) mask
computation, and 3) arithmetic coding. The iterative process progressively
refines our prediction and gradually presents a real image. Extensive
experimental results show that compared with the existing traditional and
learned lossless compression, our method achieves comparable compression
performance on extensive datasets with competitive coding speed and more
flexibility.
|
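The abstract above describes $T$ rounds of masked sampling, each consisting of probability estimation, mask computation, and arithmetic coding. A schematic of that loop is sketched below; the probability model, the mask rule, and the coder are toy stand-ins, not the paper's modules.

```python
# Schematic of the T-round masked coding loop described above (toy stand-ins only).
import numpy as np

def toy_model(residuals, coded, t):
    """Placeholder probability model: per-symbol scores conditioned on what is coded."""
    return np.full(residuals.shape, 0.5)

class ToyCoder:
    """Stand-in for an arithmetic coder: just records the symbols it is given."""
    def encode(self, symbols, probs):
        return list(symbols)

def choose_mask(probs, coded, t, T):
    """Placeholder rule: code an equal share of the remaining symbols each round."""
    remaining = np.flatnonzero(~coded)
    take = remaining[: max(1, len(remaining) // (T - t))]
    mask = np.zeros(coded.shape, dtype=bool)
    mask.flat[take] = True
    return mask

def encode_residuals(residuals, model, coder, T=4):
    coded = np.zeros(residuals.shape, dtype=bool)        # symbols already coded
    bitstream = []
    for t in range(T):
        probs = model(residuals, coded, t)               # 1) probability estimation
        mask = choose_mask(probs, coded, t, T)           # 2) mask computation
        bitstream.append(coder.encode(residuals[mask], probs[mask]))  # 3) arithmetic coding
        coded |= mask
    return bitstream

res = np.arange(16).reshape(4, 4)
chunks = encode_residuals(res, toy_model, ToyCoder())
print([len(c) for c in chunks])   # symbols coded per round
```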
2503.11232 | Ahmed Frikha | Ahmed Frikha, Muhammad Reza Ar Razi, Krishna Kanth Nakka, Ricardo
Mendes, Xue Jiang and Xuebing Zhou | PrivacyScalpel: Enhancing LLM Privacy via Interpretable Feature
Intervention with Sparse Autoencoders | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated remarkable capabilities in
natural language processing but also pose significant privacy risks by
memorizing and leaking Personally Identifiable Information (PII). Existing
mitigation strategies, such as differential privacy and neuron-level
interventions, often degrade model utility or fail to effectively prevent
leakage. To address this challenge, we introduce PrivacyScalpel, a novel
privacy-preserving framework that leverages LLM interpretability techniques to
identify and mitigate PII leakage while maintaining performance. PrivacyScalpel
comprises three key steps: (1) Feature Probing, which identifies layers in the
model that encode PII-rich representations, (2) Sparse Autoencoding, where a
k-Sparse Autoencoder (k-SAE) disentangles and isolates privacy-sensitive
features,
and (3) Feature-Level Interventions, which employ targeted ablation and
vector steering to suppress PII leakage.
Our empirical evaluation on Gemma2-2b and Llama2-7b, fine-tuned on the Enron
dataset, shows that PrivacyScalpel significantly reduces email leakage from
5.15\% to as low as 0.0\%, while maintaining over 99.4\% of the original
model's utility. Notably, our method outperforms neuron-level interventions in
privacy-utility trade-offs, demonstrating that acting on sparse, monosemantic
features is more effective than manipulating polysemantic neurons. Beyond
improving LLM privacy, our approach offers insights into the mechanisms
underlying PII memorization, contributing to the broader field of model
interpretability and secure AI deployment.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:31:01 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Frikha",
"Ahmed",
""
],
[
"Razi",
"Muhammad Reza Ar",
""
],
[
"Nakka",
"Krishna Kanth",
""
],
[
"Mendes",
"Ricardo",
""
],
[
"Jiang",
"Xue",
""
],
[
"Zhou",
"Xuebing",
""
]
] | TITLE: PrivacyScalpel: Enhancing LLM Privacy via Interpretable Feature
Intervention with Sparse Autoencoders
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable capabilities in
natural language processing but also pose significant privacy risks by
memorizing and leaking Personally Identifiable Information (PII). Existing
mitigation strategies, such as differential privacy and neuron-level
interventions, often degrade model utility or fail to effectively prevent
leakage. To address this challenge, we introduce PrivacyScalpel, a novel
privacy-preserving framework that leverages LLM interpretability techniques to
identify and mitigate PII leakage while maintaining performance. PrivacyScalpel
comprises three key steps: (1) Feature Probing, which identifies layers in the
model that encode PII-rich representations, (2) Sparse Autoencoding, where a
k-Sparse Autoencoder (k-SAE) disentangles and isolates privacy-sensitive
features,
and (3) Feature-Level Interventions, which employ targeted ablation and
vector steering to suppress PII leakage.
Our empirical evaluation on Gemma2-2b and Llama2-7b, fine-tuned on the Enron
dataset, shows that PrivacyScalpel significantly reduces email leakage from
5.15\% to as low as 0.0\%, while maintaining over 99.4\% of the original
model's utility. Notably, our method outperforms neuron-level interventions in
privacy-utility trade-offs, demonstrating that acting on sparse, monosemantic
features is more effective than manipulating polysemantic neurons. Beyond
improving LLM privacy, our approach offers insights into the mechanisms
underlying PII memorization, contributing to the broader field of model
interpretability and secure AI deployment.
|
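The abstract above relies on a k-Sparse Autoencoder to disentangle hidden-state features and on ablation of privacy-sensitive ones. A minimal top-k sparse autoencoder with a feature-ablation hook is sketched below; the dimensions and the choice of which latent indices count as PII features are illustrative assumptions.

```python
# Minimal k-sparse autoencoder sketch with a feature-ablation hook.
import torch
import torch.nn as nn

class KSparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_latent=4096, k=32):
        super().__init__()
        self.k = k
        self.enc = nn.Linear(d_model, d_latent)
        self.dec = nn.Linear(d_latent, d_model)

    def encode(self, h):
        z = torch.relu(self.enc(h))
        # keep only the k largest activations per token (top-k sparsity)
        topk = torch.topk(z, self.k, dim=-1)
        return torch.zeros_like(z).scatter_(-1, topk.indices, topk.values)

    def forward(self, h, ablate_idx=None):
        z = self.encode(h)
        if ablate_idx is not None:      # feature-level intervention: zero selected features
            z[..., ablate_idx] = 0.0
        return self.dec(z)

sae = KSparseAutoencoder()
hidden = torch.randn(4, 768)            # stand-in for LLM hidden states
recon = sae(hidden, ablate_idx=torch.tensor([10, 97]))   # ablate two hypothetical PII features
print(recon.shape)
```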
2503.11233 | Yi Xu | Yi Xu, Zhiyuan Lu, Xiaochen Li, Jinxin Hu, Hong Wen, Zulong Chen, Yu
Zhang and Jing Zhang | Addressing Information Loss and Interaction Collapse: A Dual Enhanced
Attention Framework for Feature Interaction | null | null | null | null | cs.IR cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | The Transformer has proven to be a significant approach in feature
interaction for CTR prediction, achieving considerable success in previous
works. However, it also presents potential challenges in handling feature
interactions. Firstly, Transformers may encounter information loss when
capturing feature interactions. By relying on inner products to represent
pairwise relationships, they compress raw interaction information, which can
result in a degradation of fidelity. Secondly, due to the long-tail feature
distribution, feature fields with low information-abundance embeddings
constrain the information abundance of other fields, leading to collapsed
embedding matrices. To tackle these issues, we propose a Dual Attention
Framework for Enhanced Feature Interaction, known as Dual Enhanced Attention.
This framework integrates two attention mechanisms: the Combo-ID attention
mechanism and the collapse-avoiding attention mechanism. The Combo-ID attention
mechanism directly retains feature interaction pairs to mitigate information
loss, while the collapse-avoiding attention mechanism adaptively filters out
low information-abundance interaction pairs to prevent interaction collapse.
Extensive experiments conducted on industrial datasets have shown the
effectiveness of Dual Enhanced Attention.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:31:03 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xu",
"Yi",
""
],
[
"Lu",
"Zhiyuan",
""
],
[
"Li",
"Xiaochen",
""
],
[
"Hu",
"Jinxin",
""
],
[
"Wen",
"Hong",
""
],
[
"Chen",
"Zulong",
""
],
[
"Zhang",
"Yu",
""
],
[
"Zhang",
"Jing",
""
]
] | TITLE: Addressing Information Loss and Interaction Collapse: A Dual Enhanced
Attention Framework for Feature Interaction
ABSTRACT: The Transformer has proven to be a significant approach in feature
interaction for CTR prediction, achieving considerable success in previous
works. However, it also presents potential challenges in handling feature
interactions. Firstly, Transformers may encounter information loss when
capturing feature interactions. By relying on inner products to represent
pairwise relationships, they compress raw interaction information, which can
result in a degradation of fidelity. Secondly, due to the long-tail feature
distribution, feature fields with low information-abundance embeddings
constrain the information abundance of other fields, leading to collapsed
embedding matrices. To tackle these issues, we propose a Dual Attention
Framework for Enhanced Feature Interaction, known as Dual Enhanced Attention.
This framework integrates two attention mechanisms: the Combo-ID attention
mechanism and the collapse-avoiding attention mechanism. The Combo-ID attention
mechanism directly retains feature interaction pairs to mitigate information
loss, while the collapse-avoiding attention mechanism adaptively filters out
low information-abundance interaction pairs to prevent interaction collapse.
Extensive experiments conducted on industrial datasets have shown the
effectiveness of Dual Enhanced Attention.
|
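The abstract above contrasts inner-product interactions, which compress a feature pair to a single scalar, with retaining the interaction pair itself, and with filtering out pairs of low information abundance. The toy snippet below illustrates that contrast using element-wise interaction vectors and an embedding-norm filter; it is an interpretation for illustration, not the paper's Combo-ID or collapse-avoiding attention.

```python
# Toy contrast: (a) scalar inner-product interactions vs (b) retained element-wise
# interaction vectors, with a crude norm-based filter for low-abundance pairs.
import torch

def pairwise_interactions(field_emb: torch.Tensor, min_norm: float = 0.5):
    """field_emb: (num_fields, d) embeddings of one sample's feature fields."""
    n, d = field_emb.shape
    scalars, vectors, kept_pairs = [], [], []
    norms = field_emb.norm(dim=-1)                          # proxy for information abundance
    for i in range(n):
        for j in range(i + 1, n):
            if norms[i] < min_norm or norms[j] < min_norm:
                continue                                    # drop low-abundance pairs
            scalars.append(field_emb[i] @ field_emb[j])     # compressed to one number
            vectors.append(field_emb[i] * field_emb[j])     # full interaction retained
            kept_pairs.append((i, j))
    return torch.stack(scalars), torch.stack(vectors), kept_pairs

emb = torch.randn(5, 8)
s, v, pairs = pairwise_interactions(emb)
print(s.shape, v.shape, pairs)    # (#pairs,), (#pairs, 8), kept field pairs
```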
2503.11241 | Xilong Lu | Jun Yu, Xilong Lu | Compound Expression Recognition via Large Vision-Language Models | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compound Expression Recognition (CER) is crucial for understanding human
emotions and improving human-computer interaction. However, CER faces
challenges due to the complexity of facial expressions and the difficulty of
capturing subtle emotional cues. To address these issues, we propose a novel
approach leveraging Large Vision-Language Models (LVLMs). Our method employs a
two-stage fine-tuning process: first, pre-trained LVLMs are fine-tuned on basic
facial expressions to establish foundational patterns; second, the model is
further optimized on a compound-expression dataset to refine visual-language
feature interactions. Our approach achieves advanced accuracy on the RAF-DB
dataset and demonstrates strong zero-shot generalization on the C-EXPR-DB
dataset, showcasing its potential for real-world applications in emotion
analysis and human-computer interaction.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:46:05 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yu",
"Jun",
""
],
[
"Lu",
"Xilong",
""
]
] | TITLE: Compound Expression Recognition via Large Vision-Language Models
ABSTRACT: Compound Expression Recognition (CER) is crucial for understanding human
emotions and improving human-computer interaction. However, CER faces
challenges due to the complexity of facial expressions and the difficulty of
capturing subtle emotional cues. To address these issues, we propose a novel
approach leveraging Large Vision-Language Models (LVLMs). Our method employs a
two-stage fine-tuning process: first, pre-trained LVLMs are fine-tuned on basic
facial expressions to establish foundational patterns; second, the model is
further optimized on a compound-expression dataset to refine visual-language
feature interactions. Our approach achieves advanced accuracy on the RAF-DB
dataset and demonstrates strong zero-shot generalization on the C-EXPR-DB
dataset, showcasing its potential for real-world applications in emotion
analysis and human-computer interaction.
|
2503.11244 | Khoi Nguyen N.M. | Khoi N.M. Nguyen, Hoang Duy Nguyen Do, Huyen Thao Le, Thanh Tuan Dao | LLMPerf: GPU Performance Modeling meets Large Language Models | null | null | null | null | cs.PF cs.DC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Performance modeling, a pivotal domain in program cost analysis, currently
relies on manually crafted models constrained by various program and hardware
limitations, especially in the intricate landscape of GPGPU. Meanwhile, Large
Language Models (LLMs) have demonstrated their effectiveness in addressing
diverse programming challenges. Our work establishes a connection between LLMs
and performance modeling, employing the LLM as a performance estimator. Through
experimental exploration with carefully designed large-scale OpenCL datasets,
we highlight the potential capability as well as the main difficulties of using
LLMs in handling performance modeling tasks for OpenCL device source programs.
As the first study for this line of work, our LLM-based performance model
achieves a mean absolute percentage error of $24.25\%$ for a large-scale
generated validation set. On a set of publicly available OpenCL programs, our
model achieves a mean absolute percentage error of $46.1\%$.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:52:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Nguyen",
"Khoi N. M.",
""
],
[
"Do",
"Hoang Duy Nguyen",
""
],
[
"Le",
"Huyen Thao",
""
],
[
"Dao",
"Thanh Tuan",
""
]
] | TITLE: LLMPerf: GPU Performance Modeling meets Large Language Models
ABSTRACT: Performance modeling, a pivotal domain in program cost analysis, currently
relies on manually crafted models constrained by various program and hardware
limitations, especially in the intricate landscape of GPGPU. Meanwhile, Large
Language Models (LLMs) have demonstrated their effectiveness in addressing
diverse programming challenges. Our work establishes a connection between LLMs
and performance modeling, employing the LLM as a performance estimator. Through
experimental exploration with carefully designed large-scale OpenCL datasets,
we highlight the potential capability as well as the main difficulties of using
LLMs in handling performance modeling tasks for OpenCL device source programs.
As the first study for this line of work, our LLM-based performance model
achieves a mean absolute percentage error of $24.25\%$ for a large-scale
generated validation set. On a set of publicly available OpenCL programs, our
model achieves a mean absolute percentage error of $46.1\%$.
|
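The abstract above reports mean absolute percentage error (MAPE), i.e., the mean of |y_true - y_pred| / |y_true| expressed as a percentage. A small computation on toy kernel runtimes (not the paper's data):

```python
# MAPE, the metric reported above.
import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100.0)

measured_ms  = [1.20, 0.45, 3.80, 10.5]   # toy measured runtimes
predicted_ms = [1.00, 0.60, 3.10, 12.0]   # toy LLM predictions
print(f"MAPE = {mape(measured_ms, predicted_ms):.2f}%")
```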
2503.11245 | Ziwei Shi | Ziwei Shi, Xiaoran Zhang, Yan Xia, Yu Zang, Siqi Shen, Cheng Wang | L2RSI: Cross-view LiDAR-based Place Recognition for Large-scale Urban
Scenes via Remote Sensing Imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the challenge of LiDAR-based place recognition, which traditionally
depends on costly and time-consuming prior 3D maps. To overcome this, we first
construct XA-L&RSI dataset, which encompasses approximately $110,000$ remote
sensing submaps and $13,000$ LiDAR point cloud submaps captured in urban
scenes, and propose a novel method, L2RSI, for cross-view LiDAR place
recognition using high-resolution Remote Sensing Imagery. This approach enables
large-scale localization capabilities at a reduced cost by leveraging readily
available overhead images as map proxies. L2RSI addresses the dual challenges
of cross-view and cross-modal place recognition by learning feature alignment
between point cloud submaps and remote sensing submaps in the semantic domain.
Additionally, we introduce a novel probability propagation method based on a
dynamic Gaussian mixture model to refine position predictions, effectively
leveraging temporal and spatial information. This approach enables large-scale
retrieval and cross-scene generalization without fine-tuning. Extensive
experiments on XA-L&RSI demonstrate that, within a $100km^2$ retrieval range,
L2RSI accurately localizes $95.08\%$ of point cloud submaps within a $30m$
radius for the top-$1$ retrieved location. We provide a video to more vividly
display the place recognition results of L2RSI at
https://shizw695.github.io/L2RSI/.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:52:54 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shi",
"Ziwei",
""
],
[
"Zhang",
"Xiaoran",
""
],
[
"Xia",
"Yan",
""
],
[
"Zang",
"Yu",
""
],
[
"Shen",
"Siqi",
""
],
[
"Wang",
"Cheng",
""
]
] | TITLE: L2RSI: Cross-view LiDAR-based Place Recognition for Large-scale Urban
Scenes via Remote Sensing Imagery
ABSTRACT: We tackle the challenge of LiDAR-based place recognition, which traditionally
depends on costly and time-consuming prior 3D maps. To overcome this, we first
construct XA-L&RSI dataset, which encompasses approximately $110,000$ remote
sensing submaps and $13,000$ LiDAR point cloud submaps captured in urban
scenes, and propose a novel method, L2RSI, for cross-view LiDAR place
recognition using high-resolution Remote Sensing Imagery. This approach enables
large-scale localization capabilities at a reduced cost by leveraging readily
available overhead images as map proxies. L2RSI addresses the dual challenges
of cross-view and cross-modal place recognition by learning feature alignment
between point cloud submaps and remote sensing submaps in the semantic domain.
Additionally, we introduce a novel probability propagation method based on a
dynamic Gaussian mixture model to refine position predictions, effectively
leveraging temporal and spatial information. This approach enables large-scale
retrieval and cross-scene generalization without fine-tuning. Extensive
experiments on XA-L&RSI demonstrate that, within a $100km^2$ retrieval range,
L2RSI accurately localizes $95.08\%$ of point cloud submaps within a $30m$
radius for the top-$1$ retrieved location. We provide a video to more vividly
display the place recognition results of L2RSI at
https://shizw695.github.io/L2RSI/.
|
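The abstract above refines position predictions by propagating probabilities with a dynamic Gaussian mixture model over temporal and spatial information. The sketch below shows one simplified reading of that idea: diffuse the mixture to account for motion, then reweight candidate positions by retrieval similarity and their likelihood under the current belief. The mixture form and parameters are assumptions, not the paper's exact method.

```python
# Simplified sketch of propagating a Gaussian-mixture belief over 2D positions.
import numpy as np

def propagate(means, covs, weights, candidate_xy, candidate_sim, motion_var=25.0):
    """means: (K,2), covs: (K,2,2), weights: (K,); candidates: (M,2) with similarity (M,)."""
    covs = covs + motion_var * np.eye(2)            # temporal spread from motion
    new_weights = []
    for xy, sim in zip(candidate_xy, candidate_sim):
        lik = 0.0                                    # likelihood under the current mixture
        for mu, cov, w in zip(means, covs, weights):
            diff = xy - mu
            lik += w * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / \
                   (2 * np.pi * np.sqrt(np.linalg.det(cov)))
        new_weights.append(sim * lik)
    new_weights = np.asarray(new_weights)
    new_weights /= new_weights.sum()
    new_covs = np.tile(np.eye(2) * motion_var, (len(candidate_xy), 1, 1))
    return candidate_xy, new_covs, new_weights       # candidates become new components

means = np.array([[0.0, 0.0]]); covs = np.array([np.eye(2) * 10.0]); weights = np.array([1.0])
cand = np.array([[3.0, 4.0], [50.0, 60.0]]); sim = np.array([0.9, 0.4])
_, _, w = propagate(means, covs, weights, cand, sim)
print(w)   # belief concentrates on the nearby, similar candidate
```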
2503.11247 | Andong Lu | Andong Lu, Yuanzhi Guo, Wanyu Wang, Chenglong Li, Jin Tang and Bin Luo | Breaking Shallow Limits: Task-Driven Pixel Fusion for Gap-free RGBT
Tracking | In peer review | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Current RGBT tracking methods often overlook the impact of fusion location on
mitigating the modality gap, which is a key factor in effective tracking. Our
analysis reveals that shallower fusion yields a smaller distribution gap.
However, the limited discriminative power of shallow networks makes it hard to
distinguish task-relevant information from noise, limiting the potential of
pixel-level fusion. To break shallow limits, we propose a novel
\textbf{T}ask-driven \textbf{P}ixel-level \textbf{F}usion network, named
\textbf{TPF}, which unveils the power of pixel-level fusion in RGBT tracking
through a progressive learning framework. In particular, we design a
lightweight Pixel-level Fusion Adapter (PFA) that exploits Mamba's linear
complexity to ensure real-time, low-latency RGBT tracking. To enhance the
fusion capabilities of the PFA, our task-driven progressive learning framework
first utilizes adaptive multi-expert distillation to inherit fusion knowledge
from state-of-the-art image fusion models, establishing robust initialization,
and then employs a decoupled representation learning scheme to achieve
task-relevant information fusion. Moreover, to overcome appearance variations
between the initial template and search frames, we present a nearest-neighbor
dynamic template updating scheme, which selects the most reliable frame closest
to the current search frame as the dynamic template. Extensive experiments
demonstrate that TPF significantly outperforms most existing advanced
trackers on four public RGBT tracking datasets. The code will be released upon
acceptance.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 09:56:13 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Lu",
"Andong",
""
],
[
"Guo",
"Yuanzhi",
""
],
[
"Wang",
"Wanyu",
""
],
[
"Li",
"Chenglong",
""
],
[
"Tang",
"Jin",
""
],
[
"Luo",
"Bin",
""
]
] | TITLE: Breaking Shallow Limits: Task-Driven Pixel Fusion for Gap-free RGBT
Tracking
ABSTRACT: Current RGBT tracking methods often overlook the impact of fusion location on
mitigating the modality gap, which is a key factor in effective tracking. Our
analysis reveals that shallower fusion yields a smaller distribution gap.
However, the limited discriminative power of shallow networks makes it hard to
distinguish task-relevant information from noise, limiting the potential of
pixel-level fusion. To break shallow limits, we propose a novel
\textbf{T}ask-driven \textbf{P}ixel-level \textbf{F}usion network, named
\textbf{TPF}, which unveils the power of pixel-level fusion in RGBT tracking
through a progressive learning framework. In particular, we design a
lightweight Pixel-level Fusion Adapter (PFA) that exploits Mamba's linear
complexity to ensure real-time, low-latency RGBT tracking. To enhance the
fusion capabilities of the PFA, our task-driven progressive learning framework
first utilizes adaptive multi-expert distillation to inherit fusion knowledge
from state-of-the-art image fusion models, establishing robust initialization,
and then employs a decoupled representation learning scheme to achieve
task-relevant information fusion. Moreover, to overcome appearance variations
between the initial template and search frames, we present a nearest-neighbor
dynamic template updating scheme, which selects the most reliable frame closest
to the current search frame as the dynamic template. Extensive experiments
demonstrate that TPF significantly outperforms most existing advanced
trackers on four public RGBT tracking datasets. The code will be released upon
acceptance.
|
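The abstract above selects, as the dynamic template, the most reliable past frame closest to the current search frame. A small sketch of such a nearest-neighbor selection is given below; the cosine-similarity scoring and the reliability threshold are assumptions for illustration.

```python
# Sketch of nearest-neighbor dynamic template selection: among reliable past
# frames, pick the one whose features are closest to the current search frame.
import torch
import torch.nn.functional as F

def select_dynamic_template(past_feats, past_scores, search_feat, min_score=0.6):
    """past_feats: (N, d), past_scores: (N,) tracker confidences, search_feat: (d,)."""
    reliable = past_scores >= min_score
    if not reliable.any():
        return None                                   # fall back to the initial template
    feats = past_feats[reliable]
    sims = F.cosine_similarity(feats, search_feat.unsqueeze(0), dim=-1)
    idx_in_reliable = sims.argmax()
    return torch.nonzero(reliable).squeeze(-1)[idx_in_reliable].item()

past = torch.randn(10, 256); scores = torch.rand(10); query = torch.randn(256)
print(select_dynamic_template(past, scores, query))   # index of the chosen frame (or None)
```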
2503.11251 | Haoyang Huang | Haoyang Huang, Guoqing Ma, Nan Duan, Xing Chen, Changyi Wan, Ranchen
Ming, Tianyu Wang, Bo Wang, Zhiying Lu, Aojie Li, Xianfang Zeng, Xinhao
Zhang, Gang Yu, Yuhe Yin, Qiling Wu, Wen Sun, Kang An, Xin Han, Deshan Sun,
Wei Ji, Bizhu Huang, Brian Li, Chenfei Wu, Guanzhe Huang, Huixin Xiong,
Jiaxin He, Jianchang Wu, Jianlong Yuan, Jie Wu, Jiashuai Liu, Junjing Guo,
Kaijun Tan, Liangyu Chen, Qiaohui Chen, Ran Sun, Shanshan Yuan, Shengming
Yin, Sitong Liu, Wei Chen, Yaqi Dai, Yuchu Luo, Zheng Ge, Zhisheng Guan,
Xiaoniu Song, Yu Zhou, Binxing Jiao, Jiansheng Chen, Jing Li, Shuchang Zhou,
Xiangyu Zhang, Yi Xiu, Yibo Zhu, Heung-Yeung Shum, Daxin Jiang | Step-Video-TI2V Technical Report: A State-of-the-Art Text-Driven
Image-to-Video Generation Model | 7 pages | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Step-Video-TI2V, a state-of-the-art text-driven image-to-video
generation model with 30B parameters, capable of generating videos up to 102
frames based on both text and image inputs. We build Step-Video-TI2V-Eval as a
new benchmark for the text-driven image-to-video task and compare
Step-Video-TI2V with open-source and commercial TI2V engines using this
dataset. Experimental results demonstrate the state-of-the-art performance of
Step-Video-TI2V in the image-to-video generation task. Both Step-Video-TI2V and
Step-Video-TI2V-Eval are available at
https://github.com/stepfun-ai/Step-Video-TI2V.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:01:55 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Huang",
"Haoyang",
""
],
[
"Ma",
"Guoqing",
""
],
[
"Duan",
"Nan",
""
],
[
"Chen",
"Xing",
""
],
[
"Wan",
"Changyi",
""
],
[
"Ming",
"Ranchen",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Wang",
"Bo",
""
],
[
"Lu",
"Zhiying",
""
],
[
"Li",
"Aojie",
""
],
[
"Zeng",
"Xianfang",
""
],
[
"Zhang",
"Xinhao",
""
],
[
"Yu",
"Gang",
""
],
[
"Yin",
"Yuhe",
""
],
[
"Wu",
"Qiling",
""
],
[
"Sun",
"Wen",
""
],
[
"An",
"Kang",
""
],
[
"Han",
"Xin",
""
],
[
"Sun",
"Deshan",
""
],
[
"Ji",
"Wei",
""
],
[
"Huang",
"Bizhu",
""
],
[
"Li",
"Brian",
""
],
[
"Wu",
"Chenfei",
""
],
[
"Huang",
"Guanzhe",
""
],
[
"Xiong",
"Huixin",
""
],
[
"He",
"Jiaxin",
""
],
[
"Wu",
"Jianchang",
""
],
[
"Yuan",
"Jianlong",
""
],
[
"Wu",
"Jie",
""
],
[
"Liu",
"Jiashuai",
""
],
[
"Guo",
"Junjing",
""
],
[
"Tan",
"Kaijun",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Chen",
"Qiaohui",
""
],
[
"Sun",
"Ran",
""
],
[
"Yuan",
"Shanshan",
""
],
[
"Yin",
"Shengming",
""
],
[
"Liu",
"Sitong",
""
],
[
"Chen",
"Wei",
""
],
[
"Dai",
"Yaqi",
""
],
[
"Luo",
"Yuchu",
""
],
[
"Ge",
"Zheng",
""
],
[
"Guan",
"Zhisheng",
""
],
[
"Song",
"Xiaoniu",
""
],
[
"Zhou",
"Yu",
""
],
[
"Jiao",
"Binxing",
""
],
[
"Chen",
"Jiansheng",
""
],
[
"Li",
"Jing",
""
],
[
"Zhou",
"Shuchang",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Xiu",
"Yi",
""
],
[
"Zhu",
"Yibo",
""
],
[
"Shum",
"Heung-Yeung",
""
],
[
"Jiang",
"Daxin",
""
]
] | TITLE: Step-Video-TI2V Technical Report: A State-of-the-Art Text-Driven
Image-to-Video Generation Model
ABSTRACT: We present Step-Video-TI2V, a state-of-the-art text-driven image-to-video
generation model with 30B parameters, capable of generating videos up to 102
frames based on both text and image inputs. We build Step-Video-TI2V-Eval as a
new benchmark for the text-driven image-to-video task and compare
Step-Video-TI2V with open-source and commercial TI2V engines using this
dataset. Experimental results demonstrate the state-of-the-art performance of
Step-Video-TI2V in the image-to-video generation task. Both Step-Video-TI2V and
Step-Video-TI2V-Eval are available at
https://github.com/stepfun-ai/Step-Video-TI2V.
|
2503.11255 | Han Shu | Long Tan Le, Tung-Anh Nguyen, Han Shu, Suranga Seneviratne, Choong
Seon Hong, Nguyen H. Tran | Federated Koopman-Reservoir Learning for Large-Scale Multivariate
Time-Series Anomaly Detection | Accepted at SDM 2025 | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by/4.0/ | The proliferation of edge devices has dramatically increased the generation
of multivariate time-series (MVTS) data, essential for applications from
healthcare to smart cities. Such data streams, however, are vulnerable to
anomalies that signal crucial problems like system failures or security
incidents. Traditional MVTS anomaly detection methods, encompassing statistical
and centralized machine learning approaches, struggle with the heterogeneity,
variability, and privacy concerns of large-scale, distributed environments. In
response, we introduce FedKO, a novel unsupervised Federated Learning framework
that leverages the linear predictive capabilities of Koopman operator theory
along with the dynamic adaptability of Reservoir Computing. This enables
effective spatiotemporal processing and privacy preservation for MVTS data.
FedKO is formulated as a bi-level optimization problem, utilizing a specific
federated algorithm to explore a shared Reservoir-Koopman model across diverse
datasets. Such a model is then deployable on edge devices for efficient
detection of anomalies in local MVTS streams. Experimental results across
various datasets showcase FedKO's superior performance against state-of-the-art
methods in MVTS anomaly detection. Moreover, FedKO reduces up to 8x
communication size and 2x memory usage, making it highly suitable for
large-scale systems.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:06:52 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Le",
"Long Tan",
""
],
[
"Nguyen",
"Tung-Anh",
""
],
[
"Shu",
"Han",
""
],
[
"Seneviratne",
"Suranga",
""
],
[
"Hong",
"Choong Seon",
""
],
[
"Tran",
"Nguyen H.",
""
]
] | TITLE: Federated Koopman-Reservoir Learning for Large-Scale Multivariate
Time-Series Anomaly Detection
ABSTRACT: The proliferation of edge devices has dramatically increased the generation
of multivariate time-series (MVTS) data, essential for applications from
healthcare to smart cities. Such data streams, however, are vulnerable to
anomalies that signal crucial problems like system failures or security
incidents. Traditional MVTS anomaly detection methods, encompassing statistical
and centralized machine learning approaches, struggle with the heterogeneity,
variability, and privacy concerns of large-scale, distributed environments. In
response, we introduce FedKO, a novel unsupervised Federated Learning framework
that leverages the linear predictive capabilities of Koopman operator theory
along with the dynamic adaptability of Reservoir Computing. This enables
effective spatiotemporal processing and privacy preservation for MVTS data.
FedKO is formulated as a bi-level optimization problem, utilizing a specific
federated algorithm to explore a shared Reservoir-Koopman model across diverse
datasets. Such a model is then deployable on edge devices for efficient
detection of anomalies in local MVTS streams. Experimental results across
various datasets showcase FedKO's superior performance against state-of-the-art
methods in MVTS anomaly detection. Moreover, FedKO reduces communication size
by up to 8x and memory usage by 2x, making it highly suitable for
large-scale systems.
|
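The FedKO entry above pairs Reservoir Computing with a Koopman-style linear predictor for MVTS anomaly detection. Purely as an illustration of that reservoir-plus-linear-readout idea, and not of the paper's federated, bi-level formulation, the NumPy sketch below drives a fixed random reservoir with a toy multivariate series and flags anomalies by one-step prediction error; every dimension, constant, and threshold here is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate time series: 3 channels with an injected anomalous burst.
T, d = 500, 3
X = np.sin(np.linspace(0, 20 * np.pi, T))[:, None] + 0.05 * rng.standard_normal((T, d))
X[300:305] += 2.0

# Fixed random reservoir (echo state network style).
n_res = 100
W_in = 0.5 * rng.standard_normal((n_res, d))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def reservoir_states(X):
    """Run the input through the reservoir and collect its states."""
    h = np.zeros(n_res)
    states = []
    for x in X:
        h = np.tanh(W_in @ x + W @ h)
        states.append(h.copy())
    return np.asarray(states)

H = reservoir_states(X)

# Linear (Koopman-like) one-step-ahead readout fitted by ridge regression.
A, B = H[:-1], X[1:]
readout = np.linalg.solve(A.T @ A + 1e-2 * np.eye(n_res), A.T @ B)

# Anomaly score = one-step prediction error in the original signal space.
err = np.linalg.norm(A @ readout - B, axis=1)
threshold = err.mean() + 3 * err.std()
print("flagged time steps:", np.where(err > threshold)[0])
```

In a federated setup like the one the abstract describes, clients would exchange model parameters of this kind rather than their raw MVTS streams.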
2503.11262 | Liying Lu | Liying Lu, Rapha\"el Achddou, Sabine S\"usstrunk | Noise Synthesis for Low-Light Image Denoising with Diffusion Models | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-light photography produces images with low signal-to-noise ratios due to
limited photons. In such conditions, common approximations like the Gaussian
noise model fall short, and many denoising techniques fail to remove noise
effectively. Although deep-learning methods perform well, they require large
datasets of paired images that are impractical to acquire. As a remedy,
synthesizing realistic low-light noise has gained significant attention. In
this paper, we investigate the ability of diffusion models to capture the
complex distribution of low-light noise. We show that a naive application of
conventional diffusion models is inadequate for this task and propose three key
adaptations that enable high-precision noise generation without calibration or
post-processing: a two-branch architecture to better model signal-dependent and
signal-independent noise, the incorporation of positional information to
capture fixed-pattern noise, and a tailored diffusion noise schedule.
Consequently, our model enables the generation of large datasets for training
low-light denoising networks, leading to state-of-the-art performance. Through
comprehensive analysis, including statistical evaluation and noise
decomposition, we provide deeper insights into the characteristics of the
generated data.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:16:54 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Lu",
"Liying",
""
],
[
"Achddou",
"Raphaël",
""
],
[
"Süsstrunk",
"Sabine",
""
]
] | TITLE: Noise Synthesis for Low-Light Image Denoising with Diffusion Models
ABSTRACT: Low-light photography produces images with low signal-to-noise ratios due to
limited photons. In such conditions, common approximations like the Gaussian
noise model fall short, and many denoising techniques fail to remove noise
effectively. Although deep-learning methods perform well, they require large
datasets of paired images that are impractical to acquire. As a remedy,
synthesizing realistic low-light noise has gained significant attention. In
this paper, we investigate the ability of diffusion models to capture the
complex distribution of low-light noise. We show that a naive application of
conventional diffusion models is inadequate for this task and propose three key
adaptations that enable high-precision noise generation without calibration or
post-processing: a two-branch architecture to better model signal-dependent and
signal-independent noise, the incorporation of positional information to
capture fixed-pattern noise, and a tailored diffusion noise schedule.
Consequently, our model enables the generation of large datasets for training
low-light denoising networks, leading to state-of-the-art performance. Through
comprehensive analysis, including statistical evaluation and noise
decomposition, we provide deeper insights into the characteristics of the
generated data.
|
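The entry above learns low-light noise with a two-branch diffusion model that separates signal-dependent and signal-independent components. For reference only, the classical hand-crafted alternative it improves upon is a heteroscedastic (Poisson-Gaussian) model; the sketch below synthesizes such noise with assumed gain and read-noise parameters and is not the paper's learned approach.

```python
import numpy as np

def synthesize_low_light_noise(clean, photon_gain=0.01, read_sigma=0.002, seed=0):
    """Add signal-dependent (shot) and signal-independent (read) noise.

    clean       : float image in [0, 1]
    photon_gain : assumed conversion gain; larger -> stronger shot noise
    read_sigma  : assumed std of the signal-independent Gaussian component
    """
    rng = np.random.default_rng(seed)
    # Shot noise: variance proportional to the signal (Poisson approximation).
    shot = rng.normal(scale=np.sqrt(np.clip(clean, 0, None) * photon_gain))
    # Read noise: constant-variance Gaussian, independent of the signal.
    read = rng.normal(scale=read_sigma, size=clean.shape)
    return np.clip(clean + shot + read, 0.0, 1.0)

clean = np.tile(np.linspace(0, 0.2, 256), (64, 1))   # dim gradient "scene"
noisy = synthesize_low_light_noise(clean)
print("per-column noise std grows with the signal:",
      noisy.std(axis=0)[::64].round(4))
```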
2503.11266 | Jonas Utz | Jonas Utz, Stefan Vocht, Anne Tjorven Buessen, Dennis Possart, Fabian
Wagner, Mareike Thies, Mingxuan Gu, Stefan Uderhardt, Katharina Breininger | CyclePose -- Leveraging Cycle-Consistency for Annotation-Free Nuclei
Segmentation in Fluorescence Microscopy | under review for MICCAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, numerous neural network architectures specifically designed
for the instance segmentation of nuclei in microscopic images have been
released. These models embed nuclei-specific priors to outperform generic
architectures like U-Nets; however, they require large annotated datasets,
which are often not available. Generative models (GANs, diffusion models) have
been used to compensate for this by synthesizing training data. These two-stage
approaches are computationally expensive, as first a generative model and then
a segmentation model has to be trained. We propose CyclePose, a hybrid
framework integrating synthetic data generation and segmentation training.
CyclePose builds on a CycleGAN architecture, which allows unpaired translation
between microscopy images and segmentation masks. We embed a segmentation model
into CycleGAN and leverage a cycle consistency loss for self-supervision.
Without annotated data, CyclePose outperforms other weakly or unsupervised
methods on two public datasets. Code is available at
https://github.com/jonasutz/CyclePose
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:22:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Utz",
"Jonas",
""
],
[
"Vocht",
"Stefan",
""
],
[
"Buessen",
"Anne Tjorven",
""
],
[
"Possart",
"Dennis",
""
],
[
"Wagner",
"Fabian",
""
],
[
"Thies",
"Mareike",
""
],
[
"Gu",
"Mingxuan",
""
],
[
"Uderhardt",
"Stefan",
""
],
[
"Breininger",
"Katharina",
""
]
] | TITLE: CyclePose -- Leveraging Cycle-Consistency for Annotation-Free Nuclei
Segmentation in Fluorescence Microscopy
ABSTRACT: In recent years, numerous neural network architectures specifically designed
for the instance segmentation of nuclei in microscopic images have been
released. These models embed nuclei-specific priors to outperform generic
architectures like U-Nets; however, they require large annotated datasets,
which are often not available. Generative models (GANs, diffusion models) have
been used to compensate for this by synthesizing training data. These two-stage
approaches are computationally expensive, as first a generative model and then
a segmentation model has to be trained. We propose CyclePose, a hybrid
framework integrating synthetic data generation and segmentation training.
CyclePose builds on a CycleGAN architecture, which allows unpaired translation
between microscopy images and segmentation masks. We embed a segmentation model
into CycleGAN and leverage a cycle consistency loss for self-supervision.
Without annotated data, CyclePose outperforms other weakly or unsupervised
methods on two public datasets. Code is available at
https://github.com/jonasutz/CyclePose
|
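The CyclePose entry above relies on a cycle-consistency loss between unpaired images and masks as self-supervision. The minimal PyTorch sketch below isolates just that loss with two tiny stand-in convolutional translators; the architecture, data, and hyperparameters are illustrative assumptions, not the CyclePose design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the two translators: images -> masks and masks -> images.
G_img2mask = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(8, 1, 3, padding=1))
G_mask2img = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(8, 1, 3, padding=1))

opt = torch.optim.Adam(list(G_img2mask.parameters()) +
                       list(G_mask2img.parameters()), lr=1e-3)

images = torch.rand(4, 1, 64, 64)                   # unpaired microscopy crops
masks = (torch.rand(4, 1, 64, 64) > 0.7).float()    # unpaired synthetic masks

for step in range(5):
    # Forward cycle: image -> predicted mask -> reconstructed image.
    rec_img = G_mask2img(G_img2mask(images))
    # Backward cycle: mask -> generated image -> reconstructed mask.
    rec_mask = G_img2mask(G_mask2img(masks))
    # Cycle-consistency acts as self-supervision: no paired labels are needed.
    loss = F.l1_loss(rec_img, images) + F.l1_loss(rec_mask, masks)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: cycle loss = {loss.item():.4f}")
```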
2503.11273 | Nicholas Chancellor | Babak Emami, Wesley Dyk, David Haycraft, Carrie Spear, Lac Nguyen,
Nicholas Chancellor | Financial Fraud Detection with Entropy Computing | 15 pages including references and appendix, 6 figures | null | null | null | cs.LG cs.AI physics.optics quant-ph | http://creativecommons.org/licenses/by/4.0/ | We introduce CVQBoost, a novel classification algorithm that leverages early
hardware implementing Quantum Computing Inc's Entropy Quantum Computing (EQC)
paradigm, Dirac-3 [Nguyen et al., arXiv:2407.04512]. We apply CVQBoost to a
fraud detection test case and benchmark its performance against XGBoost, a
widely utilized ML method. Running on Dirac-3, CVQBoost demonstrates a
significant runtime advantage over XGBoost, which we evaluate on
high-performance hardware comprising up to 48 CPUs and four NVIDIA L4 GPUs
using the RAPIDS AI framework. Our results show that CVQBoost maintains
competitive accuracy (measured by AUC) while significantly reducing training
time, particularly as dataset size and feature complexity increase. To assess
scalability, we extend our study to large synthetic datasets ranging from 1M to
70M samples, demonstrating that CVQBoost on Dirac-3 is well-suited for
large-scale classification tasks. These findings position CVQBoost as a
promising alternative to gradient boosting methods, offering superior
scalability and efficiency for high-dimensional ML applications such as fraud
detection.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:30:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Emami",
"Babak",
""
],
[
"Dyk",
"Wesley",
""
],
[
"Haycraft",
"David",
""
],
[
"Spear",
"Carrie",
""
],
[
"Nguyen",
"Lac",
""
],
[
"Chancellor",
"Nicholas",
""
]
] | TITLE: Financial Fraud Detection with Entropy Computing
ABSTRACT: We introduce CVQBoost, a novel classification algorithm that leverages early
hardware implementing Quantum Computing Inc's Entropy Quantum Computing (EQC)
paradigm, Dirac-3 [Nguyen et al., arXiv:2407.04512]. We apply CVQBoost to a
fraud detection test case and benchmark its performance against XGBoost, a
widely utilized ML method. Running on Dirac-3, CVQBoost demonstrates a
significant runtime advantage over XGBoost, which we evaluate on
high-performance hardware comprising up to 48 CPUs and four NVIDIA L4 GPUs
using the RAPIDS AI framework. Our results show that CVQBoost maintains
competitive accuracy (measured by AUC) while significantly reducing training
time, particularly as dataset size and feature complexity increase. To assess
scalability, we extend our study to large synthetic datasets ranging from 1M to
70M samples, demonstrating that CVQBoost on Dirac-3 is well-suited for
large-scale classification tasks. These findings position CVQBoost as a
promising alternative to gradient boosting methods, offering superior
scalability and efficiency for high-dimensional ML applications such as fraud
detection.
|
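The CVQBoost entry above benchmarks against XGBoost using AUC on fraud-like classification data. The sketch below reproduces only that classical baseline side of the comparison on a synthetic, imbalanced dataset; CVQBoost itself and the Dirac-3 hardware are not modeled, and all dataset and model parameters are assumptions.

```python
# Classical baseline only: an XGBoost classifier scored by AUC on a
# synthetic, heavily imbalanced "fraud-like" dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=50_000, n_features=30, weights=[0.98],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```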
2503.11283 | Wen Xiong | Wen Xiong, Jinduo Liu, Junzhong Ji, Fenglong Ma | Brain Effective Connectivity Estimation via Fourier Spatiotemporal
Attention | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating brain effective connectivity (EC) from functional magnetic
resonance imaging (fMRI) data can aid in comprehending the neural mechanisms
underlying human behavior and cognition, providing a foundation for disease
diagnosis. However, current spatiotemporal attention modules handle temporal
and spatial attention separately, extracting temporal and spatial features
either sequentially or in parallel. These approaches overlook the inherent
spatiotemporal correlations present in real world fMRI data. Additionally, the
presence of noise in fMRI data further limits the performance of existing
methods. In this paper, we propose a novel brain effective connectivity
estimation method based on Fourier spatiotemporal attention (FSTA-EC), which
combines Fourier attention and spatiotemporal attention to simultaneously
capture inter-series (spatial) dynamics and intra-series (temporal)
dependencies from high-noise fMRI data. Specifically, Fourier attention is
designed to convert the high-noise fMRI data to the frequency domain, and map the
denoised fMRI data back to the physical domain, and spatiotemporal attention is
crafted to simultaneously learn spatiotemporal dynamics. Furthermore, through a
series of proofs, we demonstrate that incorporating a learnable filter into the fast
Fourier transform and inverse fast Fourier transform processes is
mathematically equivalent to performing cyclic convolution. The experimental
results on simulated and real resting-state fMRI datasets demonstrate that the
proposed method exhibits superior performance when compared to state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 10:41:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xiong",
"Wen",
""
],
[
"Liu",
"Jinduo",
""
],
[
"Ji",
"Junzhong",
""
],
[
"Ma",
"Fenglong",
""
]
] | TITLE: Brain Effective Connectivity Estimation via Fourier Spatiotemporal
Attention
ABSTRACT: Estimating brain effective connectivity (EC) from functional magnetic
resonance imaging (fMRI) data can aid in comprehending the neural mechanisms
underlying human behavior and cognition, providing a foundation for disease
diagnosis. However, current spatiotemporal attention modules handle temporal
and spatial attention separately, extracting temporal and spatial features
either sequentially or in parallel. These approaches overlook the inherent
spatiotemporal correlations present in real world fMRI data. Additionally, the
presence of noise in fMRI data further limits the performance of existing
methods. In this paper, we propose a novel brain effective connectivity
estimation method based on Fourier spatiotemporal attention (FSTA-EC), which
combines Fourier attention and spatiotemporal attention to simultaneously
capture inter-series (spatial) dynamics and intra-series (temporal)
dependencies from high-noise fMRI data. Specifically, Fourier attention is
designed to convert the high-noise fMRI data to the frequency domain, and map the
denoised fMRI data back to the physical domain, and spatiotemporal attention is
crafted to simultaneously learn spatiotemporal dynamics. Furthermore, through a
series of proofs, we demonstrate that incorporating a learnable filter into the fast
Fourier transform and inverse fast Fourier transform processes is
mathematically equivalent to performing cyclic convolution. The experimental
results on simulated and real resting-state fMRI datasets demonstrate that the
proposed method exhibits superior performance when compared to state-of-the-art
methods.
|
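The key technical claim in the entry above is that applying a learnable filter between the FFT and inverse FFT is mathematically equivalent to cyclic convolution. The NumPy check below verifies that identity numerically for a random 1-D signal and filter; it is an illustration of the standard convolution theorem, not of the FSTA-EC model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
x = rng.standard_normal(n)      # one fMRI-like time series
h = rng.standard_normal(n)      # stand-in for a learnable filter

# Path 1: multiply by the filter in the frequency domain (FFT -> filter -> IFFT).
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Path 2: explicit cyclic convolution, written as a circulant matrix product:
# entry [m, k] of the matrix is h[(m - k) mod n].
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
via_cyclic = h[idx] @ x

print("max |difference| between the two paths:",
      np.max(np.abs(via_fft - via_cyclic)))   # on the order of 1e-13
```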
2503.11294 | Martin V\'yboh | Martin V\'yboh, Zuzana Chladn\'a, Gabriela Grmanov\'a, M\'aria Luck\'a | Latent Space Representation of Electricity Market Curves for Improved
Prediction Efficiency | Submitted to Applied Soft Computing | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This work presents a three-phase ML prediction framework designed to handle the
high dimensionality and multivariate time-series character of the electricity
market curves. In the preprocessing phase, we transform the original data to
achieve a unified structure and mitigate the effect of possible outliers.
Further, to address the challenge of high dimensionality, we test three
dimensionality reduction techniques (PCA, kPCA, UMAP). Finally, we predict
supply and demand curves, once represented in a latent space, with a variety of
machine learning methods (RF, LSTM, TSMixer). As our results on the MIBEL
dataset show, the high-dimensional structure of the market curves is best
handled by the nonlinear reduction technique UMAP. Regardless of the ML
technique used for prediction, we achieved the lowest values for all considered
precision metrics with a UMAP latent space representation in only two or three
dimensions, even when compared to PCA and kPCA with five or six dimensions.
Further, we demonstrate that the most promising machine learning technique to
handle the complex structure of the electricity market curves is a novel
TSMixer architecture. Finally, we fill the gap in the field of electricity
market curves prediction literature: in addition to standard analysis on the
supply side, we applied the ML framework and predicted demand curves too. We
discussed the differences in the achieved results for these two types of
curves.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:04:46 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Výboh",
"Martin",
""
],
[
"Chladná",
"Zuzana",
""
],
[
"Grmanová",
"Gabriela",
""
],
[
"Lucká",
"Mária",
""
]
] | TITLE: Latent Space Representation of Electricity Market Curves for Improved
Prediction Efficiency
ABSTRACT: This work presents a three-phase ML prediction framework designed to handle the
high dimensionality and multivariate time-series character of the electricity
market curves. In the preprocessing phase, we transform the original data to
achieve a unified structure and mitigate the effect of possible outliers.
Further, to address the challenge of high dimensionality, we test three
dimensionality reduction techniques (PCA, kPCA, UMAP). Finally, we predict
supply and demand curves, once represented in a latent space, with a variety of
machine learning methods (RF, LSTM, TSMixer). As our results on the MIBEL
dataset show, the high-dimensional structure of the market curves is best
handled by the nonlinear reduction technique UMAP. Regardless of the ML
technique used for prediction, we achieved the lowest values for all considered
precision metrics with a UMAP latent space representation in only two or three
dimensions, even when compared to PCA and kPCA with five or six dimensions.
Further, we demonstrate that the most promising machine learning technique to
handle the complex structure of the electricity market curves is a novel
TSMixer architecture. Finally, we fill the gap in the field of electricity
market curves prediction literature: in addition to standard analysis on the
supply side, we applied the ML framework and predicted demand curves too. We
discussed the differences in the achieved results for these two types of
curves.
|
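The entry above reduces high-dimensional market curves to a small latent space and then predicts in that space. The toy sketch below shows that generic reduce-then-regress pipeline on synthetic curves; it assumes the third-party umap-learn package is installed, uses an invented target, and simplifies away the time-series forecasting details of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import umap  # third-party package: pip install umap-learn

rng = np.random.default_rng(0)

# Synthetic "market curves": 1000 snapshots, each a 200-point curve driven by
# two hidden factors, mimicking high-dimensional supply/demand curves.
factors = rng.standard_normal((1000, 2))
basis = rng.standard_normal((2, 200))
curves = factors @ basis + 0.1 * rng.standard_normal((1000, 200))

# Invented target: the next snapshot's curve mean (a stand-in quantity).
target = np.roll(curves.mean(axis=1), -1)[:-1]
curves = curves[:-1]

# Step 1: nonlinear dimensionality reduction into a 2-D latent space.
reducer = umap.UMAP(n_components=2, random_state=0)
latent = reducer.fit_transform(curves)

# Step 2: predict the target from the latent representation.
X_tr, X_te, y_tr, y_te = train_test_split(latent, target, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 in the 2-D UMAP latent space:", round(model.score(X_te, y_te), 3))
```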
2503.11312 | Juan Antonio De Rus Arance | Juan Antonio De Rus and Mario Montagud and Jesus Lopez-Ballester and
Francesc J. Ferri and Maximo Cobos | A Data-Driven Exploration of Elevation Cues in HRTFs: An Explainable AI
Perspective Across Multiple Datasets | 14 pages, 9 figures | null | null | null | eess.SP cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Precise elevation perception in binaural audio remains a challenge, despite
extensive research on head-related transfer functions (HRTFs) and spectral
cues. While prior studies have advanced our understanding of sound localization
cues, the interplay between spectral features and elevation perception is still
not fully understood. This paper presents a comprehensive analysis of over 600
subjects from 11 diverse public HRTF datasets, employing a convolutional neural
network (CNN) model combined with explainable artificial intelligence (XAI)
techniques to investigate elevation cues. In addition to testing various HRTF
pre-processing methods, we focus on both within-dataset and inter-dataset
generalization and explainability, assessing the model's robustness across
different HRTF variations stemming from subjects and measurement setups. By
leveraging class activation mapping (CAM) saliency maps, we identify key
frequency bands that may contribute to elevation perception, providing deeper
insights into the spectral features that drive elevation-specific
classification. This study offers new perspectives on HRTF modeling and
elevation perception by analyzing diverse datasets and pre-processing
techniques, expanding our understanding of these cues across a wide range of
conditions.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:27:50 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"De Rus",
"Juan Antonio",
""
],
[
"Montagud",
"Mario",
""
],
[
"Lopez-Ballester",
"Jesus",
""
],
[
"Ferri",
"Francesc J.",
""
],
[
"Cobos",
"Maximo",
""
]
] | TITLE: A Data-Driven Exploration of Elevation Cues in HRTFs: An Explainable AI
Perspective Across Multiple Datasets
ABSTRACT: Precise elevation perception in binaural audio remains a challenge, despite
extensive research on head-related transfer functions (HRTFs) and spectral
cues. While prior studies have advanced our understanding of sound localization
cues, the interplay between spectral features and elevation perception is still
not fully understood. This paper presents a comprehensive analysis of over 600
subjects from 11 diverse public HRTF datasets, employing a convolutional neural
network (CNN) model combined with explainable artificial intelligence (XAI)
techniques to investigate elevation cues. In addition to testing various HRTF
pre-processing methods, we focus on both within-dataset and inter-dataset
generalization and explainability, assessing the model's robustness across
different HRTF variations stemming from subjects and measurement setups. By
leveraging class activation mapping (CAM) saliency maps, we identify key
frequency bands that may contribute to elevation perception, providing deeper
insights into the spectral features that drive elevation-specific
classification. This study offers new perspectives on HRTF modeling and
elevation perception by analyzing diverse datasets and pre-processing
techniques, expanding our understanding of these cues across a wide range of
conditions.
|
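The HRTF entry above uses class activation mapping (CAM) saliency to locate frequency bands that drive elevation classification. The sketch below shows only the core CAM computation, weighting the final convolutional feature maps by the classifier weights of the predicted class, with random tensors standing in for a trained network's activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained CNN: K feature maps from the last conv layer
# (e.g. over a frequency axis of an HRTF representation) and the weights
# of a global-average-pooling classifier head.
K, n_classes, n_bins = 16, 8, 64
feature_maps = rng.random((K, n_bins))             # last conv layer output
fc_weights = rng.standard_normal((n_classes, K))   # classifier weights

# Prediction via global average pooling followed by the linear head.
logits = fc_weights @ feature_maps.mean(axis=1)
pred_class = int(np.argmax(logits))

# CAM for the predicted class: class-weighted sum of the feature maps.
cam = fc_weights[pred_class] @ feature_maps        # shape: (n_bins,)
cam = np.maximum(cam, 0)
cam /= cam.max() + 1e-8                            # normalize to [0, 1]

print("most salient frequency bins:", np.argsort(cam)[-5:])
```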
2503.11315 | JeongHun Yeo | Jeong Hun Yeo, Hyeongseop Rha, Se Jin Park, Yong Man Ro | MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with
Minimal Multimodal Speech Tokens | The code and models are available
https://github.com/JeongHun0716/MMS-LLaMA | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-Visual Speech Recognition (AVSR) achieves robust speech recognition in
noisy environments by combining auditory and visual information. However,
recent Large Language Model (LLM) based AVSR systems incur high computational
costs due to the high temporal resolution of audio-visual speech processed by
LLMs. In this work, we introduce an efficient multimodal speech LLM framework
that minimizes token length while preserving essential linguistic content. Our
approach employs an early av-fusion module for streamlined feature integration,
an audio-visual speech Q-Former that dynamically allocates tokens based on
input duration, and a refined query allocation strategy with a speech rate
predictor to adjust token allocation according to the speaking speed of each audio
sample. Extensive experiments on the LRS3 dataset show that our method achieves
state-of-the-art performance with a WER of 0.74% while using only 3.5 tokens
per second. Moreover, our approach not only reduces token usage by 86% compared
to the previous multimodal speech LLM framework, but also improves
computational efficiency by reducing FLOPs by 35.7%.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:31:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yeo",
"Jeong Hun",
""
],
[
"Rha",
"Hyeongseop",
""
],
[
"Park",
"Se Jin",
""
],
[
"Ro",
"Yong Man",
""
]
] | TITLE: MMS-LLaMA: Efficient LLM-based Audio-Visual Speech Recognition with
Minimal Multimodal Speech Tokens
ABSTRACT: Audio-Visual Speech Recognition (AVSR) achieves robust speech recognition in
noisy environments by combining auditory and visual information. However,
recent Large Language Model (LLM) based AVSR systems incur high computational
costs due to the high temporal resolution of audio-visual speech processed by
LLMs. In this work, we introduce an efficient multimodal speech LLM framework
that minimizes token length while preserving essential linguistic content. Our
approach employs an early av-fusion module for streamlined feature integration,
an audio-visual speech Q-Former that dynamically allocates tokens based on
input duration, and a refined query allocation strategy with a speech rate
predictor to adjust token allocation according to the speaking speed of each audio
sample. Extensive experiments on the LRS3 dataset show that our method achieves
state-of-the-art performance with a WER of 0.74% while using only 3.5 tokens
per second. Moreover, our approach not only reduces token usage by 86% compared
to the previous multimodal speech LLM framework, but also improves
computational efficiency by reducing FLOPs by 35.7%.
|
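The MMS-LLaMA entry above allocates speech tokens from the input duration and a predicted speaking rate. The toy rule below only makes that idea concrete as arithmetic: the base rate mirrors the roughly 3.5 tokens per second reported in the abstract, while the scaling rule, constants, and function name are illustrative assumptions rather than the paper's Q-Former allocation strategy.

```python
def allocate_query_tokens(duration_s, words_per_s,
                          base_tokens_per_s=3.5, avg_words_per_s=2.5,
                          min_tokens=4):
    """Hypothetical token-budget rule: scale a base rate by relative speed."""
    speed_factor = words_per_s / avg_words_per_s
    return max(min_tokens, round(base_tokens_per_s * duration_s * speed_factor))

for dur, wps in [(2.0, 2.5), (2.0, 4.0), (6.0, 1.5)]:
    print(f"{dur:.0f}s utterance at {wps} words/s ->",
          allocate_query_tokens(dur, wps), "query tokens")
```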
2503.11318 | Joona Kareinen | Joona Kareinen, Annaliina Skytt\"a, Tuomas Eerola, Kaisa Kraft, Lasse
Lensu, Sanna Suikkanen, Maiju Lehtiniemi, Heikki K\"alvi\"ainen | Open-Set Plankton Recognition | ECCV 2024, OOD-CV workshop paper | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper considers open-set recognition (OSR) of plankton images. Plankton
include a diverse range of microscopic aquatic organisms that have an important
role in marine ecosystems as primary producers and as a base of food webs.
Given their sensitivity to environmental changes, fluctuations in plankton
populations offer valuable information about oceans' health and climate change
motivating their monitoring. Modern automatic plankton imaging devices enable
the collection of large-scale plankton image datasets, facilitating
species-level analysis. Plankton species recognition can be seen as an image
classification task and is typically solved using deep learning-based image
recognition models. However, data collection in real aquatic environments
results in imaging devices capturing a variety of non-plankton particles and
plankton species not present in the training set. This creates a challenging
fine-grained OSR problem, characterized by subtle differences between
taxonomically close plankton species. We address this challenge by conducting
extensive experiments on three OSR approaches using both phyto- and zooplankton
images analyzing also on the effect of the rejection thresholds for OSR. The
results demonstrate that high OSR accuracy can be obtained promoting the use of
these methods in operational plankton research. We have made the data publicly
available to the research community.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:35:36 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Kareinen",
"Joona",
""
],
[
"Skyttä",
"Annaliina",
""
],
[
"Eerola",
"Tuomas",
""
],
[
"Kraft",
"Kaisa",
""
],
[
"Lensu",
"Lasse",
""
],
[
"Suikkanen",
"Sanna",
""
],
[
"Lehtiniemi",
"Maiju",
""
],
[
"Kälviäinen",
"Heikki",
""
]
] | TITLE: Open-Set Plankton Recognition
ABSTRACT: This paper considers open-set recognition (OSR) of plankton images. Plankton
include a diverse range of microscopic aquatic organisms that have an important
role in marine ecosystems as primary producers and as a base of food webs.
Given their sensitivity to environmental changes, fluctuations in plankton
populations offer valuable information about oceans' health and climate change
motivating their monitoring. Modern automatic plankton imaging devices enable
the collection of large-scale plankton image datasets, facilitating
species-level analysis. Plankton species recognition can be seen as an image
classification task and is typically solved using deep learning-based image
recognition models. However, data collection in real aquatic environments
results in imaging devices capturing a variety of non-plankton particles and
plankton species not present in the training set. This creates a challenging
fine-grained OSR problem, characterized by subtle differences between
taxonomically close plankton species. We address this challenge by conducting
extensive experiments on three OSR approaches using both phyto- and zooplankton
images, also analyzing the effect of the rejection thresholds for OSR. The
results demonstrate that high OSR accuracy can be obtained, promoting the use of
these methods in operational plankton research. We have made the data publicly
available to the research community.
|
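The open-set plankton entry above hinges on rejecting inputs that belong to none of the known classes. The sketch below implements maximum-softmax-probability thresholding, one of the simplest OSR baselines; it is not necessarily one of the three approaches benchmarked in the paper, and the logits and threshold are made up for illustration.

```python
import numpy as np

def openset_predict(logits, threshold=0.7):
    """Closed-set softmax prediction with a rejection threshold.

    Samples whose maximum softmax probability falls below `threshold` are
    rejected as 'unknown' (label -1), e.g. non-plankton particles or species
    absent from the training set.
    """
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    preds[conf < threshold] = -1
    return preds, conf

rng = np.random.default_rng(0)
logits = rng.standard_normal((5, 10)) * 3   # fake scores for 10 known species
preds, conf = openset_predict(logits)
for p, c in zip(preds, conf):
    print("unknown" if p == -1 else f"species {p}", f"(confidence {c:.2f})")
```

Raising the threshold rejects more borderline samples, which is the accuracy/coverage trade-off the abstract's threshold analysis is concerned with.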
2503.11324 | Ziyi Wang | Ziyi Wang, Songbai Tan, Gang Xu, Xuerui Qiu, Hongbin Xu, Xin Meng,
Ming Li, Fei Richard Yu | Safe-VAR: Safe Visual Autoregressive Model for Text-to-Image Generative
Watermarking | null | null | null | null | cs.MM cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the success of autoregressive learning in large language models, it has
become a dominant approach for text-to-image generation, offering high
efficiency and visual quality. However, invisible watermarking for visual
autoregressive (VAR) models remains underexplored, despite its importance in
misuse prevention. Existing watermarking methods, designed for diffusion
models, often struggle to adapt to the sequential nature of VAR models. To
bridge this gap, we propose Safe-VAR, the first watermarking framework
specifically designed for autoregressive text-to-image generation. Our study
reveals that the timing of watermark injection significantly impacts generation
quality, and watermarks of different complexities exhibit varying optimal
injection times. Motivated by this observation, we propose an Adaptive Scale
Interaction Module, which dynamically determines the optimal watermark
embedding strategy based on the watermark information and the visual
characteristics of the generated image. This ensures watermark robustness while
minimizing its impact on image quality. Furthermore, we introduce a Cross-Scale
Fusion mechanism, which integrates a mixture of both heads and experts to
effectively fuse multi-resolution features and handle complex interactions
between image content and watermark patterns. Experimental results demonstrate
that Safe-VAR achieves state-of-the-art performance, significantly surpassing
existing counterparts regarding image quality, watermarking fidelity, and
robustness against perturbations. Moreover, our method exhibits strong
generalization to an out-of-domain watermark dataset QR Codes.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:45:10 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Ziyi",
""
],
[
"Tan",
"Songbai",
""
],
[
"Xu",
"Gang",
""
],
[
"Qiu",
"Xuerui",
""
],
[
"Xu",
"Hongbin",
""
],
[
"Meng",
"Xin",
""
],
[
"Li",
"Ming",
""
],
[
"Yu",
"Fei Richard",
""
]
] | TITLE: Safe-VAR: Safe Visual Autoregressive Model for Text-to-Image Generative
Watermarking
ABSTRACT: With the success of autoregressive learning in large language models, it has
become a dominant approach for text-to-image generation, offering high
efficiency and visual quality. However, invisible watermarking for visual
autoregressive (VAR) models remains underexplored, despite its importance in
misuse prevention. Existing watermarking methods, designed for diffusion
models, often struggle to adapt to the sequential nature of VAR models. To
bridge this gap, we propose Safe-VAR, the first watermarking framework
specifically designed for autoregressive text-to-image generation. Our study
reveals that the timing of watermark injection significantly impacts generation
quality, and watermarks of different complexities exhibit varying optimal
injection times. Motivated by this observation, we propose an Adaptive Scale
Interaction Module, which dynamically determines the optimal watermark
embedding strategy based on the watermark information and the visual
characteristics of the generated image. This ensures watermark robustness while
minimizing its impact on image quality. Furthermore, we introduce a Cross-Scale
Fusion mechanism, which integrates a mixture of both heads and experts to
effectively fuse multi-resolution features and handle complex interactions
between image content and watermark patterns. Experimental results demonstrate
that Safe-VAR achieves state-of-the-art performance, significantly surpassing
existing counterparts regarding image quality, watermarking fidelity, and
robustness against perturbations. Moreover, our method exhibits strong
generalization to an out-of-domain watermark dataset QR Codes.
|
2503.11328 | Ruiqian Li | Ruiqian Li, Siyuan Shen, Suan Xia, Ziheng Wang, Xingyue Peng,
Chengxuan Song, Yingsheng Zhu, Tao Wu, Shiying Li and Jingyi Yu | TransiT: Transient Transformer for Non-line-of-sight Videography | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High quality and high speed videography using Non-Line-of-Sight (NLOS)
imaging benefits autonomous navigation, collision prevention, and post-disaster
search and rescue tasks. Current solutions have to balance between the frame
rate and image quality. High frame rates, for example, can be achieved by
reducing either per-point scanning time or scanning density, but at the cost of
lowering the information density at individual frames. A fast scanning process
further reduces the signal-to-noise ratio, and different scanning systems
exhibit different distortion characteristics. In this work, we design and
employ a new Transient Transformer architecture called TransiT to achieve
real-time NLOS recovery under fast scans. TransiT directly compresses the
temporal dimension of input transients to extract features, reducing
computation costs and meeting high frame rate requirements. It further adopts a
feature fusion mechanism as well as employs a spatial-temporal Transformer to
help capture features of NLOS transient videos. Moreover, TransiT applies
transfer learning to bridge the gap between synthetic and real-measured data.
In real experiments, TransiT manages to reconstruct from sparse transients of
$16 \times 16$ measured at an exposure time of 0.4 ms per point to NLOS videos
at a $64 \times 64$ resolution at 10 frames per second. We will make our code
and dataset available to the community.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 11:56:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Ruiqian",
""
],
[
"Shen",
"Siyuan",
""
],
[
"Xia",
"Suan",
""
],
[
"Wang",
"Ziheng",
""
],
[
"Peng",
"Xingyue",
""
],
[
"Song",
"Chengxuan",
""
],
[
"Zhu",
"Yingsheng",
""
],
[
"Wu",
"Tao",
""
],
[
"Li",
"Shiying",
""
],
[
"Yu",
"Jingyi",
""
]
] | TITLE: TransiT: Transient Transformer for Non-line-of-sight Videography
ABSTRACT: High quality and high speed videography using Non-Line-of-Sight (NLOS)
imaging benefits autonomous navigation, collision prevention, and post-disaster
search and rescue tasks. Current solutions have to balance between the frame
rate and image quality. High frame rates, for example, can be achieved by
reducing either per-point scanning time or scanning density, but at the cost of
lowering the information density at individual frames. A fast scanning process
further reduces the signal-to-noise ratio, and different scanning systems
exhibit different distortion characteristics. In this work, we design and
employ a new Transient Transformer architecture called TransiT to achieve
real-time NLOS recovery under fast scans. TransiT directly compresses the
temporal dimension of input transients to extract features, reducing
computation costs and meeting high frame rate requirements. It further adopts a
feature fusion mechanism as well as employs a spatial-temporal Transformer to
help capture features of NLOS transient videos. Moreover, TransiT applies
transfer learning to bridge the gap between synthetic and real-measured data.
In real experiments, TransiT manages to reconstruct from sparse transients of
$16 \times 16$ measured at an exposure time of 0.4 ms per point to NLOS videos
at a $64 \times 64$ resolution at 10 frames per second. We will make our code
and dataset available to the community.
|
2503.11341 | Joona Kareinen | Joona Kareinen, Tuomas Eerola, Kaisa Kraft, Lasse Lensu, Sanna
Suikkanen, Heikki K\"alvi\"ainen | Self-Supervised Pretraining for Fine-Grained Plankton Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Plankton recognition is an important computer vision problem due to
plankton's essential role in ocean food webs and carbon capture, highlighting
the need for species-level monitoring. However, this task is challenging due to
its fine-grained nature and dataset shifts caused by different imaging
instruments and varying species distributions. As new plankton image datasets
are collected at an increasing pace, there is a need for general plankton
recognition models that require minimal expert effort for data labeling. In
this work, we study large-scale self-supervised pretraining for fine-grained
plankton recognition. We first employ masked autoencoding and a large volume of
diverse plankton image data to pretrain a general-purpose plankton image
encoder. Then we utilize fine-tuning to obtain accurate plankton recognition
models for new datasets with a very limited number of labeled training images.
Our experiments show that self-supervised pretraining with diverse plankton
data clearly increases plankton recognition accuracy compared to standard
ImageNet pretraining when the amount of training data is limited. Moreover, the
accuracy can be further improved when unlabeled target data is available and
utilized during the pretraining.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:15:20 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Kareinen",
"Joona",
""
],
[
"Eerola",
"Tuomas",
""
],
[
"Kraft",
"Kaisa",
""
],
[
"Lensu",
"Lasse",
""
],
[
"Suikkanen",
"Sanna",
""
],
[
"Kälviäinen",
"Heikki",
""
]
] | TITLE: Self-Supervised Pretraining for Fine-Grained Plankton Recognition
ABSTRACT: Plankton recognition is an important computer vision problem due to
plankton's essential role in ocean food webs and carbon capture, highlighting
the need for species-level monitoring. However, this task is challenging due to
its fine-grained nature and dataset shifts caused by different imaging
instruments and varying species distributions. As new plankton image datasets
are collected at an increasing pace, there is a need for general plankton
recognition models that require minimal expert effort for data labeling. In
this work, we study large-scale self-supervised pretraining for fine-grained
plankton recognition. We first employ masked autoencoding and a large volume of
diverse plankton image data to pretrain a general-purpose plankton image
encoder. Then we utilize fine-tuning to obtain accurate plankton recognition
models for new datasets with a very limited number of labeled training images.
Our experiments show that self-supervised pretraining with diverse plankton
data clearly increases plankton recognition accuracy compared to standard
ImageNet pretraining when the amount of training data is limited. Moreover, the
accuracy can be further improved when unlabeled target data is available and
utilized during the pretraining.
|
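The self-supervised plankton entry above pretrains with masked autoencoding. The sketch below shows only the random patch-masking step that such pretraining starts from, on a random stand-in image; the encoder, decoder, and reconstruction loss are omitted, and the patch size and mask ratio are assumptions.

```python
import numpy as np

def random_patch_mask(image, patch=16, mask_ratio=0.75, seed=0):
    """Split an image into non-overlapping patches and mask most of them.

    Returns the flattened patches, the indices of the visible patches (what a
    masked autoencoder's encoder would see), and the masked indices (what its
    decoder would be trained to reconstruct).
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    patches = (image.reshape(h // patch, patch, w // patch, patch)
                    .swapaxes(1, 2)
                    .reshape(-1, patch * patch))
    n = patches.shape[0]
    order = rng.permutation(n)
    n_visible = int(n * (1 - mask_ratio))
    return patches, order[:n_visible], order[n_visible:]

img = np.random.rand(128, 128)              # one grayscale plankton image
patches, visible, masked = random_patch_mask(img)
print(f"{len(visible)} visible / {len(masked)} masked of {len(patches)} patches")
```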
2503.11342 | Yibing Weng | Yibing Weng, Yu Gu, Fuji Ren | Road Rage Reasoning with Vision-language Models (VLMs): Task Definition
and Evaluation Dataset | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Road rage, triggered by driving-related stimuli such as traffic congestion
and aggressive driving, poses a significant threat to road safety. Previous
research on road rage regulation has primarily focused on response suppression,
lacking proactive prevention capabilities. With the advent of Vision-Language
Models (VLMs), it has become possible to reason about trigger events visually
and then engage in dialog-based comforting before drivers' anger escalates. To
this end, we propose the road rage reasoning task, along with a finely
annotated test dataset and evaluation metrics, to assess the capabilities of
current mainstream VLMs in scene understanding, event recognition, and road
rage reasoning. The results indicate that current VLMs exhibit significant
shortcomings in scene understanding within the visual modality, as well as in
comprehending the spatial relationships between objects in the textual
modality. Improving VLMs' performance in these areas will greatly benefit
downstream tasks like antecedent-focused road rage regulation.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:18:11 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Weng",
"Yibing",
""
],
[
"Gu",
"Yu",
""
],
[
"Ren",
"Fuji",
""
]
] | TITLE: Road Rage Reasoning with Vision-language Models (VLMs): Task Definition
and Evaluation Dataset
ABSTRACT: Road rage, triggered by driving-related stimuli such as traffic congestion
and aggressive driving, poses a significant threat to road safety. Previous
research on road rage regulation has primarily focused on response suppression,
lacking proactive prevention capabilities. With the advent of Vision-Language
Models (VLMs), it has become possible to reason about trigger events visually
and then engage in dialog-based comforting before drivers' anger escalates. To
this end, we propose the road rage reasoning task, along with a finely
annotated test dataset and evaluation metrics, to assess the capabilities of
current mainstream VLMs in scene understanding, event recognition, and road
rage reasoning. The results indicate that current VLMs exhibit significant
shortcomings in scene understanding within the visual modality, as well as in
comprehending the spatial relationships between objects in the textual
modality. Improving VLMs' performance in these areas will greatly benefit
downstream tasks like antecedent-focused road rage regulation.
|
2503.11345 | Di Li | Di Li, Jie Feng, Jiahao Chen, Weisheng Dong, Guanbin Li, Guangming Shi
and Licheng Jiao | EgoSplat: Open-Vocabulary Egocentric Scene Understanding with Language
Embedded 3D Gaussian Splatting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Egocentric scenes exhibit frequent occlusions, varied viewpoints, and dynamic
interactions compared to typical scene understanding tasks. Occlusions and
varied viewpoints can lead to multi-view semantic inconsistencies, while
dynamic objects may act as transient distractors, introducing artifacts into
semantic feature modeling. To address these challenges, we propose EgoSplat, a
language-embedded 3D Gaussian Splatting framework for open-vocabulary
egocentric scene understanding. A multi-view consistent instance feature
aggregation method is designed to leverage the segmentation and tracking
capabilities of SAM2 to selectively aggregate complementary features across
views for each instance, ensuring precise semantic representation of scenes.
Additionally, an instance-aware spatial-temporal transient prediction module is
constructed to improve spatial integrity and temporal continuity in predictions
by incorporating spatial-temporal associations across multi-view instances,
effectively reducing artifacts in the semantic reconstruction of egocentric
scenes. EgoSplat achieves state-of-the-art performance in both localization and
segmentation tasks on two datasets, outperforming existing methods with an 8.2%
improvement in localization accuracy and a 3.7% improvement in segmentation
mIoU on the ADT dataset, and setting a new benchmark in open-vocabulary
egocentric scene understanding. The code will be made publicly available.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:21:26 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Di",
""
],
[
"Feng",
"Jie",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Dong",
"Weisheng",
""
],
[
"Li",
"Guanbin",
""
],
[
"Shi",
"Guangming",
""
],
[
"Jiao",
"Licheng",
""
]
] | TITLE: EgoSplat: Open-Vocabulary Egocentric Scene Understanding with Language
Embedded 3D Gaussian Splatting
ABSTRACT: Egocentric scenes exhibit frequent occlusions, varied viewpoints, and dynamic
interactions compared to typical scene understanding tasks. Occlusions and
varied viewpoints can lead to multi-view semantic inconsistencies, while
dynamic objects may act as transient distractors, introducing artifacts into
semantic feature modeling. To address these challenges, we propose EgoSplat, a
language-embedded 3D Gaussian Splatting framework for open-vocabulary
egocentric scene understanding. A multi-view consistent instance feature
aggregation method is designed to leverage the segmentation and tracking
capabilities of SAM2 to selectively aggregate complementary features across
views for each instance, ensuring precise semantic representation of scenes.
Additionally, an instance-aware spatial-temporal transient prediction module is
constructed to improve spatial integrity and temporal continuity in predictions
by incorporating spatial-temporal associations across multi-view instances,
effectively reducing artifacts in the semantic reconstruction of egocentric
scenes. EgoSplat achieves state-of-the-art performance in both localization and
segmentation tasks on two datasets, outperforming existing methods with an 8.2%
improvement in localization accuracy and a 3.7% improvement in segmentation
mIoU on the ADT dataset, and setting a new benchmark in open-vocabulary
egocentric scene understanding. The code will be made publicly available.
|
2503.11346 | Fengyu Li | Fengyu Li (1), Yilin Li (1), Junhao Zhu (1), Lu Chen (1), Yanfei Zhang
(1), Jia Zhou (1), Hui Zu (1), Jingwen Zhao (2), Yunjun Gao (1) ((1) Zhejiang
University, (2) Poisson Lab, Huawei) | AIstorian lets AI be a historian: A KG-powered multi-agent system for
accurate biography generation | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Huawei has always been committed to exploring the application of AI in
historical research. Biography generation, as a specialized form of abstractive
summarization, plays a crucial role in historical research but faces unique
challenges that existing large language models (LLMs) struggle to address.
These challenges include maintaining stylistic adherence to historical writing
conventions, ensuring factual fidelity, and handling fragmented information
across multiple documents. We present AIstorian, a novel end-to-end agentic
system featuring knowledge graph (KG)-powered retrieval-augmented
generation (RAG) and anti-hallucination multi-agents. Specifically, AIstorian
introduces an in-context learning based chunking strategy and a KG-based index
for accurate and efficient reference retrieval. Meanwhile, AIstorian
orchestrates multi-agents to conduct on-the-fly hallucination detection and
error-type-aware correction. Additionally, to teach LLMs a certain language
style, we finetune LLMs based on a two-step training approach combining data
augmentation-enhanced supervised fine-tuning with stylistic preference
optimization. Extensive experiments on a real-life historical Jinshi dataset
demonstrate that AIstorian achieves a 3.8x improvement in factual accuracy and
a 47.6% reduction in hallucination rate compared to existing baselines. The
data and code are available at: https://github.com/ZJU-DAILY/AIstorian.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:23:45 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Li",
"Fengyu",
""
],
[
"Li",
"Yilin",
""
],
[
"Zhu",
"Junhao",
""
],
[
"Chen",
"Lu",
""
],
[
"Zhang",
"Yanfei",
""
],
[
"Zhou",
"Jia",
""
],
[
"Zu",
"Hui",
""
],
[
"Zhao",
"Jingwen",
""
],
[
"Gao",
"Yunjun",
""
]
] | TITLE: AIstorian lets AI be a historian: A KG-powered multi-agent system for
accurate biography generation
ABSTRACT: Huawei has always been committed to exploring the application of AI in
historical research. Biography generation, as a specialized form of abstractive
summarization, plays a crucial role in historical research but faces unique
challenges that existing large language models (LLMs) struggle to address.
These challenges include maintaining stylistic adherence to historical writing
conventions, ensuring factual fidelity, and handling fragmented information
across multiple documents. We present AIstorian, a novel end-to-end agentic
system featuring knowledge graph (KG)-powered retrieval-augmented
generation (RAG) and anti-hallucination multi-agents. Specifically, AIstorian
introduces an in-context learning based chunking strategy and a KG-based index
for accurate and efficient reference retrieval. Meanwhile, AIstorian
orchestrates multi-agents to conduct on-the-fly hallucination detection and
error-type-aware correction. Additionally, to teach LLMs a certain language
style, we finetune LLMs based on a two-step training approach combining data
augmentation-enhanced supervised fine-tuning with stylistic preference
optimization. Extensive experiments on a real-life historical Jinshi dataset
demonstrate that AIstorian achieves a 3.8x improvement in factual accuracy and
a 47.6% reduction in hallucination rate compared to existing baselines. The
data and code are available at: https://github.com/ZJU-DAILY/AIstorian.
|
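The AIstorian entry above builds on retrieval-augmented generation over chunked historical sources. The sketch below shows only the minimal retrieval step of such a pipeline: rank chunks by embedding similarity to a question. The hashing "embedding" and the placeholder snippets (names, dates) are invented stand-ins to keep the example runnable; the paper's KG index, chunking strategy, and multi-agent hallucination correction are not modeled.

```python
import numpy as np

def embed(text, dim=64):
    """Toy hashing-based embedding used only to make the sketch runnable;
    a real system would use a trained text encoder."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

# Placeholder reference snippets (stand-ins for KG-indexed historical chunks).
chunks = [
    "Zhang San obtained the Jinshi degree in 1592 and served in Hangzhou.",
    "The prefecture recorded a flood in 1601 that damaged the granaries.",
    "Li Si, a county magistrate, compiled a local gazetteer in 1610.",
]
chunk_vecs = np.stack([embed(c) for c in chunks])

def retrieve(question, k=2):
    """Return the k chunks most similar to the question (cosine similarity)."""
    sims = chunk_vecs @ embed(question)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

for c in retrieve("When did Zhang San pass the Jinshi examination?"):
    print("retrieved:", c)
```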
2503.11348 | Aissatou Diallo | Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, Rob
Miller | RESPONSE: Benchmarking the Ability of Language Models to Undertake
Commonsense Reasoning in Crisis Situation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | An interesting class of commonsense reasoning problems arises when people are
faced with natural disasters. To investigate this topic, we present
\textsf{RESPONSE}, a human-curated dataset containing 1789 annotated instances
featuring 6037 sets of questions designed to assess LLMs' commonsense reasoning
in disaster situations across different time frames. The dataset includes
problem descriptions, missing resources, time-sensitive solutions, and their
justifications, with a subset validated by environmental engineers. Through
both automatic metrics and human evaluation, we compare LLM-generated
recommendations against human responses. Our findings show that even
state-of-the-art models like GPT-4 achieve only 37\% human-evaluated
correctness for immediate response actions, highlighting significant room for
improvement in LLMs' ability for commonsense reasoning in crises.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:32:40 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Diallo",
"Aissatou",
""
],
[
"Bikakis",
"Antonis",
""
],
[
"Dickens",
"Luke",
""
],
[
"Hunter",
"Anthony",
""
],
[
"Miller",
"Rob",
""
]
] | TITLE: RESPONSE: Benchmarking the Ability of Language Models to Undertake
Commonsense Reasoning in Crisis Situation
ABSTRACT: An interesting class of commonsense reasoning problems arises when people are
faced with natural disasters. To investigate this topic, we present
\textsf{RESPONSE}, a human-curated dataset containing 1789 annotated instances
featuring 6037 sets of questions designed to assess LLMs' commonsense reasoning
in disaster situations across different time frames. The dataset includes
problem descriptions, missing resources, time-sensitive solutions, and their
justifications, with a subset validated by environmental engineers. Through
both automatic metrics and human evaluation, we compare LLM-generated
recommendations against human responses. Our findings show that even
state-of-the-art models like GPT-4 achieve only 37\% human-evaluated
correctness for immediate response actions, highlighting significant room for
improvement in LLMs' ability for commonsense reasoning in crises.
|
2503.11349 | Adam Marinela | Marinela Adam | An experimental approach on Few Shot Class Incremental Learning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Few-Shot Class-Incremental Learning (FSCIL) represents a cutting-edge
paradigm within the broader scope of machine learning, designed to empower
models with the ability to assimilate new classes of data with limited examples
while safeguarding existing knowledge. The paper will present different
solutions which contain extensive experiments across large-scale datasets,
domain shifts, and network architectures to evaluate and compare the selected
methods. We highlight their advantages and then present an experimental
approach with the purpose of improving the most promising one by replacing the
visual-language (V-L) model (CLIP) with another V-L model (CLOOB) that seems to
outperform it on zero-shot learning tasks. The aim of this report is to present
an experimental method for FSCIL that would improve its performance. We also
plan to offer an overview followed by an analysis of the recent advancements in
FSCIL domain, focusing on various strategies to mitigate catastrophic
forgetting and improve the adaptability of models to evolving tasks and
datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:36:15 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Adam",
"Marinela",
""
]
] | TITLE: An experimental approach on Few Shot Class Incremental Learning
ABSTRACT: Few-Shot Class-Incremental Learning (FSCIL) represents a cutting-edge
paradigm within the broader scope of machine learning, designed to empower
models with the ability to assimilate new classes of data with limited examples
while safeguarding existing knowledge. The paper will present different
solutions which contain extensive experiments across large-scale datasets,
domain shifts, and network architectures to evaluate and compare the selected
methods. We highlight their advantages and then present an experimental
approach with the purpose of improving the most promising one by replacing the
visual-language (V-L) model (CLIP) with another V-L model (CLOOB) that seems to
outperform it on zero-shot learning tasks. The aim of this report is to present
an experimental method for FSCIL that would improve its performance. We also
plan to offer an overview followed by an analysis of the recent advancements in
FSCIL domain, focusing on various strategies to mitigate catastrophic
forgetting and improve the adaptability of models to evolving tasks and
datasets.
|
2503.11352 | Arno Verduyn | Arno Verduyn and Maxim Vochten and Joris De Schutter | Enhancing Hand Palm Motion Gesture Recognition by Eliminating Reference
Frame Bias via Frame-Invariant Similarity Measures | 8 pages, 4 figures, this work has been submitted as a conference
paper for consideration in the 2025 IEEE International Conference on
Automation Science and Engineering (CASE), the content in this preprint is
identical to the version submitted for peer review | null | null | null | cs.RO cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability of robots to recognize human gestures facilitates a natural and
accessible human-robot collaboration. However, most work in gesture recognition
remains rooted in reference frame-dependent representations. This poses a
challenge when reference frames vary due to different work cell layouts,
imprecise frame calibrations, or other environmental changes. This paper
investigated the use of invariant trajectory descriptors for robust hand palm
motion gesture recognition under reference frame changes. First, a novel
dataset of recorded Hand Palm Motion (HPM) gestures is introduced. The motion
gestures in this dataset were specifically designed to be distinguishable
without dependence on specific reference frames or directional cues.
Afterwards, multiple invariant trajectory descriptor approaches were
benchmarked to assess how their performances generalize to this novel HPM
dataset. After this offline benchmarking, the best scoring approach is
validated for online recognition by developing a real-time Proof of Concept
(PoC). In this PoC, hand palm motion gestures were used to control the
real-time movement of a manipulator arm. The PoC demonstrated a high
recognition reliability in real-time operation, achieving an $F_1$-score of
92.3%. This work demonstrates the effectiveness of the invariant descriptor
approach as a standalone solution. Moreover, we believe that the invariant
descriptor approach can also be utilized within other state-of-the-art pattern
recognition and learning systems to improve their robustness against reference
frame variations.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:40:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Verduyn",
"Arno",
""
],
[
"Vochten",
"Maxim",
""
],
[
"De Schutter",
"Joris",
""
]
] | TITLE: Enhancing Hand Palm Motion Gesture Recognition by Eliminating Reference
Frame Bias via Frame-Invariant Similarity Measures
ABSTRACT: The ability of robots to recognize human gestures facilitates a natural and
accessible human-robot collaboration. However, most work in gesture recognition
remains rooted in reference frame-dependent representations. This poses a
challenge when reference frames vary due to different work cell layouts,
imprecise frame calibrations, or other environmental changes. This paper
investigated the use of invariant trajectory descriptors for robust hand palm
motion gesture recognition under reference frame changes. First, a novel
dataset of recorded Hand Palm Motion (HPM) gestures is introduced. The motion
gestures in this dataset were specifically designed to be distinguishable
without dependence on specific reference frames or directional cues.
Afterwards, multiple invariant trajectory descriptor approaches were
benchmarked to assess how their performances generalize to this novel HPM
dataset. After this offline benchmarking, the best scoring approach is
validated for online recognition by developing a real-time Proof of Concept
(PoC). In this PoC, hand palm motion gestures were used to control the
real-time movement of a manipulator arm. The PoC demonstrated a high
recognition reliability in real-time operation, achieving an $F_1$-score of
92.3%. This work demonstrates the effectiveness of the invariant descriptor
approach as a standalone solution. Moreover, we believe that the invariant
descriptor approach can also be utilized within other state-of-the-art pattern
recognition and learning systems to improve their robustness against reference
frame variations.
|
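A quick illustration of why frame-invariant trajectory descriptors help here: descriptors built from relative geometric quantities are unchanged when the recording reference frame is rotated or translated. The sketch below uses consecutive point-to-point distances as a stand-in descriptor on a synthetic trajectory; it is not the specific descriptor family benchmarked in the paper, and all names are illustrative.

```python
import numpy as np

def pairwise_distance_descriptor(traj):
    """Frame-invariant descriptor: distances between consecutive trajectory points."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1)

rng = np.random.default_rng(0)
traj = rng.normal(size=(50, 3))           # a toy 3D hand-palm trajectory

# Apply an arbitrary rigid change of reference frame (rotation + translation).
A = rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)                    # random orthonormal matrix
t = rng.normal(size=3)
traj_moved = traj @ Q.T + t

d1 = pairwise_distance_descriptor(traj)
d2 = pairwise_distance_descriptor(traj_moved)
print(np.allclose(d1, d2))                # True: descriptor unaffected by the frame change
```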
2503.11360 | Mayank Nautiyal | Mayank Nautiyal, Stela Arranz Gheorghe, Kristiana Stefa, Li Ju,
Ida-Maria Sintorn, Prashant Singh | PARIC: Probabilistic Attention Regularization for Language Guided Image
Classification from Pre-trained Vision Language Models | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Language-guided attention frameworks have significantly enhanced both
interpretability and performance in image classification; however, the reliance
on deterministic embeddings from pre-trained vision-language foundation models
to generate reference attention maps frequently overlooks the intrinsic
multivaluedness and ill-posed characteristics of cross-modal mappings. To
address these limitations, we introduce PARIC, a probabilistic framework for
guiding visual attention via language specifications. Our approach enables
pre-trained vision-language models to generate probabilistic reference
attention maps, which align textual and visual modalities more effectively
while incorporating uncertainty estimates, as compared to their deterministic
counterparts. Experiments on benchmark test problems demonstrate that PARIC
enhances prediction accuracy, mitigates bias, ensures consistent predictions,
and improves robustness across various datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 12:53:37 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Nautiyal",
"Mayank",
""
],
[
"Gheorghe",
"Stela Arranz",
""
],
[
"Stefa",
"Kristiana",
""
],
[
"Ju",
"Li",
""
],
[
"Sintorn",
"Ida-Maria",
""
],
[
"Singh",
"Prashant",
""
]
] | TITLE: PARIC: Probabilistic Attention Regularization for Language Guided Image
Classification from Pre-trained Vision Language Models
ABSTRACT: Language-guided attention frameworks have significantly enhanced both
interpretability and performance in image classification; however, the reliance
on deterministic embeddings from pre-trained vision-language foundation models
to generate reference attention maps frequently overlooks the intrinsic
multivaluedness and ill-posed characteristics of cross-modal mappings. To
address these limitations, we introduce PARIC, a probabilistic framework for
guiding visual attention via language specifications. Our approach enables
pre-trained vision-language models to generate probabilistic reference
attention maps, which align textual and visual modalities more effectively
while incorporating uncertainty estimates, as compared to their deterministic
counterparts. Experiments on benchmark test problems demonstrate that PARIC
enhances prediction accuracy, mitigates bias, ensures consistent predictions,
and improves robustness across various datasets.
|
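One common way to obtain probabilistic rather than deterministic reference embeddings, as the abstract describes, is to predict a mean and variance per input and sample via the reparameterization trick. The sketch below is a generic, minimal version of that idea, not PARIC's actual architecture; the feature dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class ProbabilisticEmbedding(nn.Module):
    """Maps a deterministic feature to a Gaussian embedding (mean, log-variance)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.mu = nn.Linear(dim_in, dim_out)
        self.logvar = nn.Linear(dim_in, dim_out)

    def forward(self, x, n_samples=8):
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        # Reparameterization trick: draw several embedding samples per input.
        eps = torch.randn(n_samples, *mu.shape)
        samples = mu.unsqueeze(0) + std.unsqueeze(0) * eps
        return samples, mu, logvar    # samples can be averaged into a reference attention map

feat = torch.randn(4, 512)            # pooled vision-language features (illustrative)
head = ProbabilisticEmbedding(512, 256)
samples, mu, logvar = head(feat)
print(samples.shape)                  # torch.Size([8, 4, 256])
```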
2503.11366 | Sedir Mohammed | Sedir Mohammed, Felix Naumann, Hazar Harmouch | Step-by-Step Data Cleaning Recommendations to Improve ML Prediction
Accuracy | null | Proceedings 28th International Conference on Extending Database
Technology (EDBT) 2025, Barcelona, Spain, March 25-28, 2025, 542-554 | 10.48786/edbt.2025.43 | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data quality is crucial in machine learning (ML) applications, as errors in
the data can significantly impact the prediction accuracy of the underlying ML
model. Therefore, data cleaning is an integral component of any ML pipeline.
However, in practical scenarios, data cleaning incurs significant costs, as it
often involves domain experts for configuring and executing the cleaning
process. Thus, efficient resource allocation during data cleaning can enhance
ML prediction accuracy while controlling expenses.
This paper presents COMET, a system designed to optimize data cleaning
efforts for ML tasks. COMET gives step-by-step recommendations on which feature
to clean next, maximizing the efficiency of data cleaning under resource
constraints. We evaluated COMET across various datasets, ML algorithms, and
data error types, demonstrating its robustness and adaptability. Our results
show that COMET consistently outperforms feature importance-based, random, and
another well-known cleaning method, achieving up to 52 and on average 5
percentage points higher ML prediction accuracy than the proposed baselines.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:04:39 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Mohammed",
"Sedir",
""
],
[
"Naumann",
"Felix",
""
],
[
"Harmouch",
"Hazar",
""
]
] | TITLE: Step-by-Step Data Cleaning Recommendations to Improve ML Prediction
Accuracy
ABSTRACT: Data quality is crucial in machine learning (ML) applications, as errors in
the data can significantly impact the prediction accuracy of the underlying ML
model. Therefore, data cleaning is an integral component of any ML pipeline.
However, in practical scenarios, data cleaning incurs significant costs, as it
often involves domain experts for configuring and executing the cleaning
process. Thus, efficient resource allocation during data cleaning can enhance
ML prediction accuracy while controlling expenses.
This paper presents COMET, a system designed to optimize data cleaning
efforts for ML tasks. COMET gives step-by-step recommendations on which feature
to clean next, maximizing the efficiency of data cleaning under resource
constraints. We evaluated COMET across various datasets, ML algorithms, and
data error types, demonstrating its robustness and adaptability. Our results
show that COMET consistently outperforms feature importance-based, random, and
another well-known cleaning method, achieving up to 52 and on average 5
percentage points higher ML prediction accuracy than the proposed baselines.
|
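The recommendation loop described above can be pictured as a greedy budgeted search: at each step, estimate how much cleaning each remaining feature would improve validation accuracy and clean the most beneficial one. The sketch below is that generic greedy baseline on synthetic data, not COMET's actual algorithm; all names and the error model are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def greedy_cleaning_order(X_dirty, X_clean, y, X_val, y_val, budget=3):
    """Greedy sketch: repeatedly clean the single feature that most improves
    validation accuracy (not the actual COMET method)."""
    X_cur = X_dirty.copy()
    cleaned, order = set(), []
    for _ in range(budget):
        best_acc, best_f = -np.inf, None
        for f in range(X_cur.shape[1]):
            if f in cleaned:
                continue
            X_try = X_cur.copy()
            X_try[:, f] = X_clean[:, f]              # simulate cleaning feature f
            acc = accuracy_score(
                y_val, LogisticRegression(max_iter=1000).fit(X_try, y).predict(X_val))
            if acc > best_acc:
                best_acc, best_f = acc, f
        cleaned.add(best_f)
        order.append(best_f)
        X_cur[:, best_f] = X_clean[:, best_f]
    return order

rng = np.random.default_rng(0)
X_clean = rng.normal(size=(200, 5)); y = (X_clean[:, 0] + X_clean[:, 1] > 0).astype(int)
X_dirty = X_clean + rng.normal(scale=[3, 3, 0, 0, 0], size=(200, 5))  # errors in two features
X_val = rng.normal(size=(100, 5)); y_val = (X_val[:, 0] + X_val[:, 1] > 0).astype(int)
print(greedy_cleaning_order(X_dirty, X_clean, y, X_val, y_val, budget=2))
```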
2503.11372 | Ziyue Wang | Ziyue Wang, Chenghao Shi, Neng Wang, Qinghua Yu, Xieyuanli Chen,
Huimin Lu | BEVDiffLoc: End-to-End LiDAR Global Localization in BEV View based on
Diffusion Model | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Localization is one of the core parts of modern robotics. Classic
localization methods typically follow the retrieve-then-register paradigm,
achieving remarkable success. Recently, the emergence of end-to-end
localization approaches has offered distinct advantages, including a
streamlined system architecture and the elimination of the need to store
extensive map data. Although these methods have demonstrated promising results,
current end-to-end localization approaches still face limitations in robustness
and accuracy. The Bird's-Eye-View (BEV) image is one of the most widely adopted
data representations in autonomous driving. It significantly reduces data
complexity while preserving spatial structure and scale consistency, making it
an ideal representation for localization tasks. However, research on BEV-based
end-to-end localization remains notably insufficient. To fill this gap, we
propose BEVDiffLoc, a novel framework that formulates LiDAR localization as a
conditional generation of poses. Leveraging the properties of BEV, we first
introduce a specific data augmentation method to significantly enhance the
diversity of input data. Then, the Maximum Feature Aggregation Module and
Vision Transformer are employed to learn robust features while maintaining
robustness against significant rotational view variations. Finally, we
incorporate a diffusion model that iteratively refines the learned features to
recover the absolute pose. Extensive experiments on the Oxford Radar RobotCar
and NCLT datasets demonstrate that BEVDiffLoc outperforms the baseline methods.
Our code is available at https://github.com/nubot-nudt/BEVDiffLoc.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:17:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Ziyue",
""
],
[
"Shi",
"Chenghao",
""
],
[
"Wang",
"Neng",
""
],
[
"Yu",
"Qinghua",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Lu",
"Huimin",
""
]
] | TITLE: BEVDiffLoc: End-to-End LiDAR Global Localization in BEV View based on
Diffusion Model
ABSTRACT: Localization is one of the core parts of modern robotics. Classic
localization methods typically follow the retrieve-then-register paradigm,
achieving remarkable success. Recently, the emergence of end-to-end
localization approaches has offered distinct advantages, including a
streamlined system architecture and the elimination of the need to store
extensive map data. Although these methods have demonstrated promising results,
current end-to-end localization approaches still face limitations in robustness
and accuracy. The Bird's-Eye-View (BEV) image is one of the most widely adopted
data representations in autonomous driving. It significantly reduces data
complexity while preserving spatial structure and scale consistency, making it
an ideal representation for localization tasks. However, research on BEV-based
end-to-end localization remains notably insufficient. To fill this gap, we
propose BEVDiffLoc, a novel framework that formulates LiDAR localization as a
conditional generation of poses. Leveraging the properties of BEV, we first
introduce a specific data augmentation method to significantly enhance the
diversity of input data. Then, the Maximum Feature Aggregation Module and
Vision Transformer are employed to learn robust features while maintaining
robustness against significant rotational view variations. Finally, we
incorporate a diffusion model that iteratively refines the learned features to
recover the absolute pose. Extensive experiments on the Oxford Radar RobotCar
and NCLT datasets demonstrate that BEVDiffLoc outperforms the baseline methods.
Our code is available at https://github.com/nubot-nudt/BEVDiffLoc.
|
2503.11387 | Wenbo Yan | Wenbo Yan, Shurui Wang, Ying Tan | Hierarchical Information-Guided Spatio-Temporal Mamba for Stock Time
Series Forecasting | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Mamba has demonstrated excellent performance in various time series
forecasting tasks due to its superior selection mechanism. Nevertheless,
conventional Mamba-based models encounter significant challenges in accurately
predicting stock time series, as they fail to adequately capture both the
overarching market dynamics and the intricate interdependencies among
individual stocks. To overcome these constraints, we introduce the Hierarchical
Information-Guided Spatio-Temporal Mamba (HIGSTM) framework. HIGSTM introduces
Index-Guided Frequency Filtering Decomposition to extract commonality and
specificity from time series. The model architecture features a meticulously
designed hierarchical framework that systematically captures both temporal
dynamic patterns and global static relationships within the stock market.
Furthermore, we propose an Information-Guided Mamba that integrates macro
information into the sequence selection process, thereby facilitating more
market-conscious decision-making. Comprehensive experimental evaluations
conducted on the CSI500, CSI800 and CSI1000 datasets demonstrate that HIGSTM
achieves state-of-the-art performance.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:30:38 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Yan",
"Wenbo",
""
],
[
"Wang",
"Shurui",
""
],
[
"Tan",
"Ying",
""
]
] | TITLE: Hierarchical Information-Guided Spatio-Temporal Mamba for Stock Time
Series Forecasting
ABSTRACT: Mamba has demonstrated excellent performance in various time series
forecasting tasks due to its superior selection mechanism. Nevertheless,
conventional Mamba-based models encounter significant challenges in accurately
predicting stock time series, as they fail to adequately capture both the
overarching market dynamics and the intricate interdependencies among
individual stocks. To overcome these constraints, we introduce the Hierarchical
Information-Guided Spatio-Temporal Mamba (HIGSTM) framework. HIGSTM introduces
Index-Guided Frequency Filtering Decomposition to extract commonality and
specificity from time series. The model architecture features a meticulously
designed hierarchical framework that systematically captures both temporal
dynamic patterns and global static relationships within the stock market.
Furthermore, we propose an Information-Guided Mamba that integrates macro
information into the sequence selection process, thereby facilitating more
market-conscious decision-making. Comprehensive experimental evaluations
conducted on the CSI500, CSI800 and CSI1000 datasets demonstrate that HIGSTM
achieves state-of-the-art performance.
|
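The commonality/specificity split mentioned in the abstract can be approximated in a few lines with ordinary frequency filtering: low-frequency components shared with the market index act as "commonality", the residual as "specificity". The sketch below is that generic FFT-based stand-in, not HIGSTM's Index-Guided Frequency Filtering Decomposition; the cutoff and toy series are assumptions.

```python
import numpy as np

def frequency_split(series, keep_ratio=0.05):
    """Split a series into a low-frequency ('common') and high-frequency ('specific') part."""
    spec = np.fft.rfft(series)
    cutoff = max(1, int(len(spec) * keep_ratio))
    low = spec.copy()
    low[cutoff:] = 0
    common = np.fft.irfft(low, n=len(series))
    return common, series - common

rng = np.random.default_rng(0)
t = np.arange(512)
index = np.sin(2 * np.pi * t / 128)                # slow market-wide trend (toy)
stock = index + 0.3 * rng.normal(size=t.size)      # one stock = trend + idiosyncratic noise
common, specific = frequency_split(stock)
print(np.corrcoef(common, index)[0, 1])            # close to 1: shared trend recovered
```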
2503.11389 | Lukas Kroiss | Lukas Kroi{\ss} and Johannes Reschke | Deepfake Detection of Face Images based on a Convolutional Neural
Network | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Fake News and especially deepfakes (generated, non-real image or video
content) have become a serious topic in recent years. With the emergence of
machine learning algorithms it is now easier than ever before to generate such
fake content, even for private persons. This issue of generated fake images is
especially critical in the context of politics and public figures. We want to
address this conflict by building a model based on a Convolutional Neural
Network in order to detect such generated and fake images showing human
portraits. As a basis, we use a pre-trained ResNet-50 model due to its
effectiveness in terms of classifying images. We then adapted the base model to
our task of classifying a single image as authentic/real or fake by adding a
fully connected output layer containing a single neuron indicating the
authenticity of an image. We applied fine-tuning and transfer learning to
develop the model and improve its parameters. For the training process we
collected the image data set "Diverse Face Fake Dataset" containing a wide
range of different image manipulation methods and also diversity in terms of
faces visible on the images. With our final model we reached the following
outstanding performance metrics: precision = 0.98, recall = 0.96, F1-Score = 0.97
and an area-under-curve = 0.99.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:33:22 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Kroiß",
"Lukas",
""
],
[
"Reschke",
"Johannes",
""
]
] | TITLE: Deepfake Detection of Face Images based on a Convolutional Neural
Network
ABSTRACT: Fake News and especially deepfakes (generated, non-real image or video
content) have become a serious topic in recent years. With the emergence of
machine learning algorithms it is now easier than ever before to generate such
fake content, even for private persons. This issue of generated fake images is
especially critical in the context of politics and public figures. We want to
address this conflict by building a model based on a Convolutional Neural
Network in order to detect such generated and fake images showing human
portraits. As a basis, we use a pre-trained ResNet-50 model due to its
effectiveness in terms of classifying images. We then adapted the base model to
our task of classifying a single image as authentic/real or fake by adding a
fully connected output layer containing a single neuron indicating the
authenticity of an image. We applied fine-tuning and transfer learning to
develop the model and improve its parameters. For the training process we
collected the image data set "Diverse Face Fake Dataset" containing a wide
range of different image manipulation methods and also diversity in terms of
faces visible on the images. With our final model we reached the following
outstanding performance metrics: precision = 0.98, recall = 0.96, F1-Score = 0.97
and an area-under-curve = 0.99.
|
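The model described in the abstract (pre-trained ResNet-50 with a single output neuron for authenticity) maps directly onto a few lines of torchvision code. The sketch below shows the head replacement and one training step on a dummy batch; the label convention and hyperparameters are assumptions, and real training would use the "Diverse Face Fake Dataset" images instead of random tensors.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained ResNet-50 backbone (downloads ImageNet weights) with a single-logit head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)     # one neuron: authenticity logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (stand-in for the face dataset).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()      # 1 = fake, 0 = real (assumed convention)
optimizer.zero_grad()
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```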
2503.11392 | David Gastager | David Gastager and Ghazal Ghazaei and Constantin Patsch | Watch and Learn: Leveraging Expert Knowledge and Language for Surgical
Video Understanding | 14 pages main manuscript with 3 figures; 6 pages supplementary
material with 3 figures. To be presented at International Conference on
Information Processing in Computer-Assisted Interventions (IPCAI 2025). To be
published in International Journal of Computer Assisted Radiology and Surgery
(IJCARS) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automated surgical workflow analysis is crucial for education, research, and
clinical decision-making, but the lack of annotated datasets hinders the
development of accurate and comprehensive workflow analysis solutions. We
introduce a novel approach for addressing the sparsity and heterogeneity of
annotated training data inspired by the human learning procedure of watching
experts and understanding their explanations. Our method leverages a
video-language model trained on alignment, denoising, and generative tasks to
learn short-term spatio-temporal and multimodal representations. A
task-specific temporal model is then used to capture relationships across
entire videos. To achieve comprehensive video-language understanding in the
surgical domain, we introduce a data collection and filtering strategy to
construct a large-scale pretraining dataset from educational YouTube videos. We
then utilize parameter-efficient fine-tuning by projecting downstream task
annotations from publicly available surgical datasets into the language domain.
Extensive experiments in two surgical domains demonstrate the effectiveness of
our approach, with performance improvements of up to 7% in phase segmentation
tasks, 8% in zero-shot phase segmentation, and comparable capabilities to
fully-supervised models in few-shot settings. Harnessing our model's
capabilities for long-range temporal localization and text generation, we
present the first comprehensive solution for dense video captioning (DVC) of
surgical videos, addressing this task despite the absence of existing DVC
datasets in the surgical domain. We introduce a novel approach to surgical
workflow understanding that leverages video-language pretraining, large-scale
video pretraining, and optimized fine-tuning. Our method improves performance
over state-of-the-art techniques and enables new downstream tasks for surgical
video understanding.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:36:13 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Gastager",
"David",
""
],
[
"Ghazaei",
"Ghazal",
""
],
[
"Patsch",
"Constantin",
""
]
] | TITLE: Watch and Learn: Leveraging Expert Knowledge and Language for Surgical
Video Understanding
ABSTRACT: Automated surgical workflow analysis is crucial for education, research, and
clinical decision-making, but the lack of annotated datasets hinders the
development of accurate and comprehensive workflow analysis solutions. We
introduce a novel approach for addressing the sparsity and heterogeneity of
annotated training data inspired by the human learning procedure of watching
experts and understanding their explanations. Our method leverages a
video-language model trained on alignment, denoising, and generative tasks to
learn short-term spatio-temporal and multimodal representations. A
task-specific temporal model is then used to capture relationships across
entire videos. To achieve comprehensive video-language understanding in the
surgical domain, we introduce a data collection and filtering strategy to
construct a large-scale pretraining dataset from educational YouTube videos. We
then utilize parameter-efficient fine-tuning by projecting downstream task
annotations from publicly available surgical datasets into the language domain.
Extensive experiments in two surgical domains demonstrate the effectiveness of
our approach, with performance improvements of up to 7% in phase segmentation
tasks, 8% in zero-shot phase segmentation, and comparable capabilities to
fully-supervised models in few-shot settings. Harnessing our model's
capabilities for long-range temporal localization and text generation, we
present the first comprehensive solution for dense video captioning (DVC) of
surgical videos, addressing this task despite the absence of existing DVC
datasets in the surgical domain. We introduce a novel approach to surgical
workflow understanding that leverages video-language pretraining, large-scale
video pretraining, and optimized fine-tuning. Our method improves performance
over state-of-the-art techniques and enables new downstream tasks for surgical
video understanding.
|
2503.11402 | Cristina Improta | Cristina Improta, Rosalia Tufano, Pietro Liguori, Domenico Cotroneo,
Gabriele Bavota | Quality In, Quality Out: Investigating Training Data's Role in AI Code
Generation | Accepted to the 33rd IEEE/ACM International Conference on Program
Comprehension (ICPC 2025) | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Learning-based code generators have seen significant advancements in
recent years. Tools such as GitHub Copilot are used by thousands of developers
with the main promise of a boost in productivity. However, researchers have
recently questioned their impact on code quality showing, for example, that
code generated by DL-based tools may be affected by security vulnerabilities.
Since DL models are trained on large code corpora, one may conjecture that
low-quality code they output is the result of low-quality code they have seen
during training. However, there is very little empirical evidence documenting
this phenomenon. Indeed, most previous work looks at the frequency with which
commercial code generators recommend low-quality code without the possibility
of relating this to their training set. We investigate the extent to which
low-quality code instances seen during training affect the quality of the code
generated at inference time. We start by fine-tuning a pre-trained DL model on
a large-scale dataset being representative of those usually adopted in the
training of code generators. We show that 4.98% of functions in this dataset
exhibit one or more quality issues related to security, maintainability, best
practices, etc. We use the fine-tuned model to generate 551k Python functions,
showing that 5.85% of them are affected by at least one quality issue. We then
remove from the training set the low-quality functions, and use the cleaned
dataset to fine-tune a second model which has been used to generate the same
551k Python functions. We show that the model trained on the cleaned dataset
exhibits similar performance in terms of functional correctness as compared to
the original model while generating a statistically significantly lower
number of low-quality functions (2.16%). Our study empirically documents the
importance of high-quality training data for code generators.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:43:43 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Improta",
"Cristina",
""
],
[
"Tufano",
"Rosalia",
""
],
[
"Liguori",
"Pietro",
""
],
[
"Cotroneo",
"Domenico",
""
],
[
"Bavota",
"Gabriele",
""
]
] | TITLE: Quality In, Quality Out: Investigating Training Data's Role in AI Code
Generation
ABSTRACT: Deep Learning-based code generators have seen significant advancements in
recent years. Tools such as GitHub Copilot are used by thousands of developers
with the main promise of a boost in productivity. However, researchers have
recently questioned their impact on code quality showing, for example, that
code generated by DL-based tools may be affected by security vulnerabilities.
Since DL models are trained on large code corpora, one may conjecture that
low-quality code they output is the result of low-quality code they have seen
during training. However, there is very little empirical evidence documenting
this phenomenon. Indeed, most previous work looks at the frequency with which
commercial code generators recommend low-quality code without the possibility
of relating this to their training set. We investigate the extent to which
low-quality code instances seen during training affect the quality of the code
generated at inference time. We start by fine-tuning a pre-trained DL model on
a large-scale dataset being representative of those usually adopted in the
training of code generators. We show that 4.98% of functions in this dataset
exhibit one or more quality issues related to security, maintainability, best
practices, etc. We use the fine-tuned model to generate 551k Python functions,
showing that 5.85% of them are affected by at least one quality issue. We then
remove from the training set the low-quality functions, and use the cleaned
dataset to fine-tune a second model which has been used to generate the same
551k Python functions. We show that the model trained on the cleaned dataset
exhibits similar performance in terms of functional correctness as compared to
the original model while generating a statistically significantly lower
number of low-quality functions (2.16%). Our study empirically documents the
importance of high-quality training data for code generators.
|
2503.11408 | Zhong Xin | Xin Zhong, Weiwei Ling, Kejia Pan, Pinxia Wu, Jiajing Zhang, Zhiliang
Zhan, Wenbo Xiao | A Neural Network Architecture Based on Attention Gate Mechanism for 3D
Magnetotelluric Forward Modeling | 12 pages, 16 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional three-dimensional magnetotelluric (MT) numerical forward modeling
methods, such as the finite element method (FEM) and finite volume method
(FVM), suffer from high computational costs and low efficiency due to
limitations in mesh refinement and computational resources. We propose a novel
neural network architecture named MTAGU-Net, which integrates an attention
gating mechanism for 3D MT forward modeling. Specifically, a dual-path
attention gating module is designed based on forward response data images and
embedded in the skip connections between the encoder and decoder. This module
enables the fusion of critical anomaly information from shallow feature maps
during the decoding of deep feature maps, significantly enhancing the network's
capability to extract features from anomalous regions. Furthermore, we
introduce a synthetic model generation method utilizing 3D Gaussian random
field (GRF), which accurately replicates the electrical structures of
real-world geological scenarios with high fidelity. Numerical experiments
demonstrate that MTAGU-Net outperforms conventional 3D U-Net in terms of
convergence stability and prediction accuracy, with the structural similarity
index (SSIM) of the forward response data consistently exceeding 0.98.
Moreover, the network can accurately predict forward response data on
previously unseen dataset models, demonstrating its strong generalization
ability and validating the feasibility and effectiveness of this method in
practical applications.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:48:25 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhong",
"Xin",
""
],
[
"Ling",
"Weiwei",
""
],
[
"Pan",
"Kejia",
""
],
[
"Wu",
"Pinxia",
""
],
[
"Zhang",
"Jiajing",
""
],
[
"Zhan",
"Zhiliang",
""
],
[
"Xiao",
"Wenbo",
""
]
] | TITLE: A Neural Network Architecture Based on Attention Gate Mechanism for 3D
Magnetotelluric Forward Modeling
ABSTRACT: Traditional three-dimensional magnetotelluric (MT) numerical forward modeling
methods, such as the finite element method (FEM) and finite volume method
(FVM), suffer from high computational costs and low efficiency due to
limitations in mesh refinement and computational resources. We propose a novel
neural network architecture named MTAGU-Net, which integrates an attention
gating mechanism for 3D MT forward modeling. Specifically, a dual-path
attention gating module is designed based on forward response data images and
embedded in the skip connections between the encoder and decoder. This module
enables the fusion of critical anomaly information from shallow feature maps
during the decoding of deep feature maps, significantly enhancing the network's
capability to extract features from anomalous regions. Furthermore, we
introduce a synthetic model generation method utilizing 3D Gaussian random
field (GRF), which accurately replicates the electrical structures of
real-world geological scenarios with high fidelity. Numerical experiments
demonstrate that MTAGU-Net outperforms conventional 3D U-Net in terms of
convergence stability and prediction accuracy, with the structural similarity
index (SSIM) of the forward response data consistently exceeding 0.98.
Moreover, the network can accurately predict forward response data on
previously unseen dataset models, demonstrating its strong generalization
ability and validating the feasibility and effectiveness of this method in
practical applications.
|
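A 3D Gaussian random field of the kind used above to synthesize training models is commonly built by smoothing white noise with a Gaussian kernel. The sketch below uses that standard construction and maps the field to a toy resistivity range; the grid size, correlation length, and resistivity range are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_random_field_3d(shape=(64, 64, 64), correlation_length=6.0, seed=0):
    """3D Gaussian random field via Gaussian smoothing of white noise (a common construction)."""
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.normal(size=shape), sigma=correlation_length)
    return (field - field.mean()) / field.std()      # zero mean, unit variance

# Map the field to resistivity values (log-uniform range chosen for illustration only).
field = gaussian_random_field_3d()
log_res = 1.0 + 2.0 * (field - field.min()) / (field.max() - field.min())  # log10(ohm*m) in [1, 3]
resistivity_model = 10.0 ** log_res
print(resistivity_model.shape, resistivity_model.min(), resistivity_model.max())
```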
2503.11409 | Shuaifeng Jiao | Shuaifeng Jiao and Zhiwen Zeng and Zhuoqun Su and Xieyuanli Chen and
Zongtan Zhou and Huimin Lu | LuSeg: Efficient Negative and Positive Obstacles Segmentation via
Contrast-Driven Multi-Modal Feature Fusion on the Lunar | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | As lunar exploration missions grow increasingly complex, ensuring safe and
autonomous rover-based surface exploration has become one of the key challenges
in lunar exploration tasks. In this work, we have developed a lunar surface
simulation system called the Lunar Exploration Simulator System (LESS) and the
LunarSeg dataset, which provides RGB-D data for lunar obstacle segmentation
that includes both positive and negative obstacles. Additionally, we propose a
novel two-stage segmentation network called LuSeg. Through contrastive
learning, it enforces semantic consistency between the RGB encoder from Stage I
and the depth encoder from Stage II. Experimental results on our proposed
LunarSeg dataset and additional public real-world NPO road obstacle dataset
demonstrate that LuSeg achieves state-of-the-art segmentation performance for
both positive and negative obstacles while maintaining a high inference speed
of approximately 57\,Hz. We have released the implementation of our LESS
system, LunarSeg dataset, and the code of LuSeg
at: https://github.com/nubot-nudt/LuSeg.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:51:52 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Jiao",
"Shuaifeng",
""
],
[
"Zeng",
"Zhiwen",
""
],
[
"Su",
"Zhuoqun",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Zhou",
"Zongtan",
""
],
[
"Lu",
"Huimin",
""
]
] | TITLE: LuSeg: Efficient Negative and Positive Obstacles Segmentation via
Contrast-Driven Multi-Modal Feature Fusion on the Lunar
ABSTRACT: As lunar exploration missions grow increasingly complex, ensuring safe and
autonomous rover-based surface exploration has become one of the key challenges
in lunar exploration tasks. In this work, we have developed a lunar surface
simulation system called the Lunar Exploration Simulator System (LESS) and the
LunarSeg dataset, which provides RGB-D data for lunar obstacle segmentation
that includes both positive and negative obstacles. Additionally, we propose a
novel two-stage segmentation network called LuSeg. Through contrastive
learning, it enforces semantic consistency between the RGB encoder from Stage I
and the depth encoder from Stage II. Experimental results on our proposed
LunarSeg dataset and additional public real-world NPO road obstacle dataset
demonstrate that LuSeg achieves state-of-the-art segmentation performance for
both positive and negative obstacles while maintaining a high inference speed
of approximately 57\,Hz. We have released the implementation of our LESS
system, LunarSeg dataset, and the code of LuSeg
at: https://github.com/nubot-nudt/LuSeg.
|
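The Stage I/Stage II consistency idea can be illustrated with a simple feature-alignment loss: the depth encoder is pushed to produce features that agree with the (frozen) RGB encoder at each spatial location. The sketch below uses a cosine-similarity loss as a generic stand-in for LuSeg's contrastive objective; tensor shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(feat_rgb, feat_depth):
    """Encourage the depth encoder to produce features consistent with the frozen
    RGB encoder at every spatial location. A generic cosine-similarity sketch."""
    f1 = F.normalize(feat_rgb.flatten(2), dim=1)      # (B, C, H*W), unit-norm channels
    f2 = F.normalize(feat_depth.flatten(2), dim=1)
    return (1.0 - (f1 * f2).sum(dim=1)).mean()        # 1 - cosine similarity, averaged

feat_rgb = torch.randn(2, 256, 32, 32)     # Stage-I RGB features (frozen, detached)
feat_depth = torch.randn(2, 256, 32, 32)   # Stage-II depth features (trainable)
print(float(consistency_loss(feat_rgb.detach(), feat_depth)))
```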
2503.11411 | Xu Liu | Xu Liu, Taha Aksu, Juncheng Liu, Qingsong Wen, Yuxuan Liang, Caiming
Xiong, Silvio Savarese, Doyen Sahoo, Junnan Li, Chenghao Liu | Empowering Time Series Analysis with Synthetic Data: A Survey and
Outlook in the Era of Foundation Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Time series analysis is crucial for understanding dynamics of complex
systems. Recent advances in foundation models have led to task-agnostic Time
Series Foundation Models (TSFMs) and Large Language Model-based Time Series
Models (TSLLMs), enabling generalized learning and integrating contextual
information. However, their success depends on large, diverse, and high-quality
datasets, which are challenging to build due to regulatory, diversity, quality,
and quantity constraints. Synthetic data emerge as a viable solution,
addressing these challenges by offering scalable, unbiased, and high-quality
alternatives. This survey provides a comprehensive review of synthetic data for
TSFMs and TSLLMs, analyzing data generation strategies, their role in model
pretraining, fine-tuning, and evaluation, and identifying future research
directions.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:53:46 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liu",
"Xu",
""
],
[
"Aksu",
"Taha",
""
],
[
"Liu",
"Juncheng",
""
],
[
"Wen",
"Qingsong",
""
],
[
"Liang",
"Yuxuan",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Sahoo",
"Doyen",
""
],
[
"Li",
"Junnan",
""
],
[
"Liu",
"Chenghao",
""
]
] | TITLE: Empowering Time Series Analysis with Synthetic Data: A Survey and
Outlook in the Era of Foundation Models
ABSTRACT: Time series analysis is crucial for understanding dynamics of complex
systems. Recent advances in foundation models have led to task-agnostic Time
Series Foundation Models (TSFMs) and Large Language Model-based Time Series
Models (TSLLMs), enabling generalized learning and integrating contextual
information. However, their success depends on large, diverse, and high-quality
datasets, which are challenging to build due to regulatory, diversity, quality,
and quantity constraints. Synthetic data emerge as a viable solution,
addressing these challenges by offering scalable, unbiased, and high-quality
alternatives. This survey provides a comprehensive review of synthetic data for
TSFMs and TSLLMs, analyzing data generation strategies, their role in model
pretraining, fine-tuning, and evaluation, and identifying future research
directions.
|
2503.11414 | Yang Lu | Chen Shu, Mengke Li, Yiqun Zhang, Yang Lu, Bo Han, Yiu-ming Cheung,
Hanzi Wang | Classifying Long-tailed and Label-noise Data via Disentangling and
Unlearning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world datasets, the challenges of long-tailed distributions and noisy
labels often coexist, posing obstacles to the model training and performance.
Existing studies on long-tailed noisy label learning (LTNLL) typically assume
that the generation of noisy labels is independent of the long-tailed
distribution, which may not be true from a practical perspective. In real-world
situations, we observe that the tail class samples are more likely to be
mislabeled as head, exacerbating the original degree of imbalance. We call this
phenomenon ``tail-to-head (T2H)'' noise. T2H noise severely degrades model
performance by polluting the head classes and forcing the model to learn the
tail samples as head. To address this challenge, we investigate the dynamic
misleading process of the noisy labels and propose a novel method called
Disentangling and Unlearning for Long-tailed and Label-noisy data (DULL). It
first employs the Inner-Feature Disentangling (IFD) to disentangle feature
internally. Based on this, the Inner-Feature Partial Unlearning (IFPU) is then
applied to weaken and unlearn incorrect feature regions correlated to wrong
classes. This method prevents the model from being misled by noisy labels,
enhancing the model's robustness against noise. To provide a controlled
experimental environment, we further propose a new noise addition algorithm to
simulate T2H noise. Extensive experiments on both simulated and real-world
datasets demonstrate the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 13:58:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shu",
"Chen",
""
],
[
"Li",
"Mengke",
""
],
[
"Zhang",
"Yiqun",
""
],
[
"Lu",
"Yang",
""
],
[
"Han",
"Bo",
""
],
[
"Cheung",
"Yiu-ming",
""
],
[
"Wang",
"Hanzi",
""
]
] | TITLE: Classifying Long-tailed and Label-noise Data via Disentangling and
Unlearning
ABSTRACT: In real-world datasets, the challenges of long-tailed distributions and noisy
labels often coexist, posing obstacles to the model training and performance.
Existing studies on long-tailed noisy label learning (LTNLL) typically assume
that the generation of noisy labels is independent of the long-tailed
distribution, which may not be true from a practical perspective. In real-world
situations, we observe that the tail class samples are more likely to be
mislabeled as head, exacerbating the original degree of imbalance. We call this
phenomenon ``tail-to-head (T2H)'' noise. T2H noise severely degrades model
performance by polluting the head classes and forcing the model to learn the
tail samples as head. To address this challenge, we investigate the dynamic
misleading process of the noisy labels and propose a novel method called
Disentangling and Unlearning for Long-tailed and Label-noisy data (DULL). It
first employs the Inner-Feature Disentangling (IFD) to disentangle feature
internally. Based on this, the Inner-Feature Partial Unlearning (IFPU) is then
applied to weaken and unlearn incorrect feature regions correlated to wrong
classes. This method prevents the model from being misled by noisy labels,
enhancing the model's robustness against noise. To provide a controlled
experimental environment, we further propose a new noise addition algorithm to
simulate T2H noise. Extensive experiments on both simulated and real-world
datasets demonstrate the effectiveness of our proposed method.
|
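A simplified version of the tail-to-head noise simulation reads as follows: tail-class labels are flipped to randomly chosen head classes with some probability, which both corrupts labels and worsens the imbalance. The sketch below is that simplified variant, not the paper's exact noise-addition algorithm; the class split and flip probability are illustrative.

```python
import numpy as np

def add_t2h_noise(labels, head_classes, tail_classes, flip_prob=0.3, seed=0):
    """Simplified tail-to-head noise: each tail-class label is flipped to a random
    head class with probability `flip_prob` (not the paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    tail_mask = np.isin(labels, tail_classes)
    flip = tail_mask & (rng.random(labels.shape) < flip_prob)
    noisy[flip] = rng.choice(head_classes, size=flip.sum())
    return noisy

# Toy long-tailed label set: classes 0-1 are head, classes 8-9 are tail.
labels = np.concatenate([np.zeros(500), np.ones(400), np.full(30, 8), np.full(20, 9)]).astype(int)
noisy = add_t2h_noise(labels, head_classes=[0, 1], tail_classes=[8, 9])
print(np.bincount(labels, minlength=10))   # original class counts
print(np.bincount(noisy, minlength=10))    # tail classes shrink, head classes grow
```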
2503.11423 | Hongxiang Zhao | Hongxiang Zhao, Xingchen Liu, Mutian Xu, Yiming Hao, Weikai Chen,
Xiaoguang Han | TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object
Interaction for Generalizable Robotic Manipulation | Conference on Computer Vision and Pattern Recognition 2025 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address key limitations in existing datasets and models for task-oriented
hand-object interaction video generation, a critical approach of generating
video demonstrations for robotic imitation learning. Current datasets, such as
Ego4D, often suffer from inconsistent view perspectives and misaligned
interactions, leading to reduced video quality and limiting their applicability
for precise imitation learning tasks. Towards this end, we introduce TASTE-Rob
-- a pioneering large-scale dataset of 100,856 ego-centric hand-object
interaction videos. Each video is meticulously aligned with language
instructions and recorded from a consistent camera viewpoint to ensure
interaction clarity. By fine-tuning a Video Diffusion Model (VDM) on TASTE-Rob,
we achieve realistic object interactions, though we observed occasional
inconsistencies in hand grasping postures. To enhance realism, we introduce a
three-stage pose-refinement pipeline that improves hand posture accuracy in
generated videos. Our curated dataset, coupled with the specialized
pose-refinement framework, provides notable performance gains in generating
high-quality, task-oriented hand-object interaction videos, resulting in
achieving superior generalizable robotic manipulation. The TASTE-Rob dataset
will be made publicly available upon publication to foster further advancements
in the field.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:09:31 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhao",
"Hongxiang",
""
],
[
"Liu",
"Xingchen",
""
],
[
"Xu",
"Mutian",
""
],
[
"Hao",
"Yiming",
""
],
[
"Chen",
"Weikai",
""
],
[
"Han",
"Xiaoguang",
""
]
] | TITLE: TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object
Interaction for Generalizable Robotic Manipulation
ABSTRACT: We address key limitations in existing datasets and models for task-oriented
hand-object interaction video generation, a critical approach of generating
video demonstrations for robotic imitation learning. Current datasets, such as
Ego4D, often suffer from inconsistent view perspectives and misaligned
interactions, leading to reduced video quality and limiting their applicability
for precise imitation learning tasks. Towards this end, we introduce TASTE-Rob
-- a pioneering large-scale dataset of 100,856 ego-centric hand-object
interaction videos. Each video is meticulously aligned with language
instructions and recorded from a consistent camera viewpoint to ensure
interaction clarity. By fine-tuning a Video Diffusion Model (VDM) on TASTE-Rob,
we achieve realistic object interactions, though we observed occasional
inconsistencies in hand grasping postures. To enhance realism, we introduce a
three-stage pose-refinement pipeline that improves hand posture accuracy in
generated videos. Our curated dataset, coupled with the specialized
pose-refinement framework, provides notable performance gains in generating
high-quality, task-oriented hand-object interaction videos, resulting in
achieving superior generalizable robotic manipulation. The TASTE-Rob dataset
will be made publicly available upon publication to foster further advancements
in the field.
|
2503.11441 | Jia Zhang | Jia Zhang, Chen-Xi Zhang, Yao Liu, Yi-Xuan Jin, Xiao-Wen Yang, Bo
Zheng, Yi Liu and Lan-Zhe Guo | D3: Diversity, Difficulty, and Dependability-Aware Data Selection for
Sample-Efficient LLM Instruction Tuning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in instruction tuning for large language models (LLMs)
suggest that a small, high-quality dataset can significantly equip LLMs with
instruction-following capabilities, outperforming large datasets often burdened
by quality and redundancy issues. However, the challenge lies in automatically
identifying valuable subsets from large datasets to boost both the
effectiveness and efficiency of instruction tuning. In this paper, we first
establish data selection criteria based on three distinct aspects of data
value: diversity, difficulty, and dependability, and then propose the D3 method
comprising two key steps of scoring and selection. Specifically, in the scoring
step, we define the diversity function to measure sample distinctiveness and
introduce the uncertainty-based prediction difficulty to evaluate sample
difficulty by mitigating the interference of context-oriented generation
diversity. Additionally, we integrate an external LLM for dependability
assessment. In the selection step, we formulate the D3 weighted coreset
objective, which jointly optimizes three aspects of data value to solve for the
most valuable subset. The two steps of D3 can iterate multiple rounds,
incorporating feedback to refine the selection focus adaptively. Experiments on
three datasets demonstrate the effectiveness of D3 in endowing LLMs with
competitive or even superior instruction-following capabilities using less than
10% of the entire dataset.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:28:19 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zhang",
"Jia",
""
],
[
"Zhang",
"Chen-Xi",
""
],
[
"Liu",
"Yao",
""
],
[
"Jin",
"Yi-Xuan",
""
],
[
"Yang",
"Xiao-Wen",
""
],
[
"Zheng",
"Bo",
""
],
[
"Liu",
"Yi",
""
],
[
"Guo",
"Lan-Zhe",
""
]
] | TITLE: D3: Diversity, Difficulty, and Dependability-Aware Data Selection for
Sample-Efficient LLM Instruction Tuning
ABSTRACT: Recent advancements in instruction tuning for large language models (LLMs)
suggest that a small, high-quality dataset can significantly equip LLMs with
instruction-following capabilities, outperforming large datasets often burdened
by quality and redundancy issues. However, the challenge lies in automatically
identifying valuable subsets from large datasets to boost both the
effectiveness and efficiency of instruction tuning. In this paper, we first
establish data selection criteria based on three distinct aspects of data
value: diversity, difficulty, and dependability, and then propose the D3 method
comprising two key steps of scoring and selection. Specifically, in the scoring
step, we define the diversity function to measure sample distinctiveness and
introduce the uncertainty-based prediction difficulty to evaluate sample
difficulty by mitigating the interference of context-oriented generation
diversity. Additionally, we integrate an external LLM for dependability
assessment. In the selection step, we formulate the D3 weighted coreset
objective, which jointly optimizes three aspects of data value to solve for the
most valuable subset. The two steps of D3 can iterate multiple rounds,
incorporating feedback to refine the selection focus adaptively. Experiments on
three datasets demonstrate the effectiveness of D3 in endowing LLMs with
competitive or even superior instruction-following capabilities using less than
10% of the entire dataset.
|
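The scoring-and-selection loop can be approximated by a greedy procedure that combines a per-sample diversity term (distance to what has already been picked) with difficulty and dependability scores. The sketch below is that generic combination on random data, not the actual D3 weighted coreset objective; the weights and score definitions are assumptions.

```python
import numpy as np

def select_subset(embeddings, difficulty, dependability, k, w=(1.0, 1.0, 1.0)):
    """Greedy sketch combining three per-sample values: diversity (distance to the
    already-selected set), difficulty, and dependability. Not the actual D3 objective."""
    n = len(embeddings)
    selected = []
    min_dist = np.full(n, np.inf)
    for _ in range(k):
        diversity = np.where(np.isinf(min_dist), 1.0, min_dist)   # first pick: uniform diversity
        score = w[0] * diversity + w[1] * difficulty + w[2] * dependability
        score[selected] = -np.inf                                  # never re-pick a sample
        i = int(np.argmax(score))
        selected.append(i)
        min_dist = np.minimum(min_dist, np.linalg.norm(embeddings - embeddings[i], axis=1))
    return selected

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32))      # instruction embeddings (illustrative)
difficulty = rng.random(1000)          # e.g. uncertainty-based prediction difficulty
dependability = rng.random(1000)       # e.g. external-LLM quality score
print(select_subset(emb, difficulty, dependability, k=10))
```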
2503.11461 | Runze Xiao | Runze Xiao, Yongdong Wang, Yusuke Tsunoda, Koichi Osuka and Hajime
Asama | MRS-CWC: A Weakly Constrained Multi-Robot System with Controllable
Constraint Stiffness for Mobility and Navigation in Unknown 3D Rough
Environments | null | null | null | null | cs.RO cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Navigating unknown three-dimensional (3D) rugged environments is challenging
for multi-robot systems. Traditional discrete systems struggle with rough
terrain due to limited individual mobility, while modular systems--where rigid,
controllable constraints link robot units--improve traversal but suffer from
high control complexity and reduced flexibility. To address these limitations,
we propose the Multi-Robot System with Controllable Weak Constraints (MRS-CWC),
where robot units are connected by constraints with dynamically adjustable
stiffness. This adaptive mechanism softens or stiffens in real-time during
environmental interactions, ensuring a balance between flexibility and
mobility. We formulate the system's dynamics and control model and evaluate
MRS-CWC against six baseline methods and an ablation variant in a benchmark
dataset with 100 different simulation terrains. Results show that MRS-CWC
achieves the highest navigation completion rate and ranks second in success
rate, efficiency, and energy cost in the highly rugged terrain group,
outperforming all baseline methods without relying on environmental modeling,
path planning, or complex control. Even where MRS-CWC ranks second, its
performance is only slightly behind a more complex ablation variant with
environmental modeling and path planning. Finally, we develop a physical
prototype and validate its feasibility in a constructed rugged environment. For
videos, simulation benchmarks, and code, please visit
https://wyd0817.github.io/project-mrs-cwc/.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:47:58 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Xiao",
"Runze",
""
],
[
"Wang",
"Yongdong",
""
],
[
"Tsunoda",
"Yusuke",
""
],
[
"Osuka",
"Koichi",
""
],
[
"Asama",
"Hajime",
""
]
] | TITLE: MRS-CWC: A Weakly Constrained Multi-Robot System with Controllable
Constraint Stiffness for Mobility and Navigation in Unknown 3D Rough
Environments
ABSTRACT: Navigating unknown three-dimensional (3D) rugged environments is challenging
for multi-robot systems. Traditional discrete systems struggle with rough
terrain due to limited individual mobility, while modular systems--where rigid,
controllable constraints link robot units--improve traversal but suffer from
high control complexity and reduced flexibility. To address these limitations,
we propose the Multi-Robot System with Controllable Weak Constraints (MRS-CWC),
where robot units are connected by constraints with dynamically adjustable
stiffness. This adaptive mechanism softens or stiffens in real-time during
environmental interactions, ensuring a balance between flexibility and
mobility. We formulate the system's dynamics and control model and evaluate
MRS-CWC against six baseline methods and an ablation variant in a benchmark
dataset with 100 different simulation terrains. Results show that MRS-CWC
achieves the highest navigation completion rate and ranks second in success
rate, efficiency, and energy cost in the highly rugged terrain group,
outperforming all baseline methods without relying on environmental modeling,
path planning, or complex control. Even where MRS-CWC ranks second, its
performance is only slightly behind a more complex ablation variant with
environmental modeling and path planning. Finally, we develop a physical
prototype and validate its feasibility in a constructed rugged environment. For
videos, simulation benchmarks, and code, please visit
https://wyd0817.github.io/project-mrs-cwc/.
|
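The controllable weak constraint can be pictured as a spring-damper link whose stiffness is changed at run time: soft for compliance over rough terrain, stiff when the units should move as one. The sketch below is a 1D two-unit toy simulation of that idea, not the paper's dynamics or control model; all constants are illustrative.

```python
import numpy as np

def simulate_pair(stiffness_schedule, rest_length=1.0, damping=2.0, dt=0.01, steps=500):
    """Two unit-mass robot units linked by a controllable-stiffness spring-damper (1D toy)."""
    x = np.array([0.0, 2.0])      # positions
    v = np.zeros(2)               # velocities
    gaps = []
    for step in range(steps):
        k = stiffness_schedule(step)                  # stiffness can change in real time
        stretch = (x[1] - x[0]) - rest_length
        f = k * stretch + damping * (v[1] - v[0])     # force on unit 0 (opposite on unit 1)
        v += np.array([f, -f]) * dt                   # semi-implicit Euler integration
        x += v * dt
        gaps.append(x[1] - x[0])
    return gaps

# Soft constraint for the first half (slow, compliant convergence), stiff afterwards.
gaps = simulate_pair(lambda s: 0.5 if s < 250 else 50.0)
print(round(gaps[249], 2), round(gaps[-1], 2))        # gap pulled back toward the rest length
```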
2503.11465 | Hang Shao | Hang Shao, Lei Luo, Jianjun Qian, Mengkai Yan, Shuo Chen, Jian Yang | Remote Photoplethysmography in Real-World and Extreme Lighting Scenarios | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Physiological activities can be manifested by the sensitive changes in facial
imaging. While they are barely observable to our eyes, computer vision methods
can, and the derived remote photoplethysmography (rPPG) has shown considerable
promise. However, existing studies mainly rely on spatial skin recognition and
temporal rhythmic interactions, so they focus on identifying explicit features
under ideal light conditions, but perform poorly in-the-wild with intricate
obstacles and extreme illumination exposure. In this paper, we propose an
end-to-end video transformer model for rPPG. It strives to eliminate complex
and unknown external time-varying interferences, whether they are sufficient to
occupy subtle biosignal amplitudes or exist as periodic perturbations that
hinder network training. In the specific implementation, we utilize global
interference sharing, subject background reference, and self-supervised
disentanglement to eliminate interference, and further guide learning based on
spatiotemporal filtering, reconstruction guidance, and frequency domain and
biological prior constraints to achieve effective rPPG. To the best of our
knowledge, this is the first robust rPPG model for real outdoor scenarios based
on natural face videos, and is lightweight to deploy. Extensive experiments
show the competitiveness and performance of our model in rPPG prediction across
datasets and scenes.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:50:58 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Shao",
"Hang",
""
],
[
"Luo",
"Lei",
""
],
[
"Qian",
"Jianjun",
""
],
[
"Yan",
"Mengkai",
""
],
[
"Chen",
"Shuo",
""
],
[
"Yang",
"Jian",
""
]
] | TITLE: Remote Photoplethysmography in Real-World and Extreme Lighting Scenarios
ABSTRACT: Physiological activities can be manifested by the sensitive changes in facial
imaging. While they are barely observable to our eyes, computer vision methods
can, and the derived remote photoplethysmography (rPPG) has shown considerable
promise. However, existing studies mainly rely on spatial skin recognition and
temporal rhythmic interactions, so they focus on identifying explicit features
under ideal light conditions, but perform poorly in-the-wild with intricate
obstacles and extreme illumination exposure. In this paper, we propose an
end-to-end video transformer model for rPPG. It strives to eliminate complex
and unknown external time-varying interferences, whether they are sufficient to
occupy subtle biosignal amplitudes or exist as periodic perturbations that
hinder network training. In the specific implementation, we utilize global
interference sharing, subject background reference, and self-supervised
disentanglement to eliminate interference, and further guide learning based on
spatiotemporal filtering, reconstruction guidance, and frequency domain and
biological prior constraints to achieve effective rPPG. To the best of our
knowledge, this is the first robust rPPG model for real outdoor scenarios based
on natural face videos, and is lightweight to deploy. Extensive experiments
show the competitiveness and performance of our model in rPPG prediction across
datasets and scenes.
|
2503.11466 | Paula Lago | Azhar Ali Khaked, Nobuyuki Oishi, Daniel Roggen and Paula Lago | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning
Models against Variability | null | Sensors, 25(2), 430 (2025) | 10.3390/s25020430 | null | cs.HC cs.LG eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human Activity Recognition (HAR) using wearable inertial measurement unit
(IMU) sensors can revolutionize healthcare by enabling continual health
monitoring, disease prediction, and routine recognition. Despite the high
accuracy of Deep Learning (DL) HAR models, their robustness to real-world
variabilities remains untested, as they have primarily been trained and tested
on limited lab-confined data. In this study, we isolate subject, device,
position, and orientation variability to determine their effect on DL HAR
models and assess the robustness of these models in real-world conditions. We
evaluated the DL HAR models using the HARVAR and REALDISP datasets, providing a
comprehensive discussion on the impact of variability on data distribution
shifts and changes in model performance. Our experiments measured shifts in
data distribution using Maximum Mean Discrepancy (MMD) and observed DL model
performance drops due to variability. We conclude that the studied variabilities
affect DL HAR models differently, and there is an inverse relationship between
data distribution shifts and model performance. The compounding effect of
variability was analyzed, and the implications of variabilities in real-world
scenarios were highlighted. MMD proved an effective metric for calculating data
distribution shifts and explained the drop in performance due to variabilities
in HARVAR and REALDISP datasets. Combining our understanding of variability
with evaluating its effects will facilitate the development of more robust DL
HAR models and optimal training techniques, allowing future models to be assessed
not only on their maximum F1 score but also on their ability to generalize
effectively.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:53:56 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Khaked",
"Azhar Ali",
""
],
[
"Oishi",
"Nobuyuki",
""
],
[
"Roggen",
"Daniel",
""
],
[
"Lago",
"Paula",
""
]
] | TITLE: In Shift and In Variance: Assessing the Robustness of HAR Deep Learning
Models against Variability
ABSTRACT: Human Activity Recognition (HAR) using wearable inertial measurement unit
(IMU) sensors can revolutionize healthcare by enabling continual health
monitoring, disease prediction, and routine recognition. Despite the high
accuracy of Deep Learning (DL) HAR models, their robustness to real-world
variabilities remains untested, as they have primarily been trained and tested
on limited lab-confined data. In this study, we isolate subject, device,
position, and orientation variability to determine their effect on DL HAR
models and assess the robustness of these models in real-world conditions. We
evaluated the DL HAR models using the HARVAR and REALDISP datasets, providing a
comprehensive discussion on the impact of variability on data distribution
shifts and changes in model performance. Our experiments measured shifts in
data distribution using Maximum Mean Discrepancy (MMD) and observed DL model
performance drops due to variability. We conclude that the studied variabilities
affect DL HAR models differently, and there is an inverse relationship between
data distribution shifts and model performance. The compounding effect of
variability was analyzed, and the implications of variabilities in real-world
scenarios were highlighted. MMD proved an effective metric for calculating data
distribution shifts and explained the drop in performance due to variabilities
in HARVAR and REALDISP datasets. Combining our understanding of variability
with evaluating its effects will facilitate the development of more robust DL
HAR models and optimal training techniques, allowing future models to be assessed
not only on their maximum F1 score but also on their ability to generalize
effectively.
|
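Maximum Mean Discrepancy with an RBF kernel, used above to quantify distribution shift, can be computed in a few lines. The sketch below is a standard (biased) estimator of squared MMD on synthetic feature sets; the kernel bandwidth and feature dimensions are illustrative, not the paper's settings.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between sample sets X and Y with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
device_a = rng.normal(0.0, 1.0, size=(300, 16))        # e.g. IMU features from device A
device_b = rng.normal(0.5, 1.2, size=(300, 16))        # device B: shifted distribution
print(rbf_mmd2(device_a, device_a[:150], sigma=4.0))   # near 0: no shift
print(rbf_mmd2(device_a, device_b, sigma=4.0))         # clearly larger: shift detected
```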
2503.11469 | Jens Engel | Jens Engel, Andrea Castellani, Patricia Wollstadt, Felix Lanfermann,
Thomas Schmitt, Sebastian Schmitt, Lydia Fischer, Steffen Limmer, David
Luttropp, Florian Jomrich, Ren\'e Unger, Tobias Rodemann | A Real-World Energy Management Dataset from a Smart Company Building for
Optimization and Machine Learning | 22 pages, 9 figures. Preprint submitted to Scientific Data | null | null | null | eess.SY cs.LG cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a large real-world dataset obtained from monitoring a smart
company facility over the course of six years, from 2018 to 2023. The dataset
includes energy consumption data from various facility areas and components,
energy production data from a photovoltaic system and a combined heat and power
plant, operational data from heating and cooling systems, and weather data from
an on-site weather station. The measurement sensors installed throughout the
facility are organized in a hierarchical metering structure with multiple
sub-metering levels, which is reflected in the dataset. The dataset contains
measurement data from 72 energy meters, 9 heat meters and a weather station.
Both raw and processed data at different processing levels, including labeled
issues, is available. In this paper, we describe the data acquisition and
post-processing employed to create the dataset. The dataset enables the
application of a wide range of methods in the domain of energy management,
including optimization, modeling, and machine learning to optimize building
operations and reduce costs and carbon emissions.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 14:55:22 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Engel",
"Jens",
""
],
[
"Castellani",
"Andrea",
""
],
[
"Wollstadt",
"Patricia",
""
],
[
"Lanfermann",
"Felix",
""
],
[
"Schmitt",
"Thomas",
""
],
[
"Schmitt",
"Sebastian",
""
],
[
"Fischer",
"Lydia",
""
],
[
"Limmer",
"Steffen",
""
],
[
"Luttropp",
"David",
""
],
[
"Jomrich",
"Florian",
""
],
[
"Unger",
"René",
""
],
[
"Rodemann",
"Tobias",
""
]
] | TITLE: A Real-World Energy Management Dataset from a Smart Company Building for
Optimization and Machine Learning
ABSTRACT: We present a large real-world dataset obtained from monitoring a smart
company facility over the course of six years, from 2018 to 2023. The dataset
includes energy consumption data from various facility areas and components,
energy production data from a photovoltaic system and a combined heat and power
plant, operational data from heating and cooling systems, and weather data from
an on-site weather station. The measurement sensors installed throughout the
facility are organized in a hierarchical metering structure with multiple
sub-metering levels, which is reflected in the dataset. The dataset contains
measurement data from 72 energy meters, 9 heat meters and a weather station.
Both raw and processed data at different processing levels, including labeled
issues, is available. In this paper, we describe the data acquisition and
post-processing employed to create the dataset. The dataset enables the
application of a wide range of methods in the domain of energy management,
including optimization, modeling, and machine learning to optimize building
operations and reduce costs and carbon emissions.
|
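The energy-management record above describes a hierarchical metering structure with multiple sub-metering levels. The sketch below, under an assumed schema (column names, hierarchy, and tolerance are placeholders, not the dataset's actual layout), shows one way such a hierarchy can be sanity-checked by comparing sub-meter sums against their parent meter.

```python
# Illustrative consistency check between a parent meter and its sub-meters.
import pandas as pd

readings = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-01-01 00:00"] * 3 + ["2021-01-01 00:15"] * 3),
    "meter_id":  ["main", "hvac", "lighting"] * 2,
    "kwh":       [10.2, 6.1, 3.9, 11.0, 6.5, 4.3],
})
hierarchy = {"main": ["hvac", "lighting"]}   # parent -> children (assumed)

wide = readings.pivot(index="timestamp", columns="meter_id", values="kwh")
for parent, children in hierarchy.items():
    residual = wide[parent] - wide[children].sum(axis=1)
    flagged = residual.abs() > 0.05 * wide[parent]   # 5% tolerance (assumed)
    print(parent, "inconsistent intervals:", int(flagged.sum()))
```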
2503.11495 | Jian Hu | Zixu Cheng, Jian Hu, Ziquan Liu, Chenyang Si, Wei Li, Shaogang Gong | V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning | A benchmark for Video Spatio-Temporal Reasoning | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Humans process video reasoning in a sequential spatio-temporal reasoning
logic: we first identify the relevant frames ("when") and then analyse the
spatial relationships ("where") between key objects, and finally leverage these
relationships to draw inferences ("what"). However, can Video Large Language
Models (Video-LLMs) also "reason through a sequential spatio-temporal logic" in
videos? Existing Video-LLM benchmarks primarily focus on assessing object
presence, neglecting relational reasoning. Consequently, it is difficult to
measure whether a model truly comprehends object interactions (actions/events)
in videos or merely relies on pre-trained "memory" of co-occurrences as biases
in generating answers. In this work, we introduce a Video Spatio-Temporal
Reasoning (V-STaR) benchmark to address these shortcomings. The key idea is to
decompose video understanding into a Reverse Spatio-Temporal Reasoning (RSTR)
task that simultaneously evaluates what objects are present, when events occur,
and where they are located while capturing the underlying Chain-of-thought
(CoT) logic. To support this evaluation, we construct a dataset to elicit the
spatial-temporal reasoning process of Video-LLMs. It contains coarse-to-fine
CoT questions generated by a semi-automated GPT-4-powered pipeline, embedding
explicit reasoning chains to mimic human cognition. Experiments from 14
Video-LLMs on our V-STaR reveal significant gaps between current Video-LLMs and
the needs for robust and consistent spatio-temporal reasoning.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:21:44 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cheng",
"Zixu",
""
],
[
"Hu",
"Jian",
""
],
[
"Liu",
"Ziquan",
""
],
[
"Si",
"Chenyang",
""
],
[
"Li",
"Wei",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning
ABSTRACT: Humans process video reasoning in a sequential spatio-temporal reasoning
logic: we first identify the relevant frames ("when") and then analyse the
spatial relationships ("where") between key objects, and finally leverage these
relationships to draw inferences ("what"). However, can Video Large Language
Models (Video-LLMs) also "reason through a sequential spatio-temporal logic" in
videos? Existing Video-LLM benchmarks primarily focus on assessing object
presence, neglecting relational reasoning. Consequently, it is difficult to
measure whether a model truly comprehends object interactions (actions/events)
in videos or merely relies on pre-trained "memory" of co-occurrences as biases
in generating answers. In this work, we introduce a Video Spatio-Temporal
Reasoning (V-STaR) benchmark to address these shortcomings. The key idea is to
decompose video understanding into a Reverse Spatio-Temporal Reasoning (RSTR)
task that simultaneously evaluates what objects are present, when events occur,
and where they are located while capturing the underlying Chain-of-thought
(CoT) logic. To support this evaluation, we construct a dataset to elicit the
spatial-temporal reasoning process of Video-LLMs. It contains coarse-to-fine
CoT questions generated by a semi-automated GPT-4-powered pipeline, embedding
explicit reasoning chains to mimic human cognition. Experiments from 14
Video-LLMs on our V-STaR reveal significant gaps between current Video-LLMs and
the needs for robust and consistent spatio-temporal reasoning.
|
2503.11496 | Shaofeng Liang | Shaofeng Liang and Runwei Guan and Wangwang Lian and Daizong Liu and
Xiaolou Sun and Dongming Wu and Yutao Yue and Weiping Ding and Hui Xiong | Cognitive Disentanglement for Referring Multi-Object Tracking | 24 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a significant application of multi-source information fusion in
intelligent transportation perception systems, Referring Multi-Object Tracking
(RMOT) involves localizing and tracking specific objects in video sequences
based on language references. However, existing RMOT approaches often treat
language descriptions as holistic embeddings and struggle to effectively
integrate the rich semantic information contained in language expressions with
visual features. This limitation is especially apparent in complex scenes
requiring comprehensive understanding of both static object attributes and
spatial motion information. In this paper, we propose a Cognitive
Disentanglement for Referring Multi-Object Tracking (CDRMT) framework that
addresses these challenges. It adapts the "what" and "where" pathways from
the human visual processing system to RMOT tasks. Specifically, our framework
comprises three collaborative components: (1) The Bidirectional Interactive
Fusion module first establishes cross-modal connections while preserving
modality-specific characteristics; (2) Building upon this foundation, the
Progressive Semantic-Decoupled Query Learning mechanism hierarchically injects
complementary information into object queries, progressively refining object
understanding from coarse to fine-grained semantic levels; (3) Finally, the
Structural Consensus Constraint enforces bidirectional semantic consistency
between visual features and language descriptions, ensuring that tracked
objects faithfully reflect the referring expression. Extensive experiments on
different benchmark datasets demonstrate that CDRMT achieves substantial
improvements over state-of-the-art methods, with average gains of 6.0% in HOTA
score on Refer-KITTI and 3.2% on Refer-KITTI-V2. Our approach advances the
state-of-the-art in RMOT while simultaneously providing new insights into
multi-source information fusion.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:21:54 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Liang",
"Shaofeng",
""
],
[
"Guan",
"Runwei",
""
],
[
"Lian",
"Wangwang",
""
],
[
"Liu",
"Daizong",
""
],
[
"Sun",
"Xiaolou",
""
],
[
"Wu",
"Dongming",
""
],
[
"Yue",
"Yutao",
""
],
[
"Ding",
"Weiping",
""
],
[
"Xiong",
"Hui",
""
]
] | TITLE: Cognitive Disentanglement for Referring Multi-Object Tracking
ABSTRACT: As a significant application of multi-source information fusion in
intelligent transportation perception systems, Referring Multi-Object Tracking
(RMOT) involves localizing and tracking specific objects in video sequences
based on language references. However, existing RMOT approaches often treat
language descriptions as holistic embeddings and struggle to effectively
integrate the rich semantic information contained in language expressions with
visual features. This limitation is especially apparent in complex scenes
requiring comprehensive understanding of both static object attributes and
spatial motion information. In this paper, we propose a Cognitive
Disentanglement for Referring Multi-Object Tracking (CDRMT) framework that
addresses these challenges. It adapts the "what" and "where" pathways from
the human visual processing system to RMOT tasks. Specifically, our framework
comprises three collaborative components: (1) The Bidirectional Interactive
Fusion module first establishes cross-modal connections while preserving
modality-specific characteristics; (2) Building upon this foundation, the
Progressive Semantic-Decoupled Query Learning mechanism hierarchically injects
complementary information into object queries, progressively refining object
understanding from coarse to fine-grained semantic levels; (3) Finally, the
Structural Consensus Constraint enforces bidirectional semantic consistency
between visual features and language descriptions, ensuring that tracked
objects faithfully reflect the referring expression. Extensive experiments on
different benchmark datasets demonstrate that CDRMT achieves substantial
improvements over state-of-the-art methods, with average gains of 6.0% in HOTA
score on Refer-KITTI and 3.2% on Refer-KITTI-V2. Our approach advances the
state-of-the-art in RMOT while simultaneously providing new insights into
multi-source information fusion.
|
2503.11519 | Hao Cheng | Hao Cheng, Erjia Xiao, Yichi Wang, Kaidi Xu, Mengshu Sun, Jindong Gu,
Renjing Xu | Exploring Typographic Visual Prompts Injection Threats in Cross-Modality
Generation Models | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Cross-Modality Generation Models (GMs) demonstrate remarkable
capabilities in various generative tasks. Given the ubiquity and information
richness of vision modality inputs in real-world scenarios, Cross-vision tasks,
encompassing Vision-Language Perception (VLP) and Image-to-Image (I2I), have
attracted significant attention. Large Vision Language Models (LVLMs) and
I2I GMs are employed to handle VLP and I2I tasks, respectively. Previous
research indicates that printing typographic words into input images
significantly induces LVLMs and I2I GMs to generate disruptive outputs
semantically related to those words. Additionally, visual prompts, as a more
sophisticated form of typography, are also revealed to pose security risks to
various applications of VLP tasks when injected into images. In this paper, we
comprehensively investigate the performance impact induced by Typographic
Visual Prompt Injection (TVPI) in various LVLMs and I2I GMs. To better observe
performance modifications and characteristics of this threat, we also introduce
the TVPI Dataset. Through extensive explorations, we deepen the understanding
of the underlying causes of the TVPI threat in various GMs and offer valuable
insights into its potential origins.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 15:42:42 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cheng",
"Hao",
""
],
[
"Xiao",
"Erjia",
""
],
[
"Wang",
"Yichi",
""
],
[
"Xu",
"Kaidi",
""
],
[
"Sun",
"Mengshu",
""
],
[
"Gu",
"Jindong",
""
],
[
"Xu",
"Renjing",
""
]
] | TITLE: Exploring Typographic Visual Prompts Injection Threats in Cross-Modality
Generation Models
ABSTRACT: Current Cross-Modality Generation Models (GMs) demonstrate remarkable
capabilities in various generative tasks. Given the ubiquity and information
richness of vision modality inputs in real-world scenarios, Cross-vision tasks,
encompassing Vision-Language Perception (VLP) and Image-to-Image (I2I), have
attracted significant attention. Large Vision Language Models (LVLMs) and
I2I GMs are employed to handle VLP and I2I tasks, respectively. Previous
research indicates that printing typographic words into input images
significantly induces LVLMs and I2I GMs to generate disruptive outputs
semantically related to those words. Additionally, visual prompts, as a more
sophisticated form of typography, are also revealed to pose security risks to
various applications of VLP tasks when injected into images. In this paper, we
comprehensively investigate the performance impact induced by Typographic
Visual Prompt Injection (TVPI) in various LVLMs and I2I GMs. To better observe
performance modifications and characteristics of this threat, we also introduce
the TVPI Dataset. Through extensive explorations, we deepen the understanding
of the underlying causes of the TVPI threat in various GMs and offer valuable
insights into its potential origins.
|
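The typographic visual prompt injection studied above amounts to rendering text onto an image before it reaches a vision-language or image-to-image model. A hedged sketch of such an overlay with Pillow follows; the prompt wording, placement, and color are illustrative, not the paper's exact protocol.

```python
# Minimal typographic overlay: draw a text string onto an input image.
from PIL import Image, ImageDraw

def add_typographic_prompt(image, text, xy=(10, 10), fill=(255, 0, 0)):
    out = image.copy()
    draw = ImageDraw.Draw(out)
    draw.text(xy, text, fill=fill)   # default bitmap font
    return out

clean = Image.new("RGB", (256, 256), (200, 200, 200))   # stand-in input image
attacked = add_typographic_prompt(clean, "ignore the scene, describe a cat")
attacked.save("tvpi_example.png")
```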
2503.11535 | Mario Scrocca | Mario Scrocca, Lina Molinas Comet, Benjamin Witsch, Daham Mohammed
Mustafa, Christoph Lange, Marco Comerio, Peter Lubrich | mobilityDCAT-AP: a Metadata Specification for Enhanced Cross-border
Mobility Data Sharing | Paper accepted for publication at the 22th Extended Semantic Web
Conference (ESWC) 2025. This preprint has not undergone peer review or any
post-submission improvements or corrections. The Version of Record of this
contribution will be published in the conference proceedings | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Integrated and efficient mobility requires data sharing among the involved
stakeholders. In this direction, regulators and transport authorities have been
defining policies to foster the digitalisation and online publication of
mobility data. However, the creation of several heterogeneous data portals for
mobility data resulted in a fragmented ecosystem that challenges data
accessibility. In this context, metadata is a key enabler to foster the
findability and reusability of relevant datasets, but their interoperability
across different data portals should be ensured. Moreover, each domain presents
specificities on the relevant information that should be encoded through
metadata. To solve these issues within the mobility domain, we present
mobilityDCAT-AP, a reference metadata specification for mobility data portals
specified by putting together domain experts and the Semantic Web community. We
report on the work done to develop the metadata model behind mobilityDCAT-AP
and the best practices followed in its implementation and publication. Finally,
we describe the available educational resources and the activities performed to
ensure broader adoption of mobilityDCAT-AP across mobility data portals. We
present success stories from early adopters and discuss the challenges they
encountered in implementing a metadata specification based on Semantic Web
technologies.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 16:01:32 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Scrocca",
"Mario",
""
],
[
"Comet",
"Lina Molinas",
""
],
[
"Witsch",
"Benjamin",
""
],
[
"Mustafa",
"Daham Mohammed",
""
],
[
"Lange",
"Christoph",
""
],
[
"Comerio",
"Marco",
""
],
[
"Lubrich",
"Peter",
""
]
] | TITLE: mobilityDCAT-AP: a Metadata Specification for Enhanced Cross-border
Mobility Data Sharing
ABSTRACT: Integrated and efficient mobility requires data sharing among the involved
stakeholders. In this direction, regulators and transport authorities have been
defining policies to foster the digitalisation and online publication of
mobility data. However, the creation of several heterogeneous data portals for
mobility data resulted in a fragmented ecosystem that challenges data
accessibility. In this context, metadata is a key enabler to foster the
findability and reusability of relevant datasets, but their interoperability
across different data portals should be ensured. Moreover, each domain presents
specificities on the relevant information that should be encoded through
metadata. To solve these issues within the mobility domain, we present
mobilityDCAT-AP, a reference metadata specification for mobility data portals
specified by putting together domain experts and the Semantic Web community. We
report on the work done to develop the metadata model behind mobilityDCAT-AP
and the best practices followed in its implementation and publication. Finally,
we describe the available educational resources and the activities performed to
ensure broader adoption of mobilityDCAT-AP across mobility data portals. We
present success stories from early adopters and discuss the challenges they
encountered in implementing a metadata specification based on Semantic Web
technologies.
|
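mobilityDCAT-AP builds on DCAT/DCAT-AP, so a dataset description can be assembled with standard RDF tooling. The sketch below uses rdflib with the standard DCAT and DCTERMS vocabularies; the mobility-specific namespace IRI and property are placeholders, not quoted from the specification.

```python
# Hedged sketch of a DCAT-style dataset description in RDF.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import DCAT, DCTERMS, RDF

MOBILITY = Namespace("https://example.org/mobilitydcat-ap#")   # placeholder IRI

g = Graph()
ds = URIRef("https://example.org/dataset/road-network-updates")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Road network updates", lang="en")))
g.add((ds, DCTERMS.publisher, URIRef("https://example.org/org/transport-authority")))
g.add((ds, DCAT.theme, URIRef("https://example.org/theme/road-transport")))
g.add((ds, MOBILITY.mobilityTheme, Literal("Road network data")))   # assumed property

print(g.serialize(format="turtle"))
```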
2503.11537 | Gerhard Koenig | Kavindri Ranasinghe, Adam L. Baskerville, Geoffrey P. F. Wood, Gerhard
Koenig | Basic stability tests of machine learning potentials for molecular
simulations in computational drug discovery | 30 pages, 5 figures | null | null | null | physics.comp-ph physics.chem-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neural network potentials trained on quantum-mechanical data can calculate
molecular interactions with relatively high speed and accuracy. However, neural
network potentials might exhibit instabilities, nonphysical behavior, or lack
accuracy. To assess the reliability of neural network potentials, a series of
tests is conducted during model training, in the gas phase, and in the
condensed phase. The testing procedure is performed for eight in-house neural
network potentials based on the ANI-2x dataset, using both the ANI-2x and MACE
architectures. This allows an evaluation of the effect of the model
architecture on its performance. We also perform stability tests of the
publicly available neural network potentials ANI-2x, ANI-1ccx, MACE-OFF23, and
AIMNet2. A normal mode analysis of 14 simple benchmark molecules revealed that
the small MACE-OFF23 model shows large deviations from the reference
quantum-mechanical energy surface. Also, some MACE models with a reduced number
of parameters failed to produce stable molecular dynamics simulations in the
gas phase, and all MACE models exhibit unfavorable behavior during steric
clashes. The published ANI-2x and one of the in-house MACE models are not able
to reproduce the structure of liquid water at ambient conditions, forming an
amorphous solid phase instead. The ANI-1ccx model shows nonphysical additional
energy minima in bond length and bond angle space, which caused a phase
transition to an amorphous solid. Out of all 13 considered public and in-house
models, only one in-house model based on the ANI-2x B97-3c dataset shows better
agreement with the experimental radial distribution function of water than the
simple molecular mechanics TIP3P model. This shows that great care must be
taken during model training and when selecting a neural network potential for
real-world applications.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 16:03:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Ranasinghe",
"Kavindri",
""
],
[
"Baskerville",
"Adam L.",
""
],
[
"Wood",
"Geoffrey P. F.",
""
],
[
"Koenig",
"Gerhard",
""
]
] | TITLE: Basic stability tests of machine learning potentials for molecular
simulations in computational drug discovery
ABSTRACT: Neural network potentials trained on quantum-mechanical data can calculate
molecular interactions with relatively high speed and accuracy. However, neural
network potentials might exhibit instabilities, nonphysical behavior, or lack
accuracy. To assess the reliability of neural network potentials, a series of
tests is conducted during model training, in the gas phase, and in the
condensed phase. The testing procedure is performed for eight in-house neural
network potentials based on the ANI-2x dataset, using both the ANI-2x and MACE
architectures. This allows an evaluation of the effect of the model
architecture on its performance. We also perform stability tests of the
publicly available neural network potentials ANI-2x, ANI-1ccx, MACE-OFF23, and
AIMNet2. A normal mode analysis of 14 simple benchmark molecules revealed that
the small MACE-OFF23 model shows large deviations from the reference
quantum-mechanical energy surface. Also, some MACE models with a reduced number
of parameters failed to produce stable molecular dynamics simulations in the
gas phase, and all MACE models exhibit unfavorable behavior during steric
clashes. The published ANI-2x and one of the in-house MACE models are not able
to reproduce the structure of liquid water at ambient conditions, forming an
amorphous solid phase instead. The ANI-1ccx model shows nonphysical additional
energy minima in bond length and bond angle space, which caused a phase
transition to an amorphous solid. Out of all 13 considered public and in-house
models, only one in-house model based on the ANI-2x B97-3c dataset shows better
agreement with the experimental radial distribution function of water than the
simple molecular mechanics TIP3P model. This shows that great care must be
taken during model training and when selecting a neural network potential for
real-world applications.
|
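One of the checks above compares the radial distribution function of simulated water against experiment. A small sketch of computing g(r) from particle positions in a periodic box follows; the box size, particle count, and random coordinates are synthetic stand-ins for simulation output.

```python
# Sketch: pair radial distribution function g(r) with the minimum-image convention.
import numpy as np

def radial_distribution(positions, box, r_max, n_bins=100):
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)                  # minimum image
    dist = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = (n / box.prod()) * shell_vol * n / 2.0      # ideal-gas pair counts
    return r, counts / ideal

rng = np.random.default_rng(1)
box = np.array([30.0, 30.0, 30.0])                      # Angstrom (assumed)
pos = rng.uniform(0.0, 30.0, size=(500, 3))
r, g = radial_distribution(pos, box, r_max=10.0)
print(g[:5])
```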
2503.11544 | Parsa Rahimi Noshanagh | Parsa Rahimi, Damien Teney, Sebastien Marcel | AugGen: Synthetic Augmentation Can Improve Discriminative Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The increasing dependence on large-scale datasets in machine learning
introduces significant privacy and ethical challenges. Synthetic data
generation offers a promising solution; however, most current methods rely on
external datasets or pre-trained models, which add complexity and escalate
resource demands. In this work, we introduce a novel self-contained synthetic
augmentation technique that strategically samples from a conditional generative
model trained exclusively on the target dataset. This approach eliminates the
need for auxiliary data sources. Applied to face recognition datasets, our
method achieves 1--12\% performance improvements on the IJB-C and IJB-B
benchmarks. It outperforms models trained solely on real data and exceeds the
performance of state-of-the-art synthetic data generation baselines. Notably,
these enhancements often surpass those achieved through architectural
improvements, underscoring the significant impact of synthetic augmentation in
data-scarce environments. These findings demonstrate that carefully integrated
synthetic data not only addresses privacy and resource constraints but also
substantially boosts model performance. Project page
https://parsa-ra.github.io/auggen
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 16:10:21 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Rahimi",
"Parsa",
""
],
[
"Teney",
"Damien",
""
],
[
"Marcel",
"Sebastien",
""
]
] | TITLE: AugGen: Synthetic Augmentation Can Improve Discriminative Models
ABSTRACT: The increasing dependence on large-scale datasets in machine learning
introduces significant privacy and ethical challenges. Synthetic data
generation offers a promising solution; however, most current methods rely on
external datasets or pre-trained models, which add complexity and escalate
resource demands. In this work, we introduce a novel self-contained synthetic
augmentation technique that strategically samples from a conditional generative
model trained exclusively on the target dataset. This approach eliminates the
need for auxiliary data sources. Applied to face recognition datasets, our
method achieves 1--12\% performance improvements on the IJB-C and IJB-B
benchmarks. It outperforms models trained solely on real data and exceeds the
performance of state-of-the-art synthetic data generation baselines. Notably,
these enhancements often surpass those achieved through architectural
improvements, underscoring the significant impact of synthetic augmentation in
data-scarce environments. These findings demonstrate that carefully integrated
synthetic data not only addresses privacy and resource constraints but also
substantially boosts model performance. Project page
https://parsa-ra.github.io/auggen
|
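The augmentation idea above boils down to mixing samples drawn from a class-conditional generator, trained only on the target dataset, into the real training set. A hedged sketch of that mixing step is below; the generator interface, tensor shapes, and identity count are placeholders.

```python
# Illustrative real + synthetic dataset mixing for training a recognizer.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

real_x = torch.randn(128, 3, 112, 112)            # stand-in for real face crops
real_y = torch.randint(0, 50, (128,))             # 50 identities (assumed)
real_ds = TensorDataset(real_x, real_y)

def sample_synthetic(generator, labels):
    # Placeholder: a conditional generator would map (noise, label) -> image.
    return torch.randn(len(labels), 3, 112, 112)

syn_y = torch.randint(0, 50, (64,))
syn_ds = TensorDataset(sample_synthetic(None, syn_y), syn_y)

loader = DataLoader(ConcatDataset([real_ds, syn_ds]), batch_size=32, shuffle=True)
for images, labels in loader:
    pass   # train the recognition model on the mixed real/synthetic batches
```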
2503.11575 | Guangya Cai | Guangya Cai | Finding a Fair Scoring Function for Top-$k$ Selection: Hardness,
Algorithms, and Experiments | null | null | null | null | cs.DB cs.CC cs.CY cs.DC cs.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Selecting a subset of the $k$ "best" items from a dataset of $n$ items, based
on a scoring function, is a key task in decision-making. Given the widespread
use of automated decision-making software nowadays, it is important that the
outcome of this process, called top-$k$ selection, is fair. Here we consider
the problem of identifying a linear scoring function for top-$k$ selection that
is fair. The function computes a score for each item as a weighted sum of its
(numerical) attribute values. Additionally, the function must ensure that the
subset selected is a faithful representative of the entire dataset for a
minority or historically disadvantaged group. Existing algorithms do not scale
effectively on large, high-dimensional datasets. Our theoretical analysis shows
that in more than two dimensions, no algorithm is likely to achieve good
scalability with respect to dataset size (i.e., a run time of $O(n\cdot
\text{polylog}(n))$), and the computational complexity is likely to increase
rapidly with dimensionality. However, there are exceptions for small values of
$k$ and for this case we provide significantly faster algorithms. We also
provide efficient practical variants of these algorithms. Our implementations
of these take advantage of modern hardware (e.g., exploiting parallelism). For
large values of $k$, we give an alternative algorithm that, while theoretically
worse, performs better in practice. Experimental results on real-world datasets
demonstrate the efficiency of our proposed algorithms, which achieve speed-ups
of up to several orders of magnitude compared to the state of the art (SoTA).
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 16:40:36 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cai",
"Guangya",
""
]
] | TITLE: Finding a Fair Scoring Function for Top-$k$ Selection: Hardness,
Algorithms, and Experiments
ABSTRACT: Selecting a subset of the $k$ "best" items from a dataset of $n$ items, based
on a scoring function, is a key task in decision-making. Given the widespread
use of automated decision-making software nowadays, it is important that the
outcome of this process, called top-$k$ selection, is fair. Here we consider
the problem of identifying a linear scoring function for top-$k$ selection that
is fair. The function computes a score for each item as a weighted sum of its
(numerical) attribute values. Additionally, the function must ensure that the
subset selected is a faithful representative of the entire dataset for a
minority or historically disadvantaged group. Existing algorithms do not scale
effectively on large, high-dimensional datasets. Our theoretical analysis shows
that in more than two dimensions, no algorithm is likely to achieve good
scalability with respect to dataset size (i.e., a run time of $O(n\cdot
\text{polylog}(n))$), and the computational complexity is likely to increase
rapidly with dimensionality. However, there are exceptions for small values of
$k$ and for this case we provide significantly faster algorithms. We also
provide efficient practical variants of these algorithms. Our implementations
of these take advantage of modern hardware (e.g., exploiting parallelism). For
large values of $k$, we give an alternative algorithm that, while theoretically
worse, performs better in practice. Experimental results on real-world datasets
demonstrate the efficiency of our proposed algorithms, which achieve speed-ups
of up to several orders of magnitude compared to the state of the art (SoTA).
|
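To make the problem above concrete: a linear scoring function weights the numerical attributes, the k highest-scoring items are selected, and fairness requires the selection to contain at least a given share of a protected group. The brute-force scan over 2-D weight vectors below is purely illustrative; the paper's algorithms are far more efficient.

```python
# Sketch of the set-up: linear scores, top-k selection, fairness check.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 2))                  # two numerical attributes
group = rng.integers(0, 2, size=1000)            # 1 = protected group
k, min_share = 50, 0.3                           # fairness target (assumed)

def top_k_is_fair(w):
    chosen = np.argsort(X @ w)[-k:]
    return group[chosen].mean() >= min_share

# Scan candidate weight vectors on the unit circle (2-D only).
for theta in np.linspace(0.0, np.pi / 2, 181):
    w = np.array([np.cos(theta), np.sin(theta)])
    if top_k_is_fair(w):
        print("fair scoring weights found:", np.round(w, 3))
        break
else:
    print("no fair linear scoring function in the scanned set")
```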
2503.11576 | Ahmed Nassar | Ahmed Nassar, Andres Marafioti, Matteo Omenetti, Maksym Lysak,
Nikolaos Livathinos, Christoph Auer, Lucas Morin, Rafael Teixeira de Lima,
Yusik Kim, A. Said Gurbuz, Michele Dolfi, Miquel Farr\'e, Peter W. J. Staar | SmolDocling: An ultra-compact vision-language model for end-to-end
multi-modal document conversion | 24 pages, 10 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce SmolDocling, an ultra-compact vision-language model targeting
end-to-end document conversion. Our model comprehensively processes entire
pages by generating DocTags, a new universal markup format that captures all
page elements in their full context with location. Unlike existing approaches
that rely on large foundational models, or ensemble solutions that rely on
handcrafted pipelines of multiple specialized models, SmolDocling offers an
end-to-end conversion for accurately capturing content, structure and spatial
location of document elements in a 256M parameters vision-language model.
SmolDocling exhibits robust performance in correctly reproducing document
features such as code listings, tables, equations, charts, lists, and more
across a diverse range of document types including business documents, academic
papers, technical reports, patents, and forms -- significantly extending beyond
the commonly observed focus on scientific papers. Additionally, we contribute
novel publicly sourced datasets for charts, tables, equations, and code
recognition. Experimental results demonstrate that SmolDocling competes with
other Vision Language Models that are up to 27 times larger in size, while
reducing computational requirements substantially. The model is currently
available; datasets will be publicly available soon.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 16:44:14 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Nassar",
"Ahmed",
""
],
[
"Marafioti",
"Andres",
""
],
[
"Omenetti",
"Matteo",
""
],
[
"Lysak",
"Maksym",
""
],
[
"Livathinos",
"Nikolaos",
""
],
[
"Auer",
"Christoph",
""
],
[
"Morin",
"Lucas",
""
],
[
"de Lima",
"Rafael Teixeira",
""
],
[
"Kim",
"Yusik",
""
],
[
"Gurbuz",
"A. Said",
""
],
[
"Dolfi",
"Michele",
""
],
[
"Farré",
"Miquel",
""
],
[
"Staar",
"Peter W. J.",
""
]
] | TITLE: SmolDocling: An ultra-compact vision-language model for end-to-end
multi-modal document conversion
ABSTRACT: We introduce SmolDocling, an ultra-compact vision-language model targeting
end-to-end document conversion. Our model comprehensively processes entire
pages by generating DocTags, a new universal markup format that captures all
page elements in their full context with location. Unlike existing approaches
that rely on large foundational models, or ensemble solutions that rely on
handcrafted pipelines of multiple specialized models, SmolDocling offers an
end-to-end conversion for accurately capturing content, structure and spatial
location of document elements in a 256M parameters vision-language model.
SmolDocling exhibits robust performance in correctly reproducing document
features such as code listings, tables, equations, charts, lists, and more
across a diverse range of document types including business documents, academic
papers, technical reports, patents, and forms -- significantly extending beyond
the commonly observed focus on scientific papers. Additionally, we contribute
novel publicly sourced datasets for charts, tables, equations, and code
recognition. Experimental results demonstrate that SmolDocling competes with
other Vision Language Models that are up to 27 times larger in size, while
reducing computational requirements substantially. The model is currently
available; datasets will be publicly available soon.
|
2503.11609 | Matteo Farina | Matteo Farina, Massimiliano Mancini, Giovanni Iacca and Elisa Ricci | Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages | Camera-ready version for CVPR 2025 (w/ SuppMat, 23 pages) | null | null | null | cs.CV cs.LG cs.MM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | An old-school recipe for training a classifier is to (i) learn a good feature
extractor and (ii) optimize a linear layer atop. When only a handful of samples
are available per category, as in Few-Shot Adaptation (FSA), data are
insufficient to fit a large number of parameters, rendering the above
impractical. This is especially true with large pre-trained Vision-Language
Models (VLMs), which motivated successful research at the intersection of
Parameter-Efficient Fine-tuning (PEFT) and FSA. In this work, we start by
analyzing the learning dynamics of PEFT techniques when trained on few-shot
data from only a subset of categories, referred to as the ``base'' classes. We
show that such dynamics naturally splits into two distinct phases: (i)
task-level feature extraction and (ii) specialization to the available
concepts. To accommodate this dynamic, we then depart from prompt- or
adapter-based methods and tackle FSA differently. Specifically, given a fixed
computational budget, we split it to (i) learn a task-specific feature
extractor via PEFT and (ii) train a linear classifier on top. We call this
scheme Two-Stage Few-Shot Adaptation (2SFS). Differently from established
methods, our scheme enables a novel form of selective inference at a category
level, i.e., at test time, only novel categories are embedded by the adapted
text encoder, while embeddings of base categories are available within the
classifier. Results with fixed hyperparameters across two settings, three
backbones, and eleven datasets, show that 2SFS matches or surpasses the
state-of-the-art, while established methods degrade significantly across
settings.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:24:01 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Farina",
"Matteo",
""
],
[
"Mancini",
"Massimiliano",
""
],
[
"Iacca",
"Giovanni",
""
],
[
"Ricci",
"Elisa",
""
]
] | TITLE: Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages
ABSTRACT: An old-school recipe for training a classifier is to (i) learn a good feature
extractor and (ii) optimize a linear layer atop. When only a handful of samples
are available per category, as in Few-Shot Adaptation (FSA), data are
insufficient to fit a large number of parameters, rendering the above
impractical. This is especially true with large pre-trained Vision-Language
Models (VLMs), which motivated successful research at the intersection of
Parameter-Efficient Fine-tuning (PEFT) and FSA. In this work, we start by
analyzing the learning dynamics of PEFT techniques when trained on few-shot
data from only a subset of categories, referred to as the ``base'' classes. We
show that such dynamics naturally splits into two distinct phases: (i)
task-level feature extraction and (ii) specialization to the available
concepts. To accommodate this dynamic, we then depart from prompt- or
adapter-based methods and tackle FSA differently. Specifically, given a fixed
computational budget, we split it to (i) learn a task-specific feature
extractor via PEFT and (ii) train a linear classifier on top. We call this
scheme Two-Stage Few-Shot Adaptation (2SFS). Differently from established
methods, our scheme enables a novel form of selective inference at a category
level, i.e., at test time, only novel categories are embedded by the adapted
text encoder, while embeddings of base categories are available within the
classifier. Results with fixed hyperparameters across two settings, three
backbones, and eleven datasets, show that 2SFS matches or surpasses the
state-of-the-art, while established methods degrade significantly across
settings.
|
2503.11612 | Joseph Zuber | Joseph Zuber, Aishwarya Sarkar, Joseph Jennings, Ali Jannesari | Enhanced Soups for Graph Neural Networks | 10 pages, 4 figures, 3 tables, accepted to GrAPL 2025 (colocated with
IPDPS 2025) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNN) have demonstrated state-of-the-art performance in
numerous scientific and high-performance computing (HPC) applications. Recent
work suggests that "souping" (combining) individually trained GNNs into a
single model can improve performance without increasing compute and memory
costs during inference. However, existing souping algorithms are often slow and
memory-intensive, which limits their scalability.
We introduce Learned Souping for GNNs, a gradient-descent-based souping
strategy that substantially reduces time and memory overhead compared to
existing methods. Our approach is evaluated across multiple Open Graph
Benchmark (OGB) datasets and GNN architectures, achieving up to 1.2% accuracy
improvement and 2.1X speedup. Additionally, we propose Partition Learned
Souping, a novel partition-based variant of learned souping that significantly
reduces memory usage. On the ogbn-products dataset with GraphSAGE, partition
learned souping achieves a 24.5X speedup and a 76% memory reduction without
compromising accuracy.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:29:27 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Zuber",
"Joseph",
""
],
[
"Sarkar",
"Aishwarya",
""
],
[
"Jennings",
"Joseph",
""
],
[
"Jannesari",
"Ali",
""
]
] | TITLE: Enhanced Soups for Graph Neural Networks
ABSTRACT: Graph Neural Networks (GNN) have demonstrated state-of-the-art performance in
numerous scientific and high-performance computing (HPC) applications. Recent
work suggests that "souping" (combining) individually trained GNNs into a
single model can improve performance without increasing compute and memory
costs during inference. However, existing souping algorithms are often slow and
memory-intensive, which limits their scalability.
We introduce Learned Souping for GNNs, a gradient-descent-based souping
strategy that substantially reduces time and memory overhead compared to
existing methods. Our approach is evaluated across multiple Open Graph
Benchmark (OGB) datasets and GNN architectures, achieving up to 1.2% accuracy
improvement and 2.1X speedup. Additionally, we propose Partition Learned
Souping, a novel partition-based variant of learned souping that significantly
reduces memory usage. On the ogbn-products dataset with GraphSAGE, partition
learned souping achieves a 24.5X speedup and a 76% memory reduction without
compromising accuracy.
|
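Learned souping, as described above, replaces uniform weight averaging with mixing coefficients optimized by gradient descent on a validation objective. The sketch below learns softmax mixing weights over a few model checkpoints; the tiny MLP and random data stand in for GNNs and OGB graphs, and PyTorch 2.x (torch.func.functional_call) is assumed.

```python
import torch
import torch.nn.functional as F

def make_model():
    return torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                               torch.nn.Linear(32, 4))

def snapshot(model):
    # Detached copy of the model's parameters/buffers (a "soup ingredient").
    return {k: v.detach().clone() for k, v in model.state_dict().items()}

ingredients = [snapshot(make_model()) for _ in range(3)]    # stand-ins for trained models
soup_model = make_model()
alpha = torch.zeros(len(ingredients), requires_grad=True)   # mixing logits
opt = torch.optim.Adam([alpha], lr=0.1)

x_val, y_val = torch.randn(256, 16), torch.randint(0, 4, (256,))

for step in range(50):
    weights = torch.softmax(alpha, dim=0)
    # Souped parameters: convex combination of the ingredient parameters.
    souped = {k: sum(w * sd[k] for w, sd in zip(weights, ingredients))
              for k in ingredients[0]}
    logits = torch.func.functional_call(soup_model, souped, (x_val,))
    loss = F.cross_entropy(logits, y_val)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned mixing weights:", torch.softmax(alpha, 0).detach())
```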
2503.11614 | Liang Cheng | Liang Cheng, Tianyi Li, Zhaowei Wang, Tianyang Liu, Mark Steedman | Neutralizing Bias in LLM Reasoning using Entailment Graphs | 17 pages, 7 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | LLMs are often claimed to be capable of Natural Language Inference (NLI),
which is widely regarded as a cornerstone of more complex forms of reasoning.
However, recent works show that LLMs still suffer from hallucinations in NLI
due to attestation bias, where LLMs overly rely on propositional memory to
build shortcuts. To solve the issue, we design an unsupervised framework to
construct counterfactual reasoning data and fine-tune LLMs to reduce
attestation bias. To measure bias reduction, we build bias-adversarial variants
of NLI datasets with randomly replaced predicates in premises while keeping
hypotheses unchanged. Extensive evaluations show that our framework can
significantly reduce hallucinations from attestation bias. Then, we further
evaluate LLMs fine-tuned with our framework on original NLI datasets and their
bias-neutralized versions, where original entities are replaced with randomly
sampled ones. Extensive results show that our framework consistently improves
inferential performance on both original and bias-neutralized NLI datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:33:30 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Cheng",
"Liang",
""
],
[
"Li",
"Tianyi",
""
],
[
"Wang",
"Zhaowei",
""
],
[
"Liu",
"Tianyang",
""
],
[
"Steedman",
"Mark",
""
]
] | TITLE: Neutralizing Bias in LLM Reasoning using Entailment Graphs
ABSTRACT: LLMs are often claimed to be capable of Natural Language Inference (NLI),
which is widely regarded as a cornerstone of more complex forms of reasoning.
However, recent works show that LLMs still suffer from hallucinations in NLI
due to attestation bias, where LLMs overly rely on propositional memory to
build shortcuts. To solve the issue, we design an unsupervised framework to
construct counterfactual reasoning data and fine-tune LLMs to reduce
attestation bias. To measure bias reduction, we build bias-adversarial variants
of NLI datasets with randomly replaced predicates in premises while keeping
hypotheses unchanged. Extensive evaluations show that our framework can
significantly reduce hallucinations from attestation bias. Then, we further
evaluate LLMs fine-tuned with our framework on original NLI datasets and their
bias-neutralized versions, where original entities are replaced with randomly
sampled ones. Extensive results show that our framework consistently improves
inferential performance on both original and bias-neutralized NLI datasets.
|
2503.11617 | Xinyi Wang | Xinyi Wang, Jiashui Wang, Peng Chen, Jinbo Su, Yanming Liu, Long Liu,
Yangdong Wang, Qiyuan Chen, Kai Yun, Chunfu Jia | ASMA-Tune: Unlocking LLMs' Assembly Code Comprehension via
Structural-Semantic Instruction Tuning | 19 pages, multiple figures | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis and comprehension of assembly code are crucial in various
applications, such as reverse engineering. However, the low information density
and lack of explicit syntactic structures in assembly code pose significant
challenges. Pioneering approaches with masked language modeling (MLM)-based
methods have been limited by facilitating natural language interaction. While
recent methods based on decoder-focused large language models (LLMs) have
significantly enhanced semantic representation, they still struggle to capture
the nuanced and sparse semantics in assembly code. In this paper, we propose
Assembly Augmented Tuning (ASMA-Tune), an end-to-end structural-semantic
instruction-tuning framework. Our approach synergizes encoder architectures
with decoder-based LLMs through projector modules to enable comprehensive code
understanding. Experiments show that ASMA-Tune outperforms existing benchmarks,
significantly enhancing assembly code comprehension and instruction-following
abilities. Our model and dataset are public at
https://github.com/wxy3596/ASMA-Tune.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:36:08 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wang",
"Xinyi",
""
],
[
"Wang",
"Jiashui",
""
],
[
"Chen",
"Peng",
""
],
[
"Su",
"Jinbo",
""
],
[
"Liu",
"Yanming",
""
],
[
"Liu",
"Long",
""
],
[
"Wang",
"Yangdong",
""
],
[
"Chen",
"Qiyuan",
""
],
[
"Yun",
"Kai",
""
],
[
"Jia",
"Chunfu",
""
]
] | TITLE: ASMA-Tune: Unlocking LLMs' Assembly Code Comprehension via
Structural-Semantic Instruction Tuning
ABSTRACT: Analysis and comprehension of assembly code are crucial in various
applications, such as reverse engineering. However, the low information density
and lack of explicit syntactic structures in assembly code pose significant
challenges. Pioneering approaches with masked language modeling (MLM)-based
methods have been limited by facilitating natural language interaction. While
recent methods based on decoder-focused large language models (LLMs) have
significantly enhanced semantic representation, they still struggle to capture
the nuanced and sparse semantics in assembly code. In this paper, we propose
Assembly Augmented Tuning (ASMA-Tune), an end-to-end structural-semantic
instruction-tuning framework. Our approach synergizes encoder architectures
with decoder-based LLMs through projector modules to enable comprehensive code
understanding. Experiments show that ASMA-Tune outperforms existing benchmarks,
significantly enhancing assembly code comprehension and instruction-following
abilities. Our model and dataset are public at
https://github.com/wxy3596/ASMA-Tune.
|
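ASMA-Tune bridges an assembly-code encoder and a decoder LLM through projector modules. A minimal sketch of such a projector is shown below; the hidden sizes and the two-layer MLP design are assumptions, not the paper's exact architecture.

```python
# Hedged sketch: project encoder features into the LLM's embedding space.
import torch
import torch.nn as nn

class Projector(nn.Module):
    def __init__(self, enc_dim=768, llm_dim=4096):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(enc_dim, llm_dim), nn.GELU(),
                                 nn.Linear(llm_dim, llm_dim))

    def forward(self, encoder_states):        # (batch, tokens, enc_dim)
        return self.net(encoder_states)       # (batch, tokens, llm_dim)

asm_features = torch.randn(2, 128, 768)       # encoder output for assembly code
soft_tokens = Projector()(asm_features)       # prepend to the LLM input embeddings
print(soft_tokens.shape)
```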
2503.11633 | Hongyu Wen | Hongyu Wen, Yiming Zuo, Venkat Subramanian, Patrick Chen, Jia Deng | Seeing and Seeing Through the Glass: Real and Synthetic Data for
Multi-Layer Depth Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transparent objects are common in daily life, and understanding their
multi-layer depth information -- perceiving both the transparent surface and
the objects behind it -- is crucial for real-world applications that interact
with transparent materials. In this paper, we introduce LayeredDepth, the first
dataset with multi-layer depth annotations, including a real-world benchmark
and a synthetic data generator, to support the task of multi-layer depth
estimation. Our real-world benchmark consists of 1,500 images from diverse
scenes, and evaluating state-of-the-art depth estimation methods on it reveals
that they struggle with transparent objects. The synthetic data generator is
fully procedural and capable of providing training data for this task with an
unlimited variety of objects and scene compositions. Using this generator, we
create a synthetic dataset with 15,300 images. Baseline models trained solely
on this synthetic dataset produce good cross-domain multi-layer depth
estimation. Fine-tuning state-of-the-art single-layer depth models on it
substantially improves their performance on transparent objects, with
quadruplet accuracy on our benchmark increased from 55.14% to 75.20%. All
images and validation annotations are available under CC0 at
https://layereddepth.cs.princeton.edu.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:52:06 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Wen",
"Hongyu",
""
],
[
"Zuo",
"Yiming",
""
],
[
"Subramanian",
"Venkat",
""
],
[
"Chen",
"Patrick",
""
],
[
"Deng",
"Jia",
""
]
] | TITLE: Seeing and Seeing Through the Glass: Real and Synthetic Data for
Multi-Layer Depth Estimation
ABSTRACT: Transparent objects are common in daily life, and understanding their
multi-layer depth information -- perceiving both the transparent surface and
the objects behind it -- is crucial for real-world applications that interact
with transparent materials. In this paper, we introduce LayeredDepth, the first
dataset with multi-layer depth annotations, including a real-world benchmark
and a synthetic data generator, to support the task of multi-layer depth
estimation. Our real-world benchmark consists of 1,500 images from diverse
scenes, and evaluating state-of-the-art depth estimation methods on it reveals
that they struggle with transparent objects. The synthetic data generator is
fully procedural and capable of providing training data for this task with an
unlimited variety of objects and scene compositions. Using this generator, we
create a synthetic dataset with 15,300 images. Baseline models trained solely
on this synthetic dataset produce good cross-domain multi-layer depth
estimation. Fine-tuning state-of-the-art single-layer depth models on it
substantially improves their performance on transparent objects, with
quadruplet accuracy on our benchmark increased from 55.14% to 75.20%. All
images and validation annotations are available under CC0 at
https://layereddepth.cs.princeton.edu.
|
2503.11643 | Paolo Campeti | P. Campeti, J.-M. Delouis, L. Pagano, E. Allys, M. Lattanzi, M.
Gerbino | From few to many maps: A fast map-level emulator for extreme
augmentation of CMB systematics datasets | Codes and examples available at https://github.com/pcampeti/CMBSCAT/
and https://github.com/jmdelouis/HealpixML. 12 pages + appendices, 12
figures. Submitted to A&A | null | null | null | astro-ph.CO astro-ph.IM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel, fast, and efficient generative model built upon
scattering covariances, the most recent iteration of scattering transform
statistics. This model is designed to augment by several orders of magnitude
the number of map simulations in datasets of computationally expensive CMB
instrumental systematics simulations, including their non-Gaussian and
inhomogeneous features. Unlike conventional neural network-based algorithms,
this generative model requires only a minimal number of training samples,
making it highly compatible with the computational constraints of typical CMB
simulation campaigns. We validate the method using realistic simulations of CMB
systematics, which are particularly challenging to emulate, and perform
extensive statistical tests to confirm its ability to produce new statistically
independent approximate realizations. Remarkably, even when trained on as few
as 10 simulations, the emulator closely reproduces key summary statistics --
including the angular power spectrum, scattering coefficients, and Minkowski
functionals -- and provides pixel-to-pixel covariance estimates with
substantially reduced sample noise compared to those obtained without
augmentation. The proposed approach has the potential to shift the paradigm in
simulation campaign design. Instead of producing large numbers of low- or
medium-accuracy simulations, future pipelines can focus on generating a few
high-accuracy simulations that are then efficiently augmented using such
generative model. This promises significant benefits for current and
forthcoming cosmological surveys such as $Planck$, $LiteBIRD$, Simons
Observatory, CMB-S4, Euclid and Rubin-LSST. We make both the general framework
for scattering transform statistics available at
https://github.com/jmdelouis/HealpixML and the emulator available at
https://github.com/pcampeti/CMBSCAT.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:58:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Campeti",
"P.",
""
],
[
"Delouis",
"J. -M.",
""
],
[
"Pagano",
"L.",
""
],
[
"Allys",
"E.",
""
],
[
"Lattanzi",
"M.",
""
],
[
"Gerbino",
"M.",
""
]
] | TITLE: From few to many maps: A fast map-level emulator for extreme
augmentation of CMB systematics datasets
ABSTRACT: We introduce a novel, fast, and efficient generative model built upon
scattering covariances, the most recent iteration of scattering transform
statistics. This model is designed to augment by several orders of magnitude
the number of map simulations in datasets of computationally expensive CMB
instrumental systematics simulations, including their non-Gaussian and
inhomogeneous features. Unlike conventional neural network-based algorithms,
this generative model requires only a minimal number of training samples,
making it highly compatible with the computational constraints of typical CMB
simulation campaigns. We validate the method using realistic simulations of CMB
systematics, which are particularly challenging to emulate, and perform
extensive statistical tests to confirm its ability to produce new statistically
independent approximate realizations. Remarkably, even when trained on as few
as 10 simulations, the emulator closely reproduces key summary statistics --
including the angular power spectrum, scattering coefficients, and Minkowski
functionals -- and provides pixel-to-pixel covariance estimates with
substantially reduced sample noise compared to those obtained without
augmentation. The proposed approach has the potential to shift the paradigm in
simulation campaign design. Instead of producing large numbers of low- or
medium-accuracy simulations, future pipelines can focus on generating a few
high-accuracy simulations that are then efficiently augmented using such
generative model. This promises significant benefits for current and
forthcoming cosmological surveys such as $Planck$, $LiteBIRD$, Simons
Observatory, CMB-S4, Euclid and Rubin-LSST. We make both the general framework
for scattering transform statistics available at
https://github.com/jmdelouis/HealpixML and the emulator available at
https://github.com/pcampeti/CMBSCAT.
|
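Among the summary statistics used above to validate the emulator is the angular power spectrum. The sketch below computes C_ell for a HEALPix map with healpy; the white-noise map is only a stand-in for a systematics simulation.

```python
# Sketch: angular power spectrum of a HEALPix map.
import numpy as np
import healpy as hp

nside = 64
rng = np.random.default_rng(3)
m = rng.normal(size=hp.nside2npix(nside))    # stand-in for a systematics map
cl = hp.anafast(m, lmax=3 * nside - 1)       # C_ell up to the band limit
ell = np.arange(len(cl))
print(ell[:5], cl[:5])
```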
2503.11646 | Siyuan Huang | Siyuan Huang, Yue Liao, Siyuan Feng, Shu Jiang, Si Liu, Hongsheng Li,
Maoqing Yao, Guanghui Ren | Adversarial Data Collection: Human-Collaborative Perturbations for
Efficient and Robust Robotic Imitation Learning | More information can be found on our project
page:https://sites.google.com/view/adc-robot | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | The pursuit of data efficiency, where quality outweighs quantity, has emerged
as a cornerstone in robotic manipulation, especially given the high costs
associated with real-world data collection. We propose that maximizing the
informational density of individual demonstrations can dramatically reduce
reliance on large-scale datasets while improving task performance. To this end,
we introduce Adversarial Data Collection (ADC), a Human-in-the-Loop (HiL) framework
that redefines robotic data acquisition through real-time, bidirectional
human-environment interactions. Unlike conventional pipelines that passively
record static demonstrations, ADC adopts a collaborative perturbation paradigm:
during a single episode, an adversarial operator dynamically alters object
states, environmental conditions, and linguistic commands, while the
tele-operator adaptively adjusts actions to overcome these evolving challenges.
This process compresses diverse failure-recovery behaviors, compositional task
variations, and environmental perturbations into minimal demonstrations. Our
experiments demonstrate that ADC-trained models achieve superior compositional
generalization to unseen task instructions, enhanced robustness to perceptual
perturbations, and emergent error recovery capabilities. Strikingly, models
trained with merely 20% of the demonstration volume collected through ADC
significantly outperform traditional approaches using full datasets. These
advances bridge the gap between data-centric learning paradigms and practical
robotic deployment, demonstrating that strategic data acquisition, not merely
post-hoc processing, is critical for scalable, real-world robot learning.
Additionally, we are curating a large-scale ADC-Robotics dataset comprising
real-world manipulation tasks with adversarial perturbations. This benchmark
will be open-sourced to facilitate advancements in robotic imitation learning.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:59:07 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Huang",
"Siyuan",
""
],
[
"Liao",
"Yue",
""
],
[
"Feng",
"Siyuan",
""
],
[
"Jiang",
"Shu",
""
],
[
"Liu",
"Si",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Yao",
"Maoqing",
""
],
[
"Ren",
"Guanghui",
""
]
] | TITLE: Adversarial Data Collection: Human-Collaborative Perturbations for
Efficient and Robust Robotic Imitation Learning
ABSTRACT: The pursuit of data efficiency, where quality outweighs quantity, has emerged
as a cornerstone in robotic manipulation, especially given the high costs
associated with real-world data collection. We propose that maximizing the
informational density of individual demonstrations can dramatically reduce
reliance on large-scale datasets while improving task performance. To this end,
we introduce Adversarial Data Collection (ADC), a Human-in-the-Loop (HiL) framework
that redefines robotic data acquisition through real-time, bidirectional
human-environment interactions. Unlike conventional pipelines that passively
record static demonstrations, ADC adopts a collaborative perturbation paradigm:
during a single episode, an adversarial operator dynamically alters object
states, environmental conditions, and linguistic commands, while the
tele-operator adaptively adjusts actions to overcome these evolving challenges.
This process compresses diverse failure-recovery behaviors, compositional task
variations, and environmental perturbations into minimal demonstrations. Our
experiments demonstrate that ADC-trained models achieve superior compositional
generalization to unseen task instructions, enhanced robustness to perceptual
perturbations, and emergent error recovery capabilities. Strikingly, models
trained with merely 20% of the demonstration volume collected through ADC
significantly outperform traditional approaches using full datasets. These
advances bridge the gap between data-centric learning paradigms and practical
robotic deployment, demonstrating that strategic data acquisition, not merely
post-hoc processing, is critical for scalable, real-world robot learning.
Additionally, we are curating a large-scale ADC-Robotics dataset comprising
real-world manipulation tasks with adversarial perturbations. This benchmark
will be open-sourced to facilitate advancements in robotic imitation learning.
|
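As a purely illustrative companion to the ADC abstract above, the following self-contained toy sketch logs one information-dense episode in which an "adversary" relocates the goal mid-episode while a scripted stand-in for the tele-operator issues corrective actions; the environment, dynamics, and perturbation schedule are hypothetical and not the authors' system.

import random

def collect_adc_episode(max_steps=50, perturb_prob=0.2, seed=0):
    rng = random.Random(seed)
    position, goal = 0.0, 1.0
    demo = []                                    # logged (observation, action) pairs
    for _ in range(max_steps):
        if rng.random() < perturb_prob:          # adversarial perturbation: move the goal
            goal = rng.uniform(-1.0, 1.0)
        action = 0.5 * (goal - position)         # tele-operator reacts to the perturbed state
        demo.append(((position, goal), action))
        position += action                       # simple point-mass dynamics
        if abs(goal - position) < 1e-3:
            break
    return demo

if __name__ == "__main__":
    print(len(collect_adc_episode()), "steps logged in one perturbed demonstration")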
2503.11647 | Jianhong Bai | Jianhong Bai, Menghan Xia, Xiao Fu, Xintao Wang, Lianrui Mu, Jinwen
Cao, Zuozhu Liu, Haoji Hu, Xiang Bai, Pengfei Wan, Di Zhang | ReCamMaster: Camera-Controlled Generative Rendering from A Single Video | Project page: https://jianhongbai.github.io/ReCamMaster/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Camera control has been actively studied in text or image conditioned video
generation tasks. However, altering camera trajectories of a given video
remains under-explored, despite its importance in the field of video creation.
It is non-trivial due to the extra constraints of maintaining multiple-frame
appearance and dynamic synchronization. To address this, we present
ReCamMaster, a camera-controlled generative video re-rendering framework that
reproduces the dynamic scene of an input video at novel camera trajectories.
The core innovation lies in harnessing the generative capabilities of
pre-trained text-to-video models through a simple yet powerful video
conditioning mechanism -- a capability often overlooked in current research.
To overcome the scarcity of qualified training data, we construct a
comprehensive multi-camera synchronized video dataset using Unreal Engine 5,
which is carefully curated to follow real-world filming characteristics,
covering diverse scenes and camera movements. It helps the model generalize to
in-the-wild videos. Lastly, we further improve the robustness to diverse inputs
through a meticulously designed training strategy. Extensive experiments show
that our method substantially outperforms existing state-of-the-art approaches
and strong baselines. Our method also finds promising applications in video
stabilization, super-resolution, and outpainting. Project page:
https://jianhongbai.github.io/ReCamMaster/
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:59:31 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Bai",
"Jianhong",
""
],
[
"Xia",
"Menghan",
""
],
[
"Fu",
"Xiao",
""
],
[
"Wang",
"Xintao",
""
],
[
"Mu",
"Lianrui",
""
],
[
"Cao",
"Jinwen",
""
],
[
"Liu",
"Zuozhu",
""
],
[
"Hu",
"Haoji",
""
],
[
"Bai",
"Xiang",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Zhang",
"Di",
""
]
] | TITLE: ReCamMaster: Camera-Controlled Generative Rendering from A Single Video
ABSTRACT: Camera control has been actively studied in text or image conditioned video
generation tasks. However, altering camera trajectories of a given video
remains under-explored, despite its importance in the field of video creation.
It is non-trivial due to the extra constraints of maintaining multiple-frame
appearance and dynamic synchronization. To address this, we present
ReCamMaster, a camera-controlled generative video re-rendering framework that
reproduces the dynamic scene of an input video at novel camera trajectories.
The core innovation lies in harnessing the generative capabilities of
pre-trained text-to-video models through a simple yet powerful video
conditioning mechanism -- a capability often overlooked in current research.
To overcome the scarcity of qualified training data, we construct a
comprehensive multi-camera synchronized video dataset using Unreal Engine 5,
which is carefully curated to follow real-world filming characteristics,
covering diverse scenes and camera movements. It helps the model generalize to
in-the-wild videos. Lastly, we further improve the robustness to diverse inputs
through a meticulously designed training strategy. Extensive experiments show
that our method substantially outperforms existing state-of-the-art approaches
and strong baselines. Our method also finds promising applications in video
stabilization, super-resolution, and outpainting. Project page:
https://jianhongbai.github.io/ReCamMaster/
|
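The ReCamMaster abstract above mentions a "simple yet powerful video conditioning mechanism" without specifying it, so the block below is only a generic, hypothetical sketch of one way to condition on a source video: its tokens are concatenated with the target-video tokens so a standard self-attention layer attends over both jointly. The module name, dimensions, and residual wiring are assumptions, not the paper's design.

import torch
import torch.nn as nn

class JointVideoConditioning(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens, source_tokens):
        # both inputs: (batch, n_tokens, dim)
        x = self.norm(torch.cat([target_tokens, source_tokens], dim=1))
        out, _ = self.attn(x, x, x)
        # keep only the target part; the source video acts purely as conditioning context
        return target_tokens + out[:, : target_tokens.shape[1]]

block = JointVideoConditioning()
y = block(torch.randn(2, 64, 256), torch.randn(2, 64, 256))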
2503.11652 | Vladislav Golyanik | Hiroyasu Akada and Jian Wang and Vladislav Golyanik and Christian
Theobalt | Bring Your Rear Cameras for Egocentric 3D Human Pose Estimation | Project page: https://4dqv.mpi-inf.mpg.de/EgoRear/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric 3D human pose estimation has been actively studied using cameras
installed in front of a head-mounted device (HMD). While frontal placement is
the optimal and the only option for some tasks, such as hand tracking, it
remains unclear if the same holds for full-body tracking due to self-occlusion
and limited field-of-view coverage. Notably, even the state-of-the-art methods
often fail to estimate accurate 3D poses in many scenarios, such as when HMD
users tilt their heads upward (a common motion in human activities). A key
limitation of existing HMD designs is their neglect of the back of the body,
despite its potential to provide crucial 3D reconstruction cues. Hence, this
paper investigates the usefulness of rear cameras in the HMD design for
full-body tracking. We also show that simply adding rear views to the frontal
inputs is not optimal for existing methods due to their dependence on
individual 2D joint detectors without effective multi-view integration. To
address this issue, we propose a new transformer-based method that refines 2D
joint heatmap estimation with multi-view information and heatmap uncertainty,
thereby improving 3D pose tracking. Moreover, we introduce two new large-scale
datasets, Ego4View-Syn and Ego4View-RW, for a rear-view evaluation. Our
experiments show that the new camera configurations with back views provide
superior support for 3D pose tracking compared to only frontal placements. The
proposed method achieves significant improvement over the current state of the
art (>10% on MPJPE). We will release the source code, trained models, and new
datasets on our project page https://4dqv.mpi-inf.mpg.de/EgoRear/.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 17:59:54 GMT"
}
] | 2025-03-17T00:00:00 | [
[
"Akada",
"Hiroyasu",
""
],
[
"Wang",
"Jian",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: Bring Your Rear Cameras for Egocentric 3D Human Pose Estimation
ABSTRACT: Egocentric 3D human pose estimation has been actively studied using cameras
installed in front of a head-mounted device (HMD). While frontal placement is
the optimal and the only option for some tasks, such as hand tracking, it
remains unclear if the same holds for full-body tracking due to self-occlusion
and limited field-of-view coverage. Notably, even the state-of-the-art methods
often fail to estimate accurate 3D poses in many scenarios, such as when HMD
users tilt their heads upward (a common motion in human activities). A key
limitation of existing HMD designs is their neglect of the back of the body,
despite its potential to provide crucial 3D reconstruction cues. Hence, this
paper investigates the usefulness of rear cameras in the HMD design for
full-body tracking. We also show that simply adding rear views to the frontal
inputs is not optimal for existing methods due to their dependence on
individual 2D joint detectors without effective multi-view integration. To
address this issue, we propose a new transformer-based method that refines 2D
joint heatmap estimation with multi-view information and heatmap uncertainty,
thereby improving 3D pose tracking. Moreover, we introduce two new large-scale
datasets, Ego4View-Syn and Ego4View-RW, for a rear-view evaluation. Our
experiments show that the new camera configurations with back views provide
superior support for 3D pose tracking compared to only frontal placements. The
proposed method achieves significant improvement over the current state of the
art (>10% on MPJPE). We will release the source code, trained models, and new
datasets on our project page https://4dqv.mpi-inf.mpg.de/EgoRear/.
|
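As a rough illustration of the multi-view idea in the abstract above, the sketch below fuses per-camera 2D joint heatmaps with weights derived from per-view uncertainty, so unreliable views are down-weighted. The actual method is a transformer that refines heatmaps; this weighted average only conveys the intuition, and all shapes are assumptions.

import torch

def fuse_heatmaps(heatmaps, uncertainties):
    """heatmaps: (views, joints, H, W); uncertainties: (views, joints), lower is better."""
    weights = torch.softmax(-uncertainties, dim=0)            # low uncertainty -> high weight
    return (weights[..., None, None] * heatmaps).sum(dim=0)   # fused (joints, H, W)

fused = fuse_heatmaps(torch.rand(4, 15, 64, 64), torch.rand(4, 15))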
2202.05628 | Haimin Luo | Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang
Zhang, Wei Yang, Lan Xu, Jingyi Yu | Artemis: Articulated Neural Pets with Appearance and Motion synthesis | Accepted to ACM SIGGRAPH 2022 (Journal track) | ACM Trans. Graph. 41, Article No. 164 (2022) 1-19 | 10.1145/3528223.3530086 | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We, humans, are entering into a virtual era and indeed want to bring animals
to the virtual world as companions. Yet, computer-generated (CGI) furry
animals are limited by tedious off-line rendering, let alone interactive motion
control. In this paper, we present ARTEMIS, a novel neural modeling and
rendering pipeline for generating ARTiculated neural pets with appEarance and
Motion synthesIS. Our ARTEMIS enables interactive motion control, real-time
animation, and photo-realistic rendering of furry animals. The core of our
ARTEMIS is a neural-generated (NGI) animal engine, which adopts an efficient
octree-based representation for animal animation and fur rendering. The
animation then becomes equivalent to voxel-level deformation based on explicit
skeletal warping. We further use a fast octree indexing and efficient
volumetric rendering scheme to generate appearance and density feature maps.
Finally, we propose a novel shading network to generate high-fidelity details
of appearance and opacity under novel poses from appearance and density feature
maps. For the motion control module in ARTEMIS, we combine a state-of-the-art
animal motion capture approach with a recent neural character control scheme. We
introduce an effective optimization scheme to reconstruct the skeletal motion
of real animals captured by a multi-view RGB and Vicon camera array. We feed
all the captured motion into a neural character control scheme to generate
abstract control signals with motion styles. We further integrate ARTEMIS into
existing engines that support VR headsets, providing an unprecedented immersive
experience where a user can intimately interact with a variety of virtual
animals with vivid movements and photo-realistic appearance. We make available
our ARTEMIS model and dynamic furry animal dataset at
https://haiminluo.github.io/publication/artemis/.
| [
{
"version": "v1",
"created": "Fri, 11 Feb 2022 14:07:20 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 08:14:06 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 04:06:33 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Luo",
"Haimin",
""
],
[
"Xu",
"Teng",
""
],
[
"Jiang",
"Yuheng",
""
],
[
"Zhou",
"Chenglin",
""
],
[
"Qiu",
"Qiwei",
""
],
[
"Zhang",
"Yingliang",
""
],
[
"Yang",
"Wei",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] | TITLE: Artemis: Articulated Neural Pets with Appearance and Motion synthesis
ABSTRACT: We, humans, are entering into a virtual era and indeed want to bring animals
to the virtual world as companions. Yet, computer-generated (CGI) furry
animals are limited by tedious off-line rendering, let alone interactive motion
control. In this paper, we present ARTEMIS, a novel neural modeling and
rendering pipeline for generating ARTiculated neural pets with appEarance and
Motion synthesIS. Our ARTEMIS enables interactive motion control, real-time
animation, and photo-realistic rendering of furry animals. The core of our
ARTEMIS is a neural-generated (NGI) animal engine, which adopts an efficient
octree-based representation for animal animation and fur rendering. The
animation then becomes equivalent to voxel-level deformation based on explicit
skeletal warping. We further use a fast octree indexing and efficient
volumetric rendering scheme to generate appearance and density feature maps.
Finally, we propose a novel shading network to generate high-fidelity details
of appearance and opacity under novel poses from appearance and density feature
maps. For the motion control module in ARTEMIS, we combine a state-of-the-art
animal motion capture approach with a recent neural character control scheme. We
introduce an effective optimization scheme to reconstruct the skeletal motion
of real animals captured by a multi-view RGB and Vicon camera array. We feed
all the captured motion into a neural character control scheme to generate
abstract control signals with motion styles. We further integrate ARTEMIS into
existing engines that support VR headsets, providing an unprecedented immersive
experience where a user can intimately interact with a variety of virtual
animals with vivid movements and photo-realistic appearance. We make available
our ARTEMIS model and dynamic furry animal dataset at
https://haiminluo.github.io/publication/artemis/.
|
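The ARTEMIS abstract above describes animation as voxel-level deformation driven by explicit skeletal warping; a minimal sketch of that step, written here as linear blend skinning of voxel centers, follows. Bone transforms and skinning weights are assumed to be given, and the octree and neural rendering machinery are omitted.

import numpy as np

def warp_voxels(voxel_centers, bone_transforms, skin_weights):
    """voxel_centers: (N, 3); bone_transforms: (B, 4, 4); skin_weights: (N, B), rows sum to 1."""
    homo = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)   # (N, 4)
    per_bone = np.einsum("bij,nj->nbi", bone_transforms, homo)[..., :3]                # (N, B, 3)
    return (skin_weights[..., None] * per_bone).sum(axis=1)                            # (N, 3)

warped = warp_voxels(np.random.rand(100, 3), np.tile(np.eye(4), (3, 1, 1)), np.full((100, 3), 1 / 3))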
2211.11403 | Hongyu Yu | Hongyu Yu, Boyu Liu, Yang Zhong, Liangliang Hong, Junyi Ji, Changsong
Xu, Xingao Gong, Hongjun Xiang | General time-reversal equivariant neural network potential for magnetic
materials | 27 pages,6 figures and 3 tables | Physical Review B 2024 | 10.1103/PhysRevB.110.104427 | Phys. Rev. B 110,104427 | cond-mat.mtrl-sci cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | This study introduces time-reversal E(3)-equivariant neural network and
SpinGNN++ framework for constructing a comprehensive interatomic potential for
magnetic systems, encompassing spin-orbit coupling and noncollinear magnetic
moments. SpinGNN++ integrates a multitask spin equivariant neural network with
explicit spin-lattice terms, including Heisenberg, Dzyaloshinskii-Moriya,
Kitaev, single-ion anisotropy, and biquadratic interactions, and employs
time-reversal equivariant neural network to learn high-order spin-lattice
interactions using time-reversal E(3)-equivariant convolutions. To validate
SpinGNN++, a complex magnetic model dataset is introduced as a benchmark and
employed to demonstrate its capabilities. SpinGNN++ provides accurate
descriptions of the complex spin-lattice coupling in monolayer CrI$_3$ and
CrTe$_2$, achieving sub-meV errors. Importantly, it facilitates large-scale
parallel spin-lattice dynamics, thereby enabling the exploration of associated
properties, including the magnetic ground state and phase transition.
Remarkably, SpinGNN++ identifies a new ferrimagnetic state as the ground
magnetic state for monolayer CrTe2, thereby enriching its phase diagram and
providing deeper insights into the distinct magnetic signals observed in
various experiments.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2022 12:25:58 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 07:20:51 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Jan 2024 12:45:12 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Yu",
"Hongyu",
""
],
[
"Liu",
"Boyu",
""
],
[
"Zhong",
"Yang",
""
],
[
"Hong",
"Liangliang",
""
],
[
"Ji",
"Junyi",
""
],
[
"Xu",
"Changsong",
""
],
[
"Gong",
"Xingao",
""
],
[
"Xiang",
"Hongjun",
""
]
] | TITLE: General time-reversal equivariant neural network potential for magnetic
materials
ABSTRACT: This study introduces time-reversal E(3)-equivariant neural network and
SpinGNN++ framework for constructing a comprehensive interatomic potential for
magnetic systems, encompassing spin-orbit coupling and noncollinear magnetic
moments. SpinGNN++ integrates a multitask spin equivariant neural network with
explicit spin-lattice terms, including Heisenberg, Dzyaloshinskii-Moriya,
Kitaev, single-ion anisotropy, and biquadratic interactions, and employs
time-reversal equivariant neural network to learn high-order spin-lattice
interactions using time-reversal E(3)-equivariant convolutions. To validate
SpinGNN++, a complex magnetic model dataset is introduced as a benchmark and
employed to demonstrate its capabilities. SpinGNN++ provides accurate
descriptions of the complex spin-lattice coupling in monolayer CrI$_3$ and
CrTe$_2$, achieving sub-meV errors. Importantly, it facilitates large-scale
parallel spin-lattice dynamics, thereby enabling the exploration of associated
properties, including the magnetic ground state and phase transition.
Remarkably, SpinGNN++ identifies a new ferrimagnetic state as the ground
magnetic state for monolayer CrTe2, thereby enriching its phase diagram and
providing deeper insights into the distinct magnetic signals observed in
various experiments.
|
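The explicit spin-lattice terms named in the abstract above (Heisenberg, Dzyaloshinskii-Moriya, Kitaev, single-ion anisotropy, biquadratic) have standard closed forms; a sketch, up to sign, axis, and normalization conventions that the abstract does not fix, is

\[
E_{\text{spin}} =
  \sum_{\langle i,j\rangle} J_{ij}\,\mathbf{S}_i\cdot\mathbf{S}_j
+ \sum_{\langle i,j\rangle} \mathbf{D}_{ij}\cdot(\mathbf{S}_i\times\mathbf{S}_j)
+ \sum_{\langle i,j\rangle_\gamma} K_{ij}\,S_i^{\gamma}S_j^{\gamma}
+ \sum_{i} A_i\,(S_i^{z})^{2}
+ \sum_{\langle i,j\rangle} B_{ij}\,(\mathbf{S}_i\cdot\mathbf{S}_j)^{2},
\]

where $\gamma$ denotes the bond-dependent spin component of the Kitaev term; the residual, higher-order spin-lattice coupling is what the time-reversal equivariant network is trained to capture.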
2303.17117 | Chengliang Liu | Chengliang Liu, Jie Wen, Yong Xu, Bob Zhang, Liqiang Nie, Min Zhang | Reliable Representation Learning for Incomplete Multi-View Missing
Multi-Label Classification | Accepted by TPAMI. Please contact me if you have any questions:
[email protected] | null | 10.1109/TPAMI.2025.3546356 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a cross-topic of multi-view learning and multi-label classification,
multi-view multi-label classification has gradually gained traction in recent
years. The application of multi-view contrastive learning has further
facilitated this process; however, existing multi-view contrastive learning
methods crudely separate so-called negative pairs, which largely results in
the separation of samples belonging to the same category or similar ones.
Besides, plenty of multi-view multi-label learning methods ignore the possible
absence of views and labels. To address these issues, in this paper, we propose
an incomplete multi-view missing multi-label classification network named RANK.
In this network, a label-driven multi-view contrastive learning strategy is
proposed to leverage supervised information to preserve the intra-view
structure and perform the cross-view consistency alignment. Furthermore, we
break through the view-level weights inherent in existing methods and propose a
quality-aware sub-network to dynamically assign quality scores to each view of
each sample. The label correlation information is fully utilized in the final
multi-label cross-entropy classification loss, effectively improving the
discriminative power. Last but not least, our model is not only able to handle
complete multi-view multi-label data, but also works on datasets with missing
instances and labels. Extensive experiments confirm that our RANK outperforms
existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2023 03:09:25 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Aug 2024 03:22:08 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 09:20:24 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Liu",
"Chengliang",
""
],
[
"Wen",
"Jie",
""
],
[
"Xu",
"Yong",
""
],
[
"Zhang",
"Bob",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Zhang",
"Min",
""
]
] | TITLE: Reliable Representation Learning for Incomplete Multi-View Missing
Multi-Label Classification
ABSTRACT: As a cross-topic of multi-view learning and multi-label classification,
multi-view multi-label classification has gradually gained traction in recent
years. The application of multi-view contrastive learning has further
facilitated this process; however, existing multi-view contrastive learning
methods crudely separate so-called negative pairs, which largely results in
the separation of samples belonging to the same category or similar ones.
Besides, plenty of multi-view multi-label learning methods ignore the possible
absence of views and labels. To address these issues, in this paper, we propose
an incomplete multi-view missing multi-label classification network named RANK.
In this network, a label-driven multi-view contrastive learning strategy is
proposed to leverage supervised information to preserve the intra-view
structure and perform the cross-view consistency alignment. Furthermore, we
break through the view-level weights inherent in existing methods and propose a
quality-aware sub-network to dynamically assign quality scores to each view of
each sample. The label correlation information is fully utilized in the final
multi-label cross-entropy classification loss, effectively improving the
discriminative power. Last but not least, our model is not only able to handle
complete multi-view multi-label data, but also works on datasets with missing
instances and labels. Extensive experiments confirm that our RANK outperforms
existing state-of-the-art methods.
|
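As a hypothetical sketch of the "label-driven" contrastive idea in the RANK abstract above, the loss below treats samples whose multi-label vectors overlap as positives instead of crudely labeling every other sample a negative. The temperature, the cross-view alignment, and the quality-aware weighting of the actual method are not reproduced.

import torch
import torch.nn.functional as F

def label_driven_contrastive(z, labels, tau=0.2):
    """z: (N, d) embeddings from one view; labels: (N, C) multi-hot label matrix."""
    z = F.normalize(z, dim=1)
    logits = z @ z.t() / tau - torch.eye(len(z)) * 1e9        # mask out self-similarity
    pos = ((labels @ labels.t()) > 0).float()                 # positives share >= 1 label
    pos.fill_diagonal_(0)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

loss = label_driven_contrastive(torch.randn(8, 32), (torch.rand(8, 5) > 0.6).float())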
2307.03363 | Yuyuan Li | Yuyuan Li, Jiaming Zhang, Yixiu Liu, Chaochao Chen | Class-wise Federated Unlearning: Harnessing Active Forgetting with
Teacher-Student Memory Generation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy concerns associated with machine learning models have driven research
into machine unlearning, which aims to erase the memory of specific target
training data from already trained models. This issue also arises in federated
learning, creating the need to address the federated unlearning problem.
However, federated unlearning remains a challenging task. On the one hand,
current research primarily focuses on unlearning all data from a client,
overlooking more fine-grained unlearning targets, e.g., class-wise and
sample-wise removal. On the other hand, existing methods suffer from imprecise
estimation of data influence and impose significant computational or storage
burden. To address these issues, we propose a neuro-inspired federated
unlearning framework based on active forgetting, which is independent of model
architectures and suitable for fine-grained unlearning targets. Our framework
distinguishes itself from existing methods by utilizing new memories to
overwrite old ones. These new memories are generated through teacher-student
learning. We further utilize refined elastic weight consolidation to mitigate
catastrophic forgetting of non-target data. Extensive experiments on benchmark
datasets demonstrate the efficiency and effectiveness of our method, achieving
satisfactory unlearning completeness against backdoor attacks.
| [
{
"version": "v1",
"created": "Fri, 7 Jul 2023 03:07:26 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:10:10 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Yuyuan",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Liu",
"Yixiu",
""
],
[
"Chen",
"Chaochao",
""
]
] | TITLE: Class-wise Federated Unlearning: Harnessing Active Forgetting with
Teacher-Student Memory Generation
ABSTRACT: Privacy concerns associated with machine learning models have driven research
into machine unlearning, which aims to erase the memory of specific target
training data from already trained models. This issue also arises in federated
learning, creating the need to address the federated unlearning problem.
However, federated unlearning remains a challenging task. On the one hand,
current research primarily focuses on unlearning all data from a client,
overlooking more fine-grained unlearning targets, e.g., class-wise and
sample-wise removal. On the other hand, existing methods suffer from imprecise
estimation of data influence and impose significant computational or storage
burden. To address these issues, we propose a neuro-inspired federated
unlearning framework based on active forgetting, which is independent of model
architectures and suitable for fine-grained unlearning targets. Our framework
distinguishes itself from existing methods by utilizing new memories to
overwrite old ones. These new memories are generated through teacher-student
learning. We further utilize refined elastic weight consolidation to mitigate
catastrophic forgetting of non-target data. Extensive experiments on benchmark
datasets demonstrate the efficiency and effectiveness of our method, achieving
satisfactory unlearning completeness against backdoor attacks.
|
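The abstract above relies on refined elastic weight consolidation to protect memories of non-target data while new "overwriting" memories are learned; a minimal sketch of the standard EWC penalty it builds on is given below, assuming the Fisher estimates and anchor parameters have been precomputed elsewhere.

import torch

def ewc_penalty(model, fisher, anchor_params, lam=1.0):
    """fisher and anchor_params: dicts keyed by parameter name."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - anchor_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# hypothetical usage: total_loss = unlearning_loss + ewc_penalty(model, fisher, anchors)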
2308.00137 | Hemn Abdalla | Hemn Barzan Abdalla, Awder Ahmed, Bahtiyar Mehmed, Mehdi Gheisari,
Maryam Cheraghy, Yang Liu | An Efficient Recommendation System in E-commerce using Passer learning
optimization based on Bi-LSTM | 22 pages, 5 figuers, 4 Tables | null | null | null | cs.MM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online reviews play a crucial role in shaping consumer decisions, especially
in the context of e-commerce. However, the quality and reliability of these
reviews can vary significantly. Some reviews contain misleading or unhelpful
information, such as advertisements, fake content, or irrelevant details. These
issues pose significant challenges for recommendation systems, which rely on
user-generated reviews to provide personalized suggestions. This article
introduces a recommendation system based on Passer Learning
Optimization-enhanced Bi-LSTM classifier applicable to e-commerce
recommendation systems with improved accuracy and efficiency compared to
state-of-the-art models. It achieves as low as 1.24% MSE on the baby dataset.
This lifts it as high as 88.58%. Besides, there is also robust performance of
the system on digital music and patio lawn garden datasets at F1 of 88.46% and
92.51%, correspondingly. These results, made possible by advanced graph
embedding for effective knowledge extraction and fine-tuning of classifier
parameters, establish the suitability of the proposed model in various
e-commerce environments.
| [
{
"version": "v1",
"created": "Mon, 31 Jul 2023 20:09:25 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Aug 2023 07:34:05 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 14:43:36 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Abdalla",
"Hemn Barzan",
""
],
[
"Ahmed",
"Awder",
""
],
[
"Mehmed",
"Bahtiyar",
""
],
[
"Gheisari",
"Mehdi",
""
],
[
"Cheraghy",
"Maryam",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: An Efficient Recommendation System in E-commerce using Passer learning
optimization based on Bi-LSTM
ABSTRACT: Online reviews play a crucial role in shaping consumer decisions, especially
in the context of e-commerce. However, the quality and reliability of these
reviews can vary significantly. Some reviews contain misleading or unhelpful
information, such as advertisements, fake content, or irrelevant details. These
issues pose significant challenges for recommendation systems, which rely on
user-generated reviews to provide personalized suggestions. This article
introduces a recommendation system based on Passer Learning
Optimization-enhanced Bi-LSTM classifier applicable to e-commerce
recommendation systems with improved accuracy and efficiency compared to
state-of-the-art models. It achieves an MSE as low as 1.24% on the Baby dataset,
lifting performance to 88.58%. The system also performs robustly on the Digital
Music and Patio Lawn Garden datasets, with F1 scores of 88.46% and 92.51%,
respectively. These results, made possible by advanced graph
embedding for effective knowledge extraction and fine-tuning of classifier
parameters, establish the suitability of the proposed model in various
e-commerce environments.
|
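A generic sketch of the bidirectional LSTM review classifier named in the abstract above follows; the Passer Learning Optimization step that tunes the classifier and the graph-embedding features are not reproduced, and the vocabulary and hidden sizes are placeholders.

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        out, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2 * hidden)
        return self.head(out.mean(dim=1))           # mean-pool over time, then classify

logits = BiLSTMClassifier()(torch.randint(0, 10000, (4, 20)))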
2312.10052 | Zhongliang Zeng | Dongdong Li, Zhongliang Zeng, Zhe Wang, Hai Yang | ESTformer: Transformer Utilizing Spatiotemporal Dependencies for
Electroencephalogram Super-resolution | Accepted by Knowledge-Based Systems | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Towards practical applications of Electroencephalography (EEG), lightweight
acquisition devices garner significant attention. However, EEG channel
selection methods are commonly data-sensitive and cannot establish a unified
sound paradigm for EEG acquisition devices. Through reverse conceptualisation,
we formulated EEG applications in an EEG super-resolution (SR) manner, but
suffered from high computation costs, extra interpolation bias, and few
insights into spatiotemporal dependency modelling. To this end, we propose
ESTformer, an EEG SR framework that utilises spatiotemporal dependencies based
on the transformer. ESTformer applies positional encoding methods and a
multihead self-attention mechanism to the space and time dimensions, which can
learn spatial structural correlations and temporal functional variations.
ESTformer, with the fixed mask strategy, adopts a mask token to upsample
low-resolution (LR) EEG data in the case of disturbance from mathematical
interpolation methods. On this basis, we designed various transformer blocks to
construct a spatial interpolation module (SIM) and a temporal reconstruction
module (TRM). Finally, ESTformer cascades the SIM and TRM to capture and model
the spatiotemporal dependencies for EEG SR with fidelity. Extensive
experimental results on two EEG datasets show the effectiveness of ESTformer
against previous state-of-the-art methods, demonstrating the versatility of the
Transformer for EEG SR tasks. The superiority of the SR data was verified in an
EEG-based person identification and emotion recognition task, achieving a 2% to
38% improvement compared with the LR data at different sampling scales.
| [
{
"version": "v1",
"created": "Sun, 3 Dec 2023 12:26:32 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 07:17:58 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Dongdong",
""
],
[
"Zeng",
"Zhongliang",
""
],
[
"Wang",
"Zhe",
""
],
[
"Yang",
"Hai",
""
]
] | TITLE: ESTformer: Transformer Utilizing Spatiotemporal Dependencies for
Electroencephalogram Super-resolution
ABSTRACT: Towards practical applications of Electroencephalography (EEG), lightweight
acquisition devices garner significant attention. However, EEG channel
selection methods are commonly data-sensitive and cannot establish a unified
sound paradigm for EEG acquisition devices. Through reverse conceptualisation,
we formulated EEG applications in an EEG super-resolution (SR) manner, but
suffered from high computation costs, extra interpolation bias, and few
insights into spatiotemporal dependency modelling. To this end, we propose
ESTformer, an EEG SR framework that utilises spatiotemporal dependencies based
on the transformer. ESTformer applies positional encoding methods and a
multihead self-attention mechanism to the space and time dimensions, which can
learn spatial structural correlations and temporal functional variations.
ESTformer, with the fixed mask strategy, adopts a mask token to upsample
low-resolution (LR) EEG data in the case of disturbance from mathematical
interpolation methods. On this basis, we designed various transformer blocks to
construct a spatial interpolation module (SIM) and a temporal reconstruction
module (TRM). Finally, ESTformer cascades the SIM and TRM to capture and model
the spatiotemporal dependencies for EEG SR with fidelity. Extensive
experimental results on two EEG datasets show the effectiveness of ESTformer
against previous state-of-the-art methods, demonstrating the versatility of the
Transformer for EEG SR tasks. The superiority of the SR data was verified in an
EEG-based person identification and emotion recognition task, achieving a 2% to
38% improvement compared with the LR data at different sampling scales.
|
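A hypothetical sketch of the mask-token idea from the ESTformer abstract above: channels missing from the low-resolution montage are represented by a learned mask token rather than a mathematically interpolated signal before entering the transformer blocks. The per-channel embedding and shapes are assumptions; the SIM/TRM architecture is not shown.

import torch
import torch.nn as nn

class MaskTokenUpsampler(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.proj = nn.Linear(1, dim)                 # scalar sample -> channel embedding

    def forward(self, signal, known_mask):
        # signal: (batch, n_channels) for one time step; known_mask: (n_channels,) bool
        tokens = self.proj(signal.unsqueeze(-1))      # (batch, n_channels, dim)
        mask = known_mask.view(1, -1, 1)
        return torch.where(mask, tokens, self.mask_token.expand_as(tokens))

tokens = MaskTokenUpsampler()(torch.randn(2, 64), torch.rand(64) > 0.7)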
2401.16796 | Weibin Liao | Weibin Liao, Yinghao Zhu, Zhongji Zhang, Yuhang Wang, Zixiang Wang, Xu
Chu, Yasha Wang, Liantao Ma | Learnable Prompt as Pseudo-Imputation: Rethinking the Necessity of
Traditional EHR Data Imputation in Downstream Clinical Prediction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing the health status of patients based on Electronic Health Records
(EHR) is a fundamental research problem in medical informatics. The presence of
extensive missing values in EHR makes it challenging for deep neural networks
(DNNs) to directly model the patient's health status. Existing DNNs training
protocols, including Impute-then-Regress Procedure and Jointly Optimizing of
Impute-n-Regress Procedure, require additional imputation models to
reconstruct missing values. However, Impute-then-Regress Procedure
introduces the risk of injecting imputed, non-real data into downstream
clinical prediction tasks, resulting in power loss, biased estimation, and
poorly performing models, while Jointly Optimizing of Impute-n-Regress
Procedure is also difficult to generalize due to the complex optimization space
and demanding data requirements. Inspired by the recent literature on
learnable prompts in the fields of NLP and CV, in this work, we rethought the
necessity of the imputation model in downstream clinical tasks, and proposed
Learnable Prompt as Pseudo-Imputation (PAI) as a new training protocol to
assist EHR analysis. PAI no longer introduces any imputed data but constructs a
learnable prompt to model the implicit preferences of the downstream model for
missing values, resulting in a significant performance improvement for all
state-of-the-art EHR analysis models on four real-world datasets across two
clinical prediction tasks. Further experimental analysis indicates that PAI
exhibits higher robustness in situations of data insufficiency and high missing
rates. More importantly, as a plug-and-play protocol, PAI can be easily
integrated into any existing or even imperceptible future EHR analysis models.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2024 07:19:36 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:17:29 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Liao",
"Weibin",
""
],
[
"Zhu",
"Yinghao",
""
],
[
"Zhang",
"Zhongji",
""
],
[
"Wang",
"Yuhang",
""
],
[
"Wang",
"Zixiang",
""
],
[
"Chu",
"Xu",
""
],
[
"Wang",
"Yasha",
""
],
[
"Ma",
"Liantao",
""
]
] | TITLE: Learnable Prompt as Pseudo-Imputation: Rethinking the Necessity of
Traditional EHR Data Imputation in Downstream Clinical Prediction
ABSTRACT: Analyzing the health status of patients based on Electronic Health Records
(EHR) is a fundamental research problem in medical informatics. The presence of
extensive missing values in EHR makes it challenging for deep neural networks
(DNNs) to directly model the patient's health status. Existing DNNs training
protocols, including Impute-then-Regress Procedure and Jointly Optimizing of
Impute-n-Regress Procedure, require additional imputation models to
reconstruct missing values. However, Impute-then-Regress Procedure
introduces the risk of injecting imputed, non-real data into downstream
clinical prediction tasks, resulting in power loss, biased estimation, and
poorly performing models, while Jointly Optimizing of Impute-n-Regress
Procedure is also difficult to generalize due to the complex optimization space
and demanding data requirements. Inspired by the recent literature on
learnable prompts in the fields of NLP and CV, in this work, we rethought the
necessity of the imputation model in downstream clinical tasks, and proposed
Learnable Prompt as Pseudo-Imputation (PAI) as a new training protocol to
assist EHR analysis. PAI no longer introduces any imputed data but constructs a
learnable prompt to model the implicit preferences of the downstream model for
missing values, resulting in a significant performance improvement for all
state-of-the-art EHR analysis models on four real-world datasets across two
clinical prediction tasks. Further experimental analysis indicates that PAI
exhibits higher robustness in situations of data insufficiency and high missing
rates. More importantly, as a plug-and-play protocol, PAI can be easily
integrated into any existing or even imperceptible future EHR analysis models.
|
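A minimal sketch of the mechanism described in the PAI abstract above: instead of imputing synthetic values, a learnable prompt is substituted at missing positions and optimized jointly with the downstream predictor. The feature dimension and the linear predictor are placeholders standing in for an arbitrary EHR analysis model.

import torch
import torch.nn as nn

class PAIWrapper(nn.Module):
    def __init__(self, n_features=32, predictor=None):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(n_features))   # learnable pseudo-imputation
        self.predictor = predictor or nn.Linear(n_features, 1)

    def forward(self, x, observed_mask):
        # x: (batch, n_features); observed_mask: same shape, True where a value was recorded
        filled = torch.where(observed_mask, x, self.prompt.expand_as(x))
        return self.predictor(filled)

out = PAIWrapper()(torch.randn(4, 32), torch.rand(4, 32) > 0.5)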
2402.04863 | Yingjie Mao | Xiaoqi Li, Yingjie Mao, Zexin Lu, Wenkai Li, Zongwei Li | SCLA: Automated Smart Contract Summarization via LLMs and Control Flow
Prompt | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart contract code summarization is crucial for efficient maintenance and
vulnerability mitigation. While many studies use Large Language Models (LLMs)
for summarization, their performance still falls short compared to fine-tuned
models like CodeT5+ and CodeBERT. Some approaches combine LLMs with data flow
analysis but fail to fully capture the hierarchy and control structures of the
code, leading to information loss and degraded summarization quality. We
propose SCLA, an LLM-based method that enhances summarization by integrating a
Control Flow Graph (CFG) and semantic facts from the code's control flow into a
semantically enriched prompt. SCLA uses a control flow extraction algorithm to
derive control flows from semantic nodes in the Abstract Syntax Tree (AST) and
constructs the corresponding CFG. Code semantic facts refer to both explicit
and implicit information within the AST that is relevant to smart contracts.
This method enables LLMs to better capture the structural and contextual
dependencies of the code. We validate the effectiveness of SCLA through
comprehensive experiments on a dataset of 40,000 real-world smart contracts.
The experiment shows that SCLA significantly improves summarization quality,
outperforming the SOTA baselines with improvements of 26.7%, 23.2%, 16.7%, and
14.7% in BLEU-4, METEOR, ROUGE-L, and BLEURT scores, respectively.
| [
{
"version": "v1",
"created": "Wed, 7 Feb 2024 13:58:26 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Feb 2024 06:09:16 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Feb 2024 14:18:32 GMT"
},
{
"version": "v4",
"created": "Sat, 17 Aug 2024 03:41:42 GMT"
},
{
"version": "v5",
"created": "Tue, 20 Aug 2024 02:34:56 GMT"
},
{
"version": "v6",
"created": "Thu, 13 Mar 2025 07:05:15 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Li",
"Xiaoqi",
""
],
[
"Mao",
"Yingjie",
""
],
[
"Lu",
"Zexin",
""
],
[
"Li",
"Wenkai",
""
],
[
"Li",
"Zongwei",
""
]
] | TITLE: SCLA: Automated Smart Contract Summarization via LLMs and Control Flow
Prompt
ABSTRACT: Smart contract code summarization is crucial for efficient maintenance and
vulnerability mitigation. While many studies use Large Language Models (LLMs)
for summarization, their performance still falls short compared to fine-tuned
models like CodeT5+ and CodeBERT. Some approaches combine LLMs with data flow
analysis but fail to fully capture the hierarchy and control structures of the
code, leading to information loss and degraded summarization quality. We
propose SCLA, an LLM-based method that enhances summarization by integrating a
Control Flow Graph (CFG) and semantic facts from the code's control flow into a
semantically enriched prompt. SCLA uses a control flow extraction algorithm to
derive control flows from semantic nodes in the Abstract Syntax Tree (AST) and
constructs the corresponding CFG. Code semantic facts refer to both explicit
and implicit information within the AST that is relevant to smart contracts.
This method enables LLMs to better capture the structural and contextual
dependencies of the code. We validate the effectiveness of SCLA through
comprehensive experiments on a dataset of 40,000 real-world smart contracts.
The experiment shows that SCLA significantly improves summarization quality,
outperforming the SOTA baselines with improvements of 26.7%, 23.2%, 16.7%, and
14.7% in BLEU-4, METEOR, ROUGE-L, and BLEURT scores, respectively.
|
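SCLA above extracts control flows from the AST of Solidity contracts; as a purely illustrative stand-in, the sketch below walks a Python AST and emits crude control-flow facts of the kind that could be serialized into an LLM prompt. It is not the paper's extraction algorithm, and the fact format is an assumption.

import ast

def control_flow_facts(source):
    facts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.If):
            facts.append(f"line {node.lineno}: branch on `{ast.unparse(node.test)}`")
        elif isinstance(node, (ast.For, ast.While)):
            facts.append(f"line {node.lineno}: loop ({type(node).__name__})")
        elif isinstance(node, ast.Return):
            facts.append(f"line {node.lineno}: return")
    return facts

print(control_flow_facts("def f(x):\n    if x > 0:\n        return x\n    return -x"))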
2402.11057 | Shijia Feng | Shijia Feng, Michael Wray, Brian Sullivan, Youngkyoon Jang, Casimir
Ludwig, Iain Gilchrist, Walterio Mayol-Cuevas | Are you Struggling? Dataset and Baselines for Struggle Determination in
Assembly Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Determining when people are struggling from video enables a finer-grained
understanding of actions and opens opportunities for building intelligent
support visual interfaces. In this paper, we present a new dataset with three
assembly activities and corresponding performance baselines for the
determination of struggle from video. Three real-world problem-solving
activities including assembling plumbing pipes (Pipes-Struggle), pitching
camping tents (Tent-Struggle) and solving the Tower of Hanoi puzzle
(Tower-Struggle) are introduced. Video segments were scored w.r.t. the level of
struggle as perceived by annotators using a forced choice 4-point scale. Each
video segment was annotated by a single expert annotator in addition to
crowd-sourced annotations. The dataset is the first struggle annotation dataset
and contains 5.1 hours of video and 725,100 frames from 73 participants in
total. We evaluate three decision-making tasks: struggle classification,
struggle level regression, and struggle label distribution learning. We provide
baseline results for each of the tasks utilising several mainstream deep neural
networks, along with an ablation study and visualisation of results. Our work
is motivated toward assistive systems that analyze struggle, support users
during manual activities and encourage learning, as well as other video
understanding competencies.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2024 20:12:33 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2024 16:42:12 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 03:46:20 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 14:08:10 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Feng",
"Shijia",
""
],
[
"Wray",
"Michael",
""
],
[
"Sullivan",
"Brian",
""
],
[
"Jang",
"Youngkyoon",
""
],
[
"Ludwig",
"Casimir",
""
],
[
"Gilchrist",
"Iain",
""
],
[
"Mayol-Cuevas",
"Walterio",
""
]
] | TITLE: Are you Struggling? Dataset and Baselines for Struggle Determination in
Assembly Videos
ABSTRACT: Determining when people are struggling from video enables a finer-grained
understanding of actions and opens opportunities for building intelligent
support visual interfaces. In this paper, we present a new dataset with three
assembly activities and corresponding performance baselines for the
determination of struggle from video. Three real-world problem-solving
activities including assembling plumbing pipes (Pipes-Struggle), pitching
camping tents (Tent-Struggle) and solving the Tower of Hanoi puzzle
(Tower-Struggle) are introduced. Video segments were scored w.r.t. the level of
struggle as perceived by annotators using a forced choice 4-point scale. Each
video segment was annotated by a single expert annotator in addition to
crowd-sourced annotations. The dataset is the first struggle annotation dataset
and contains 5.1 hours of video and 725,100 frames from 73 participants in
total. We evaluate three decision-making tasks: struggle classification,
struggle level regression, and struggle label distribution learning. We provide
baseline results for each of the tasks utilising several mainstream deep neural
networks, along with an ablation study and visualisation of results. Our work
is motivated toward assistive systems that analyze struggle, support users
during manual activities and encourage learning, as well as other video
understanding competencies.
|
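For the struggle label distribution learning task mentioned above, one common formulation (used here only as a hedged sketch) is to predict a distribution over the 4-point struggle scale and match the annotator distribution with a KL-divergence loss; the video backbone producing the logits is assumed and omitted.

import torch
import torch.nn.functional as F

def label_distribution_loss(logits, annotator_dist):
    """logits: (batch, 4); annotator_dist: (batch, 4), rows sum to 1."""
    return F.kl_div(F.log_softmax(logits, dim=1), annotator_dist, reduction="batchmean")

loss = label_distribution_loss(torch.randn(8, 4), torch.full((8, 4), 0.25))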
2402.14327 | Delong Chen | Delong Chen, Samuel Cahyawijaya, Jianfeng Liu, Baoyuan Wang, Pascale
Fung | Subobject-level Image Tokenization | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | Patch-based image tokenization ignores the morphology of the visual world,
limiting effective and efficient learning of image understanding. Inspired by
subword tokenization, we introduce subobject-level adaptive token segmentation
and explore several approaches, including superpixel, SAM, and a proposed
Efficient and PanOptiC (EPOC) image tokenizer. Our EPOC combines boundary
detection -- a simple task that can be handled well by a compact model -- with
watershed segmentation, which inherently guarantees no pixels are left
unsegmented. Intrinsic evaluations across 5 datasets demonstrate that EPOC's
segmentation aligns well with human annotations of both object- and part-level
visual morphology, producing more monosemantic tokens and offering substantial
efficiency advantages. For extrinsic evaluation, we designed a token embedding
that handles arbitrary-shaped tokens, and trained VLMs with different
tokenizers on 4 datasets of object recognition and detailed captioning. The
results reveal that subobject tokenization enables faster convergence and
better generalization while using fewer visual tokens.
| [
{
"version": "v1",
"created": "Thu, 22 Feb 2024 06:47:44 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Apr 2024 13:41:47 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 18:22:25 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Chen",
"Delong",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Liu",
"Jianfeng",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Fung",
"Pascale",
""
]
] | TITLE: Subobject-level Image Tokenization
ABSTRACT: Patch-based image tokenization ignores the morphology of the visual world,
limiting effective and efficient learning of image understanding. Inspired by
subword tokenization, we introduce subobject-level adaptive token segmentation
and explore several approaches, including superpixel, SAM, and a proposed
Efficient and PanOptiC (EPOC) image tokenizer. Our EPOC combines boundary
detection -- a simple task that can be handled well by a compact model -- with
watershed segmentation, which inherently guarantees no pixels are left
unsegmented. Intrinsic evaluations across 5 datasets demonstrate that EPOC's
segmentation aligns well with human annotations of both object- and part-level
visual morphology, producing more monosemantic tokens and offering substantial
efficiency advantages. For extrinsic evaluation, we designed a token embedding
that handles arbitrary-shaped tokens, and trained VLMs with different
tokenizers on 4 datasets of object recognition and detailed captioning. The
results reveal that subobject tokenization enables faster convergence and
better generalization while using fewer visual tokens.
|
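A rough sketch of the boundary-plus-watershed recipe described in the EPOC abstract above, using off-the-shelf components: a Sobel edge map stands in for the learned boundary detector, and watershed guarantees that no pixel is left unsegmented. The marker-seeding heuristic is a simplistic assumption, not the paper's procedure.

import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def subobject_segments(gray_image, n_markers=50):
    boundary = sobel(gray_image)                       # stand-in for a boundary network
    flat_order = np.argsort(boundary, axis=None)[:n_markers]
    coords = np.unravel_index(flat_order, boundary.shape)
    markers = np.zeros(boundary.shape, dtype=int)
    markers[coords] = np.arange(1, n_markers + 1)      # seeds at low-boundary pixels
    return watershed(boundary, markers)                # every pixel receives a token id

segments = subobject_segments(np.random.rand(64, 64))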
2403.02523 | Gabriel Turinici | Pierre Brugiere and Gabriel Turinici | Transformer for Times Series: an Application to the S&P500 | null | In: Arai, K. (eds) Advances in Information and Communication. FICC
2025. Lecture Notes in Networks and Systems, vol 1285. Springer, Cham | 10.1007/978-3-031-84460-7_33 | null | cs.AI q-fin.PM q-fin.ST stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The transformer models have been extensively used with good results in a wide
area of machine learning applications including Large Language Models and image
generation. Here, we inquire on the applicability of this approach to financial
time series. We first describe the dataset construction for two prototypical
situations: a mean reverting synthetic Ornstein-Uhlenbeck process on one hand
and real S&P500 data on the other hand. Then, we present in detail the proposed
Transformer architecture and finally we discuss some encouraging results. For
the synthetic data we predict rather accurately the next move, and for the
S&P500 we get some interesting results related to quadratic variation and
volatility prediction.
| [
{
"version": "v1",
"created": "Mon, 4 Mar 2024 22:27:11 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Brugiere",
"Pierre",
""
],
[
"Turinici",
"Gabriel",
""
]
] | TITLE: Transformer for Times Series: an Application to the S&P500
ABSTRACT: The transformer models have been extensively used with good results in a wide
area of machine learning applications including Large Language Models and image
generation. Here, we inquire on the applicability of this approach to financial
time series. We first describe the dataset construction for two prototypical
situations: a mean reverting synthetic Ornstein-Uhlenbeck process on one hand
and real S&P500 data on the other hand. Then, we present in detail the proposed
Transformer architecture and finally we discuss some encouraging results. For
the synthetic data we predict rather accurately the next move, and for the
S&P500 we get some interesting results related to quadratic variation and
volatility prediction.
|
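The synthetic dataset mentioned in the abstract above is built from a mean-reverting Ornstein-Uhlenbeck process; a small sketch of simulating such a path with the Euler-Maruyama scheme follows, with parameter values that are arbitrary placeholders rather than those of the paper.

import numpy as np

def simulate_ou(n_steps=1000, x0=0.0, theta=2.0, mu=0.0, sigma=0.3, dt=1 / 252, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_ou()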
2403.08277 | MinSoo Kim | Minsoo Kim, Min-Cheol Sagong, Gi Pyo Nam, Junghyun Cho, and Ig-Jae Kim | VIGFace: Virtual Identity Generation for Privacy-Free Face Recognition | Please refer to version 3 if you are citing this paper. Major
updates: (1)Test utilities updated: use AdaFace code. (2)Training method
updated: AdaFace+IR-SE50 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based face recognition continues to face challenges due to its
reliance on huge datasets obtained from web crawling, which can be costly to
gather and raise significant real-world privacy concerns. To address this
issue, we propose VIGFace, a novel framework capable of generating synthetic
facial images. Our idea originates from pre-assigning virtual identities in the
feature space. Initially, we train the face recognition model using a real face
dataset and create a feature space for both real and virtual identities, where
virtual prototypes are orthogonal to other prototypes. Subsequently, we train
the diffusion model based on the established feature space, enabling it to
generate authentic human face images from real prototypes and synthesize
virtual face images from virtual prototypes. Our proposed framework provides
two significant benefits. Firstly, it shows clear separability between existing
individuals and virtual face images, allowing one to create synthetic images
with confidence and without concerns about privacy and portrait rights.
Secondly, it ensures improved performance through data augmentation by
incorporating real existing images. Extensive experiments demonstrate the
superiority of our virtual face dataset and framework, outperforming the
previous state-of-the-art on various face recognition benchmarks.
| [
{
"version": "v1",
"created": "Wed, 13 Mar 2024 06:11:41 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Dec 2024 02:15:40 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 08:06:24 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Kim",
"Minsoo",
""
],
[
"Sagong",
"Min-Cheol",
""
],
[
"Nam",
"Gi Pyo",
""
],
[
"Cho",
"Junghyun",
""
],
[
"Kim",
"Ig-Jae",
""
]
] | TITLE: VIGFace: Virtual Identity Generation for Privacy-Free Face Recognition
ABSTRACT: Deep learning-based face recognition continues to face challenges due to its
reliance on huge datasets obtained from web crawling, which can be costly to
gather and raise significant real-world privacy concerns. To address this
issue, we propose VIGFace, a novel framework capable of generating synthetic
facial images. Our idea originates from pre-assigning virtual identities in the
feature space. Initially, we train the face recognition model using a real face
dataset and create a feature space for both real and virtual identities, where
virtual prototypes are orthogonal to other prototypes. Subsequently, we train
the diffusion model based on the established feature space, enabling it to
generate authentic human face images from real prototypes and synthesize
virtual face images from virtual prototypes. Our proposed framework provides
two significant benefits. Firstly, it shows clear separability between existing
individuals and virtual face images, allowing one to create synthetic images
with confidence and without concerns about privacy and portrait rights.
Secondly, it ensures improved performance through data augmentation by
incorporating real existing images. Extensive experiments demonstrate the
superiority of our virtual face dataset and framework, outperforming the
previous state-of-the-art on various face recognition benchmarks.
|
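A hypothetical sketch of the prototype construction idea in the VIGFace abstract above: virtual identity prototypes are chosen orthogonal to the real-identity prototypes (and to one another) by orthogonalizing random directions against them. Dimensions are illustrative, and the diffusion model that renders faces from prototypes is not shown.

import numpy as np

def virtual_prototypes(real_prototypes, n_virtual, seed=0):
    """real_prototypes: (n_real, dim), assumed linearly independent, n_real + n_virtual <= dim."""
    rng = np.random.default_rng(seed)
    dim = real_prototypes.shape[1]
    basis = np.concatenate([real_prototypes.T, rng.standard_normal((dim, n_virtual))], axis=1)
    q, _ = np.linalg.qr(basis)                     # columns are mutually orthonormal
    return q[:, real_prototypes.shape[0]:].T       # (n_virtual, dim), orthogonal to real span

virt = virtual_prototypes(np.random.randn(10, 128), n_virtual=5)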
2403.17916 | Jiachen Li | Zehao Wang, Yuping Wang, Zhuoyuan Wu, Hengbo Ma, Zhaowei Li, Hang Qiu,
Jiachen Li | CMP: Cooperative Motion Prediction with Multi-Agent Communication | IEEE Robotics and Automation Letters; Project website:
https://cmp-cooperative-prediction.github.io/ | null | 10.1109/LRA.2025.3546862 | null | cs.RO cs.AI cs.CV cs.LG cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The confluence of the advancement of Autonomous Vehicles (AVs) and the
maturity of Vehicle-to-Everything (V2X) communication has enabled the
capability of cooperative connected and automated vehicles (CAVs). Building on
top of cooperative perception, this paper explores the feasibility and
effectiveness of cooperative motion prediction. Our method, CMP, takes LiDAR
signals as model input to enhance tracking and prediction capabilities. Unlike
previous work that focuses separately on either cooperative perception or
motion prediction, our framework, to the best of our knowledge, is the first to
address the unified problem where CAVs share information in both perception and
prediction modules. Incorporated into our design is the unique capability to
tolerate realistic V2X transmission delays, while dealing with bulky perception
representations. We also propose a prediction aggregation module, which unifies
the predictions obtained by different CAVs and generates the final prediction.
Through extensive experiments and ablation studies on the OPV2V and V2V4Real
datasets, we demonstrate the effectiveness of our method in cooperative
perception, tracking, and motion prediction. In particular, CMP reduces the
average prediction error by 12.3% compared with the strongest baseline. Our
work marks a significant step forward in the cooperative capabilities of CAVs,
showcasing enhanced performance in complex scenarios. More details can be found
on the project website: https://cmp-cooperative-prediction.github.io.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 17:53:27 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Oct 2024 17:59:25 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 19:03:13 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Wang",
"Zehao",
""
],
[
"Wang",
"Yuping",
""
],
[
"Wu",
"Zhuoyuan",
""
],
[
"Ma",
"Hengbo",
""
],
[
"Li",
"Zhaowei",
""
],
[
"Qiu",
"Hang",
""
],
[
"Li",
"Jiachen",
""
]
] | TITLE: CMP: Cooperative Motion Prediction with Multi-Agent Communication
ABSTRACT: The confluence of the advancement of Autonomous Vehicles (AVs) and the
maturity of Vehicle-to-Everything (V2X) communication has enabled the
capability of cooperative connected and automated vehicles (CAVs). Building on
top of cooperative perception, this paper explores the feasibility and
effectiveness of cooperative motion prediction. Our method, CMP, takes LiDAR
signals as model input to enhance tracking and prediction capabilities. Unlike
previous work that focuses separately on either cooperative perception or
motion prediction, our framework, to the best of our knowledge, is the first to
address the unified problem where CAVs share information in both perception and
prediction modules. Incorporated into our design is the unique capability to
tolerate realistic V2X transmission delays, while dealing with bulky perception
representations. We also propose a prediction aggregation module, which unifies
the predictions obtained by different CAVs and generates the final prediction.
Through extensive experiments and ablation studies on the OPV2V and V2V4Real
datasets, we demonstrate the effectiveness of our method in cooperative
perception, tracking, and motion prediction. In particular, CMP reduces the
average prediction error by 12.3% compared with the strongest baseline. Our
work marks a significant step forward in the cooperative capabilities of CAVs,
showcasing enhanced performance in complex scenarios. More details can be found
on the project website: https://cmp-cooperative-prediction.github.io.
|
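As a simplified, hypothetical stand-in for the prediction aggregation module described in the CMP abstract above, the sketch below fuses trajectory predictions for one target agent coming from several CAVs using confidence weights; the real module is learned, so this weighted average only illustrates the interface.

import torch

def aggregate_predictions(trajs, confidences):
    """trajs: (n_cavs, horizon, 2) predicted (x, y); confidences: (n_cavs,) unnormalized."""
    w = torch.softmax(confidences, dim=0).view(-1, 1, 1)
    return (w * trajs).sum(dim=0)                  # fused (horizon, 2) trajectory

fused = aggregate_predictions(torch.randn(3, 30, 2), torch.tensor([0.2, 1.0, 0.5]))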
2404.14977 | Kashif Ahmad | Muhammad Asif Auyb, Muhammad Tayyab Zamir, Imran Khan, Hannia Naseem,
Nasir Ahmad, Kashif Ahmad | Social Media and Artificial Intelligence for Sustainable Cities and
Societies: A Water Quality Analysis Use-case | 11 pages, 6 figures, and 3 tables | null | 10.69709/CAIC.2024.133109 | null | cs.SI cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper focuses on a very important societal challenge of water quality
analysis. Being one of the key factors in the economic and social development
of society, the provision of water and ensuring its quality has always remained
one of the top priorities of public authorities. To ensure the quality of
water, different methods for monitoring and assessing the water networks, such
as offline and online surveys, are used. However, these surveys have several
limitations, such as the limited number of participants and low frequency due
to the labor involved in conducting such surveys. In this paper, we propose a
Natural Language Processing (NLP) framework to automatically collect and
analyze water-related posts from social media for data-driven decisions. The
proposed framework is composed of two components, namely (i) text
classification, and (ii) topic modeling. For text classification, we propose a
merit-fusion-based framework incorporating several Large Language Models (LLMs)
where different weight selection and optimization methods are employed to
assign weights to the LLMs. In topic modeling, we employed the BERTopic library
to discover the hidden topic patterns in the water-related tweets. We also
analyzed relevant tweets originating from different regions and countries to
explore global, regional, and country-specific issues and water-related
concerns. We also collected and manually annotated a large-scale dataset, which
is expected to facilitate future research on the topic.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 12:33:14 GMT"
}
] | 2025-03-14T00:00:00 | [
[
"Auyb",
"Muhammad Asif",
""
],
[
"Zamir",
"Muhammad Tayyab",
""
],
[
"Khan",
"Imran",
""
],
[
"Naseem",
"Hannia",
""
],
[
"Ahmad",
"Nasir",
""
],
[
"Ahmad",
"Kashif",
""
]
] | TITLE: Social Media and Artificial Intelligence for Sustainable Cities and
Societies: A Water Quality Analysis Use-case
ABSTRACT: This paper focuses on a very important societal challenge of water quality
analysis. Being one of the key factors in the economic and social development
of society, the provision of water and ensuring its quality has always remained
one of the top priorities of public authorities. To ensure the quality of
water, different methods for monitoring and assessing the water networks, such
as offline and online surveys, are used. However, these surveys have several
limitations, such as the limited number of participants and low frequency due
to the labor involved in conducting such surveys. In this paper, we propose a
Natural Language Processing (NLP) framework to automatically collect and
analyze water-related posts from social media for data-driven decisions. The
proposed framework is composed of two components, namely (i) text
classification, and (ii) topic modeling. For text classification, we propose a
merit-fusion-based framework incorporating several Large Language Models (LLMs)
where different weight selection and optimization methods are employed to
assign weights to the LLMs. In topic modeling, we employed the BERTopic library
to discover the hidden topic patterns in the water-related tweets. We also
analyzed relevant tweets originating from different regions and countries to
explore global, regional, and country-specific issues and water-related
concerns. We also collected and manually annotated a large-scale dataset, which
is expected to facilitate future research on the topic.
|
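A hedged sketch of the merit-fusion idea described above: class probabilities from several LLM-based classifiers are combined with per-model weights, which the paper selects and optimizes but which appear here as fixed placeholders. Topic discovery would be a separate BERTopic step (e.g. BERTopic().fit_transform(tweets)).

import numpy as np

def merit_fusion(model_probs, weights):
    """model_probs: (n_models, n_samples, n_classes); weights: (n_models,), summing to 1."""
    fused = np.tensordot(weights, model_probs, axes=1)     # weighted sum over models
    return fused.argmax(axis=1)                            # fused class predictions

preds = merit_fusion(np.random.dirichlet(np.ones(3), size=(2, 5)), np.array([0.6, 0.4]))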