id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2503.23081 | Anastasiia Fadeeva | Anastasiia Fadeeva, Vincent Coriou, Diego Antognini, Claudiu Musat,
Andrii Maksai | InkFM: A Foundational Model for Full-Page Online Handwritten Note
Understanding | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Tablets and styluses are increasingly popular for taking notes. To optimize
this experience and ensure a smooth and efficient workflow, it's important to
develop methods for accurately interpreting and understanding the content of
handwritten digital notes. We introduce a foundational model called InkFM for
analyzing full pages of handwritten content. Trained on a diverse mixture of
tasks, this model offers a unique combination of capabilities: recognizing text
in 28 different scripts, recognizing mathematical expressions, and segmenting
pages into distinct elements like text and drawings. Our results demonstrate
that these tasks can be effectively unified within a single model, achieving
out-of-the-box state-of-the-art text line segmentation quality that surpasses
public baselines like docTR. Fine-tuning or LoRA-tuning our base model on public
datasets further improves the quality of page segmentation and achieves
state-of-the-art text recognition (DeepWriting, CASIA, SCUT, and Mathwriting
datasets) and sketch classification (QuickDraw). This adaptability of InkFM
provides a powerful
starting point for developing applications with handwritten input.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 13:45:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fadeeva",
"Anastasiia",
""
],
[
"Coriou",
"Vincent",
""
],
[
"Antognini",
"Diego",
""
],
[
"Musat",
"Claudiu",
""
],
[
"Maksai",
"Andrii",
""
]
] | TITLE: InkFM: A Foundational Model for Full-Page Online Handwritten Note
Understanding
ABSTRACT: Tablets and styluses are increasingly popular for taking notes. To optimize
this experience and ensure a smooth and efficient workflow, it's important to
develop methods for accurately interpreting and understanding the content of
handwritten digital notes. We introduce a foundational model called InkFM for
analyzing full pages of handwritten content. Trained on a diverse mixture of
tasks, this model offers a unique combination of capabilities: recognizing text
in 28 different scripts, recognizing mathematical expressions, and segmenting
pages into distinct elements like text and drawings. Our results demonstrate
that these tasks can be effectively unified within a single model, achieving
out-of-the-box state-of-the-art text line segmentation quality that surpasses
public baselines like docTR. Fine-tuning or LoRA-tuning our base model on public
datasets further improves the quality of page segmentation and achieves
state-of-the-art text recognition (DeepWriting, CASIA, SCUT, and Mathwriting
datasets) and sketch classification (QuickDraw). This adaptability of InkFM
provides a powerful
starting point for developing applications with handwritten input.
|
2503.23083 | Ali J. Ghandour | Hasan Moughnieh, Mohamad Chalhoub, Hasan Nasrallah, Cristiano Nattero,
Paolo Campanella, Ali J. Ghandour | Efficient Adaptation For Remote Sensing Visual Grounding | null | null | null | null | cs.CV cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Foundation models have revolutionized artificial intelligence (AI), offering
remarkable capabilities across multi-modal domains. Their ability to precisely
locate objects in complex aerial and satellite images, using rich contextual
information and detailed object descriptions, is essential for remote sensing
(RS). These models can associate textual descriptions with object positions
through the Visual Grounding (VG) task, but due to domain-specific challenges,
their direct application to RS produces sub-optimal results. To address this,
we applied Parameter Efficient Fine Tuning (PEFT) techniques to adapt these
models for RS-specific VG tasks. Specifically, we evaluated LoRA placement
across different modules in Grounding DINO and used BitFit and adapters to
fine-tune the OFA foundation model pre-trained on general-purpose VG datasets.
This approach achieved performance comparable to or surpassing current State Of
The Art (SOTA) models while significantly reducing computational costs. This
study highlights the potential of PEFT techniques to advance efficient and
precise multi-modal analysis in RS, offering a practical and cost-effective
alternative to full model training.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 13:49:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Moughnieh",
"Hasan",
""
],
[
"Chalhoub",
"Mohamad",
""
],
[
"Nasrallah",
"Hasan",
""
],
[
"Nattero",
"Cristiano",
""
],
[
"Campanella",
"Paolo",
""
],
[
"Ghandour",
"Ali J.",
""
]
] | TITLE: Efficient Adaptation For Remote Sensing Visual Grounding
ABSTRACT: Foundation models have revolutionized artificial intelligence (AI), offering
remarkable capabilities across multi-modal domains. Their ability to precisely
locate objects in complex aerial and satellite images, using rich contextual
information and detailed object descriptions, is essential for remote sensing
(RS). These models can associate textual descriptions with object positions
through the Visual Grounding (VG) task, but due to domain-specific challenges,
their direct application to RS produces sub-optimal results. To address this,
we applied Parameter Efficient Fine Tuning (PEFT) techniques to adapt these
models for RS-specific VG tasks. Specifically, we evaluated LoRA placement
across different modules in Grounding DINO and used BitFit and adapters to
fine-tune the OFA foundation model pre-trained on general-purpose VG datasets.
This approach achieved performance comparable to or surpassing current State Of
The Art (SOTA) models while significantly reducing computational costs. This
study highlights the potential of PEFT techniques to advance efficient and
precise multi-modal analysis in RS, offering a practical and cost-effective
alternative to full model training.
|
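BitFit, one of the adapter techniques this entry applies to the OFA model, updates only bias terms while freezing all other weights. Below is a minimal PyTorch sketch of that idea; the toy two-layer network is a stand-in for the actual Grounding DINO / OFA backbones, which are not reproduced here.

```python
import torch
from torch import nn

def apply_bitfit(model: nn.Module) -> nn.Module:
    """BitFit-style parameter-efficient fine-tuning: train only the bias terms."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return model

# Toy stand-in for a visual grounding backbone (assumption, not the paper's model).
backbone = apply_bitfit(nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 4)))
trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

An optimizer built over only the trainable parameters then drives the fine-tuning, which is what keeps the computational cost low compared to full model training.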
2503.23088 | Himanshu Beniwal | Himanshu Beniwal, Reddybathuni Venkat, Rohit Kumar, Birudugadda
Srivibhav, Daksh Jain, Pavan Doddi, Eshwar Dhande, Adithya Ananth, Kuldeep,
Heer Kubadia, Pratham Sharda, Mayank Singh | UNITYAI-GUARD: Pioneering Toxicity Detection Across Low-Resource Indian
Languages | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This work introduces UnityAI-Guard, a framework for binary toxicity
classification targeting low-resource Indian languages. While existing systems
predominantly cater to high-resource languages, UnityAI-Guard addresses this
critical gap by developing state-of-the-art models for identifying toxic
content across diverse Brahmic/Indic scripts. Our approach achieves an
impressive average F1-score of 84.23% across seven languages, leveraging a
dataset of 888k training instances and 35k manually verified test instances. By
advancing multilingual content moderation for linguistically diverse regions,
UnityAI-Guard also provides public API access to foster broader adoption and
application.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 14:20:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Beniwal",
"Himanshu",
""
],
[
"Venkat",
"Reddybathuni",
""
],
[
"Kumar",
"Rohit",
""
],
[
"Srivibhav",
"Birudugadda",
""
],
[
"Jain",
"Daksh",
""
],
[
"Doddi",
"Pavan",
""
],
[
"Dhande",
"Eshwar",
""
],
[
"Ananth",
"Adithya",
""
],
[
"Kuldeep",
"",
""
],
[
"Kubadia",
"Heer",
""
],
[
"Sharda",
"Pratham",
""
],
[
"Singh",
"Mayank",
""
]
] | TITLE: UNITYAI-GUARD: Pioneering Toxicity Detection Across Low-Resource Indian
Languages
ABSTRACT: This work introduces UnityAI-Guard, a framework for binary toxicity
classification targeting low-resource Indian languages. While existing systems
predominantly cater to high-resource languages, UnityAI-Guard addresses this
critical gap by developing state-of-the-art models for identifying toxic
content across diverse Brahmic/Indic scripts. Our approach achieves an
impressive average F1-score of 84.23% across seven languages, leveraging a
dataset of 888k training instances and 35k manually verified test instances. By
advancing multilingual content moderation for linguistically diverse regions,
UnityAI-Guard also provides public API access to foster broader adoption and
application.
|
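The reported 84.23% average F1 is a per-language F1 averaged over the seven languages. A small scikit-learn sketch of that evaluation step is shown below; the language codes and label vectors are hypothetical placeholders, not UnityAI-Guard data.

```python
from sklearn.metrics import f1_score

# Hypothetical per-language (y_true, y_pred) pairs; 1 = toxic, 0 = non-toxic.
per_language = {
    "hi": ([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]),
    "ta": ([0, 0, 1, 0, 1], [0, 1, 1, 0, 1]),
    "te": ([1, 1, 0, 0, 1], [1, 1, 0, 1, 1]),
}

scores = {lang: f1_score(y_true, y_pred) for lang, (y_true, y_pred) in per_language.items()}
average_f1 = sum(scores.values()) / len(scores)
print({lang: round(v, 3) for lang, v in scores.items()}, "average:", round(average_f1, 4))
```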
2503.23094 | Andrea Boscolo Camiletto | Andrea Boscolo Camiletto, Jian Wang, Eduardo Alvarado, Rishabh Dabral,
Thabo Beeler, Marc Habermann, Christian Theobalt | FRAME: Floor-aligned Representation for Avatar Motion from Egocentric
Video | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Egocentric motion capture with a head-mounted body-facing stereo camera is
crucial for VR and AR applications but presents significant challenges such as
heavy occlusions and limited annotated real-world data. Existing methods rely
on synthetic pretraining and struggle to generate smooth and accurate
predictions in real-world settings, particularly for lower limbs. Our work
addresses these limitations by introducing a lightweight VR-based data
collection setup with on-board, real-time 6D pose tracking. Using this setup,
we collected the most extensive real-world dataset for ego-facing ego-mounted
cameras to date in terms of size and motion variability. Effectively integrating this
multimodal input -- device pose and camera feeds -- is challenging due to the
differing characteristics of each data source. To address this, we propose
FRAME, a simple yet effective architecture that combines device pose and camera
feeds for state-of-the-art body pose prediction through geometrically sound
multimodal integration and can run at 300 FPS on modern hardware. Lastly, we
showcase a novel training strategy to enhance the model's generalization
capabilities. Our approach exploits the problem's geometric properties,
yielding high-quality motion capture free from common artifacts in prior works.
Qualitative and quantitative evaluations, along with extensive comparisons,
demonstrate the effectiveness of our method. Data, code, and CAD designs will
be available at https://vcai.mpi-inf.mpg.de/projects/FRAME/
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 14:26:06 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Camiletto",
"Andrea Boscolo",
""
],
[
"Wang",
"Jian",
""
],
[
"Alvarado",
"Eduardo",
""
],
[
"Dabral",
"Rishabh",
""
],
[
"Beeler",
"Thabo",
""
],
[
"Habermann",
"Marc",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: FRAME: Floor-aligned Representation for Avatar Motion from Egocentric
Video
ABSTRACT: Egocentric motion capture with a head-mounted body-facing stereo camera is
crucial for VR and AR applications but presents significant challenges such as
heavy occlusions and limited annotated real-world data. Existing methods rely
on synthetic pretraining and struggle to generate smooth and accurate
predictions in real-world settings, particularly for lower limbs. Our work
addresses these limitations by introducing a lightweight VR-based data
collection setup with on-board, real-time 6D pose tracking. Using this setup,
we collected the most extensive real-world dataset for ego-facing ego-mounted
cameras to date in terms of size and motion variability. Effectively integrating this
multimodal input -- device pose and camera feeds -- is challenging due to the
differing characteristics of each data source. To address this, we propose
FRAME, a simple yet effective architecture that combines device pose and camera
feeds for state-of-the-art body pose prediction through geometrically sound
multimodal integration and can run at 300 FPS on modern hardware. Lastly, we
showcase a novel training strategy to enhance the model's generalization
capabilities. Our approach exploits the problem's geometric properties,
yielding high-quality motion capture free from common artifacts in prior works.
Qualitative and quantitative evaluations, along with extensive comparisons,
demonstrate the effectiveness of our method. Data, code, and CAD designs will
be available at https://vcai.mpi-inf.mpg.de/projects/FRAME/
|
2503.23106 | Dandan Zhong | Chao Tao, Dandan Zhong, Weiliang Mu, Zhuofei Du, and Haiyang Wu | A large-scale image-text dataset benchmark for farmland segmentation | null | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The traditional deep learning paradigm that solely relies on labeled data has
limitations in representing the spatial relationships between farmland elements
and the surrounding environment. It struggles to effectively model the dynamic
temporal evolution and spatial heterogeneity of farmland. Language, as a
structured knowledge carrier, can explicitly express the spatiotemporal
characteristics of farmland, such as its shape, distribution, and surrounding
environmental information. Therefore, a language-driven learning paradigm can
effectively alleviate the challenges posed by the spatiotemporal heterogeneity
of farmland. However, in the field of remote sensing imagery of farmland, there
is currently no comprehensive benchmark dataset to support this research
direction. To fill this gap, we introduce language-based descriptions of
farmland and develop the FarmSeg-VL dataset, the first fine-grained image-text
dataset designed for spatiotemporal farmland segmentation. Firstly, this article
proposes a semi-automatic annotation method that can accurately assign a
caption to each image, ensuring high data quality and semantic richness while
improving the efficiency of dataset construction. Secondly, FarmSeg-VL exhibits
significant spatiotemporal characteristics. In terms of the temporal dimension,
it covers all four seasons. In terms of the spatial dimension, it covers eight
typical agricultural regions across China. In addition, in terms of captions,
FarmSeg-VL covers rich spatiotemporal characteristics of farmland, including
its inherent properties, phenological characteristics, spatial distribution,
topographic and geomorphic features, and the distribution of surrounding
environments. Finally, we present a performance analysis of VLMs and of deep
learning models that rely solely on labels, both trained on FarmSeg-VL,
demonstrating its potential as a standard benchmark for farmland segmentation.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 14:55:46 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tao",
"Chao",
""
],
[
"Zhong",
"Dandan",
""
],
[
"Mu",
"Weiliang",
""
],
[
"Du",
"Zhuofei",
""
],
[
"Wu",
"Haiyang",
""
]
] | TITLE: A large-scale image-text dataset benchmark for farmland segmentation
ABSTRACT: The traditional deep learning paradigm that solely relies on labeled data has
limitations in representing the spatial relationships between farmland elements
and the surrounding environment. It struggles to effectively model the dynamic
temporal evolution and spatial heterogeneity of farmland. Language, as a
structured knowledge carrier, can explicitly express the spatiotemporal
characteristics of farmland, such as its shape, distribution, and surrounding
environmental information. Therefore, a language-driven learning paradigm can
effectively alleviate the challenges posed by the spatiotemporal heterogeneity
of farmland. However, in the field of remote sensing imagery of farmland, there
is currently no comprehensive benchmark dataset to support this research
direction. To fill this gap, we introduce language-based descriptions of
farmland and develop the FarmSeg-VL dataset, the first fine-grained image-text
dataset designed for spatiotemporal farmland segmentation. Firstly, this article
proposes a semi-automatic annotation method that can accurately assign a
caption to each image, ensuring high data quality and semantic richness while
improving the efficiency of dataset construction. Secondly, FarmSeg-VL exhibits
significant spatiotemporal characteristics. In terms of the temporal dimension,
it covers all four seasons. In terms of the spatial dimension, it covers eight
typical agricultural regions across China. In addition, in terms of captions,
FarmSeg-VL covers rich spatiotemporal characteristics of farmland, including
its inherent properties, phenological characteristics, spatial distribution,
topographic and geomorphic features, and the distribution of surrounding
environments. Finally, we present a performance analysis of VLMs and of deep
learning models that rely solely on labels, both trained on FarmSeg-VL,
demonstrating its potential as a standard benchmark for farmland segmentation.
|
2503.23109 | Xiaolu Liu | Xiaolu Liu, Ruizi Yang, Song Wang, Wentong Li, Junbo Chen, Jianke Zhu | Uncertainty-Instructed Structure Injection for Generalizable HD Map
Construction | 17 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable high-definition (HD) map construction is crucial for the driving
safety of autonomous vehicles. Although recent studies demonstrate improved
performance, their generalization capability across unfamiliar driving scenes
remains unexplored. To tackle this issue, we propose UIGenMap, an
uncertainty-instructed structure injection approach for generalizable HD map
vectorization, which concerns the uncertainty resampling in statistical
distribution and employs explicit instance features to reduce excessive
reliance on training data. Specifically, we introduce the perspective-view (PV)
detection branch to obtain explicit structural features, in which the
uncertainty-aware decoder is designed to dynamically sample probability
distributions considering the difference in scenes. With probabilistic
embedding and selection, UI2DPrompt is proposed to construct PV-learnable
prompts. These PV prompts are integrated into the map decoder by designed
hybrid injection to compensate for neglected instance structures. To ensure
real-time inference, a lightweight Mimic Query Distillation is designed to
learn from PV prompts, which can serve as an efficient alternative to the flow
of PV branches. Extensive experiments on challenging geographically disjoint
(geo-based) data splits demonstrate that our UIGenMap achieves superior
performance, with +5.7 mAP improvement on the nuScenes dataset. Source code
will be available at https://github.com/xiaolul2/UIGenMap.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 15:01:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Xiaolu",
""
],
[
"Yang",
"Ruizi",
""
],
[
"Wang",
"Song",
""
],
[
"Li",
"Wentong",
""
],
[
"Chen",
"Junbo",
""
],
[
"Zhu",
"Jianke",
""
]
] | TITLE: Uncertainty-Instructed Structure Injection for Generalizable HD Map
Construction
ABSTRACT: Reliable high-definition (HD) map construction is crucial for the driving
safety of autonomous vehicles. Although recent studies demonstrate improved
performance, their generalization capability across unfamiliar driving scenes
remains unexplored. To tackle this issue, we propose UIGenMap, an
uncertainty-instructed structure injection approach for generalizable HD map
vectorization, which concerns the uncertainty resampling in statistical
distribution and employs explicit instance features to reduce excessive
reliance on training data. Specifically, we introduce the perspective-view (PV)
detection branch to obtain explicit structural features, in which the
uncertainty-aware decoder is designed to dynamically sample probability
distributions considering the difference in scenes. With probabilistic
embedding and selection, UI2DPrompt is proposed to construct PV-learnable
prompts. These PV prompts are integrated into the map decoder by designed
hybrid injection to compensate for neglected instance structures. To ensure
real-time inference, a lightweight Mimic Query Distillation is designed to
learn from PV prompts, which can serve as an efficient alternative to the flow
of PV branches. Extensive experiments on challenging geographically disjoint
(geo-based) data splits demonstrate that our UIGenMap achieves superior
performance, with +5.7 mAP improvement on the nuScenes dataset. Source code
will be available at https://github.com/xiaolul2/UIGenMap.
|
2503.23121 | Ling-An Zeng | Guohong Huang, Ling-An Zeng, Zexin Zheng, Shengbo Gu, Wei-Shi Zheng | Efficient Explicit Joint-level Interaction Modeling with Mamba for
Text-guided HOI Generation | Accepted to ICME 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach for generating text-guided human-object
interactions (HOIs) that achieves explicit joint-level interaction modeling in
a computationally efficient manner. Previous methods represent the entire human
body as a single token, making it difficult to capture fine-grained joint-level
interactions and resulting in unrealistic HOIs. However, treating each
individual joint as a token would yield over twenty times more tokens,
increasing computational overhead. To address these challenges, we introduce an
Efficient Explicit Joint-level Interaction Model (EJIM). EJIM features a
Dual-branch HOI Mamba that separately and efficiently models spatiotemporal HOI
information, as well as a Dual-branch Condition Injector for integrating text
semantics and object geometry into human and object motions. Furthermore, we
design a Dynamic Interaction Block and a progressive masking mechanism to
iteratively filter out irrelevant joints, ensuring accurate and nuanced
interaction modeling. Extensive quantitative and qualitative evaluations on
public datasets demonstrate that EJIM surpasses previous works by a large
margin while using only 5\% of the inference time. Code is available
\href{https://github.com/Huanggh531/EJIM}{here}.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 15:23:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Huang",
"Guohong",
""
],
[
"Zeng",
"Ling-An",
""
],
[
"Zheng",
"Zexin",
""
],
[
"Gu",
"Shengbo",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: Efficient Explicit Joint-level Interaction Modeling with Mamba for
Text-guided HOI Generation
ABSTRACT: We propose a novel approach for generating text-guided human-object
interactions (HOIs) that achieves explicit joint-level interaction modeling in
a computationally efficient manner. Previous methods represent the entire human
body as a single token, making it difficult to capture fine-grained joint-level
interactions and resulting in unrealistic HOIs. However, treating each
individual joint as a token would yield over twenty times more tokens,
increasing computational overhead. To address these challenges, we introduce an
Efficient Explicit Joint-level Interaction Model (EJIM). EJIM features a
Dual-branch HOI Mamba that separately and efficiently models spatiotemporal HOI
information, as well as a Dual-branch Condition Injector for integrating text
semantics and object geometry into human and object motions. Furthermore, we
design a Dynamic Interaction Block and a progressive masking mechanism to
iteratively filter out irrelevant joints, ensuring accurate and nuanced
interaction modeling. Extensive quantitative and qualitative evaluations on
public datasets demonstrate that EJIM surpasses previous works by a large
margin while using only 5\% of the inference time. Code is available
\href{https://github.com/Huanggh531/EJIM}{here}.
|
2503.23131 | Jiaming Zhang | Alexander Vogel, Omar Moured, Yufan Chen, Jiaming Zhang, Rainer
Stiefelhagen | RefChartQA: Grounding Visual Answer on Chart Images through Instruction
Tuning | All models and code will be publicly available at
https://github.com/moured/RefChartQA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Vision Language Models (VLMs) have increasingly emphasized document
visual grounding to achieve better human-computer interaction, accessibility,
and detailed understanding. However, its application to visualizations such as
charts remains under-explored due to the inherent complexity of interleaved
visual-numerical relationships in chart images. Existing chart understanding
methods primarily focus on answering questions without explicitly identifying
the visual elements that support their predictions. To bridge this gap, we
introduce RefChartQA, a novel benchmark that integrates Chart Question
Answering (ChartQA) with visual grounding, enabling models to refer to elements at
multiple granularities within chart images. Furthermore, we conduct a
comprehensive evaluation by instruction-tuning 5 state-of-the-art VLMs across
different categories. Our experiments demonstrate that incorporating spatial
awareness via grounding improves response accuracy by over 15%, reducing
hallucinations, and improving model reliability. Additionally, we identify key
factors influencing text-spatial alignment, such as architectural improvements
in TinyChart, which leverages a token-merging module for enhanced feature
fusion. Our dataset is open-sourced for community development and further
advancements. All models and code will be publicly available at
https://github.com/moured/RefChartQA.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 15:50:08 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Vogel",
"Alexander",
""
],
[
"Moured",
"Omar",
""
],
[
"Chen",
"Yufan",
""
],
[
"Zhang",
"Jiaming",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | TITLE: RefChartQA: Grounding Visual Answer on Chart Images through Instruction
Tuning
ABSTRACT: Recently, Vision Language Models (VLMs) have increasingly emphasized document
visual grounding to achieve better human-computer interaction, accessibility,
and detailed understanding. However, its application to visualizations such as
charts remains under-explored due to the inherent complexity of interleaved
visual-numerical relationships in chart images. Existing chart understanding
methods primarily focus on answering questions without explicitly identifying
the visual elements that support their predictions. To bridge this gap, we
introduce RefChartQA, a novel benchmark that integrates Chart Question
Answering (ChartQA) with visual grounding, enabling models to refer to elements at
multiple granularities within chart images. Furthermore, we conduct a
comprehensive evaluation by instruction-tuning 5 state-of-the-art VLMs across
different categories. Our experiments demonstrate that incorporating spatial
awareness via grounding improves response accuracy by over 15%, reducing
hallucinations, and improving model reliability. Additionally, we identify key
factors influencing text-spatial alignment, such as architectural improvements
in TinyChart, which leverages a token-merging module for enhanced feature
fusion. Our dataset is open-sourced for community development and further
advancements. All models and code will be publicly available at
https://github.com/moured/RefChartQA.
|
2503.23162 | Zhenyu Tang | Zhenyu Tang, Chaoran Feng, Xinhua Cheng, Wangbo Yu, Junwu Zhang, Yuan
Liu, Xiaoxiao Long, Wenping Wang, Li Yuan | NeuralGS: Bridging Neural Fields and 3D Gaussian Splatting for Compact
3D Representations | Project page: https://pku-yuangroup.github.io/NeuralGS/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) demonstrates superior quality and rendering
speed, but with millions of 3D Gaussians and significant storage and
transmission costs. Recent 3DGS compression methods mainly concentrate on
compressing Scaffold-GS, achieving impressive performance but with an
additional voxel structure and a complex encoding and quantization strategy. In
this paper, we aim to develop a simple yet effective method called NeuralGS
that explores in another way to compress the original 3DGS into a compact
representation without the voxel structure and complex quantization strategies.
Our observation is that neural fields like NeRF can represent complex 3D scenes
with Multi-Layer Perceptron (MLP) neural networks using only a few megabytes.
Thus, NeuralGS effectively adopts the neural field representation to encode the
attributes of 3D Gaussians with MLPs, only requiring a small storage size even
for a large-scale scene. To achieve this, we adopt a clustering strategy and
fit the Gaussians with different tiny MLPs for each cluster, based on
importance scores of Gaussians as fitting weights. We experiment on multiple
datasets, achieving a 45-times average model size reduction without harming the
visual quality. The compression performance of our method on original 3DGS is
comparable to the dedicated Scaffold-GS-based compression methods, which
demonstrate the huge potential of directly compressing original 3DGS with
neural fields.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 17:36:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tang",
"Zhenyu",
""
],
[
"Feng",
"Chaoran",
""
],
[
"Cheng",
"Xinhua",
""
],
[
"Yu",
"Wangbo",
""
],
[
"Zhang",
"Junwu",
""
],
[
"Liu",
"Yuan",
""
],
[
"Long",
"Xiaoxiao",
""
],
[
"Wang",
"Wenping",
""
],
[
"Yuan",
"Li",
""
]
] | TITLE: NeuralGS: Bridging Neural Fields and 3D Gaussian Splatting for Compact
3D Representations
ABSTRACT: 3D Gaussian Splatting (3DGS) demonstrates superior quality and rendering
speed, but with millions of 3D Gaussians and significant storage and
transmission costs. Recent 3DGS compression methods mainly concentrate on
compressing Scaffold-GS, achieving impressive performance but with an
additional voxel structure and a complex encoding and quantization strategy. In
this paper, we aim to develop a simple yet effective method called NeuralGS
that explores in another way to compress the original 3DGS into a compact
representation without the voxel structure and complex quantization strategies.
Our observation is that neural fields like NeRF can represent complex 3D scenes
with Multi-Layer Perceptron (MLP) neural networks using only a few megabytes.
Thus, NeuralGS effectively adopts the neural field representation to encode the
attributes of 3D Gaussians with MLPs, only requiring a small storage size even
for a large-scale scene. To achieve this, we adopt a clustering strategy and
fit the Gaussians with different tiny MLPs for each cluster, based on
importance scores of Gaussians as fitting weights. We experiment on multiple
datasets, achieving a 45-times average model size reduction without harming the
visual quality. The compression performance of our method on original 3DGS is
comparable to the dedicated Scaffold-GS-based compression methods, which
demonstrate the huge potential of directly compressing original 3DGS with
neural fields.
|
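The core idea here, replacing per-Gaussian attribute storage with small per-cluster MLPs, can be illustrated with a toy PyTorch module. The attribute dimension and layer sizes below are assumptions for illustration, not NeuralGS's actual configuration.

```python
import torch
from torch import nn

class TinyGaussianField(nn.Module):
    """Minimal sketch: a small MLP maps a Gaussian's position to its attributes
    (opacity, scale, rotation, colour coefficients), standing in for the idea of
    encoding one cluster of Gaussians with one compact neural field."""
    def __init__(self, attr_dim: int = 56, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, attr_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(xyz)

# One such tiny MLP per cluster; positions are illustrative random points.
print(TinyGaussianField()(torch.randn(1024, 3)).shape)
```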
2503.23163 | Yuxin Lu | Yuxin Lu, Yu-Ying Chuang, and R.Harald Baayen | The realization of tones in spontaneous spoken Taiwan Mandarin: a
corpus-based survey and theory-driven computational modeling | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A growing body of literature has demonstrated that semantics can co-determine
fine phonetic detail. However, the complex interplay between phonetic
realization and semantics remains understudied, particularly in pitch
realization. The current study investigates the tonal realization of Mandarin
disyllabic words with all 20 possible combinations of two tones, as found in a
corpus of Taiwan Mandarin spontaneous speech. We made use of Generalized
Additive Mixed Models (GAMs) to model f0 contours as a function of a series of
predictors, including gender, tonal context, tone pattern, speech rate, word
position, bigram probability, speaker and word. In the GAM analysis, word and
sense emerged as crucial predictors of f0 contours, with effect sizes that
exceed those of tone pattern. For each word token in our dataset, we then
obtained a contextualized embedding by applying the GPT-2 large language model
to the context of that token in the corpus. We show that the pitch contours of
word tokens can be predicted to a considerable extent from these contextualized
embeddings, which approximate token-specific meanings in contexts of use. The
results of our corpus study show that meaning in context and phonetic
realization are far more entangled than standard linguistic theory predicts.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 17:39:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Yuxin",
""
],
[
"Chuang",
"Yu-Ying",
""
],
[
"Baayen",
"R. Harald",
""
]
] | TITLE: The realization of tones in spontaneous spoken Taiwan Mandarin: a
corpus-based survey and theory-driven computational modeling
ABSTRACT: A growing body of literature has demonstrated that semantics can co-determine
fine phonetic detail. However, the complex interplay between phonetic
realization and semantics remains understudied, particularly in pitch
realization. The current study investigates the tonal realization of Mandarin
disyllabic words with all 20 possible combinations of two tones, as found in a
corpus of Taiwan Mandarin spontaneous speech. We made use of Generalized
Additive Mixed Models (GAMs) to model f0 contours as a function of a series of
predictors, including gender, tonal context, tone pattern, speech rate, word
position, bigram probability, speaker and word. In the GAM analysis, word and
sense emerged as crucial predictors of f0 contours, with effect sizes that
exceed those of tone pattern. For each word token in our dataset, we then
obtained a contextualized embedding by applying the GPT-2 large language model
to the context of that token in the corpus. We show that the pitch contours of
word tokens can be predicted to a considerable extent from these contextualized
embeddings, which approximate token-specific meanings in contexts of use. The
results of our corpus study show that meaning in context and phonetic
realization are far more entangled than standard linguistic theory predicts.
|
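A minimal GAM of f0 as a smooth of time plus a factor term for tone pattern can be sketched as below, using pyGAM as a stand-in for mgcv-style models; the simulated data, predictor set, and library choice are assumptions, and the study's GPT-2 embedding predictors are not reproduced here.

```python
import numpy as np
from pygam import LinearGAM, s, f  # pyGAM as a stand-in for mgcv-style GAMs (assumption)

rng = np.random.default_rng(0)
n = 500
time = rng.uniform(0, 1, n)             # normalized position within the word
tone_pattern = rng.integers(0, 20, n)   # one of the 20 disyllabic tone combinations
f0 = 200 + 30 * np.sin(2 * np.pi * time) + 2 * tone_pattern + rng.normal(0, 8, n)

X = np.column_stack([time, tone_pattern])
# Smooth term over time plus a factor term for tone pattern.
gam = LinearGAM(s(0) + f(1)).fit(X, f0)
gam.summary()
```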
2503.23168 | Xiaoqing Zhang | Ziming Chen and Xiaoqing Zhang | A Novel Transformed Fibered Rank Approximation with Total Variation
Regularization for Tensor Completion | null | null | null | null | math.NA cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, tensor fibered rank has demonstrated impressive performance by
effectively leveraging the global low-rank property in all directions for
low-rank tensor completion (LRTC). However, it still has some limitations.
Firstly, the typical tensor fibered rank approximation based on tensor nuclear
norm (TNN) uses a fixed, data-independent transformation, which may not
be optimal for the underlying tensor structure. Secondly, it ignores the local
piecewise smoothness of the dataset. To address these limitations, we present a
nonconvex learnable transformed fibered nuclear norm (NLTFNN) model for
LRTC, which uses a learnable transformed fibered nuclear norm with
Log-Determinant (LTFNNLog) as tensor fibered rank approximation, and employs a
total variation (TV) regularization to explore local piecewise smoothness. An
efficient algorithm based on the alternating direction method of multipliers
(ADMM) is developed to solve NLTFNN and the convergence of the algorithm is
proved theoretically. Experiments on various datasets show the superiority of
NLTFNN over several existing methods.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 17:51:24 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Ziming",
""
],
[
"Zhang",
"Xiaoqing",
""
]
] | TITLE: A Novel Transformed Fibered Rank Approximation with Total Variation
Regularization for Tensor Completion
ABSTRACT: Recently, tensor fibered rank has demonstrated impressive performance by
effectively leveraging the global low-rank property in all directions for
low-rank tensor completion (LRTC). However, it still has some limitations.
Firstly, the typical tensor fibered rank approximation based on tensor nuclear
norm (TNN) uses a fixed, data-independent transformation, which may not
be optimal for the underlying tensor structure. Secondly, it ignores the local
piecewise smoothness of the dataset. To address these limitations, we present a
nonconvex learnable transformed fibered nuclear norm (NLTFNN) model for
LRTC, which uses a learnable transformed fibered nuclear norm with
Log-Determinant (LTFNNLog) as tensor fibered rank approximation, and employs a
total variation (TV) regularization to explore local piecewise smoothness. An
efficient algorithm based on the alternating direction method of multipliers
(ADMM) is developed to solve NLTFNN and the convergence of the algorithm is
proved theoretically. Experiments on various datasets show the superiority of
NLTFNN over several existing methods.
|
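A central building block of TNN/ADMM-style tensor completion is singular value thresholding, the proximal operator of the nuclear norm applied to each slice in the transformed domain. The NumPy sketch below shows that generic operator on a matrix; it is an illustration of the subproblem, not the NLTFNN algorithm itself.

```python
import numpy as np

def singular_value_thresholding(M: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of the nuclear norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Example: recover a low-rank matrix corrupted by noise.
rng = np.random.default_rng(1)
low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
noisy = low_rank + 0.1 * rng.standard_normal((50, 40))
recovered = singular_value_thresholding(noisy, tau=1.0)
print("relative error:", np.linalg.norm(recovered - low_rank) / np.linalg.norm(low_rank))
```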
2503.23175 | Emanuele Mezzi | Emanuele Mezzi, Fabio Massacci and Katja Tuma | Large Language Models are Unreliable for Cyber Threat Intelligence | null | null | null | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several recent works have argued that Large Language Models (LLMs) can be
used to tame the data deluge in the cybersecurity field, by improving the
automation of Cyber Threat Intelligence (CTI) tasks. This work presents an
evaluation methodology that, in addition to allowing LLMs to be tested on CTI
tasks with zero-shot learning, few-shot learning, and fine-tuning, also allows
their consistency and confidence level to be quantified. We run experiments with
three state-of-the-art LLMs and a dataset of 350 threat intelligence reports
and present new evidence of potential security risks in relying on LLMs for
CTI. We show how LLMs cannot guarantee sufficient performance on real-size
reports while also being inconsistent and overconfident. Few-shot learning and
fine-tuning only partially improve the results, thus raising doubts about the
possibility of using LLMs for CTI scenarios, where labelled datasets are
lacking and where confidence is a fundamental factor.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 18:09:36 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mezzi",
"Emanuele",
""
],
[
"Massacci",
"Fabio",
""
],
[
"Tuma",
"Katja",
""
]
] | TITLE: Large Language Models are Unreliable for Cyber Threat Intelligence
ABSTRACT: Several recent works have argued that Large Language Models (LLMs) can be
used to tame the data deluge in the cybersecurity field, by improving the
automation of Cyber Threat Intelligence (CTI) tasks. This work presents an
evaluation methodology that, in addition to allowing LLMs to be tested on CTI
tasks with zero-shot learning, few-shot learning, and fine-tuning, also allows
their consistency and confidence level to be quantified. We run experiments with
three state-of-the-art LLMs and a dataset of 350 threat intelligence reports
and present new evidence of potential security risks in relying on LLMs for
CTI. We show how LLMs cannot guarantee sufficient performance on real-size
reports while also being inconsistent and overconfident. Few-shot learning and
fine-tuning only partially improve the results, thus raising doubts about the
possibility of using LLMs for CTI scenarios, where labelled datasets are
lacking and where confidence is a fundamental factor.
|
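Consistency of an LLM on CTI extraction can be quantified, for instance, as the agreement rate of repeated runs on the same report. The sketch below shows one such simple measure; it illustrates the idea only, and is not the paper's exact metric or data.

```python
from collections import Counter

def consistency(runs: list[list[str]]) -> float:
    """Average, over reports, of the fraction of repeated runs that agree with
    the majority answer. Each inner list holds the model's answers for one report."""
    scores = []
    for answers in runs:
        counts = Counter(answers)
        scores.append(counts.most_common(1)[0][1] / len(answers))
    return sum(scores) / len(scores)

# Hypothetical repeated extractions of the attributed threat actor for three reports.
print(consistency([
    ["APT28", "APT28", "APT28"],
    ["Lazarus", "APT38", "Lazarus"],
    ["FIN7", "FIN7", "Carbanak"],
]))
```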
2503.23181 | Sunoh Kim | Sunoh Kim, Daeho Um | Enhancing Weakly Supervised Video Grounding via Diverse Inference
Strategies for Boundary and Prediction Selection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Weakly supervised video grounding aims to localize temporal boundaries
relevant to a given query without explicit ground-truth temporal boundaries.
While existing methods primarily use Gaussian-based proposals, they overlook
the importance of (1) boundary prediction and (2) top-1 prediction selection
during inference. In their boundary prediction, boundaries are simply set at
half a standard deviation away from a Gaussian mean on both sides, which may
not accurately capture the optimal boundaries. In the top-1 prediction process,
these existing methods rely heavily on intersections with other proposals,
without considering the varying quality of each proposal. To address these
issues, we explore various inference strategies by introducing (1) novel
boundary prediction methods to capture diverse boundaries from multiple
Gaussians and (2) new selection methods that take proposal quality into
account. Extensive experiments on the ActivityNet Captions and Charades-STA
datasets validate the effectiveness of our inference strategies, demonstrating
performance improvements without requiring additional training.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 18:33:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kim",
"Sunoh",
""
],
[
"Um",
"Daeho",
""
]
] | TITLE: Enhancing Weakly Supervised Video Grounding via Diverse Inference
Strategies for Boundary and Prediction Selection
ABSTRACT: Weakly supervised video grounding aims to localize temporal boundaries
relevant to a given query without explicit ground-truth temporal boundaries.
While existing methods primarily use Gaussian-based proposals, they overlook
the importance of (1) boundary prediction and (2) top-1 prediction selection
during inference. In their boundary prediction, boundaries are simply set at
half a standard deviation away from a Gaussian mean on both sides, which may
not accurately capture the optimal boundaries. In the top-1 prediction process,
these existing methods rely heavily on intersections with other proposals,
without considering the varying quality of each proposal. To address these
issues, we explore various inference strategies by introducing (1) novel
boundary prediction methods to capture diverse boundaries from multiple
Gaussians and (2) new selection methods that take proposal quality into
account. Extensive experiments on the ActivityNet Captions and Charades-STA
datasets validate the effectiveness of our inference strategies, demonstrating
performance improvements without requiring additional training.
|
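The fixed rule criticized above places boundaries half a standard deviation from the Gaussian mean; a diverse-boundary strategy simply generates segments at several widths and lets a later scoring step pick among them. A minimal sketch with hypothetical widths follows.

```python
def candidate_boundaries(mu: float, sigma: float, widths=(0.5, 1.0, 1.5)):
    """Candidate (start, end) segments, in normalized video time, around one Gaussian
    proposal. A width of 0.5 reproduces the fixed rule of prior work; the others are
    the diverse alternatives a richer inference strategy can score and select among."""
    return [(max(mu - k * sigma, 0.0), min(mu + k * sigma, 1.0)) for k in widths]

print(candidate_boundaries(mu=0.42, sigma=0.08))
```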
2503.23186 | Vishnu Vardhan Baligodugula | Vishnu Vardhan Baligodugula, Fathi Amsaad | Optimizing Distributed Training Approaches for Scaling Neural Networks | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comparative analysis of distributed training strategies
for large-scale neural networks, focusing on data parallelism, model
parallelism, and hybrid approaches. We evaluate these strategies on image
classification tasks using the CIFAR-100 dataset, measuring training time,
convergence rate, and model accuracy. Our experimental results demonstrate that
hybrid parallelism achieves a 3.2x speedup compared to single-device training
while maintaining comparable accuracy. We propose an adaptive scheduling
algorithm that dynamically switches between parallelism strategies based on
network characteristics and available computational resources, resulting in an
additional 18% improvement in training efficiency.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 18:51:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Baligodugula",
"Vishnu Vardhan",
""
],
[
"Amsaad",
"Fathi",
""
]
] | TITLE: Optimizing Distributed Training Approaches for Scaling Neural Networks
ABSTRACT: This paper presents a comparative analysis of distributed training strategies
for large-scale neural networks, focusing on data parallelism, model
parallelism, and hybrid approaches. We evaluate these strategies on image
classification tasks using the CIFAR-100 dataset, measuring training time,
convergence rate, and model accuracy. Our experimental results demonstrate that
hybrid parallelism achieves a 3.2x speedup compared to single-device training
while maintaining comparable accuracy. We propose an adaptive scheduling
algorithm that dynamically switches between parallelism strategies based on
network characteristics and available computational resources, resulting in an
additional 18% improvement in training efficiency.
|
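The adaptive scheduler described above switches between parallelism strategies based on resource characteristics. The toy decision rule below illustrates the flavour of such a switch; the inputs and thresholds are assumptions for illustration, not the paper's algorithm.

```python
def choose_parallelism(model_size_gb: float, device_mem_gb: float, n_devices: int) -> str:
    """Toy heuristic: use data parallelism when a full replica fits comfortably on one
    device, hybrid parallelism when sharding over a few devices suffices, and pure
    model parallelism otherwise."""
    if model_size_gb <= 0.5 * device_mem_gb:
        return "data"    # replicate the model; all-reduce gradients across devices
    if model_size_gb <= n_devices * 0.5 * device_mem_gb:
        return "hybrid"  # shard layers across a subset of devices, replicate the shards
    return "model"       # the model must be split across all available devices

print(choose_parallelism(model_size_gb=12.0, device_mem_gb=16.0, n_devices=4))
```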
2503.23204 | Aden Haussmann | Aden Haussmann | The Challenge of Achieving Attributability in Multilingual Table-to-Text
Generation with Question-Answer Blueprints | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Multilingual Natural Language Generation (NLG) is challenging due to the lack
of training data for low-resource languages. However, some low-resource
languages have up to tens of millions of speakers globally, making it important
to improve NLG tools for them. Table-to-Text NLG is an excellent measure of
models' reasoning abilities but is very challenging in the multilingual
setting. System outputs are often not attributable, or faithful, to the data in
the source table. Intermediate planning techniques like Question-Answer (QA)
blueprints have been shown to improve attributability on summarisation tasks.
This work explores whether QA blueprints make multilingual Table-to-Text
outputs more attributable to the input tables. This paper extends the
challenging multilingual Table-to-Text dataset, TaTA, which includes African
languages, with QA blueprints. Sequence-to-sequence language models are then
finetuned on this dataset, with and without blueprints. Results show that QA
blueprints improve performance for models finetuned and evaluated only on
English examples, but do not demonstrate gains in the multilingual setting.
This is due to inaccuracies in machine translating the blueprints from English
into target languages when generating the training data, and models failing to
rely closely on the blueprints they generate. An in-depth analysis is conducted
on why this is challenging.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:04:00 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Haussmann",
"Aden",
""
]
] | TITLE: The Challenge of Achieving Attributability in Multilingual Table-to-Text
Generation with Question-Answer Blueprints
ABSTRACT: Multilingual Natural Language Generation (NLG) is challenging due to the lack
of training data for low-resource languages. However, some low-resource
languages have up to tens of millions of speakers globally, making it important
to improve NLG tools for them. Table-to-Text NLG is an excellent measure of
models' reasoning abilities but is very challenging in the multilingual
setting. System outputs are often not attributable, or faithful, to the data in
the source table. Intermediate planning techniques like Question-Answer (QA)
blueprints have been shown to improve attributability on summarisation tasks.
This work explores whether QA blueprints make multilingual Table-to-Text
outputs more attributable to the input tables. This paper extends the
challenging multilingual Table-to-Text dataset, TaTA, which includes African
languages, with QA blueprints. Sequence-to-sequence language models are then
finetuned on this dataset, with and without blueprints. Results show that QA
blueprints improve performance for models finetuned and evaluated only on
English examples, but do not demonstrate gains in the multilingual setting.
This is due to inaccuracies in machine translating the blueprints from English
into target languages when generating the training data, and models failing to
rely closely on the blueprints they generate. An in-depth analysis is conducted
on why this is challenging.
|
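QA blueprints are typically serialized into the seq2seq target so the model first generates the plan and then the text conditioned on it. The sketch below shows one plausible serialization; the exact format used in the extended TaTA data is an assumption.

```python
def blueprint_target(qa_pairs: list[tuple[str, str]], summary: str) -> str:
    """Serialize a question-answer blueprint followed by the final verbalization
    as a single training target for a sequence-to-sequence model (illustrative format)."""
    plan = " ".join(f"Q: {q} A: {a}" for q, a in qa_pairs)
    return f"{plan} || {summary}"

print(blueprint_target(
    [("Which region had the highest rate?", "The Northern region"),
     ("What was the rate?", "42%")],
    "The Northern region recorded the highest rate at 42%.",
))
```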
2503.23205 | Jianfang Chen | Jianfang Chen, Kai Zhang, Aoran Gan, Shiwei Tong, Shuanghong Shen, Qi
Liu | Enhancing Knowledge Graph Completion with Entity Neighborhood and
Relation Context | null | null | null | null | cs.CL cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Graph Completion (KGC) aims to infer missing information in
Knowledge Graphs (KGs) to address their inherent incompleteness. Traditional
structure-based KGC methods, while effective, face significant computational
demands and scalability challenges due to the need for dense embedding learning
and scoring all entities in the KG for each prediction. Recent text-based
approaches using language models like T5 and BERT have mitigated these issues
by converting KG triples into text for reasoning. However, they often fail to
fully utilize contextual information, focusing mainly on the neighborhood of
the entity and neglecting the context of the relation. To address this issue,
we propose KGC-ERC, a framework that integrates both types of context to enrich
the input of generative language models and enhance their reasoning
capabilities. Additionally, we introduce a sampling strategy to effectively
select relevant context within input token constraints, which optimizes the
utilization of contextual information and potentially improves model
performance. Experiments on the Wikidata5M, Wiki27K, and FB15K-237-N datasets
show that KGC-ERC outperforms or matches state-of-the-art baselines in
predictive performance and scalability.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:04:50 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Jianfang",
""
],
[
"Zhang",
"Kai",
""
],
[
"Gan",
"Aoran",
""
],
[
"Tong",
"Shiwei",
""
],
[
"Shen",
"Shuanghong",
""
],
[
"Liu",
"Qi",
""
]
] | TITLE: Enhancing Knowledge Graph Completion with Entity Neighborhood and
Relation Context
ABSTRACT: Knowledge Graph Completion (KGC) aims to infer missing information in
Knowledge Graphs (KGs) to address their inherent incompleteness. Traditional
structure-based KGC methods, while effective, face significant computational
demands and scalability challenges due to the need for dense embedding learning
and scoring all entities in the KG for each prediction. Recent text-based
approaches using language models like T5 and BERT have mitigated these issues
by converting KG triples into text for reasoning. However, they often fail to
fully utilize contextual information, focusing mainly on the neighborhood of
the entity and neglecting the context of the relation. To address this issue,
we propose KGC-ERC, a framework that integrates both types of context to enrich
the input of generative language models and enhance their reasoning
capabilities. Additionally, we introduce a sampling strategy to effectively
select relevant context within input token constraints, which optimizes the
utilization of contextual information and potentially improves model
performance. Experiments on the Wikidata5M, Wiki27K, and FB15K-237-N datasets
show that KGC-ERC outperforms or matches state-of-the-art baselines in
predictive performance and scalability.
|
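Selecting neighbourhood and relation context under an input token limit can be done greedily by relevance score. The sketch below illustrates that kind of budgeted sampling; the scoring, token counting, and example triples are placeholders, since KGC-ERC's exact strategy is not detailed in the abstract.

```python
def sample_context(snippets: list[tuple[float, str]], budget: int) -> list[str]:
    """Greedily keep the highest-scoring context snippets that fit a token budget.
    Snippets are (relevance_score, text) pairs, e.g. neighbourhood triples of the
    head entity and example usages of the query relation."""
    count_tokens = lambda text: len(text.split())  # crude whitespace count (placeholder)
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda pair: pair[0], reverse=True):
        cost = count_tokens(text)
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return chosen

print(sample_context(
    [(0.9, "Barack Obama member_of Democratic Party"),
     (0.7, "Barack Obama born_in Honolulu"),
     (0.4, "Honolulu located_in Hawaii")],
    budget=9,
))
```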
2503.23213 | Diana Bolanos | Diana Bolanos, Mohammadmehdi Ataei, Daniele Grandi, Kosa
Goucher-Lambert | RECALL-MM: A Multimodal Dataset of Consumer Product Recalls for Risk
Analysis using Computational Methods and Large Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Product recalls provide valuable insights into potential risks and hazards
within the engineering design process, yet their full potential remains
underutilized. In this study, we curate data from the United States Consumer
Product Safety Commission (CPSC) recalls database to develop a multimodal
dataset, RECALL-MM, that informs data-driven risk assessment using historical
information, and augment it using generative methods. Patterns in the dataset
highlight specific areas where improved safety measures could have significant
impact. We extend our analysis by demonstrating interactive clustering maps
that embed all recalls into a shared latent space based on recall descriptions
and product names. Leveraging these data-driven tools, we explore three case
studies to demonstrate the dataset's utility in identifying product risks and
guiding safer design decisions. The first two case studies illustrate how
designers can visualize patterns across recalled products and situate new
product ideas within the broader recall landscape to proactively anticipate
hazards. In the third case study, we extend our approach by employing a large
language model (LLM) to predict potential hazards based solely on product
images. This demonstrates the model's ability to leverage visual context to
identify risk factors, revealing strong alignment with historical recall data
across many hazard categories. However, the analysis also highlights areas
where hazard prediction remains challenging, underscoring the importance of
risk awareness throughout the design process. Collectively, this work aims to
bridge the gap between historical recall data and future product safety,
presenting a scalable, data-driven approach to safer engineering design.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:27:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bolanos",
"Diana",
""
],
[
"Ataei",
"Mohammadmehdi",
""
],
[
"Grandi",
"Daniele",
""
],
[
"Goucher-Lambert",
"Kosa",
""
]
] | TITLE: RECALL-MM: A Multimodal Dataset of Consumer Product Recalls for Risk
Analysis using Computational Methods and Large Language Models
ABSTRACT: Product recalls provide valuable insights into potential risks and hazards
within the engineering design process, yet their full potential remains
underutilized. In this study, we curate data from the United States Consumer
Product Safety Commission (CPSC) recalls database to develop a multimodal
dataset, RECALL-MM, that informs data-driven risk assessment using historical
information, and augment it using generative methods. Patterns in the dataset
highlight specific areas where improved safety measures could have significant
impact. We extend our analysis by demonstrating interactive clustering maps
that embed all recalls into a shared latent space based on recall descriptions
and product names. Leveraging these data-driven tools, we explore three case
studies to demonstrate the dataset's utility in identifying product risks and
guiding safer design decisions. The first two case studies illustrate how
designers can visualize patterns across recalled products and situate new
product ideas within the broader recall landscape to proactively anticipate
hazards. In the third case study, we extend our approach by employing a large
language model (LLM) to predict potential hazards based solely on product
images. This demonstrates the model's ability to leverage visual context to
identify risk factors, revealing strong alignment with historical recall data
across many hazard categories. However, the analysis also highlights areas
where hazard prediction remains challenging, underscoring the importance of
risk awareness throughout the design process. Collectively, this work aims to
bridge the gap between historical recall data and future product safety,
presenting a scalable, data-driven approach to safer engineering design.
|
2503.23214 | Vincent Gbouna Zakka Mr | Vincent Gbouna Zakka, Zhuangzhuang Dai, Luis J. Manso | Action Recognition in Real-World Ambient Assisted Living Environment | null | null | 10.26599/BDMA.2025.9020003 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The growing ageing population and their preference to maintain independence
by living in their own homes require proactive strategies to ensure safety and
support. Ambient Assisted Living (AAL) technologies have emerged to facilitate
ageing in place by offering continuous monitoring and assistance within the
home. Within AAL technologies, action recognition plays a crucial role in
interpreting human activities and detecting incidents like falls, mobility
decline, or unusual behaviours that may signal worsening health conditions.
However, action recognition in practical AAL applications presents challenges,
including occlusions, noisy data, and the need for real-time performance. While
advancements have been made in accuracy, robustness to noise, and computation
efficiency, achieving a balance among them all remains a challenge. To address
this challenge, this paper introduces the Robust and Efficient Temporal
Convolution network (RE-TCN), which comprises three main elements: Adaptive
Temporal Weighting (ATW), Depthwise Separable Convolutions (DSC), and data
augmentation techniques. These elements aim to enhance the model's accuracy,
robustness against noise and occlusion, and computational efficiency within
real-world AAL contexts. RE-TCN outperforms existing models in terms of
accuracy, noise and occlusion robustness, and has been validated on four
benchmark datasets: NTU RGB+D 60, Northwestern-UCLA, SHREC'17, and DHG-14/28.
The code is publicly available at: https://github.com/Gbouna/RE-TCN
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:32:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zakka",
"Vincent Gbouna",
""
],
[
"Dai",
"Zhuangzhuang",
""
],
[
"Manso",
"Luis J.",
""
]
] | TITLE: Action Recognition in Real-World Ambient Assisted Living Environment
ABSTRACT: The growing ageing population and their preference to maintain independence
by living in their own homes require proactive strategies to ensure safety and
support. Ambient Assisted Living (AAL) technologies have emerged to facilitate
ageing in place by offering continuous monitoring and assistance within the
home. Within AAL technologies, action recognition plays a crucial role in
interpreting human activities and detecting incidents like falls, mobility
decline, or unusual behaviours that may signal worsening health conditions.
However, action recognition in practical AAL applications presents challenges,
including occlusions, noisy data, and the need for real-time performance. While
advancements have been made in accuracy, robustness to noise, and computation
efficiency, achieving a balance among them all remains a challenge. To address
this challenge, this paper introduces the Robust and Efficient Temporal
Convolution network (RE-TCN), which comprises three main elements: Adaptive
Temporal Weighting (ATW), Depthwise Separable Convolutions (DSC), and data
augmentation techniques. These elements aim to enhance the model's accuracy,
robustness against noise and occlusion, and computational efficiency within
real-world AAL contexts. RE-TCN outperforms existing models in terms of
accuracy, noise and occlusion robustness, and has been validated on four
benchmark datasets: NTU RGB+D 60, Northwestern-UCLA, SHREC'17, and DHG-14/28.
The code is publicly available at: https://github.com/Gbouna/RE-TCN
|
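The abstract above names Adaptive Temporal Weighting (ATW) and Depthwise Separable Convolutions (DSC) as the main elements of RE-TCN. The PyTorch sketch below shows one plausible reading of such a block: the depthwise/pointwise split follows the standard DSC definition, while the temporal weighting branch is an assumption, since the abstract does not give the exact formulation.

import torch
import torch.nn as nn

class DepthwiseSeparableTemporalConv(nn.Module):
    def __init__(self, channels, kernel_size=9):
        super().__init__()
        # Depthwise conv: one filter per channel along the time axis.
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        # Pointwise conv mixes channels at each frame.
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        # Assumed form of adaptive temporal weighting: a per-frame score,
        # softmax-normalised over time.
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                          # x: (batch, channels, frames)
        y = self.pointwise(self.depthwise(x))
        w = torch.softmax(self.score(y), dim=-1)   # (batch, 1, frames)
        return y * w * y.shape[-1]                 # reweight frames, keep overall scale

block = DepthwiseSeparableTemporalConv(channels=64)
print(block(torch.randn(2, 64, 50)).shape)         # torch.Size([2, 64, 50])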
2503.23215 | Vishnu Vardhan Baligodugula | Vishnu Vardhan Baligodugula, Fathi Amsaad | Unsupervised Learning: Comparative Analysis of Clustering Techniques on
High-Dimensional Data | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive comparative analysis of prominent
clustering algorithms K-means, DBSCAN, and Spectral Clustering on
high-dimensional datasets. We introduce a novel evaluation framework that
assesses clustering performance across multiple dimensionality reduction
techniques (PCA, t-SNE, and UMAP) using diverse quantitative metrics.
Experiments conducted on MNIST, Fashion-MNIST, and UCI HAR datasets reveal that
preprocessing with UMAP consistently improves clustering quality across all
algorithms, with Spectral Clustering demonstrating superior performance on
complex manifold structures. Our findings show that algorithm selection should
be guided by data characteristics, with K-means excelling in computational
efficiency, DBSCAN in handling irregular clusters, and Spectral Clustering in
capturing complex relationships. This research contributes a systematic
approach for evaluating and selecting clustering techniques for high
dimensional data applications.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:38:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Baligodugula",
"Vishnu Vardhan",
""
],
[
"Amsaad",
"Fathi",
""
]
] | TITLE: Unsupervised Learning: Comparative Analysis of Clustering Techniques on
High-Dimensional Data
ABSTRACT: This paper presents a comprehensive comparative analysis of prominent
clustering algorithms K-means, DBSCAN, and Spectral Clustering on
high-dimensional datasets. We introduce a novel evaluation framework that
assesses clustering performance across multiple dimensionality reduction
techniques (PCA, t-SNE, and UMAP) using diverse quantitative metrics.
Experiments conducted on MNIST, Fashion-MNIST, and UCI HAR datasets reveal that
preprocessing with UMAP consistently improves clustering quality across all
algorithms, with Spectral Clustering demonstrating superior performance on
complex manifold structures. Our findings show that algorithm selection should
be guided by data characteristics, with K-means excelling in computational
efficiency, DBSCAN in handling irregular clusters, and Spectral Clustering in
capturing complex relationships. This research contributes a systematic
approach for evaluating and selecting clustering techniques for high
dimensional data applications.
|
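A minimal sketch of the evaluation loop implied by the abstract above: reduce the data, then run the three clusterers and score the result. It uses scikit-learn's small digits set as a stand-in for MNIST, UMAP from the umap-learn package, and the silhouette score as a single example metric; the paper's full metric suite and hyperparameters are not reproduced, so the values are only indicative.

from sklearn.datasets import load_digits
from sklearn.cluster import KMeans, DBSCAN, SpectralClustering
from sklearn.metrics import silhouette_score
import umap                                        # pip install umap-learn

X, _ = load_digits(return_X_y=True)                # small stand-in for MNIST
X_low = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

clusterers = {
    "K-means": KMeans(n_clusters=10, n_init=10, random_state=0),
    "DBSCAN": DBSCAN(eps=0.5, min_samples=5),
    "Spectral": SpectralClustering(n_clusters=10, random_state=0),
}
for name, algo in clusterers.items():
    labels = algo.fit_predict(X_low)
    if len(set(labels)) > 1:                       # silhouette needs at least two clusters
        print(name, round(silhouette_score(X_low, labels), 3))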
2503.23220 | Marc-Antoine Lavoie | Marc-Antoine Lavoie, Anas Mahmoud, Steven L. Waslander | Large Self-Supervised Models Bridge the Gap in Domain Adaptive Object
Detection | 16 pages (8 main), 5 figures, accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The current state-of-the-art methods in domain adaptive object detection
(DAOD) use Mean Teacher self-labelling, where a teacher model, directly derived
as an exponential moving average of the student model, is used to generate
labels on the target domain which are then used to improve both models in a
positive loop. This couples learning and generating labels on the target
domain, and other recent works also leverage the generated labels to add
additional domain alignment losses. We believe this coupling is brittle and
excessively constrained: there is no guarantee that a student trained only on
source data can generate accurate target domain labels and initiate the
positive feedback loop, and much better target domain labels can likely be
generated by using a large pretrained network that has been exposed to much
more data. Vision foundational models are exactly such models, and they have
shown impressive task generalization capabilities even when frozen. We want to
leverage these models for DAOD and introduce DINO Teacher, which consists of
two components. First, we train a new labeller on source data only using a
large frozen DINOv2 backbone and show it generates more accurate labels than
Mean Teacher. Next, we align the student's source and target image patch
features with those from a DINO encoder, driving source and target
representations closer to the generalizable DINO representation. We obtain
state-of-the-art performance on multiple DAOD datasets. Code available at
https://github.com/TRAILab/DINO_Teacher
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 20:46:38 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lavoie",
"Marc-Antoine",
""
],
[
"Mahmoud",
"Anas",
""
],
[
"Waslander",
"Steven L.",
""
]
] | TITLE: Large Self-Supervised Models Bridge the Gap in Domain Adaptive Object
Detection
ABSTRACT: The current state-of-the-art methods in domain adaptive object detection
(DAOD) use Mean Teacher self-labelling, where a teacher model, directly derived
as an exponential moving average of the student model, is used to generate
labels on the target domain which are then used to improve both models in a
positive loop. This couples learning and generating labels on the target
domain, and other recent works also leverage the generated labels to add
additional domain alignment losses. We believe this coupling is brittle and
excessively constrained: there is no guarantee that a student trained only on
source data can generate accurate target domain labels and initiate the
positive feedback loop, and much better target domain labels can likely be
generated by using a large pretrained network that has been exposed to much
more data. Vision foundational models are exactly such models, and they have
shown impressive task generalization capabilities even when frozen. We want to
leverage these models for DAOD and introduce DINO Teacher, which consists of
two components. First, we train a new labeller on source data only using a
large frozen DINOv2 backbone and show it generates more accurate labels than
Mean Teacher. Next, we align the student's source and target image patch
features with those from a DINO encoder, driving source and target
representations closer to the generalizable DINO representation. We obtain
state-of-the-art performance on multiple DAOD datasets. Code available at
https://github.com/TRAILab/DINO_Teacher
|
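The abstract above contrasts DINO Teacher with Mean Teacher self-labelling, in which the teacher is an exponential moving average (EMA) of the student. The sketch below shows that EMA update, plus an assumed cosine form of the patch-feature alignment term; the actual detector, DINOv2 labeller, and loss weighting are not shown.

import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 8, 3, padding=1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # teacher <- momentum * teacher + (1 - momentum) * student
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

ema_update(teacher, student)

# Assumed alignment term: pull student patch features towards frozen DINO features.
student_feats, dino_feats = torch.randn(2, 196, 384), torch.randn(2, 196, 384)
align_loss = (1 - torch.cosine_similarity(student_feats, dino_feats, dim=-1)).mean()
print(float(align_loss))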
2503.23226 | Kushal Agrawal | Kushal Agrawal, Romi Banerjee | Synthetic Art Generation and DeepFake Detection A Study on Jamini Roy
Inspired Dataset | 13 pages, 7 figures, 6 tables | null | 10.36227/techrxiv.174119231.19482547/v1 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | The intersection of generative AI and art is a fascinating area that brings
both exciting opportunities and significant challenges, especially when it
comes to identifying synthetic artworks. This study takes a unique approach by
examining diffusion-based generative models in the context of Indian art,
specifically focusing on the distinctive style of Jamini Roy. To explore this,
we fine-tuned Stable Diffusion 3 and used techniques like ControlNet and
IPAdapter to generate realistic images. This allowed us to create a new dataset
that includes both real and AI-generated artworks, which is essential for a
detailed analysis of what these models can produce. We employed various
qualitative and quantitative methods, such as Fourier domain assessments and
autocorrelation metrics, to uncover subtle differences between synthetic images
and authentic pieces. A key takeaway from recent research is that existing
methods for detecting deepfakes face considerable challenges, especially when
the deepfakes are of high quality and tailored to specific cultural contexts.
This highlights a critical gap in current detection technologies, particularly
in light of the challenges identified above, where high-quality and culturally
specific deepfakes are difficult to detect. This work not only sheds light on
the increasing complexity of generative models but also sets a crucial
foundation for future research aimed at effective detection of synthetic art.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 21:12:16 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Agrawal",
"Kushal",
""
],
[
"Banerjee",
"Romi",
""
]
] | TITLE: Synthetic Art Generation and DeepFake Detection A Study on Jamini Roy
Inspired Dataset
ABSTRACT: The intersection of generative AI and art is a fascinating area that brings
both exciting opportunities and significant challenges, especially when it
comes to identifying synthetic artworks. This study takes a unique approach by
examining diffusion-based generative models in the context of Indian art,
specifically focusing on the distinctive style of Jamini Roy. To explore this,
we fine-tuned Stable Diffusion 3 and used techniques like ControlNet and
IPAdapter to generate realistic images. This allowed us to create a new dataset
that includes both real and AI-generated artworks, which is essential for a
detailed analysis of what these models can produce. We employed various
qualitative and quantitative methods, such as Fourier domain assessments and
autocorrelation metrics, to uncover subtle differences between synthetic images
and authentic pieces. A key takeaway from recent research is that existing
methods for detecting deepfakes face considerable challenges, especially when
the deepfakes are of high quality and tailored to specific cultural contexts.
This highlights a critical gap in current detection technologies, particularly
in light of the challenges identified above, where high-quality and culturally
specific deepfakes are difficult to detect. This work not only sheds light on
the increasing complexity of generative models but also sets a crucial
foundation for future research aimed at effective detection of synthetic art.
|
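One of the quantitative checks mentioned above is a Fourier-domain assessment. A common instantiation, shown here as an assumption rather than the paper's exact procedure, is the radially averaged power spectrum, where generated images often deviate from natural ones at high frequencies. A random array stands in for a grayscale artwork.

import numpy as np

def radial_power_spectrum(gray_img):
    f = np.fft.fftshift(np.fft.fft2(gray_img))
    power = np.abs(f) ** 2
    h, w = gray_img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)     # integer frequency radius per pixel
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)                # mean power per radius bin

spectrum = radial_power_spectrum(np.random.rand(256, 256))
print(spectrum.shape)                                  # one value per frequency radius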
2503.23239 | Reza Esfandiarpoor | Reza Esfandiarpoor, George Zerveas, Ruochen Zhang, Macton Mgonzo,
Carsten Eickhoff, Stephen H. Bach | Beyond Contrastive Learning: Synthetic Data Enables List-wise Training
with Multiple Levels of Relevance | Code: https://github.com/BatsResearch/sycl | null | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in large language models (LLMs) have allowed the
augmentation of information retrieval (IR) pipelines with synthetic data in
various ways. Yet, the main training paradigm remains: contrastive learning
with binary relevance labels and the InfoNCE loss, where one positive document
is compared against one or more negatives. This objective treats all documents
that are not explicitly annotated as relevant on an equally negative footing,
regardless of their actual degree of relevance, thus (a) missing subtle nuances
that are useful for ranking and (b) being susceptible to annotation noise. To
overcome this limitation, in this work we forgo real training documents and
annotations altogether and use open-source LLMs to directly generate synthetic
documents that answer real user queries according to several different levels
of relevance. This fully synthetic ranking context of graduated relevance,
together with an appropriate list-wise loss (Wasserstein distance), enables us
to train dense retrievers in a way that better captures the ranking task.
Experiments on various IR datasets show that our proposed approach outperforms
conventional training with InfoNCE by a large margin. Without using any real
documents for training, our dense retriever significantly outperforms the same
retriever trained through self-supervision. More importantly, it matches the
performance of the same retriever trained on real, labeled training documents
of the same dataset, while being more robust to distribution shift and clearly
outperforming it when evaluated zero-shot on the BEIR dataset collection.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 22:33:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Esfandiarpoor",
"Reza",
""
],
[
"Zerveas",
"George",
""
],
[
"Zhang",
"Ruochen",
""
],
[
"Mgonzo",
"Macton",
""
],
[
"Eickhoff",
"Carsten",
""
],
[
"Bach",
"Stephen H.",
""
]
] | TITLE: Beyond Contrastive Learning: Synthetic Data Enables List-wise Training
with Multiple Levels of Relevance
ABSTRACT: Recent advancements in large language models (LLMs) have allowed the
augmentation of information retrieval (IR) pipelines with synthetic data in
various ways. Yet, the main training paradigm remains: contrastive learning
with binary relevance labels and the InfoNCE loss, where one positive document
is compared against one or more negatives. This objective treats all documents
that are not explicitly annotated as relevant on an equally negative footing,
regardless of their actual degree of relevance, thus (a) missing subtle nuances
that are useful for ranking and (b) being susceptible to annotation noise. To
overcome this limitation, in this work we forgo real training documents and
annotations altogether and use open-source LLMs to directly generate synthetic
documents that answer real user queries according to several different levels
of relevance. This fully synthetic ranking context of graduated relevance,
together with an appropriate list-wise loss (Wasserstein distance), enables us
to train dense retrievers in a way that better captures the ranking task.
Experiments on various IR datasets show that our proposed approach outperforms
conventional training with InfoNCE by a large margin. Without using any real
documents for training, our dense retriever significantly outperforms the same
retriever trained through self-supervision. More importantly, it matches the
performance of the same retriever trained on real, labeled training documents
of the same dataset, while being more robust to distribution shift and clearly
outperforming it when evaluated zero-shot on the BEIR dataset collection.
|
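The abstract above pairs graded synthetic relevance levels with a list-wise Wasserstein loss. The sketch below shows one way such an objective can be written for a scored list, using the closed-form 1-D Wasserstein-1 distance between the normalised score and relevance distributions; treat it as an illustrative stand-in, since the paper's exact formulation may differ.

import torch

def listwise_wasserstein(scores, relevance):
    # scores: (batch, list_size) retriever scores for each candidate document
    # relevance: (batch, list_size) graded relevance levels, e.g. 0..3
    p = torch.softmax(scores, dim=-1)
    q = relevance / relevance.sum(dim=-1, keepdim=True)
    # 1-D Wasserstein-1 distance = mean absolute difference of the two CDFs.
    return (torch.cumsum(p, dim=-1) - torch.cumsum(q, dim=-1)).abs().mean()

scores = torch.randn(2, 5, requires_grad=True)
relevance = torch.tensor([[3., 2., 1., 0., 0.], [2., 2., 1., 1., 0.]])
loss = listwise_wasserstein(scores, relevance)
loss.backward()                                        # differentiable, so it can train a dense retriever
print(float(loss))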
2503.23242 | Dominik Macko | Dominik Macko, Aashish Anantha Ramakrishnan, Jason Samuel Lucas,
Robert Moro, Ivan Srba, Adaku Uchendu, Dongwon Lee | Beyond speculation: Measuring the growing presence of LLM-generated
texts in multilingual disinformation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Increased sophistication of large language models (LLMs) and the consequent
quality of generated multilingual text raises concerns about potential
disinformation misuse. While humans struggle to distinguish LLM-generated
content from human-written texts, the scholarly debate about their impact
remains divided. Some argue that heightened fears are overblown due to natural
ecosystem limitations, while others contend that specific "longtail" contexts
face overlooked risks. Our study bridges this debate by providing the first
empirical evidence of LLM presence in the latest real-world disinformation
datasets, documenting the increase of machine-generated content following
ChatGPT's release, and revealing crucial patterns across languages, platforms,
and time periods.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 22:47:53 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Macko",
"Dominik",
""
],
[
"Ramakrishnan",
"Aashish Anantha",
""
],
[
"Lucas",
"Jason Samuel",
""
],
[
"Moro",
"Robert",
""
],
[
"Srba",
"Ivan",
""
],
[
"Uchendu",
"Adaku",
""
],
[
"Lee",
"Dongwon",
""
]
] | TITLE: Beyond speculation: Measuring the growing presence of LLM-generated
texts in multilingual disinformation
ABSTRACT: Increased sophistication of large language models (LLMs) and the consequent
quality of generated multilingual text raises concerns about potential
disinformation misuse. While humans struggle to distinguish LLM-generated
content from human-written texts, the scholarly debate about their impact
remains divided. Some argue that heightened fears are overblown due to natural
ecosystem limitations, while others contend that specific "longtail" contexts
face overlooked risks. Our study bridges this debate by providing the first
empirical evidence of LLM presence in the latest real-world disinformation
datasets, documenting the increase of machine-generated content following
ChatGPT's release, and revealing crucial patterns across languages, platforms,
and time periods.
|
2503.23243 | Megan Brown | Megan A. Brown, Shubham Atreja, Libby Hemphill, Patrick Y. Wu | Evaluating how LLM annotations represent diverse views on contentious
topics | null | null | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Researchers have proposed the use of generative large language models (LLMs)
to label data for both research and applied settings. This literature
emphasizes the improved performance of LLMs relative to other natural language
models, noting that LLMs typically outperform other models on standard metrics
such as accuracy, precision, recall, and F1 score. However, previous literature
has also highlighted the bias embedded in language models, particularly around
contentious topics such as potentially toxic content. This bias could result in
labels applied by LLMs that disproportionately align with majority groups over
a more diverse set of viewpoints. In this paper, we evaluate how LLMs represent
diverse viewpoints on these contentious tasks. Across four annotation tasks on
four datasets, we show that LLMs do not show substantial disagreement with
annotators on the basis of demographics. Instead, the model, prompt, and
disagreement between human annotators on the labeling task are far more
predictive of LLM agreement. Our findings suggest that when using LLMs to
annotate data, under-representing the views of particular groups is not a
substantial concern. We conclude with a discussion of the implications for
researchers and practitioners.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 22:53:15 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Brown",
"Megan A.",
""
],
[
"Atreja",
"Shubham",
""
],
[
"Hemphill",
"Libby",
""
],
[
"Wu",
"Patrick Y.",
""
]
] | TITLE: Evaluating how LLM annotations represent diverse views on contentious
topics
ABSTRACT: Researchers have proposed the use of generative large language models (LLMs)
to label data for both research and applied settings. This literature
emphasizes the improved performance of LLMs relative to other natural language
models, noting that LLMs typically outperform other models on standard metrics
such as accuracy, precision, recall, and F1 score. However, previous literature
has also highlighted the bias embedded in language models, particularly around
contentious topics such as potentially toxic content. This bias could result in
labels applied by LLMs that disproportionately align with majority groups over
a more diverse set of viewpoints. In this paper, we evaluate how LLMs represent
diverse viewpoints on these contentious tasks. Across four annotation tasks on
four datasets, we show that LLMs do not show substantial disagreement with
annotators on the basis of demographics. Instead, the model, prompt, and
disagreement between human annotators on the labeling task are far more
predictive of LLM agreement. Our findings suggest that when using LLMs to
annotate data, under-representing the views of particular groups is not a
substantial concern. We conclude with a discussion of the implications for
researchers and practitioners.
|
2503.23265 | Bj\"orn M\"oller | Bj\"orn M\"oller, Lucas G\"ornhardt, Tim Fingscheidt | A Lightweight Image Super-Resolution Transformer Trained on
Low-Resolution Images Only | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Transformer architectures prominently lead single-image super-resolution
(SISR) benchmarks, reconstructing high-resolution (HR) images from their
low-resolution (LR) counterparts. Their strong representative power, however,
comes with a higher demand for training data compared to convolutional neural
networks (CNNs). For many real-world SR applications, the availability of
high-quality HR training images is not given, sparking interest in LR-only
training methods. The LR-only SISR benchmark mimics this condition by allowing
only low-resolution (LR) images for model training. For a 4x super-resolution,
this effectively reduces the amount of available training data to 6.25% of the
HR image pixels, which puts the employment of a data-hungry transformer model
into question. In this work, we are the first to utilize a lightweight vision
transformer model with LR-only training methods addressing the unsupervised
SISR LR-only benchmark. We adopt and configure a recent LR-only training method
from microscopy image super-resolution to macroscopic real-world data,
resulting in our multi-scale training method for bicubic degradation (MSTbic).
Furthermore, we compare it with reference methods and prove its effectiveness
both for a transformer and a CNN model. We evaluate on the classic SR benchmark
datasets Set5, Set14, BSD100, Urban100, and Manga109, and show superior
performance over state-of-the-art (so far: CNN-based) LR-only SISR methods. The
code is available on GitHub:
https://github.com/ifnspaml/SuperResolutionMultiscaleTraining.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 00:52:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Möller",
"Björn",
""
],
[
"Görnhardt",
"Lucas",
""
],
[
"Fingscheidt",
"Tim",
""
]
] | TITLE: A Lightweight Image Super-Resolution Transformer Trained on
Low-Resolution Images Only
ABSTRACT: Transformer architectures prominently lead single-image super-resolution
(SISR) benchmarks, reconstructing high-resolution (HR) images from their
low-resolution (LR) counterparts. Their strong representative power, however,
comes with a higher demand for training data compared to convolutional neural
networks (CNNs). For many real-world SR applications, the availability of
high-quality HR training images is not given, sparking interest in LR-only
training methods. The LR-only SISR benchmark mimics this condition by allowing
only low-resolution (LR) images for model training. For a 4x super-resolution,
this effectively reduces the amount of available training data to 6.25% of the
HR image pixels, which puts the employment of a data-hungry transformer model
into question. In this work, we are the first to utilize a lightweight vision
transformer model with LR-only training methods addressing the unsupervised
SISR LR-only benchmark. We adopt and configure a recent LR-only training method
from microscopy image super-resolution to macroscopic real-world data,
resulting in our multi-scale training method for bicubic degradation (MSTbic).
Furthermore, we compare it with reference methods and prove its effectiveness
both for a transformer and a CNN model. We evaluate on the classic SR benchmark
datasets Set5, Set14, BSD100, Urban100, and Manga109, and show superior
performance over state-of-the-art (so far: CNN-based) LR-only SISR methods. The
code is available on GitHub:
https://github.com/ifnspaml/SuperResolutionMultiscaleTraining.
|
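The LR-only setting described above means training pairs must be manufactured from low-resolution images alone. A common way to do this, sketched below, is to bicubically downscale each LR image and reuse the original LR image as the supervision target; MSTbic's multi-scale schedule itself is not reproduced, so this only shows how a single training pair is formed.

import torch
import torch.nn.functional as F

def lr_only_pair(lr_img, scale=4):
    # lr_img: (batch, channels, H, W) low-resolution training image, no HR available
    h, w = lr_img.shape[-2:]
    pseudo_input = F.interpolate(lr_img, size=(h // scale, w // scale),
                                 mode="bicubic", align_corners=False)
    return pseudo_input, lr_img                        # (network input, supervision target)

x, y = lr_only_pair(torch.rand(1, 3, 128, 128))
print(x.shape, y.shape)                                # [1, 3, 32, 32] -> [1, 3, 128, 128]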
2503.23266 | Jinlu Zhang | Shihao Cheng, Jinlu Zhang, Yue Liu, Zhigang Tu | OwlSight: A Robust Illumination Adaptation Framework for Dark Video
Human Action Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human action recognition in low-light environments is crucial for various
real-world applications. However, the existing approaches overlook the full
utilization of brightness information throughout the training phase, leading to
suboptimal performance. To address this limitation, we propose OwlSight, a
biomimetic-inspired framework with whole-stage illumination enhancement to
interact with action classification for accurate dark video human action
recognition. Specifically, OwlSight incorporates a Time-Consistency Module
(TCM) to capture shallow spatiotemporal features while maintaining temporal
coherence, which are then processed by a Luminance Adaptation Module (LAM) to
dynamically adjust the brightness based on the input luminance distribution.
Furthermore, a Reflect Augmentation Module (RAM) is presented to maximize
illumination utilization and simultaneously enhance action recognition via two
interactive paths. Additionally, we build Dark-101, a large-scale dataset
comprising 18,310 dark videos across 101 action categories, significantly
surpassing existing datasets (e.g., ARID1.5 and Dark-48) in scale and
diversity. Extensive experiments demonstrate that the proposed OwlSight
achieves state-of-the-art performance across four low-light action recognition
benchmarks. Notably, it outperforms previous best approaches by 5.36% on
ARID1.5 and 1.72% on Dark-101, highlighting its effectiveness in challenging
dark environments.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 00:54:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Cheng",
"Shihao",
""
],
[
"Zhang",
"Jinlu",
""
],
[
"Liu",
"Yue",
""
],
[
"Tu",
"Zhigang",
""
]
] | TITLE: OwlSight: A Robust Illumination Adaptation Framework for Dark Video
Human Action Recognition
ABSTRACT: Human action recognition in low-light environments is crucial for various
real-world applications. However, the existing approaches overlook the full
utilization of brightness information throughout the training phase, leading to
suboptimal performance. To address this limitation, we propose OwlSight, a
biomimetic-inspired framework with whole-stage illumination enhancement to
interact with action classification for accurate dark video human action
recognition. Specifically, OwlSight incorporates a Time-Consistency Module
(TCM) to capture shallow spatiotemporal features while maintaining temporal
coherence, which are then processed by a Luminance Adaptation Module (LAM) to
dynamically adjust the brightness based on the input luminance distribution.
Furthermore, a Reflect Augmentation Module (RAM) is presented to maximize
illumination utilization and simultaneously enhance action recognition via two
interactive paths. Additionally, we build Dark-101, a large-scale dataset
comprising 18,310 dark videos across 101 action categories, significantly
surpassing existing datasets (e.g., ARID1.5 and Dark-48) in scale and
diversity. Extensive experiments demonstrate that the proposed OwlSight
achieves state-of-the-art performance across four low-light action recognition
benchmarks. Notably, it outperforms previous best approaches by 5.36% on
ARID1.5 and 1.72% on Dark-101, highlighting its effectiveness in challenging
dark environments.
|
2503.23271 | Haonan Chen | Haonan Chen, Jiaming Xu, Lily Sheng, Tianchen Ji, Shuijing Liu, Yunzhu
Li, Katherine Driggs-Campbell | Learning Coordinated Bimanual Manipulation Policies using State
Diffusion and Inverse Dynamics Models | Project Page: https://haonan16.github.io/coord_bimanual_page/. 12
pages, 12 figures, Accepted at ICRA 2025 | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When performing tasks like laundry, humans naturally coordinate both hands to
manipulate objects and anticipate how their actions will change the state of
the clothes. However, achieving such coordination in robotics remains
challenging due to the need to model object movement, predict future states,
and generate precise bimanual actions. In this work, we address these
challenges by infusing the predictive nature of human manipulation strategies
into robot imitation learning. Specifically, we disentangle task-related state
transitions from agent-specific inverse dynamics modeling to enable effective
bimanual coordination. Using a demonstration dataset, we train a diffusion
model to predict future states given historical observations, envisioning how
the scene evolves. Then, we use an inverse dynamics model to compute robot
actions that achieve the predicted states. Our key insight is that modeling
object movement can help learning policies for bimanual coordination
manipulation tasks. Evaluating our framework across diverse simulation and
real-world manipulation setups, including multimodal goal configurations,
bimanual manipulation, deformable objects, and multi-object setups, we find
that it consistently outperforms state-of-the-art state-to-action mapping
policies. Our method demonstrates a remarkable capacity to navigate multimodal
goal configurations and action distributions, maintain stability across
different control modes, and synthesize a broader range of behaviors than those
present in the demonstration dataset.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 01:25:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Haonan",
""
],
[
"Xu",
"Jiaming",
""
],
[
"Sheng",
"Lily",
""
],
[
"Ji",
"Tianchen",
""
],
[
"Liu",
"Shuijing",
""
],
[
"Li",
"Yunzhu",
""
],
[
"Driggs-Campbell",
"Katherine",
""
]
] | TITLE: Learning Coordinated Bimanual Manipulation Policies using State
Diffusion and Inverse Dynamics Models
ABSTRACT: When performing tasks like laundry, humans naturally coordinate both hands to
manipulate objects and anticipate how their actions will change the state of
the clothes. However, achieving such coordination in robotics remains
challenging due to the need to model object movement, predict future states,
and generate precise bimanual actions. In this work, we address these
challenges by infusing the predictive nature of human manipulation strategies
into robot imitation learning. Specifically, we disentangle task-related state
transitions from agent-specific inverse dynamics modeling to enable effective
bimanual coordination. Using a demonstration dataset, we train a diffusion
model to predict future states given historical observations, envisioning how
the scene evolves. Then, we use an inverse dynamics model to compute robot
actions that achieve the predicted states. Our key insight is that modeling
object movement can help learning policies for bimanual coordination
manipulation tasks. Evaluating our framework across diverse simulation and
real-world manipulation setups, including multimodal goal configurations,
bimanual manipulation, deformable objects, and multi-object setups, we find
that it consistently outperforms state-of-the-art state-to-action mapping
policies. Our method demonstrates a remarkable capacity to navigate multimodal
goal configurations and action distributions, maintain stability across
different control modes, and synthesize a broader range of behaviors than those
present in the demonstration dataset.
|
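The abstract above separates task-related state prediction from agent-specific inverse dynamics. The schematic below shows that two-stage rollout with plain MLPs standing in for the diffusion state predictor and the inverse dynamics model; all dimensions and architectures are arbitrary assumptions.

import torch
import torch.nn as nn

state_dim, action_dim, history = 16, 8, 4

# Stand-in for the diffusion model that envisions how the scene evolves.
state_predictor = nn.Sequential(nn.Linear(history * state_dim, 64), nn.ReLU(),
                                nn.Linear(64, state_dim))
# Stand-in for the inverse dynamics model mapping consecutive states to actions.
inverse_dynamics = nn.Sequential(nn.Linear(2 * state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim))

past_states = torch.randn(1, history, state_dim)
next_state = state_predictor(past_states.flatten(1))
action = inverse_dynamics(torch.cat([past_states[:, -1], next_state], dim=-1))
print(action.shape)                                    # torch.Size([1, 8]) bimanual action vector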
2503.23275 | Deeksha Arun | Deeksha Arun, Kagan Ozturk, Kevin W. Bowyer, Patrick Flynn | Improved Ear Verification with Vision Transformers and Overlapping
Patches | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ear recognition has emerged as a promising biometric modality due to the
relative stability in appearance during adulthood. Although Vision Transformers
(ViTs) have been widely used in image recognition tasks, their efficiency in
ear recognition has been hampered by a lack of attention to overlapping
patches, which is crucial for capturing intricate ear features. In this study,
we evaluate ViT-Tiny (ViT-T), ViT-Small (ViT-S), ViT-Base (ViT-B) and ViT-Large
(ViT-L) configurations on a diverse set of datasets (OPIB, AWE, WPUT, and
EarVN1.0), using an overlapping patch selection strategy. Results demonstrate
the critical importance of overlapping patches, yielding superior performance
in 44 of 48 experiments in a structured study. Moreover, upon comparing the
results of the overlapping patches with the non-overlapping configurations, the
increase is significant, reaching up to 10% for the EarVN1.0 dataset. In terms
of model performance, the ViT-T model consistently outperformed the ViT-S,
ViT-B, and ViT-L models on the AWE, WPUT, and EarVN1.0 datasets. The highest
scores were achieved in a configuration with a patch size of 28x28 and a stride
of 14 pixels. This patch-stride configuration represents 25% of the normalized
image area (112x112 pixels) for the patch size and 12.5% of the row or column
size for the stride. This study confirms that transformer architectures with
overlapping patch selection can serve as an efficient and high-performing
option for ear-based biometric recognition tasks in verification scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 01:50:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Arun",
"Deeksha",
""
],
[
"Ozturk",
"Kagan",
""
],
[
"Bowyer",
"Kevin W.",
""
],
[
"Flynn",
"Patrick",
""
]
] | TITLE: Improved Ear Verification with Vision Transformers and Overlapping
Patches
ABSTRACT: Ear recognition has emerged as a promising biometric modality due to the
relative stability in appearance during adulthood. Although Vision Transformers
(ViTs) have been widely used in image recognition tasks, their efficiency in
ear recognition has been hampered by a lack of attention to overlapping
patches, which is crucial for capturing intricate ear features. In this study,
we evaluate ViT-Tiny (ViT-T), ViT-Small (ViT-S), ViT-Base (ViT-B) and ViT-Large
(ViT-L) configurations on a diverse set of datasets (OPIB, AWE, WPUT, and
EarVN1.0), using an overlapping patch selection strategy. Results demonstrate
the critical importance of overlapping patches, yielding superior performance
in 44 of 48 experiments in a structured study. Moreover, upon comparing the
results of the overlapping patches with the non-overlapping configurations, the
increase is significant, reaching up to 10% for the EarVN1.0 dataset. In terms
of model performance, the ViT-T model consistently outperformed the ViT-S,
ViT-B, and ViT-L models on the AWE, WPUT, and EarVN1.0 datasets. The highest
scores were achieved in a configuration with a patch size of 28x28 and a stride
of 14 pixels. This patch-stride configuration represents 25% of the normalized
image area (112x112 pixels) for the patch size and 12.5% of the row or column
size for the stride. This study confirms that transformer architectures with
overlapping patch selection can serve as an efficient and high-performing
option for ear-based biometric recognition tasks in verification scenarios.
|
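The best configuration reported above uses 28x28 patches with a stride of 14 pixels on 112x112 images. The sketch below shows how such overlapping patch embeddings can be produced with a strided convolution and how the token count compares with the non-overlapping case; the 192-dimensional width is the usual ViT-Tiny setting and is assumed here.

import torch
import torch.nn as nn

img = torch.randn(1, 3, 112, 112)                      # normalized ear image
embed_dim = 192                                        # ViT-Tiny width (assumed)

overlap = nn.Conv2d(3, embed_dim, kernel_size=28, stride=14)
no_overlap = nn.Conv2d(3, embed_dim, kernel_size=28, stride=28)

tokens_overlap = overlap(img).flatten(2).transpose(1, 2)
tokens_no_overlap = no_overlap(img).flatten(2).transpose(1, 2)
print(tokens_overlap.shape, tokens_no_overlap.shape)
# torch.Size([1, 49, 192]) vs torch.Size([1, 16, 192]): a 7x7 patch grid instead of 4x4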
2503.23282 | Felix Wimbauer | Felix Wimbauer, Weirong Chen, Dominik Muhle, Christian Rupprecht,
Daniel Cremers | AnyCam: Learning to Recover Camera Poses and Intrinsics from Casual
Videos | CVPR 2025 - For more details and code, please check out our project
page under https://fwmb.github.io/anycam | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating camera motion and intrinsics from casual videos is a core
challenge in computer vision. Traditional bundle-adjustment based methods, such
as SfM and SLAM, struggle to perform reliably on arbitrary data. Although
specialized SfM approaches have been developed for handling dynamic scenes,
they either require intrinsics or computationally expensive test-time
optimization and often fall short in performance. Recently, methods like Dust3r
have reformulated the SfM problem in a more data-driven way. While such
techniques show promising results, they are still 1) not robust towards dynamic
objects and 2) require labeled data for supervised training. As an alternative,
we propose AnyCam, a fast transformer model that directly estimates camera
poses and intrinsics from a dynamic video sequence in feed-forward fashion. Our
intuition is that such a network can learn strong priors over realistic camera
poses. To scale up our training, we rely on an uncertainty-based loss
formulation and pre-trained depth and flow networks instead of motion or
trajectory supervision. This allows us to use diverse, unlabelled video
datasets obtained mostly from YouTube. Additionally, we ensure that the
predicted trajectory does not accumulate drift over time through a lightweight
trajectory refinement step. We test AnyCam on established datasets, where it
delivers accurate camera poses and intrinsics both qualitatively and
quantitatively. Furthermore, even with trajectory refinement, AnyCam is
significantly faster than existing works for SfM in dynamic settings. Finally,
by combining camera information, uncertainty, and depth, our model can produce
high-quality 4D pointclouds.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 02:22:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wimbauer",
"Felix",
""
],
[
"Chen",
"Weirong",
""
],
[
"Muhle",
"Dominik",
""
],
[
"Rupprecht",
"Christian",
""
],
[
"Cremers",
"Daniel",
""
]
] | TITLE: AnyCam: Learning to Recover Camera Poses and Intrinsics from Casual
Videos
ABSTRACT: Estimating camera motion and intrinsics from casual videos is a core
challenge in computer vision. Traditional bundle-adjustment based methods, such
as SfM and SLAM, struggle to perform reliably on arbitrary data. Although
specialized SfM approaches have been developed for handling dynamic scenes,
they either require intrinsics or computationally expensive test-time
optimization and often fall short in performance. Recently, methods like Dust3r
have reformulated the SfM problem in a more data-driven way. While such
techniques show promising results, they are still 1) not robust towards dynamic
objects and 2) require labeled data for supervised training. As an alternative,
we propose AnyCam, a fast transformer model that directly estimates camera
poses and intrinsics from a dynamic video sequence in feed-forward fashion. Our
intuition is that such a network can learn strong priors over realistic camera
poses. To scale up our training, we rely on an uncertainty-based loss
formulation and pre-trained depth and flow networks instead of motion or
trajectory supervision. This allows us to use diverse, unlabelled video
datasets obtained mostly from YouTube. Additionally, we ensure that the
predicted trajectory does not accumulate drift over time through a lightweight
trajectory refinement step. We test AnyCam on established datasets, where it
delivers accurate camera poses and intrinsics both qualitatively and
quantitatively. Furthermore, even with trajectory refinement, AnyCam is
significantly faster than existing works for SfM in dynamic settings. Finally,
by combining camera information, uncertainty, and depth, our model can produce
high-quality 4D pointclouds.
|
2503.23283 | Lu Yu | Lu Yu, Haoyu Han, Zhe Tao, Hantao Yao, Changsheng Xu | Language Guided Concept Bottleneck Models for Interpretable Continual
Learning | CVPR 2025; Project Page: https://github.com/FisherCats/CLG-CBM | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Continual learning (CL) aims to enable learning systems to acquire new
knowledge constantly without forgetting previously learned information. CL
faces the challenge of mitigating catastrophic forgetting while maintaining
interpretability across tasks. Most existing CL methods focus primarily on
preserving learned knowledge to improve model performance. However, as new
information is introduced, the interpretability of the learning process becomes
crucial for understanding the evolving decision-making process, yet it is
rarely explored. In this paper, we introduce a novel framework that integrates
language-guided Concept Bottleneck Models (CBMs) to address both challenges.
Our approach leverages the Concept Bottleneck Layer, aligning semantic
consistency with CLIP models to learn human-understandable concepts that can
generalize across tasks. By focusing on interpretable concepts, our method not
only enhances the model's ability to retain knowledge over time but also
provides transparent decision-making insights. We demonstrate the effectiveness
of our approach by achieving superior performance on several datasets,
outperforming state-of-the-art methods with an improvement of up to 3.06% in
final average accuracy on ImageNet-subset. Additionally, we offer concept
visualizations for model predictions, further advancing the understanding of
interpretable continual learning.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 02:41:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yu",
"Lu",
""
],
[
"Han",
"Haoyu",
""
],
[
"Tao",
"Zhe",
""
],
[
"Yao",
"Hantao",
""
],
[
"Xu",
"Changsheng",
""
]
] | TITLE: Language Guided Concept Bottleneck Models for Interpretable Continual
Learning
ABSTRACT: Continual learning (CL) aims to enable learning systems to acquire new
knowledge constantly without forgetting previously learned information. CL
faces the challenge of mitigating catastrophic forgetting while maintaining
interpretability across tasks. Most existing CL methods focus primarily on
preserving learned knowledge to improve model performance. However, as new
information is introduced, the interpretability of the learning process becomes
crucial for understanding the evolving decision-making process, yet it is
rarely explored. In this paper, we introduce a novel framework that integrates
language-guided Concept Bottleneck Models (CBMs) to address both challenges.
Our approach leverages the Concept Bottleneck Layer, aligning semantic
consistency with CLIP models to learn human-understandable concepts that can
generalize across tasks. By focusing on interpretable concepts, our method not
only enhances the model's ability to retain knowledge over time but also
provides transparent decision-making insights. We demonstrate the effectiveness
of our approach by achieving superior performance on several datasets,
outperforming state-of-the-art methods with an improvement of up to 3.06% in
final average accuracy on ImageNet-subset. Additionally, we offer concept
visualizations for model predictions, further advancing the understanding of
interpretable continual learning.
|
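A concept bottleneck layer of the kind described above can be sketched as scoring image features against a bank of language-derived concept embeddings and classifying from those interpretable scores. Random tensors stand in for CLIP image and text features below, so the snippet only illustrates the information flow, not the paper's language-guided concept selection or continual-learning machinery.

import torch
import torch.nn as nn
import torch.nn.functional as F

num_concepts, feat_dim, num_classes = 32, 512, 10
concept_bank = F.normalize(torch.randn(num_concepts, feat_dim), dim=-1)  # CLIP text embeddings in practice
classifier = nn.Linear(num_concepts, num_classes)

image_feats = F.normalize(torch.randn(4, feat_dim), dim=-1)              # CLIP image embeddings in practice
concept_scores = image_feats @ concept_bank.t()        # (4, num_concepts), human-readable concepts
logits = classifier(concept_scores)                    # predictions explained by concept scores
print(concept_scores.shape, logits.shape)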
2503.23290 | Junlong Chen | Junlong Chen, Jiawen Kang, Minrui Xu, Fan Wu, Hongliang Zhang, Huawei
Huang, Dusit Niyato, Shiwen Mao | Efficient Twin Migration in Vehicular Metaverses: Multi-Agent Split Deep
Reinforcement Learning with Spatio-Temporal Trajectory Generation | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicle Twins (VTs) as digital representations of vehicles can provide users
with immersive experiences in vehicular metaverse applications, e.g., Augmented
Reality (AR) navigation and embodied intelligence. VT migration is an effective
way to maintain seamless immersive VT services by migrating the VT as the
locations of physical entities keep changing. However, an efficient VT migration
is challenging due to the rapid movement of vehicles, dynamic workloads of
Roadside Units (RSUs), and heterogeneous resources of the RSUs. To achieve
efficient migration decisions and a minimum latency for the VT migration, we
propose a multi-agent split Deep Reinforcement Learning (DRL) framework
combined with spatio-temporal trajectory generation. In this framework,
multiple split DRL agents utilize split architecture to efficiently determine
VT migration decisions. Furthermore, we propose a spatio-temporal trajectory
generation algorithm based on trajectory datasets and road network data to
simulate vehicle trajectories, enhancing the generalization of the proposed
scheme for managing VT migration in dynamic network environments. Finally,
experimental results demonstrate that the proposed scheme not only enhances the
Quality of Experience (QoE) by 29% but also reduces the computational parameter
count by approximately 25% while maintaining similar performances, enhancing
users' immersive experiences in vehicular metaverses.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 03:00:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Junlong",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xu",
"Minrui",
""
],
[
"Wu",
"Fan",
""
],
[
"Zhang",
"Hongliang",
""
],
[
"Huang",
"Huawei",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Mao",
"Shiwen",
""
]
] | TITLE: Efficient Twin Migration in Vehicular Metaverses: Multi-Agent Split Deep
Reinforcement Learning with Spatio-Temporal Trajectory Generation
ABSTRACT: Vehicle Twins (VTs) as digital representations of vehicles can provide users
with immersive experiences in vehicular metaverse applications, e.g., Augmented
Reality (AR) navigation and embodied intelligence. VT migration is an effective
way to maintain seamless immersive VT services by migrating the VT as the
locations of physical entities keep changing. However, an efficient VT migration
is challenging due to the rapid movement of vehicles, dynamic workloads of
Roadside Units (RSUs), and heterogeneous resources of the RSUs. To achieve
efficient migration decisions and a minimum latency for the VT migration, we
propose a multi-agent split Deep Reinforcement Learning (DRL) framework
combined with spatio-temporal trajectory generation. In this framework,
multiple split DRL agents utilize split architecture to efficiently determine
VT migration decisions. Furthermore, we propose a spatio-temporal trajectory
generation algorithm based on trajectory datasets and road network data to
simulate vehicle trajectories, enhancing the generalization of the proposed
scheme for managing VT migration in dynamic network environments. Finally,
experimental results demonstrate that the proposed scheme not only enhances the
Quality of Experience (QoE) by 29% but also reduces the computational parameter
count by approximately 25% while maintaining similar performances, enhancing
users' immersive experiences in vehicular metaverses.
|
2503.23294 | Jianzong Wang | Wei Tao, Bin Zhang, Xiaoyang Qu, Jiguang Wan, Jianzong Wang | Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context
LLM Inference | Accepted by the Design, Automation, and Test in Europe 2025 (DATE
2025) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, large language models (LLMs) have been able to handle longer and
longer contexts. However, a context that is too long may cause intolerable
inference latency and GPU memory usage. Existing methods propose
mixed-precision quantization to the key-value (KV) cache in LLMs based on token
granularity, which is time-consuming in the search process and hardware
inefficient during computation. This paper introduces a novel approach called
Cocktail, which employs chunk-adaptive mixed-precision quantization to optimize
the KV cache. Cocktail consists of two modules: chunk-level quantization search
and chunk-level KV cache computation. Chunk-level quantization search
determines the optimal bitwidth configuration of the KV cache chunks quickly
based on the similarity scores between the corresponding context chunks and the
query, maintaining the model accuracy. Furthermore, chunk-level KV cache
computation reorders the KV cache chunks before quantization, avoiding the
hardware inefficiency caused by mixed-precision quantization in inference
computation. Extensive experiments demonstrate that Cocktail outperforms
state-of-the-art KV cache quantization methods on various models and datasets.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 03:20:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tao",
"Wei",
""
],
[
"Zhang",
"Bin",
""
],
[
"Qu",
"Xiaoyang",
""
],
[
"Wan",
"Jiguang",
""
],
[
"Wang",
"Jianzong",
""
]
] | TITLE: Cocktail: Chunk-Adaptive Mixed-Precision Quantization for Long-Context
LLM Inference
ABSTRACT: Recently, large language models (LLMs) have been able to handle longer and
longer contexts. However, a context that is too long may cause intolerable
inference latency and GPU memory usage. Existing methods propose
mixed-precision quantization to the key-value (KV) cache in LLMs based on token
granularity, which is time-consuming in the search process and hardware
inefficient during computation. This paper introduces a novel approach called
Cocktail, which employs chunk-adaptive mixed-precision quantization to optimize
the KV cache. Cocktail consists of two modules: chunk-level quantization search
and chunk-level KV cache computation. Chunk-level quantization search
determines the optimal bitwidth configuration of the KV cache chunks quickly
based on the similarity scores between the corresponding context chunks and the
query, maintaining the model accuracy. Furthermore, chunk-level KV cache
computation reorders the KV cache chunks before quantization, avoiding the
hardware inefficiency caused by mixed-precision quantization in inference
computation. Extensive experiments demonstrate that Cocktail outperforms
state-of-the-art KV cache quantization methods on various models and datasets.
|
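The chunk-adaptive idea above can be illustrated with a toy KV cache: chunks whose context is more similar to the query keep more bits. The similarity thresholds and the uniform quantizer below are assumptions made for illustration; Cocktail's actual bitwidth search and chunk reordering are not reproduced.

import torch

def quantize_chunk(x, bits):
    # Simple per-chunk uniform quantization to 2**bits levels.
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return torch.round((x - lo) / scale) * scale + lo

kv_chunks = [torch.randn(64, 128) for _ in range(3)]   # toy KV cache chunks
similarity = [0.8, 0.4, 0.1]                           # assumed query/context-chunk similarities
bitwidths = [8 if s > 0.6 else 4 if s > 0.3 else 2 for s in similarity]
quantized = [quantize_chunk(c, b) for c, b in zip(kv_chunks, bitwidths)]
print(bitwidths)                                       # [8, 4, 2]: more similar chunks keep more precision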
2503.23295 | Mikhail Krasitskii | Mikhail Krasitskii, Olga Kolesnikova, Liliana Chanona Hernandez,
Grigori Sidorov, Alexander Gelbukh | Advancing Sentiment Analysis in Tamil-English Code-Mixed Texts:
Challenges and Transformer-Based Solutions | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The sentiment analysis task in Tamil-English code-mixed texts has been
explored using advanced transformer-based models. Challenges from grammatical
inconsistencies, orthographic variations, and phonetic ambiguities have been
addressed. The limitations of existing datasets and annotation gaps have been
examined, emphasizing the need for larger and more diverse corpora. Transformer
architectures, including XLM-RoBERTa, mT5, IndicBERT, and RemBERT, have been
evaluated in low-resource, code-mixed environments. Performance metrics have
been analyzed, highlighting the effectiveness of specific models in handling
multilingual sentiment classification. The findings suggest that further
advancements in data augmentation, phonetic normalization, and hybrid modeling
approaches are required to enhance accuracy. Future research directions for
improving sentiment analysis in code-mixed texts have been proposed.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 03:27:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Krasitskii",
"Mikhail",
""
],
[
"Kolesnikova",
"Olga",
""
],
[
"Hernandez",
"Liliana Chanona",
""
],
[
"Sidorov",
"Grigori",
""
],
[
"Gelbukh",
"Alexander",
""
]
] | TITLE: Advancing Sentiment Analysis in Tamil-English Code-Mixed Texts:
Challenges and Transformer-Based Solutions
ABSTRACT: The sentiment analysis task in Tamil-English code-mixed texts has been
explored using advanced transformer-based models. Challenges from grammatical
inconsistencies, orthographic variations, and phonetic ambiguities have been
addressed. The limitations of existing datasets and annotation gaps have been
examined, emphasizing the need for larger and more diverse corpora. Transformer
architectures, including XLM-RoBERTa, mT5, IndicBERT, and RemBERT, have been
evaluated in low-resource, code-mixed environments. Performance metrics have
been analyzed, highlighting the effectiveness of specific models in handling
multilingual sentiment classification. The findings suggest that further
advancements in data augmentation, phonetic normalization, and hybrid modeling
approaches are required to enhance accuracy. Future research directions for
improving sentiment analysis in code-mixed texts have been proposed.
|
2503.23297 | Zhenyang Liu | Zhenyang Liu, Yikai Wang, Sixiao Zheng, Tongying Pan, Longfei Liang,
Yanwei Fu, Xiangyang Xue | ReasonGrounder: LVLM-Guided Hierarchical Feature Splatting for
Open-Vocabulary 3D Visual Grounding and Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary 3D visual grounding and reasoning aim to localize objects in
a scene based on implicit language descriptions, even when they are occluded.
This ability is crucial for tasks such as vision-language navigation and
autonomous robotics. However, current methods struggle because they rely
heavily on fine-tuning with 3D annotations and mask proposals, which limits
their ability to handle diverse semantics and common knowledge required for
effective reasoning. In this work, we propose ReasonGrounder, an LVLM-guided
framework that uses hierarchical 3D feature Gaussian fields for adaptive
grouping based on physical scale, enabling open-vocabulary 3D grounding and
reasoning. ReasonGrounder interprets implicit instructions using large
vision-language models (LVLM) and localizes occluded objects through 3D
Gaussian splatting. By incorporating 2D segmentation masks from the SAM and
multi-view CLIP embeddings, ReasonGrounder selects Gaussian groups based on
object scale, enabling accurate localization through both explicit and implicit
language understanding, even in novel, occluded views. We also contribute
ReasoningGD, a new dataset containing over 10K scenes and 2 million annotations
for evaluating open-vocabulary 3D grounding and amodal perception under
occlusion. Experiments show that ReasonGrounder significantly improves 3D
grounding accuracy in real-world scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 03:40:35 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Zhenyang",
""
],
[
"Wang",
"Yikai",
""
],
[
"Zheng",
"Sixiao",
""
],
[
"Pan",
"Tongying",
""
],
[
"Liang",
"Longfei",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: ReasonGrounder: LVLM-Guided Hierarchical Feature Splatting for
Open-Vocabulary 3D Visual Grounding and Reasoning
ABSTRACT: Open-vocabulary 3D visual grounding and reasoning aim to localize objects in
a scene based on implicit language descriptions, even when they are occluded.
This ability is crucial for tasks such as vision-language navigation and
autonomous robotics. However, current methods struggle because they rely
heavily on fine-tuning with 3D annotations and mask proposals, which limits
their ability to handle diverse semantics and common knowledge required for
effective reasoning. In this work, we propose ReasonGrounder, an LVLM-guided
framework that uses hierarchical 3D feature Gaussian fields for adaptive
grouping based on physical scale, enabling open-vocabulary 3D grounding and
reasoning. ReasonGrounder interprets implicit instructions using large
vision-language models (LVLM) and localizes occluded objects through 3D
Gaussian splatting. By incorporating 2D segmentation masks from the SAM and
multi-view CLIP embeddings, ReasonGrounder selects Gaussian groups based on
object scale, enabling accurate localization through both explicit and implicit
language understanding, even in novel, occluded views. We also contribute
ReasoningGD, a new dataset containing over 10K scenes and 2 million annotations
for evaluating open-vocabulary 3D grounding and amodal perception under
occlusion. Experiments show that ReasonGrounder significantly improves 3D
grounding accuracy in real-world scenarios.
|
2503.23300 | Wenqi Jia | Wenqi Jia, Bolin Lai, Miao Liu, Danfei Xu, James M. Rehg | Learning Predictive Visuomotor Coordination | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding and predicting human visuomotor coordination is crucial for
applications in robotics, human-computer interaction, and assistive
technologies. This work introduces a forecasting-based task for visuomotor
modeling, where the goal is to predict head pose, gaze, and upper-body motion
from egocentric visual and kinematic observations. We propose a
\textit{Visuomotor Coordination Representation} (VCR) that learns structured
temporal dependencies across these multimodal signals. We extend a
diffusion-based motion modeling framework that integrates egocentric vision and
kinematic sequences, enabling temporally coherent and accurate visuomotor
predictions. Our approach is evaluated on the large-scale EgoExo4D dataset,
demonstrating strong generalization across diverse real-world activities. Our
results highlight the importance of multimodal integration in understanding
visuomotor coordination, contributing to research in visuomotor learning and
human behavior modeling.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 03:46:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jia",
"Wenqi",
""
],
[
"Lai",
"Bolin",
""
],
[
"Liu",
"Miao",
""
],
[
"Xu",
"Danfei",
""
],
[
"Rehg",
"James M.",
""
]
] | TITLE: Learning Predictive Visuomotor Coordination
ABSTRACT: Understanding and predicting human visuomotor coordination is crucial for
applications in robotics, human-computer interaction, and assistive
technologies. This work introduces a forecasting-based task for visuomotor
modeling, where the goal is to predict head pose, gaze, and upper-body motion
from egocentric visual and kinematic observations. We propose a
\textit{Visuomotor Coordination Representation} (VCR) that learns structured
temporal dependencies across these multimodal signals. We extend a
diffusion-based motion modeling framework that integrates egocentric vision and
kinematic sequences, enabling temporally coherent and accurate visuomotor
predictions. Our approach is evaluated on the large-scale EgoExo4D dataset,
demonstrating strong generalization across diverse real-world activities. Our
results highlight the importance of multimodal integration in understanding
visuomotor coordination, contributing to research in visuomotor learning and
human behavior modeling.
|
2503.23307 | Cong Wei | Cong Wei, Bo Sun, Haoyu Ma, Ji Hou, Felix Juefei-Xu, Zecheng He,
Xiaoliang Dai, Luxin Zhang, Kunpeng Li, Tingbo Hou, Animesh Sinha, Peter
Vajda, Wenhu Chen | MoCha: Towards Movie-Grade Talking Character Synthesis | https://congwei1230.github.io/MoCha/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in video generation have achieved impressive motion
realism, yet they often overlook character-driven storytelling, a crucial task
for automated film and animation generation. We introduce Talking Characters, a
more realistic task to generate talking character animations directly from
speech and text. Unlike talking head generation, Talking Characters aims to generate the
full portrait of one or more characters beyond the facial region. In this
paper, we propose MoCha, the first of its kind to generate talking characters.
To ensure precise synchronization between video and speech, we propose a
speech-video window attention mechanism that effectively aligns speech and
video tokens. To address the scarcity of large-scale speech-labeled video
datasets, we introduce a joint training strategy that leverages both
speech-labeled and text-labeled video data, significantly improving
generalization across diverse character actions. We also design structured
prompt templates with character tags, enabling, for the first time,
multi-character conversation with turn-based dialogue, allowing AI-generated
characters to engage in context-aware conversations with cinematic coherence.
Extensive qualitative and quantitative evaluations, including human preference
studies and benchmark comparisons, demonstrate that MoCha sets a new standard
for AI-generated cinematic storytelling, achieving superior realism,
expressiveness, controllability and generalization.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 04:22:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wei",
"Cong",
""
],
[
"Sun",
"Bo",
""
],
[
"Ma",
"Haoyu",
""
],
[
"Hou",
"Ji",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"He",
"Zecheng",
""
],
[
"Dai",
"Xiaoliang",
""
],
[
"Zhang",
"Luxin",
""
],
[
"Li",
"Kunpeng",
""
],
[
"Hou",
"Tingbo",
""
],
[
"Sinha",
"Animesh",
""
],
[
"Vajda",
"Peter",
""
],
[
"Chen",
"Wenhu",
""
]
] | TITLE: MoCha: Towards Movie-Grade Talking Character Synthesis
ABSTRACT: Recent advancements in video generation have achieved impressive motion
realism, yet they often overlook character-driven storytelling, a crucial task
for automated film and animation generation. We introduce Talking Characters, a
more realistic task to generate talking character animations directly from
speech and text. Unlike talking head generation, Talking Characters aims to generate the
full portrait of one or more characters beyond the facial region. In this
paper, we propose MoCha, the first of its kind to generate talking characters.
To ensure precise synchronization between video and speech, we propose a
speech-video window attention mechanism that effectively aligns speech and
video tokens. To address the scarcity of large-scale speech-labeled video
datasets, we introduce a joint training strategy that leverages both
speech-labeled and text-labeled video data, significantly improving
generalization across diverse character actions. We also design structured
prompt templates with character tags, enabling, for the first time,
multi-character conversation with turn-based dialogue, allowing AI-generated
characters to engage in context-aware conversations with cinematic coherence.
Extensive qualitative and quantitative evaluations, including human preference
studies and benchmark comparisons, demonstrate that MoCha sets a new standard
for AI-generated cinematic storytelling, achieving superior realism,
expressiveness, controllability and generalization.
|
2503.23312 | Hyunsik Jeon | Hyunsik Jeon, Satoshi Koide, Yu Wang, Zhankui He, Julian McAuley | LaViC: Adapting Large Vision-Language Models to Visually-Aware
Conversational Recommendation | null | null | null | null | cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Conversational recommender systems engage users in dialogues to refine their
needs and provide more personalized suggestions. Although textual information
suffices for many domains, visually driven categories such as fashion or home
decor potentially require detailed visual information related to color, style,
or design. To address this challenge, we propose LaViC (Large Vision-Language
Conversational Recommendation Framework), a novel approach that integrates
compact image representations into dialogue-based recommendation systems. LaViC
leverages a large vision-language model in a two-stage process: (1) visual
knowledge self-distillation, which condenses product images from hundreds of
tokens into a small set of visual tokens in a self-distillation manner,
significantly reducing computational overhead, and (2) recommendation prompt
tuning, which enables the model to incorporate both dialogue context and
distilled visual tokens, providing a unified mechanism for capturing textual
and visual features. To support rigorous evaluation of visually-aware
conversational recommendation, we construct a new dataset by aligning Reddit
conversations with Amazon product listings across multiple visually oriented
categories (e.g., fashion, beauty, and home). This dataset covers realistic
user queries and product appearances in domains where visual details are
crucial. Extensive experiments demonstrate that LaViC significantly outperforms
text-only conversational recommendation methods and open-source vision-language
baselines. Moreover, LaViC achieves competitive or superior accuracy compared
to prominent proprietary baselines (e.g., GPT-3.5-turbo, GPT-4o-mini, and
GPT-4o), demonstrating the necessity of explicitly using visual data for
capturing product attributes and showing the effectiveness of our
vision-language integration. Our code and dataset are available at
https://github.com/jeon185/LaViC.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 04:44:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jeon",
"Hyunsik",
""
],
[
"Koide",
"Satoshi",
""
],
[
"Wang",
"Yu",
""
],
[
"He",
"Zhankui",
""
],
[
"McAuley",
"Julian",
""
]
] | TITLE: LaViC: Adapting Large Vision-Language Models to Visually-Aware
Conversational Recommendation
ABSTRACT: Conversational recommender systems engage users in dialogues to refine their
needs and provide more personalized suggestions. Although textual information
suffices for many domains, visually driven categories such as fashion or home
decor potentially require detailed visual information related to color, style,
or design. To address this challenge, we propose LaViC (Large Vision-Language
Conversational Recommendation Framework), a novel approach that integrates
compact image representations into dialogue-based recommendation systems. LaViC
leverages a large vision-language model in a two-stage process: (1) visual
knowledge self-distillation, which condenses product images from hundreds of
tokens into a small set of visual tokens in a self-distillation manner,
significantly reducing computational overhead, and (2) recommendation prompt
tuning, which enables the model to incorporate both dialogue context and
distilled visual tokens, providing a unified mechanism for capturing textual
and visual features. To support rigorous evaluation of visually-aware
conversational recommendation, we construct a new dataset by aligning Reddit
conversations with Amazon product listings across multiple visually oriented
categories (e.g., fashion, beauty, and home). This dataset covers realistic
user queries and product appearances in domains where visual details are
crucial. Extensive experiments demonstrate that LaViC significantly outperforms
text-only conversational recommendation methods and open-source vision-language
baselines. Moreover, LaViC achieves competitive or superior accuracy compared
to prominent proprietary baselines (e.g., GPT-3.5-turbo, GPT-4o-mini, and
GPT-4o), demonstrating the necessity of explicitly using visual data for
capturing product attributes and showing the effectiveness of our
vision-language integration. Our code and dataset are available at
https://github.com/jeon185/LaViC.
|
2503.23314 | Wonduk Seo | Wonduk Seo, Juhyeon Lee, Yi Bu | SPIO: Ensemble and Selective Strategies via LLM-Based Multi-Agent
Planning in Automated Data Science | Under Review | null | null | null | cs.AI cs.CL cs.LG cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have revolutionized automated data analytics and
machine learning by enabling dynamic reasoning and adaptability. While recent
approaches have advanced multi-stage pipelines through multi-agent systems,
they typically rely on rigid, single-path workflows that limit the exploration
and integration of diverse strategies, often resulting in suboptimal
predictions. To address these challenges, we propose SPIO (Sequential Plan
Integration and Optimization), a novel framework that leverages LLM-driven
decision-making to orchestrate multi-agent planning across four key modules:
data preprocessing, feature engineering, modeling, and hyperparameter tuning.
In each module, dedicated planning agents independently generate candidate
strategies that cascade into subsequent stages, fostering comprehensive
exploration. A plan optimization agent refines these strategies by suggesting
several optimized plans. We further introduce two variants: SPIO-S, which
selects a single best solution path as determined by the LLM, and SPIO-E, which
selects the top k candidate plans and ensembles them to maximize predictive
performance. Extensive experiments on Kaggle and OpenML datasets demonstrate
that SPIO significantly outperforms state-of-the-art methods, providing a
robust and scalable solution for automated data science tasks.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 04:45:32 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Seo",
"Wonduk",
""
],
[
"Lee",
"Juhyeon",
""
],
[
"Bu",
"Yi",
""
]
] | TITLE: SPIO: Ensemble and Selective Strategies via LLM-Based Multi-Agent
Planning in Automated Data Science
ABSTRACT: Large Language Models (LLMs) have revolutionized automated data analytics and
machine learning by enabling dynamic reasoning and adaptability. While recent
approaches have advanced multi-stage pipelines through multi-agent systems,
they typically rely on rigid, single-path workflows that limit the exploration
and integration of diverse strategies, often resulting in suboptimal
predictions. To address these challenges, we propose SPIO (Sequential Plan
Integration and Optimization), a novel framework that leverages LLM-driven
decision-making to orchestrate multi-agent planning across four key modules:
data preprocessing, feature engineering, modeling, and hyperparameter tuning.
In each module, dedicated planning agents independently generate candidate
strategies that cascade into subsequent stages, fostering comprehensive
exploration. A plan optimization agent refines these strategies by suggesting
several optimized plans. We further introduce two variants: SPIO-S, which
selects a single best solution path as determined by the LLM, and SPIO-E, which
selects the top k candidate plans and ensembles them to maximize predictive
performance. Extensive experiments on Kaggle and OpenML datasets demonstrate
that SPIO significantly outperforms state-of-the-art methods, providing a
robust and scalable solution for automated data science tasks.
|
2503.23329 | Hui Li | Hui Li, Ante Wang, kunquan li, Zhihao Wang, Liang Zhang, Delai Qiu,
Qingsong Liu, Jinsong Su | A Multi-Agent Framework with Automated Decision Rule Optimization for
Cross-Domain Misinformation Detection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Misinformation spans various domains, but detection methods trained on
specific domains often perform poorly when applied to others. With the rapid
development of Large Language Models (LLMs), researchers have begun to utilize
LLMs for cross-domain misinformation detection. However, existing LLM-based
methods often fail to adequately analyze news in the target domain, limiting
their detection capabilities. More importantly, these methods typically rely on
manually designed decision rules, which are limited by domain knowledge and
expert experience, thus limiting the generalizability of decision rules to
different domains. To address these issues, we propose a MultiAgent Framework
for cross-domain misinformation detection with Automated Decision Rule
Optimization (MARO). Under this framework, we first employ multiple expert
agents to analyze target-domain news. Subsequently, we introduce a
question-reflection mechanism that guides expert agents to facilitate
higher-quality analysis. Furthermore, we propose a decision rule optimization
approach based on carefully-designed cross-domain validation tasks to
iteratively enhance the effectiveness of decision rules in different domains.
Experimental results and in-depth analysis on commonly used datasets demonstrate
that MARO achieves significant improvements over existing methods.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 06:08:33 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Hui",
""
],
[
"Wang",
"Ante",
""
],
[
"li",
"kunquan",
""
],
[
"Wang",
"Zhihao",
""
],
[
"Zhang",
"Liang",
""
],
[
"Qiu",
"Delai",
""
],
[
"Liu",
"Qingsong",
""
],
[
"Su",
"Jinsong",
""
]
] | TITLE: A Multi-Agent Framework with Automated Decision Rule Optimization for
Cross-Domain Misinformation Detection
ABSTRACT: Misinformation spans various domains, but detection methods trained on
specific domains often perform poorly when applied to others. With the rapid
development of Large Language Models (LLMs), researchers have begun to utilize
LLMs for cross-domain misinformation detection. However, existing LLM-based
methods often fail to adequately analyze news in the target domain, limiting
their detection capabilities. More importantly, these methods typically rely on
manually designed decision rules, which are limited by domain knowledge and
expert experience, thus limiting the generalizability of decision rules to
different domains. To address these issues, we propose a MultiAgent Framework
for cross-domain misinformation detection with Automated Decision Rule
Optimization (MARO). Under this framework, we first employ multiple expert
agents to analyze target-domain news. Subsequently, we introduce a
question-reflection mechanism that guides expert agents to facilitate
higher-quality analysis. Furthermore, we propose a decision rule optimization
approach based on carefully-designed cross-domain validation tasks to
iteratively enhance the effectiveness of decision rules in different domains.
Experimental results and in-depth analysis on commonly used datasets demonstrate
that MARO achieves significant improvements over existing methods.
|
2503.23330 | Jihao Yin | Hongxiang Jiang, Jihao Yin, Qixiong Wang, Jiaqi Feng, Guo Chen | EagleVision: Object-level Attribute Multimodal LLM for Remote Sensing | Under Review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in multimodal large language models (MLLMs) have demonstrated
impressive results in various visual tasks. However, in remote sensing (RS),
the high image resolution and the small proportion of objects pose challenges to existing
MLLMs, which struggle with object-centric tasks, particularly in precise
localization and fine-grained attribute description for each object. These RS
MLLMs have not yet surpassed classical visual perception models, as they only
provide coarse image understanding, leading to limited gains in real-world
scenarios. To address this gap, we establish EagleVision, an MLLM tailored for
remote sensing that excels in object detection and attribute comprehension.
Equipped with the Attribute Disentangle module, EagleVision learns
disentangled vision tokens to express distinct attributes. To support
object-level visual-language alignment, we construct EVAttrs-95K, the first
large-scale object attribute understanding dataset in RS for instruction
tuning, along with a novel evaluation benchmark, EVBench. EagleVision achieves
state-of-the-art performance on both fine-grained object detection and object
attribute understanding tasks, highlighting the mutual promotion between
detection and understanding capabilities in MLLMs. The code, model, data, and
demo will be available at https://github.com/XiangTodayEatsWhat/EagleVision.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 06:13:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jiang",
"Hongxiang",
""
],
[
"Yin",
"Jihao",
""
],
[
"Wang",
"Qixiong",
""
],
[
"Feng",
"Jiaqi",
""
],
[
"Chen",
"Guo",
""
]
] | TITLE: EagleVision: Object-level Attribute Multimodal LLM for Remote Sensing
ABSTRACT: Recent advances in multimodal large language models (MLLMs) have demonstrated
impressive results in various visual tasks. However, in remote sensing (RS),
the high image resolution and the small proportion of objects pose challenges to existing
MLLMs, which struggle with object-centric tasks, particularly in precise
localization and fine-grained attribute description for each object. These RS
MLLMs have not yet surpassed classical visual perception models, as they only
provide coarse image understanding, leading to limited gains in real-world
scenarios. To address this gap, we establish EagleVision, an MLLM tailored for
remote sensing that excels in object detection and attribute comprehension.
Equipped with the Attribute Disentangle module, EagleVision learns
disentangled vision tokens to express distinct attributes. To support
object-level visual-language alignment, we construct EVAttrs-95K, the first
large-scale object attribute understanding dataset in RS for instruction
tuning, along with a novel evaluation benchmark, EVBench. EagleVision achieves
state-of-the-art performance on both fine-grained object detection and object
attribute understanding tasks, highlighting the mutual promotion between
detection and understanding capabilities in MLLMs. The code, model, data, and
demo will be available at https://github.com/XiangTodayEatsWhat/EagleVision.
|
2503.23335 | Loc Hoang Tran | Loc Hoang Tran | Solve sparse PCA problem by employing Hamiltonian system and leapfrog
method | 2 tables | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Principal Component Analysis (PCA) is a widely utilized technique for
dimensionality reduction; however, its inherent lack of
interpretability, stemming from dense linear combinations of all features, limits
its applicability in many domains. In this paper, we propose a novel sparse PCA
algorithm that imposes sparsity through a smooth L1 penalty and leverages a
Hamiltonian formulation solved via geometric integration techniques.
Specifically, we implement two distinct numerical methods-one based on the
Proximal Gradient (ISTA) approach and another employing a leapfrog
(fourth-order Runge-Kutta) scheme-to minimize the energy function that balances
variance maximization with sparsity enforcement. To extract a subset of sparse
principal components, we further incorporate a deflation technique and
subsequently transform the original high-dimensional face data into a
lower-dimensional feature space. Experimental evaluations on a face recognition
dataset, using both k-nearest neighbor and kernel ridge regression
classifiers, demonstrate that the proposed sparse PCA methods consistently
achieve higher classification accuracy than conventional PCA. Future research
will extend this framework to integrate sparse PCA with modern deep learning
architectures for multimodal recognition tasks.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 06:39:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tran",
"Loc Hoang",
""
]
] | TITLE: Solve sparse PCA problem by employing Hamiltonian system and leapfrog
method
ABSTRACT: Principal Component Analysis (PCA) is a widely utilized technique for
dimensionality reduction; however, its inherent lack of
interpretability, stemming from dense linear combinations of all features, limits
its applicability in many domains. In this paper, we propose a novel sparse PCA
algorithm that imposes sparsity through a smooth L1 penalty and leverages a
Hamiltonian formulation solved via geometric integration techniques.
Specifically, we implement two distinct numerical methods, one based on the
Proximal Gradient (ISTA) approach and another employing a leapfrog
(fourth-order Runge-Kutta) scheme, to minimize the energy function that balances
variance maximization with sparsity enforcement. To extract a subset of sparse
principal components, we further incorporate a deflation technique and
subsequently transform the original high-dimensional face data into a
lower-dimensional feature space. Experimental evaluations on a face recognition
dataset, using both k-nearest neighbor and kernel ridge regression
classifiers, demonstrate that the proposed sparse PCA methods consistently
achieve higher classification accuracy than conventional PCA. Future research
will extend this framework to integrate sparse PCA with modern deep learning
architectures for multimodal recognition tasks.
|
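A minimal Python/NumPy sketch of the leapfrog-based sparse PCA idea described in abstract 2503.23335 above. The smooth L1 surrogate, the unit-norm projection, the momentum-reset damping heuristic, and all function names, step sizes, and default values are illustrative assumptions rather than the paper's exact algorithm.

import numpy as np

def smooth_l1_grad(w, eps=1e-4):
    # Gradient of the smooth L1 surrogate sum_i sqrt(w_i^2 + eps).
    return w / np.sqrt(w ** 2 + eps)

def sparse_pc_leapfrog(S, lam=0.5, step=1e-2, n_steps=2000, eps=1e-4, seed=0):
    # One sparse loading vector via leapfrog steps on H(w, p) = E(w) + ||p||^2 / 2,
    # where E(w) = -w^T S w + lam * sum_i sqrt(w_i^2 + eps) and S is the covariance.
    rng = np.random.default_rng(seed)
    d = S.shape[0]
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    p = np.zeros(d)

    def grad_E(w):
        return -2.0 * S @ w + lam * smooth_l1_grad(w, eps)

    def energy(w):
        return -w @ S @ w + lam * np.sum(np.sqrt(w ** 2 + eps))

    E_prev = energy(w)
    for _ in range(n_steps):
        p_half = p - 0.5 * step * grad_E(w)   # leapfrog (velocity Verlet) half kick
        w = w + step * p_half                 # drift
        w /= np.linalg.norm(w)                # keep the loading on the unit sphere
        p = p_half - 0.5 * step * grad_E(w)   # second half kick
        E_new = energy(w)
        if E_new > E_prev:                    # crude damping: reset momentum when energy rises
            p = np.zeros(d)
        E_prev = E_new
    return w

def sparse_pca(X, n_components=3, **kwargs):
    # Extract several sparse components, deflating the covariance after each one.
    X = X - X.mean(axis=0)
    S = X.T @ X / (len(X) - 1)
    comps = []
    for k in range(n_components):
        w = sparse_pc_leapfrog(S, seed=k, **kwargs)
        comps.append(w)
        S = S - (w @ S @ w) * np.outer(w, w)  # Hotelling-style deflation
    return np.stack(comps)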
2503.23358 | Miaomiao Cai | Miaomiao Cai, Lei Chen, Yifan Wang, Zhiyong Cheng, Min Zhang, Meng
Wang | Graph-Structured Driven Dual Adaptation for Mitigating Popularity Bias | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Popularity bias challenges recommender systems by causing uneven
recommendation performance and amplifying the Matthew effect. Limited user-item
interactions confine unpopular items within embedding neighborhoods of few
users, leading to representation collapse and reduced model generalization.
Existing supervised alignment and reweighting methods mitigate this bias but
have key limitations: (1) ignoring inherent variability across Graph
Convolutional Networks (GCNs) layers, causing negative effects in deeper
layers; (2) reliance on fixed hyperparameters to balance item popularity,
restricting adaptability and increasing complexity.
To address these issues, we propose the Graph-Structured Dual Adaptation
Framework (GSDA). Our theoretical analysis identifies a crucial limitation of
supervised alignment methods caused by over-smoothing in GCNs. As GCN layers
deepen, popular and unpopular items increasingly lose distinctiveness,
quantified by reduced conditional entropy. This diminished distinctiveness
weakens supervised alignment effectiveness in mitigating popularity bias.
Motivated by this, GSDA captures structural and distribution characteristics
from the adjacency matrix through a dual adaptive strategy. First, a
hierarchical adaptive alignment mechanism uses the adjacency matrix's Frobenius
norm for layer-specific weight decay, countering conditional entropy reduction
effects at deeper layers. Second, a distribution-aware dynamic contrast
weighting strategy, guided by a real-time Gini coefficient, removes dependence
on fixed hyperparameters, enabling adaptability to diverse data. Experiments on
three benchmark datasets demonstrate GSDA significantly alleviates popularity
bias and consistently outperforms state-of-the-art recommendation methods.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:26:29 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Cai",
"Miaomiao",
""
],
[
"Chen",
"Lei",
""
],
[
"Wang",
"Yifan",
""
],
[
"Cheng",
"Zhiyong",
""
],
[
"Zhang",
"Min",
""
],
[
"Wang",
"Meng",
""
]
] | TITLE: Graph-Structured Driven Dual Adaptation for Mitigating Popularity Bias
ABSTRACT: Popularity bias challenges recommender systems by causing uneven
recommendation performance and amplifying the Matthew effect. Limited user-item
interactions confine unpopular items within embedding neighborhoods of few
users, leading to representation collapse and reduced model generalization.
Existing supervised alignment and reweighting methods mitigate this bias but
have key limitations: (1) ignoring inherent variability across Graph
Convolutional Networks (GCNs) layers, causing negative effects in deeper
layers; (2) reliance on fixed hyperparameters to balance item popularity,
restricting adaptability and increasing complexity.
To address these issues, we propose the Graph-Structured Dual Adaptation
Framework (GSDA). Our theoretical analysis identifies a crucial limitation of
supervised alignment methods caused by over-smoothing in GCNs. As GCN layers
deepen, popular and unpopular items increasingly lose distinctiveness,
quantified by reduced conditional entropy. This diminished distinctiveness
weakens supervised alignment effectiveness in mitigating popularity bias.
Motivated by this, GSDA captures structural and distribution characteristics
from the adjacency matrix through a dual adaptive strategy. First, a
hierarchical adaptive alignment mechanism uses the adjacency matrix's Frobenius
norm for layer-specific weight decay, countering conditional entropy reduction
effects at deeper layers. Second, a distribution-aware dynamic contrast
weighting strategy, guided by a real-time Gini coefficient, removes dependence
on fixed hyperparameters, enabling adaptability to diverse data. Experiments on
three benchmark datasets demonstrate GSDA significantly alleviates popularity
bias and consistently outperforms state-of-the-art recommendation methods.
|
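The GSDA abstract above (2503.23358) mentions a distribution-aware contrast weight driven by a real-time Gini coefficient of item popularity. The Python sketch below only illustrates that single ingredient; the linear mapping from Gini value to loss weight and the w_min/w_max bounds are hypothetical choices, not the paper's formula.

import numpy as np

def gini(popularity):
    # Gini coefficient of a non-negative popularity vector: 0 for a uniform
    # distribution, approaching 1 as interactions concentrate on a few items.
    x = np.sort(np.asarray(popularity, dtype=float))
    n = len(x)
    total = x.sum()
    if total == 0:
        return 0.0
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * total) - (n + 1.0) / n

def contrast_weight(popularity, w_min=0.1, w_max=1.0):
    # Map the current Gini coefficient of item popularity to a contrastive-loss
    # weight: the more skewed the interactions, the stronger the debiasing term.
    g = gini(popularity)
    return w_min + (w_max - w_min) * g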
2503.23359 | Linfeng Tang | Linfeng Tang, Yeda Wang, Meiqi Gong, Zizhuo Li, Yuxin Deng, Xunpeng
Yi, Chunyu Li, Han Xu, Hao Zhang, Jiayi Ma | VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal
Video Fusion and Restoration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to images, videos better align with real-world acquisition scenarios
and possess valuable temporal cues. However, existing multi-sensor fusion
research predominantly integrates complementary context from multiple images
rather than videos. This primarily stems from two factors: 1) the scarcity of
large-scale multi-sensor video datasets, limiting research in video fusion, and
2) the inherent difficulty of jointly modeling spatial and temporal
dependencies in a unified framework. This paper addresses both of these
dilemmas. First, we construct M3SVD, a benchmark dataset with $220$ temporally
synchronized and spatially registered infrared-visible video pairs comprising
153,797 frames, filling the data gap for the video fusion community. Second,
we propose VideoFusion, a multi-modal video fusion model that fully exploits
cross-modal complementarity and temporal dynamics to generate spatio-temporally
coherent videos from (potentially degraded) multi-modal inputs. Specifically,
1) a differential reinforcement module is developed for cross-modal information
interaction and enhancement, 2) a complete modality-guided fusion strategy is
employed to adaptively integrate multi-modal features, and 3) a bi-temporal
co-attention mechanism is devised to dynamically aggregate forward-backward
temporal contexts to reinforce cross-frame feature representations. Extensive
experiments reveal that VideoFusion outperforms existing image-oriented fusion
paradigms in sequential scenarios, effectively mitigating temporal
inconsistency and interference.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:27:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tang",
"Linfeng",
""
],
[
"Wang",
"Yeda",
""
],
[
"Gong",
"Meiqi",
""
],
[
"Li",
"Zizhuo",
""
],
[
"Deng",
"Yuxin",
""
],
[
"Yi",
"Xunpeng",
""
],
[
"Li",
"Chunyu",
""
],
[
"Xu",
"Han",
""
],
[
"Zhang",
"Hao",
""
],
[
"Ma",
"Jiayi",
""
]
] | TITLE: VideoFusion: A Spatio-Temporal Collaborative Network for Multi-modal
Video Fusion and Restoration
ABSTRACT: Compared to images, videos better align with real-world acquisition scenarios
and possess valuable temporal cues. However, existing multi-sensor fusion
research predominantly integrates complementary context from multiple images
rather than videos. This primarily stems from two factors: 1) the scarcity of
large-scale multi-sensor video datasets, limiting research in video fusion, and
2) the inherent difficulty of jointly modeling spatial and temporal
dependencies in a unified framework. This paper addresses both of these
dilemmas. First, we construct M3SVD, a benchmark dataset with $220$ temporally
synchronized and spatially registered infrared-visible video pairs comprising
153,797 frames, filling the data gap for the video fusion community. Second,
we propose VideoFusion, a multi-modal video fusion model that fully exploits
cross-modal complementarity and temporal dynamics to generate spatio-temporally
coherent videos from (potentially degraded) multi-modal inputs. Specifically,
1) a differential reinforcement module is developed for cross-modal information
interaction and enhancement, 2) a complete modality-guided fusion strategy is
employed to adaptively integrate multi-modal features, and 3) a bi-temporal
co-attention mechanism is devised to dynamically aggregate forward-backward
temporal contexts to reinforce cross-frame feature representations. Extensive
experiments reveal that VideoFusion outperforms existing image-oriented fusion
paradigms in sequential scenarios, effectively mitigating temporal
inconsistency and interference.
|
2503.23360 | Guanhua Chen | Guanhua Chen, Yutong Yao, Ci-Jun Gao, Lidia S. Chao, Feng Wan, Derek
F. Wong | Not All LoRA Parameters Are Essential: Insights on Inference Necessity | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on LoRA primarily focuses on minimizing the number of
fine-tuned parameters or optimizing its architecture. However, the necessity of
all fine-tuned LoRA layers during inference remains underexplored. In this
paper, we investigate the contribution of each LoRA layer to the model's
ability to predict the ground truth and hypothesize that lower-layer LoRA
modules play a more critical role in model reasoning and understanding. To
address this, we propose a simple yet effective method to enhance the
performance of large language models (LLMs) fine-tuned with LoRA. Specifically,
we identify a ``boundary layer'' that distinguishes essential LoRA layers by
analyzing a small set of validation samples. During inference, we drop all LoRA
layers beyond this boundary. We evaluate our approach on three strong baselines
across four widely-used text generation datasets. Our results demonstrate
consistent and significant improvements, underscoring the effectiveness of
selectively retaining critical LoRA layers during inference.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:33:04 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Guanhua",
""
],
[
"Yao",
"Yutong",
""
],
[
"Gao",
"Ci-Jun",
""
],
[
"Chao",
"Lidia S.",
""
],
[
"Wan",
"Feng",
""
],
[
"Wong",
"Derek F.",
""
]
] | TITLE: Not All LoRA Parameters Are Essential: Insights on Inference Necessity
ABSTRACT: Current research on LoRA primarily focuses on minimizing the number of
fine-tuned parameters or optimizing its architecture. However, the necessity of
all fine-tuned LoRA layers during inference remains underexplored. In this
paper, we investigate the contribution of each LoRA layer to the model's
ability to predict the ground truth and hypothesize that lower-layer LoRA
modules play a more critical role in model reasoning and understanding. To
address this, we propose a simple yet effective method to enhance the
performance of large language models (LLMs) fine-tuned with LoRA. Specifically,
we identify a ``boundary layer'' that distinguishes essential LoRA layers by
analyzing a small set of validation samples. During inference, we drop all LoRA
layers beyond this boundary. We evaluate our approach on three strong baselines
across four widely-used text generation datasets. Our results demonstrate
consistent and significant improvements, underscoring the effectiveness of
selectively retaining critical LoRA layers during inference.
|
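The abstract above (2503.23360) identifies a boundary layer from a few validation samples and drops all LoRA layers beyond it at inference. Below is a hedged PyTorch sketch of that idea; the name-based adapter lookup, the `scaling` attribute, and the tolerance-based selection rule are assumptions for illustration and do not reproduce the paper's exact procedure or any particular LoRA library's API.

import torch

def lora_modules_by_depth(model):
    # Collect LoRA adapter modules in registration order, assuming (hypothetically)
    # that their names contain "lora", they expose a float `scaling` attribute, and
    # registration order follows layer depth.
    return [m for name, m in model.named_modules()
            if "lora" in name.lower() and hasattr(m, "scaling")]

@torch.no_grad()
def drop_lora_beyond(model, boundary):
    # Disable every LoRA adapter deeper than `boundary` by zeroing its scaling.
    for i, m in enumerate(lora_modules_by_depth(model)):
        if i > boundary:
            m._saved_scaling = m.scaling
            m.scaling = 0.0

@torch.no_grad()
def restore_lora(model):
    for m in lora_modules_by_depth(model):
        if hasattr(m, "_saved_scaling"):
            m.scaling = m._saved_scaling

@torch.no_grad()
def find_boundary(model, val_loader, loss_fn, tolerance=0.01):
    # Pick the smallest boundary whose validation loss stays within `tolerance` of
    # the fully adapted model; a simple stand-in for the paper's selection rule.
    def val_loss():
        total, n = 0.0, 0
        for x, y in val_loader:
            total += loss_fn(model(x), y).item() * len(y)
            n += len(y)
        return total / n

    n_adapters = len(lora_modules_by_depth(model))
    full_loss = val_loss()
    for b in range(n_adapters):
        drop_lora_beyond(model, b)
        if val_loss() <= full_loss + tolerance:
            return b                  # keep adapters 0..b, leave deeper ones disabled
        restore_lora(model)
    return n_adapters - 1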
2503.23362 | Jia-Chen Zhang | Jia-Chen Zhang, Yu-Jie Xiong, Xi-He Qiu, Chun-Ming Xia and Fei Dai | Mixture of Routers | 10 pages,4 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Supervised fine-tuning (SFT) is a milestone in aligning large language models
with human instructions and adapting them to downstream tasks. In particular,
Low-Rank Adaptation (LoRA) has gained widespread attention due to its parameter
efficiency. However, its impact on improving the performance of large models
remains limited. Recent studies suggest that combining LoRA with
Mixture-of-Experts (MoE) can significantly enhance fine-tuning performance. MoE
adapts to the diversity and complexity of datasets by dynamically selecting the
most suitable experts, thereby improving task accuracy and efficiency. Despite
impressive results, recent studies reveal issues in the MoE routing mechanism,
such as incorrect assignments and imbalanced expert allocation. Inspired by the
principles of Redundancy and Fault Tolerance Theory, we integrate
the concept of Mixture of Experts into the routing mechanism and propose an
efficient fine-tuning method called Mixture of Routers (MoR). It employs
multiple sub-routers for joint selection and uses a learnable main router to
determine the weights of the sub-routers. The results show that MoR outperforms
baseline models on most tasks, achieving an average performance improvement of
1%. MoR can serve as a plug-and-play, parameter-efficient fine-tuning method
suitable for a wide range of applications. Our code is available here:
https://anonymous.4open.science/r/MoR-DFC6.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:39:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhang",
"Jia-Chen",
""
],
[
"Xiong",
"Yu-Jie",
""
],
[
"Qiu",
"Xi-He",
""
],
[
"Xia",
"Chun-Ming",
""
],
[
"Dai",
"Fei",
""
]
] | TITLE: Mixture of Routers
ABSTRACT: Supervised fine-tuning (SFT) is a milestone in aligning large language models
with human instructions and adapting them to downstream tasks. In particular,
Low-Rank Adaptation (LoRA) has gained widespread attention due to its parameter
efficiency. However, its impact on improving the performance of large models
remains limited. Recent studies suggest that combining LoRA with
Mixture-of-Experts (MoE) can significantly enhance fine-tuning performance. MoE
adapts to the diversity and complexity of datasets by dynamically selecting the
most suitable experts, thereby improving task accuracy and efficiency. Despite
impressive results, recent studies reveal issues in the MoE routing mechanism,
such as incorrect assignments and imbalanced expert allocation. Inspired by the
principles of Redundancy and Fault Tolerance Theory, we integrate
the concept of Mixture of Experts into the routing mechanism and propose an
efficient fine-tuning method called Mixture of Routers (MoR). It employs
multiple sub-routers for joint selection and uses a learnable main router to
determine the weights of the sub-routers. The results show that MoR outperforms
baseline models on most tasks, achieving an average performance improvement of
1%. MoR can serve as a plug-and-play, parameter-efficient fine-tuning method
suitable for a wide range of applications. Our code is available here:
https://anonymous.4open.science/r/MoR-DFC6.
|
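A self-contained PyTorch sketch of the joint routing idea in the MoR abstract above (2503.23362): several sub-routers score the experts and a learnable main router weights the sub-routers. The expert architecture, the top-k rule, and all dimensions are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfRouters(nn.Module):
    # Joint expert selection: each sub-router scores all experts, and the main
    # router produces per-token weights over the sub-routers' scores.
    def __init__(self, d_model, n_experts=8, n_sub_routers=3, top_k=2):
        super().__init__()
        self.sub_routers = nn.ModuleList(
            [nn.Linear(d_model, n_experts) for _ in range(n_sub_routers)])
        self.main_router = nn.Linear(d_model, n_sub_routers)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                           nn.Linear(4 * d_model, d_model))
             for _ in range(n_experts)])
        self.top_k = top_k

    def forward(self, x):                                                    # x: (B, d_model)
        sub_logits = torch.stack([r(x) for r in self.sub_routers], dim=1)    # (B, S, E)
        sub_weights = F.softmax(self.main_router(x), dim=-1)                 # (B, S)
        logits = (sub_weights.unsqueeze(-1) * sub_logits).sum(dim=1)         # (B, E)
        gate = F.softmax(logits, dim=-1)
        top_vals, top_idx = gate.topk(self.top_k, dim=-1)
        top_vals = top_vals / top_vals.sum(dim=-1, keepdim=True)             # renormalise gates
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] = out[mask] + top_vals[mask, k:k + 1] * expert(x[mask])
        return out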
2503.23363 | Jeong Jeong | Jiwon Jeong, Hyeju Jang, Hogun Park | Large Language Models Are Better Logical Fallacy Reasoners with
Counterargument, Explanation, and Goal-Aware Prompt Formulation | Accepted to NAACL 2025 Findings | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancement of Large Language Models (LLMs) has greatly improved our
ability to process complex language. However, accurately detecting logical
fallacies remains a significant challenge. This study presents a novel and
effective prompt formulation approach for logical fallacy detection, applicable
in both supervised (fine-tuned) and unsupervised (zero-shot) settings. Our
method enriches the input text by incorporating implicit contextual information --
counterarguments, explanations, and goals -- which we query for validity within
the context of the argument. We then rank these queries based on confidence
scores to inform classification. We evaluate our approach across multiple
datasets from 5 domains, covering 29 distinct fallacy types, using models from
the GPT and LLaMA series. The results show substantial improvements over
state-of-the-art models, with F1 score increases of up to 0.60 in zero-shot
settings and up to 0.45 in fine-tuned models. Extensive analyses further
illustrate why and how our method excels.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:41:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jeong",
"Jiwon",
""
],
[
"Jang",
"Hyeju",
""
],
[
"Park",
"Hogun",
""
]
] | TITLE: Large Language Models Are Better Logical Fallacy Reasoners with
Counterargument, Explanation, and Goal-Aware Prompt Formulation
ABSTRACT: The advancement of Large Language Models (LLMs) has greatly improved our
ability to process complex language. However, accurately detecting logical
fallacies remains a significant challenge. This study presents a novel and
effective prompt formulation approach for logical fallacy detection, applicable
in both supervised (fine-tuned) and unsupervised (zero-shot) settings. Our
method enriches the input text by incorporating implicit contextual information --
counterarguments, explanations, and goals -- which we query for validity within
the context of the argument. We then rank these queries based on confidence
scores to inform classification. We evaluate our approach across multiple
datasets from 5 domains, covering 29 distinct fallacy types, using models from
the GPT and LLaMA series. The results show substantial improvements over
state-of-the-art models, with F1 score increases of up to 0.60 in zero-shot
settings and up to 0.45 in fine-tuned models. Extensive analyses further
illustrate why and how our method excels.
|
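The prompt-formulation abstract above (2503.23363) enriches an argument with counterarguments, explanations, and goals, queries each for validity, and ranks the queries by confidence. The Python sketch below shows one plausible structure; the templates, the confidence_fn and classify_fn callables, and their signatures are hypothetical stand-ins for whatever LLM interface is used, not the paper's actual prompts.

from typing import Callable, Dict, List

CONTEXT_KINDS = ("counterargument", "explanation", "goal")

def build_queries(argument: str, contexts: Dict[str, str]) -> List[str]:
    # Wrap each piece of implicit context into a validity query about the argument.
    return [
        f"Argument: {argument}\n"
        f"{kind.capitalize()}: {contexts[kind]}\n"
        f"Question: Is this {kind} valid in the context of the argument? Answer Yes or No."
        for kind in CONTEXT_KINDS if kind in contexts
    ]

def classify_fallacy(argument: str,
                     contexts: Dict[str, str],
                     confidence_fn: Callable[[str], float],
                     classify_fn: Callable[[str, List[str]], str]) -> str:
    # Rank the enriched queries by the model's confidence (confidence_fn is assumed
    # to return, e.g., a normalised log-probability for a "Yes" answer), then hand
    # the ranked evidence to a final fallacy-classification prompt via classify_fn.
    ranked = sorted(build_queries(argument, contexts), key=confidence_fn, reverse=True)
    return classify_fn(argument, ranked)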
2503.23365 | Zhangcun Yan | Zhangcun Yan, Jianqing Li, Peng Hang, Jian Sun | OnSiteVRU: A High-Resolution Trajectory Dataset for High-Density
Vulnerable Road Users | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the acceleration of urbanization and the growth of transportation
demands, the safety of vulnerable road users (VRUs, such as pedestrians and
cyclists) in mixed traffic flows has become increasingly prominent,
necessitating high-precision and diverse trajectory data to support the
development and optimization of autonomous driving systems. However, existing
datasets fall short in capturing the diversity and dynamics of VRU behaviors,
making it difficult to meet the research demands of complex traffic
environments. To address this gap, this study developed the OnSiteVRU datasets,
which cover a variety of scenarios, including intersections, road segments, and
urban villages. These datasets provide trajectory data for motor vehicles,
electric bicycles, and human-powered bicycles, totaling approximately 17,429
trajectories with a precision of 0.04 seconds. The datasets integrate both
aerial-view natural driving data and onboard real-time dynamic detection data,
along with environmental information such as traffic signals, obstacles, and
real-time maps, enabling a comprehensive reconstruction of interaction events.
The results demonstrate that VRU\_Data outperforms traditional datasets in
terms of VRU density and scene coverage, offering a more comprehensive
representation of VRU behavioral characteristics. This provides critical
support for traffic flow modeling, trajectory prediction, and autonomous
driving virtual testing. The dataset is publicly available for download at:
https://www.kaggle.com/datasets/zcyan2/mixed-traffic-trajectory-dataset-in-from-shanghai.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 08:44:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Yan",
"Zhangcun",
""
],
[
"Li",
"Jianqing",
""
],
[
"Hang",
"Peng",
""
],
[
"Sun",
"Jian",
""
]
] | TITLE: OnSiteVRU: A High-Resolution Trajectory Dataset for High-Density
Vulnerable Road Users
ABSTRACT: With the acceleration of urbanization and the growth of transportation
demands, the safety of vulnerable road users (VRUs, such as pedestrians and
cyclists) in mixed traffic flows has become increasingly prominent,
necessitating high-precision and diverse trajectory data to support the
development and optimization of autonomous driving systems. However, existing
datasets fall short in capturing the diversity and dynamics of VRU behaviors,
making it difficult to meet the research demands of complex traffic
environments. To address this gap, this study developed the OnSiteVRU datasets,
which cover a variety of scenarios, including intersections, road segments, and
urban villages. These datasets provide trajectory data for motor vehicles,
electric bicycles, and human-powered bicycles, totaling approximately 17,429
trajectories with a precision of 0.04 seconds. The datasets integrate both
aerial-view natural driving data and onboard real-time dynamic detection data,
along with environmental information such as traffic signals, obstacles, and
real-time maps, enabling a comprehensive reconstruction of interaction events.
The results demonstrate that VRU\_Data outperforms traditional datasets in
terms of VRU density and scene coverage, offering a more comprehensive
representation of VRU behavioral characteristics. This provides critical
support for traffic flow modeling, trajectory prediction, and autonomous
driving virtual testing. The dataset is publicly available for download at:
https://www.kaggle.com/datasets/zcyan2/mixed-traffic-trajectory-dataset-in-from-shanghai.
|
2503.23371 | Gyeongyun Park | Jeonghyun Ko, Gyeongyun Park, Donghoon Lee, Kyunam Lee | FeRG-LLM : Feature Engineering by Reason Generation Large Language
Models | Accepted to NAACL 2025 Findings | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | One of the key tasks in machine learning for tabular data is feature
engineering. Although it is vital for improving the performance of models, it
demands considerable human expertise and deep domain knowledge, making it
a labor-intensive endeavor. To address this issue, we propose a novel framework,
\textbf{FeRG-LLM} (\textbf{Fe}ature engineering by \textbf{R}eason
\textbf{G}eneration \textbf{L}arge \textbf{L}anguage \textbf{M}odels), a large
language model designed to automatically perform feature engineering at an
8-billion-parameter scale. We have constructed two-stage conversational
dialogues that enable language models to analyze machine learning tasks and
discover new features, exhibiting their Chain-of-Thought (CoT) capabilities.
We use these dialogues to fine-tune the Llama 3.1 8B model and integrate Direct
Preference Optimization (DPO) to receive feedback that improves the quality of new
features and the model's performance. Our experiments show that FeRG-LLM
performs comparably to or better than Llama 3.1 70B on most datasets, while
using fewer resources and achieving reduced inference time. It outperforms
other studies in classification tasks and performs well in regression tasks.
Moreover, since it does not rely on cloud-hosted LLMs like GPT-4 with extra API
costs when generating features, it can be deployed locally, addressing security
concerns.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 09:07:21 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ko",
"Jeonghyun",
""
],
[
"Park",
"Gyeongyun",
""
],
[
"Lee",
"Donghoon",
""
],
[
"Lee",
"Kyunam",
""
]
] | TITLE: FeRG-LLM : Feature Engineering by Reason Generation Large Language
Models
ABSTRACT: One of the key tasks in machine learning for tabular data is feature
engineering. Although it is vital for improving the performance of models, it
demands considerable human expertise and deep domain knowledge, making it
a labor-intensive endeavor. To address this issue, we propose a novel framework,
\textbf{FeRG-LLM} (\textbf{Fe}ature engineering by \textbf{R}eason
\textbf{G}eneration \textbf{L}arge \textbf{L}anguage \textbf{M}odels), a large
language model designed to automatically perform feature engineering at an
8-billion-parameter scale. We have constructed two-stage conversational
dialogues that enable language models to analyze machine learning tasks and
discover new features, exhibiting their Chain-of-Thought (CoT) capabilities.
We use these dialogues to fine-tune the Llama 3.1 8B model and integrate Direct
Preference Optimization (DPO) to receive feedback that improves the quality of new
features and the model's performance. Our experiments show that FeRG-LLM
performs comparably to or better than Llama 3.1 70B on most datasets, while
using fewer resources and achieving reduced inference time. It outperforms
other studies in classification tasks and performs well in regression tasks.
Moreover, since it does not rely on cloud-hosted LLMs like GPT-4 with extra API
costs when generating features, it can be deployed locally, addressing security
concerns.
|
2503.23374 | Zongwei Wang | Zongwei Wang, Min Gao, Junliang Yu, Yupeng Hou, Shazia Sadiq, Hongzhi
Yin | RuleAgent: Discovering Rules for Recommendation Denoising with
Autonomous Language Agents | 11 pages, 4 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The implicit feedback (e.g., clicks) in real-world recommender systems is
often prone to severe noise caused by unintentional interactions, such as
misclicks or curiosity-driven behavior. A common approach to denoising this
feedback is manually crafting rules based on observations of training loss
patterns. However, this approach is labor-intensive and the resulting rules
often lack generalization across diverse scenarios. To overcome these
limitations, we introduce RuleAgent, a language-agent-based framework that
mimics real-world data experts to autonomously discover rules for
recommendation denoising. Unlike the high-cost process of manual rule mining,
RuleAgent offers rapid and dynamic rule discovery, ensuring adaptability to
evolving data and varying scenarios. To achieve this, RuleAgent is equipped
with tailored profile, memory, planning, and action modules and leverages
reflection mechanisms to enhance its reasoning capabilities for rule discovery.
Furthermore, to avoid the frequent retraining in rule discovery, we propose
LossEraser, an unlearning strategy that streamlines training without
compromising denoising performance. Experiments on benchmark datasets
demonstrate that, compared with existing denoising methods, RuleAgent not only
derives the optimal recommendation performance but also produces generalizable
denoising rules, assisting researchers in efficient data cleaning.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 09:19:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Wang",
"Zongwei",
""
],
[
"Gao",
"Min",
""
],
[
"Yu",
"Junliang",
""
],
[
"Hou",
"Yupeng",
""
],
[
"Sadiq",
"Shazia",
""
],
[
"Yin",
"Hongzhi",
""
]
] | TITLE: RuleAgent: Discovering Rules for Recommendation Denoising with
Autonomous Language Agents
ABSTRACT: The implicit feedback (e.g., clicks) in real-world recommender systems is
often prone to severe noise caused by unintentional interactions, such as
misclicks or curiosity-driven behavior. A common approach to denoising this
feedback is manually crafting rules based on observations of training loss
patterns. However, this approach is labor-intensive and the resulting rules
often lack generalization across diverse scenarios. To overcome these
limitations, we introduce RuleAgent, a language-agent-based framework that
mimics real-world data experts to autonomously discover rules for
recommendation denoising. Unlike the high-cost process of manual rule mining,
RuleAgent offers rapid and dynamic rule discovery, ensuring adaptability to
evolving data and varying scenarios. To achieve this, RuleAgent is equipped
with tailored profile, memory, planning, and action modules and leverages
reflection mechanisms to enhance its reasoning capabilities for rule discovery.
Furthermore, to avoid the frequent retraining in rule discovery, we propose
LossEraser, an unlearning strategy that streamlines training without
compromising denoising performance. Experiments on benchmark datasets
demonstrate that, compared with existing denoising methods, RuleAgent not only
derives the optimal recommendation performance but also produces generalizable
denoising rules, assisting researchers in efficient data cleaning.
|
2503.23377 | Kai Liu | Kai Liu, Wei Li, Lai Chen, Shengqiong Wu, Yanhao Zheng, Jiayi Ji, Fan
Zhou, Rongxin Jiang, Jiebo Luo, Hao Fei, Tat-Seng Chua | JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical
Spatio-Temporal Prior Synchronization | Work in progress. Homepage: https://javisdit.github.io/ | null | null | null | cs.CV cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | This paper introduces JavisDiT, a novel Joint Audio-Video Diffusion
Transformer designed for synchronized audio-video generation (JAVG). Built upon
the powerful Diffusion Transformer (DiT) architecture, JavisDiT is able to
generate high-quality audio and video content simultaneously from open-ended
user prompts. To ensure optimal synchronization, we introduce a fine-grained
spatio-temporal alignment mechanism through a Hierarchical Spatial-Temporal
Synchronized Prior (HiST-Sypo) Estimator. This module extracts both global and
fine-grained spatio-temporal priors, guiding the synchronization between the
visual and auditory components. Furthermore, we propose a new benchmark,
JavisBench, consisting of 10,140 high-quality text-captioned sounding videos
spanning diverse scenes and complex real-world scenarios. Further, we
specifically devise a robust metric for evaluating the synchronization between
generated audio-video pairs in real-world complex content. Experimental results
demonstrate that JavisDiT significantly outperforms existing methods by
ensuring both high-quality generation and precise synchronization, setting a
new standard for JAVG tasks. Our code, model, and dataset will be made publicly
available at https://javisdit.github.io/.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 09:40:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Kai",
""
],
[
"Li",
"Wei",
""
],
[
"Chen",
"Lai",
""
],
[
"Wu",
"Shengqiong",
""
],
[
"Zheng",
"Yanhao",
""
],
[
"Ji",
"Jiayi",
""
],
[
"Zhou",
"Fan",
""
],
[
"Jiang",
"Rongxin",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Fei",
"Hao",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical
Spatio-Temporal Prior Synchronization
ABSTRACT: This paper introduces JavisDiT, a novel Joint Audio-Video Diffusion
Transformer designed for synchronized audio-video generation (JAVG). Built upon
the powerful Diffusion Transformer (DiT) architecture, JavisDiT is able to
generate high-quality audio and video content simultaneously from open-ended
user prompts. To ensure optimal synchronization, we introduce a fine-grained
spatio-temporal alignment mechanism through a Hierarchical Spatial-Temporal
Synchronized Prior (HiST-Sypo) Estimator. This module extracts both global and
fine-grained spatio-temporal priors, guiding the synchronization between the
visual and auditory components. Furthermore, we propose a new benchmark,
JavisBench, consisting of 10,140 high-quality text-captioned sounding videos
spanning diverse scenes and complex real-world scenarios. Further, we
specifically devise a robust metric for evaluating the synchronization between
generated audio-video pairs in real-world complex content. Experimental results
demonstrate that JavisDiT significantly outperforms existing methods by
ensuring both high-quality generation and precise synchronization, setting a
new standard for JAVG tasks. Our code, model, and dataset will be made publicly
available at https://javisdit.github.io/.
|
2503.23390 | Song Lai | Song Lai, Zhe Zhao, Fei Zhu, Xi Lin, Qingfu Zhang, Gaofeng Meng | Pareto Continual Learning: Preference-Conditioned Learning and Adaption
for Dynamic Stability-Plasticity Trade-off | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual learning aims to learn multiple tasks sequentially. A key challenge
in continual learning is balancing between two objectives: retaining knowledge
from old tasks (stability) and adapting to new tasks (plasticity). Experience
replay methods, which store and replay past data alongside new data, have
become a widely adopted approach to mitigate catastrophic forgetting. However,
these methods neglect the dynamic nature of the stability-plasticity trade-off
and aim to find a fixed and unchanging balance, resulting in suboptimal
adaptation during training and inference. In this paper, we propose Pareto
Continual Learning (ParetoCL), a novel framework that reformulates the
stability-plasticity trade-off in continual learning as a multi-objective
optimization (MOO) problem. ParetoCL introduces a preference-conditioned model
to efficiently learn a set of Pareto optimal solutions representing different
trade-offs and enables dynamic adaptation during inference. From a
generalization perspective, ParetoCL can be seen as an objective augmentation
approach that learns from different objective combinations of stability and
plasticity. Extensive experiments across multiple datasets and settings
demonstrate that ParetoCL outperforms state-of-the-art methods and adapts to
diverse continual learning scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 10:38:36 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lai",
"Song",
""
],
[
"Zhao",
"Zhe",
""
],
[
"Zhu",
"Fei",
""
],
[
"Lin",
"Xi",
""
],
[
"Zhang",
"Qingfu",
""
],
[
"Meng",
"Gaofeng",
""
]
] | TITLE: Pareto Continual Learning: Preference-Conditioned Learning and Adaption
for Dynamic Stability-Plasticity Trade-off
ABSTRACT: Continual learning aims to learn multiple tasks sequentially. A key challenge
in continual learning is balancing between two objectives: retaining knowledge
from old tasks (stability) and adapting to new tasks (plasticity). Experience
replay methods, which store and replay past data alongside new data, have
become a widely adopted approach to mitigate catastrophic forgetting. However,
these methods neglect the dynamic nature of the stability-plasticity trade-off
and aim to find a fixed and unchanging balance, resulting in suboptimal
adaptation during training and inference. In this paper, we propose Pareto
Continual Learning (ParetoCL), a novel framework that reformulates the
stability-plasticity trade-off in continual learning as a multi-objective
optimization (MOO) problem. ParetoCL introduces a preference-conditioned model
to efficiently learn a set of Pareto optimal solutions representing different
trade-offs and enables dynamic adaptation during inference. From a
generalization perspective, ParetoCL can be seen as an objective augmentation
approach that learns from different objective combinations of stability and
plasticity. Extensive experiments across multiple datasets and settings
demonstrate that ParetoCL outperforms state-of-the-art methods and adapts to
diverse continual learning scenarios.
|
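The ParetoCL abstract above (2503.23390) describes a preference-conditioned model that trades off stability against plasticity. The PyTorch sketch below shows one plausible training-step loss under the assumption that the model accepts a 2-d preference vector as an extra conditioning input (e.g., via FiLM); the Dirichlet sampling and the cross-entropy objectives are illustrative choices, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def sample_preference(device="cpu"):
    # Draw a 2-d preference vector (stability weight, plasticity weight) from the
    # probability simplex via a symmetric Dirichlet; the concentration is arbitrary.
    return torch.distributions.Dirichlet(torch.ones(2, device=device)).sample()

def preference_conditioned_loss(model, new_batch, replay_batch, preference):
    # Scalarise the two continual-learning objectives with the sampled preference.
    # `model(x, preference)` is assumed to condition its predictions on the
    # preference vector; at inference, a user-chosen preference replaces the sample.
    x_new, y_new = new_batch
    x_old, y_old = replay_batch
    plasticity = F.cross_entropy(model(x_new, preference), y_new)   # new-task fit
    stability = F.cross_entropy(model(x_old, preference), y_old)    # replayed old tasks
    return preference[0] * stability + preference[1] * plasticity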
2503.23395 | Ting Dang | Ting Dang, Yan Gao, Hong Jia | Scaling Auditory Cognition via Test-Time Compute in Audio Language
Models | null | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large language models (LLMs) have shown exceptional versatility in natural
language processing, prompting recent efforts to extend their multimodal
capabilities to speech processing through the development of audio large
language models (Audio LLMs). While Audio LLMs excel in tasks such as speech
recognition and synthesis, it remains unclear how they perform when faced with
the auditory cognitive challenges posed by real-world environments, such as
audio comprehension and listening recall, particularly in the presence of
background noise or overlapping speech. Unlike text-based LLMs, which have
access to vast amounts of text data for pre-training, retraining Audio LLMs
with diverse auditory cognitive scenes is difficult due to the limited datasets
that simulate real-world auditory cognitive scenarios and the challenge of
acquiring auditory cognitive labels for training. While test-time compute (TTC)
methods have been shown to enhance the capabilities of text-based LLMs during
inference, a key challenge lies in designing these TTC methods to improve the
auditory capabilities of Audio LLMs. This study aims to address these two
research gaps by: i) exploring the auditory cognitive capabilities of Audio
LLMs, and ii) enhancing their capabilities using TTC approaches. We have
investigated five different Audio LLMs for auditory cognition using a
\textit{self-collected} database and have proposed five TTC approaches to
enhance auditory cognitive capabilities during inference. Our findings reveal
that Audio LLMs' performance decreases in more challenging auditory cognitive
tasks. The proposed TTC approaches significantly enhance auditory cognitive
capabilities, advancing the development of more adaptable and resilient Audio
LLMs for practical applications such as assistive listening devices,
voice-based AI assistants, and communication technologies.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 11:04:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Dang",
"Ting",
""
],
[
"Gao",
"Yan",
""
],
[
"Jia",
"Hong",
""
]
] | TITLE: Scaling Auditory Cognition via Test-Time Compute in Audio Language
Models
ABSTRACT: Large language models (LLMs) have shown exceptional versatility in natural
language processing, prompting recent efforts to extend their multimodal
capabilities to speech processing through the development of audio large
language models (Audio LLMs). While Audio LLMs excel in tasks such as speech
recognition and synthesis, it remains unclear how they perform when faced with
the auditory cognitive challenges posed by real-world environments, such as
audio comprehension and listening recall, particularly in the presence of
background noise or overlapping speech. Unlike text-based LLMs, which have
access to vast amounts of text data for pre-training, retraining Audio LLMs
with diverse auditory cognitive scenes is difficult due to the limited datasets
that simulate real-world auditory cognitive scenarios and the challenge of
acquiring auditory cognitive labels for training. While test-time compute (TTC)
methods have been shown to enhance the capabilities of text-based LLMs during
inference, a key challenge lies in designing these TTC methods to improve the
auditory capabilities of Audio LLMs. This study aims to address these two
research gaps by: i) exploring the auditory cognitive capabilities of Audio
LLMs, and ii) enhancing their capabilities using TTC approaches. We have
investigated five different Audio LLMs for auditory cognition using a
\textit{self-collected} database and have proposed five TTC approaches to
enhance auditory cognitive capabilities during inference. Our findings reveal
that Audio LLMs' performance decreases in more challenging auditory cognitive
tasks. The proposed TTC approaches significantly enhance auditory cognitive
capabilities, advancing the development of more adaptable and resilient Audio
LLMs for practical applications such as assistive listening devices,
voice-based AI assistants, and communication technologies.
|
2503.23398 | Leander Girrbach | Leander Girrbach, Stephan Alaniz, Genevieve Smith, Zeynep Akata | A Large Scale Analysis of Gender Biases in Text-to-Image Generative
Models | null | null | null | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing use of image generation technology, understanding its
social biases, including gender bias, is essential. This paper presents the
first large-scale study on gender bias in text-to-image (T2I) models, focusing
on everyday situations. While previous research has examined biases in
occupations, we extend this analysis to gender associations in daily
activities, objects, and contexts. We create a dataset of 3,217 gender-neutral
prompts and generate 200 images per prompt from five leading T2I models. We
automatically detect the perceived gender of people in the generated images and
filter out images with no person or multiple people of different genders,
leaving 2,293,295 images. To enable a broad analysis of gender bias in T2I
models, we group prompts into semantically similar concepts and calculate the
proportion of male- and female-gendered images for each prompt. Our analysis
shows that T2I models reinforce traditional gender roles, reflect common gender
stereotypes in household roles, and underrepresent women in finance-related
activities. Women are predominantly portrayed in care- and human-centered
scenarios, and men in technical or physical labor scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 11:11:51 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Girrbach",
"Leander",
""
],
[
"Alaniz",
"Stephan",
""
],
[
"Smith",
"Genevieve",
""
],
[
"Akata",
"Zeynep",
""
]
] | TITLE: A Large Scale Analysis of Gender Biases in Text-to-Image Generative
Models
ABSTRACT: With the increasing use of image generation technology, understanding its
social biases, including gender bias, is essential. This paper presents the
first large-scale study on gender bias in text-to-image (T2I) models, focusing
on everyday situations. While previous research has examined biases in
occupations, we extend this analysis to gender associations in daily
activities, objects, and contexts. We create a dataset of 3,217 gender-neutral
prompts and generate 200 images per prompt from five leading T2I models. We
automatically detect the perceived gender of people in the generated images and
filter out images with no person or multiple people of different genders,
leaving 2,293,295 images. To enable a broad analysis of gender bias in T2I
models, we group prompts into semantically similar concepts and calculate the
proportion of male- and female-gendered images for each prompt. Our analysis
shows that T2I models reinforce traditional gender roles, reflect common gender
stereotypes in household roles, and underrepresent women in finance-related
activities. Women are predominantly portrayed in care- and human-centered
scenarios, and men in technical or physical labor scenarios.
|
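The per-prompt proportion computation described in the abstract above reduces to simple counting once each image carries a perceived-gender label. A toy sketch follows; the field names and records are invented for illustration and are not the study's data.

from collections import defaultdict

records = [  # (concept, prompt, perceived_gender) after filtering single-person images
    ("household", "a person folding laundry", "female"),
    ("household", "a person folding laundry", "female"),
    ("household", "a person folding laundry", "male"),
    ("finance", "a person reviewing an invoice", "male"),
    ("finance", "a person reviewing an invoice", "male"),
]

def gender_proportions(records):
    counts = defaultdict(lambda: {"female": 0, "male": 0})
    for concept, prompt, gender in records:
        counts[(concept, prompt)][gender] += 1
    props = {}
    for key, c in counts.items():
        total = c["female"] + c["male"]
        props[key] = c["female"] / total  # share of female-gendered images
    return props

for (concept, prompt), p in gender_proportions(records).items():
    print(f"{concept:10s} | {prompt:35s} | female share = {p:.2f}")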
2503.23408 | Saiyam Sakhuja | Saiyam Sakhuja, Shivanshu Siyanwal, Abhishek Tiwari, Britant, Savita
Kashyap | Quantum-Assisted Machine Learning Models for Enhanced Weather Prediction | null | null | null | null | quant-ph cs.ET cs.LG | http://creativecommons.org/licenses/by/4.0/ | Quantum Machine Learning (QML) offers a revolutionary approach to
weather forecasting by using quantum computing to improve predictive modeling
capabilities. In this study, we apply QML models, including Quantum Gated
Recurrent Units (QGRUs), Quantum Neural Networks (QNNs), Quantum Long
Short-Term Memory (QLSTM), Variational Quantum Circuits (VQCs), and Quantum
Support Vector Machines (QSVMs), to analyze meteorological time-series data from
the ERA5 dataset. Our methodology includes preprocessing meteorological
features and implementing QML architectures for both classification and regression
tasks. The results demonstrate that QML models can achieve reasonable accuracy
in both prediction and classification tasks, particularly in binary
classification. However, challenges such as quantum hardware limitations and
noise affect scalability and generalization. This research provides insights
into the feasibility of QML for weather prediction, paving the way for further
exploration of hybrid quantum-classical frameworks to enhance meteorological
forecasting.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 12:03:27 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Sakhuja",
"Saiyam",
""
],
[
"Siyanwal",
"Shivanshu",
""
],
[
"Tiwari",
"Abhishek",
""
],
[
"Britant",
"",
""
],
[
"Kashyap",
"Savita",
""
]
] | TITLE: Quantum-Assisted Machine Learning Models for Enhanced Weather Prediction
ABSTRACT: Quantum Machine Learning (QML) offers a revolutionary approach to
weather forecasting by using quantum computing to improve predictive modeling
capabilities. In this study, we apply QML models, including Quantum Gated
Recurrent Units (QGRUs), Quantum Neural Networks (QNNs), Quantum Long
Short-Term Memory (QLSTM), Variational Quantum Circuits (VQCs), and Quantum
Support Vector Machines (QSVMs), to analyze meteorological time-series data from
the ERA5 dataset. Our methodology includes preprocessing meteorological
features and implementing QML architectures for both classification and regression
tasks. The results demonstrate that QML models can achieve reasonable accuracy
in both prediction and classification tasks, particularly in binary
classification. However, challenges such as quantum hardware limitations and
noise affect scalability and generalization. This research provides insights
into the feasibility of QML for weather prediction, paving the way for further
exploration of hybrid quantum-classical frameworks to enhance meteorological
forecasting.
|
2503.23409 | Ximu Zeng | Ximu Zeng, Liwei Deng, Penghao Chen, Xu Chen, Han Su, Kai Zheng | LIRA: A Learning-based Query-aware Partition Framework for Large-scale
ANN Search | This paper is accepted by WWW 2025 | null | 10.1145/3696410.3714633 | null | cs.IR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Approximate nearest neighbor search is fundamental in information retrieval.
Previous partition-based methods enhance search efficiency by probing partial
partitions, yet they face two common issues. In the query phase, a common
strategy is to probe partitions based on the distance ranks of a query to
partition centroids, which inevitably probes irrelevant partitions as it
ignores data distribution. In the partition construction phase, all
partition-based methods face the boundary problem that separates a query's
nearest neighbors to multiple partitions, resulting in a long-tailed kNN
distribution and degrading the optimal nprobe (i.e., the number of probing
partitions). To address this gap, we propose LIRA, a LearnIng-based queRy-aware
pArtition framework. Specifically, we propose a probing model to directly probe
the partitions containing the kNN of a query, which can reduce probing waste
and allow for query-aware probing with a per-query nprobe. Moreover, we
incorporate the probing model into a learning-based redundancy strategy to
mitigate the adverse impact of the long-tailed kNN distribution on search
efficiency. Extensive experiments on real-world vector datasets demonstrate the
superiority of LIRA in the trade-off among accuracy, latency, and query
fan-out. The code is available at
https://github.com/SimoneZeng/LIRA-ANN-search.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 12:03:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zeng",
"Ximu",
""
],
[
"Deng",
"Liwei",
""
],
[
"Chen",
"Penghao",
""
],
[
"Chen",
"Xu",
""
],
[
"Su",
"Han",
""
],
[
"Zheng",
"Kai",
""
]
] | TITLE: LIRA: A Learning-based Query-aware Partition Framework for Large-scale
ANN Search
ABSTRACT: Approximate nearest neighbor search is fundamental in information retrieval.
Previous partition-based methods enhance search efficiency by probing partial
partitions, yet they face two common issues. In the query phase, a common
strategy is to probe partitions based on the distance ranks of a query to
partition centroids, which inevitably probes irrelevant partitions as it
ignores data distribution. In the partition construction phase, all
partition-based methods face the boundary problem that separates a query's
nearest neighbors to multiple partitions, resulting in a long-tailed kNN
distribution and degrading the optimal nprobe (i.e., the number of probing
partitions). To address this gap, we propose LIRA, a LearnIng-based queRy-aware
pArtition framework. Specifically, we propose a probing model to directly probe
the partitions containing the kNN of a query, which can reduce probing waste
and allow for query-aware probing with a per-query nprobe. Moreover, we
incorporate the probing model into a learning-based redundancy strategy to
mitigate the adverse impact of the long-tailed kNN distribution on search
efficiency. Extensive experiments on real-world vector datasets demonstrate the
superiority of LIRA in the trade-off among accuracy, latency, and query
fan-out. The code is available at
https://github.com/SimoneZeng/LIRA-ANN-search.
|
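For context, a sketch of the baseline partition-probing strategy the LIRA abstract above argues against: rank partitions by query-to-centroid distance and probe a fixed nprobe of them. The centroid choice and sizes are illustrative; LIRA replaces this ranking with a learned probing model and a per-query nprobe.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 16)).astype(np.float32)
n_parts = 32
centroids = data[rng.choice(len(data), n_parts, replace=False)]   # stand-in for k-means centroids
assign = np.argmin(((data[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

def probe(query, nprobe=4):
    # Return candidate vector ids from the nprobe partitions closest to the query.
    d = ((centroids - query) ** 2).sum(-1)
    chosen = np.argsort(d)[:nprobe]                                # distance-rank probing
    return np.flatnonzero(np.isin(assign, chosen))

query = rng.normal(size=16).astype(np.float32)
print(len(probe(query)), "candidates from 4 of", n_parts, "partitions")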
2503.23422 | Jifeng Shen | Xin Zuo, Jiaran Jiang, Jifeng Shen, Wankou Yang | Improving underwater semantic segmentation with underwater image quality
attention and multi-scale aggregation attention | Accepted by Pattern Analysis and Applications | null | 10.1007/s10044-025-01460-7 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Underwater image understanding is crucial for both submarine navigation and
seabed exploration. However, the low illumination in underwater environments
degrades the imaging quality, which in turn seriously deteriorates the
performance of underwater semantic segmentation, particularly for outlining the
object region boundaries. To tackle this issue, we present UnderWater SegFormer
(UWSegFormer), a transformer-based framework for semantic segmentation of
low-quality underwater images. Firstly, we propose the Underwater Image Quality
Attention (UIQA) module. This module enhances the representation of high-quality
semantic information in underwater image feature channels through a channel
self-attention mechanism. In order to address the issue of loss of imaging
details due to the underwater environment, the Multi-scale Aggregation
Attention (MAA) module is proposed. This module aggregates sets of semantic
features at different scales by extracting discriminative information from
high-level features, thus compensating for the semantic loss of detail in
underwater objects. Finally, during training, we introduce Edge Learning Loss
(ELL) in order to enhance the model's learning of underwater object edges and
improve the model's prediction accuracy. Experiments conducted on the SUIM and
DUT-USEG (DUT) datasets have demonstrated that the proposed method has
advantages in terms of segmentation completeness, boundary clarity, and
subjective perceptual details when compared to SOTA methods. In addition, the
proposed method achieves the highest mIoU of 82.12 and 71.41 on the SUIM and
DUT datasets, respectively. Code will be available at
https://github.com/SAWRJJ/UWSegFormer.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 12:47:56 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zuo",
"Xin",
""
],
[
"Jiang",
"Jiaran",
""
],
[
"Shen",
"Jifeng",
""
],
[
"Yang",
"Wankou",
""
]
] | TITLE: Improving underwater semantic segmentation with underwater image quality
attention and multi-scale aggregation attention
ABSTRACT: Underwater image understanding is crucial for both submarine navigation and
seabed exploration. However, the low illumination in underwater environments
degrades the imaging quality, which in turn seriously deteriorates the
performance of underwater semantic segmentation, particularly for outlining the
object region boundaries. To tackle this issue, we present UnderWater SegFormer
(UWSegFormer), a transformer-based framework for semantic segmentation of
low-quality underwater images. Firstly, we propose the Underwater Image Quality
Attention (UIQA) module. This module enhances the representation of high-quality
semantic information in underwater image feature channels through a channel
self-attention mechanism. In order to address the issue of loss of imaging
details due to the underwater environment, the Multi-scale Aggregation
Attention (MAA) module is proposed. This module aggregates sets of semantic
features at different scales by extracting discriminative information from
high-level features, thus compensating for the semantic loss of detail in
underwater objects. Finally, during training, we introduce Edge Learning Loss
(ELL) in order to enhance the model's learning of underwater object edges and
improve the model's prediction accuracy. Experiments conducted on the SUIM and
DUT-USEG (DUT) datasets have demonstrated that the proposed method has
advantages in terms of segmentation completeness, boundary clarity, and
subjective perceptual details when compared to SOTA methods. In addition, the
proposed method achieves the highest mIoU of 82.12 and 71.41 on the SUIM and
DUT datasets, respectively. Code will be available at
https://github.com/SAWRJJ/UWSegFormer.
|
2503.23436 | Sheng Lu | Sheng Lu and Mingxi Ge and Jiuyi Zhang and Wanli Zhu and Guanjin Li
and Fangming Gu | Filtering with Time-frequency Analysis: An Adaptive and Lightweight
Model for Sequential Recommender Systems Based on Discrete Wavelet Transform | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential Recommender Systems (SRS) aim to model sequential behaviors of
users to capture their interests which usually evolve over time.
Transformer-based SRS have achieved distinguished successes recently. However,
studies reveal that the self-attention mechanism in Transformer-based models is
essentially a low-pass filter and ignores high-frequency information,
potentially including meaningful user interest patterns. This motivates us to
seek better filtering technologies for SRS, and finally we find that the Discrete
Wavelet Transform (DWT), a well-known time-frequency analysis technique from
the digital signal processing field, can effectively process both low-frequency and
high-frequency information. We design an adaptive time-frequency filter with
DWT technique, which decomposes user interests into multiple signals with
different frequency and time, and can automatically learn weights of these
signals. Furthermore, we develop DWTRec, a model for sequential recommendation
built entirely on the adaptive time-frequency filter. Thanks to the fast DWT technique,
DWTRec theoretically has lower time and space complexity, and is
proficient in modeling long sequences. Experiments show that our model
outperforms state-of-the-art baseline models on datasets with different
domains, sparsity levels, and average sequence lengths. In particular, our model
shows a larger performance gain over previous models when the
sequence grows longer, which demonstrates another advantage of our model.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 13:28:42 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Sheng",
""
],
[
"Ge",
"Mingxi",
""
],
[
"Zhang",
"Jiuyi",
""
],
[
"Zhu",
"Wanli",
""
],
[
"Li",
"Guanjin",
""
],
[
"Gu",
"Fangming",
""
]
] | TITLE: Filtering with Time-frequency Analysis: An Adaptive and Lightweight
Model for Sequential Recommender Systems Based on Discrete Wavelet Transform
ABSTRACT: Sequential Recommender Systems (SRS) aim to model sequential behaviors of
users to capture their interests which usually evolve over time.
Transformer-based SRS have achieved distinguished successes recently. However,
studies reveal that the self-attention mechanism in Transformer-based models is
essentially a low-pass filter and ignores high-frequency information,
potentially including meaningful user interest patterns. This motivates us to
seek better filtering technologies for SRS, and finally we find that the Discrete
Wavelet Transform (DWT), a well-known time-frequency analysis technique from
the digital signal processing field, can effectively process both low-frequency and
high-frequency information. We design an adaptive time-frequency filter with
DWT technique, which decomposes user interests into multiple signals with
different frequency and time, and can automatically learn weights of these
signals. Furthermore, we develop DWTRec, a model for sequential recommendation
built entirely on the adaptive time-frequency filter. Thanks to the fast DWT technique,
DWTRec theoretically has lower time and space complexity, and is
proficient in modeling long sequences. Experiments show that our model
outperforms state-of-the-art baseline models on datasets with different
domains, sparsity levels, and average sequence lengths. In particular, our model
shows a larger performance gain over previous models when the
sequence grows longer, which demonstrates another advantage of our model.
|
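A minimal sketch of the kind of DWT-based filtering described above, using the PyWavelets package: decompose a sequence of item embeddings into frequency sub-bands, re-weight each band, and reconstruct. The wavelet, level, and fixed band weights are assumptions for illustration; in a model like DWTRec the weights would be learned.

import numpy as np
import pywt

seq_len, dim = 50, 8
x = np.random.default_rng(0).normal(size=(seq_len, dim))    # item embeddings over time

level = 2
coeffs = pywt.wavedec(x, "db2", level=level, axis=0)         # [cA2, cD2, cD1]
band_weights = [1.0, 0.8, 0.5]                               # assumed per-band weights
filtered = [w * c for w, c in zip(band_weights, coeffs)]
y = pywt.waverec(filtered, "db2", axis=0)[:seq_len]          # trim possible padding

print(x.shape, y.shape)   # (50, 8) (50, 8)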
2503.23439 | Hyunjong Ok | Hyunjong Ok, Suho Yoo, Jaeho Lee | Speculative End-Turn Detector for Efficient Speech Chatbot Assistant | Preprint | null | null | null | cs.CL cs.AI cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spoken dialogue systems powered by large language models have demonstrated
remarkable abilities in understanding human speech and generating appropriate
spoken responses. However, these systems struggle with end-turn detection (ETD)
-- the ability to distinguish between user turn completion and hesitation. This
limitation often leads to premature or delayed responses, disrupting the flow
of spoken conversations. In this paper, we introduce the ETD Dataset, the first
public dataset for end-turn detection. The ETD dataset consists of both
synthetic speech data generated with text-to-speech models and real-world
speech data collected from web sources. We also propose SpeculativeETD, a novel
collaborative inference framework that balances efficiency and accuracy to
improve real-time ETD in resource-constrained environments. Our approach
jointly employs a lightweight GRU-based model, which rapidly detects the
non-speaking units in real-time on local devices, and a high-performance
Wav2vec-based model running on the server to make a more challenging
classification of distinguishing turn ends from mere pauses. Experiments
demonstrate that the proposed SpeculativeETD significantly improves ETD
accuracy while keeping the required computations low. Datasets and code will be
available after the review.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 13:34:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ok",
"Hyunjong",
""
],
[
"Yoo",
"Suho",
""
],
[
"Lee",
"Jaeho",
""
]
] | TITLE: Speculative End-Turn Detector for Efficient Speech Chatbot Assistant
ABSTRACT: Spoken dialogue systems powered by large language models have demonstrated
remarkable abilities in understanding human speech and generating appropriate
spoken responses. However, these systems struggle with end-turn detection (ETD)
-- the ability to distinguish between user turn completion and hesitation. This
limitation often leads to premature or delayed responses, disrupting the flow
of spoken conversations. In this paper, we introduce the ETD Dataset, the first
public dataset for end-turn detection. The ETD dataset consists of both
synthetic speech data generated with text-to-speech models and real-world
speech data collected from web sources. We also propose SpeculativeETD, a novel
collaborative inference framework that balances efficiency and accuracy to
improve real-time ETD in resource-constrained environments. Our approach
jointly employs a lightweight GRU-based model, which rapidly detects the
non-speaking units in real-time on local devices, and a high-performance
Wav2vec-based model running on the server to make a more challenging
classification of distinguishing turn ends from mere pauses. Experiments
demonstrate that the proposed SpeculativeETD significantly improves ETD
accuracy while keeping the required computations low. Datasets and code will be
available after the review.
|
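An illustrative control-flow sketch of the collaborative inference idea above: a cheap on-device detector flags stretches of non-speech, and only then is a heavier server-side model asked to distinguish a genuine turn end from a pause. Both model functions below are placeholder heuristics, not the paper's networks, and the thresholds are arbitrary.

from collections import deque

def local_is_nonspeech(frame) -> bool:           # stand-in for a small on-device GRU
    return frame["energy"] < 0.1

def server_is_turn_end(audio_window) -> bool:    # stand-in for a Wav2vec-based server model
    return sum(f["energy"] for f in audio_window) / len(audio_window) < 0.05

def run_etd(frames, min_silence=5, window=20):
    history = deque(maxlen=window)
    silent_run = 0
    for t, frame in enumerate(frames):
        history.append(frame)
        silent_run = silent_run + 1 if local_is_nonspeech(frame) else 0
        if silent_run >= min_silence:              # cheap local trigger
            if server_is_turn_end(list(history)):  # expensive confirmation
                return t                           # the assistant may respond here
            silent_run = 0                         # treated as hesitation, keep listening
    return None

frames = [{"energy": 0.5}] * 30 + [{"energy": 0.02}] * 25
print(run_etd(frames))   # 49: confirmed only once the window is mostly silent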
2503.23447 | Jongseo Lee | Jongseo Lee, Joohyun Chang, Dongho Lee, Jinwoo Choi | CA^2ST: Cross-Attention in Audio, Space, and Time for Holistic Video
Recognition | 27 pages including appendix, TPAMI under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose Cross-Attention in Audio, Space, and Time (CA^2ST), a
transformer-based method for holistic video recognition. Recognizing actions in
videos requires both spatial and temporal understanding, yet most existing
models lack a balanced spatio-temporal understanding of videos. To address
this, we propose a novel two-stream architecture, called Cross-Attention in
Space and Time (CAST), using only RGB input. In each layer of CAST, Bottleneck
Cross-Attention (B-CA) enables spatial and temporal experts to exchange
information and make synergistic predictions. For holistic video understanding,
we extend CAST by integrating an audio expert, forming Cross-Attention in
Visual and Audio (CAVA). We validate the CAST on benchmarks with different
characteristics, EPIC-KITCHENS-100, Something-Something-V2, and Kinetics-400,
consistently showing balanced performance. We also validate the CAVA on
audio-visual action recognition benchmarks, including UCF-101, VGG-Sound,
KineticsSound, and EPIC-SOUNDS. With a favorable performance of CAVA across
these datasets, we demonstrate the effective information exchange among
multiple experts within the B-CA module. In summary, CA^2ST combines CAST and
CAVA by employing spatial, temporal, and audio experts through cross-attention,
achieving balanced and holistic video understanding.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 13:57:58 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lee",
"Jongseo",
""
],
[
"Chang",
"Joohyun",
""
],
[
"Lee",
"Dongho",
""
],
[
"Choi",
"Jinwoo",
""
]
] | TITLE: CA^2ST: Cross-Attention in Audio, Space, and Time for Holistic Video
Recognition
ABSTRACT: We propose Cross-Attention in Audio, Space, and Time (CA^2ST), a
transformer-based method for holistic video recognition. Recognizing actions in
videos requires both spatial and temporal understanding, yet most existing
models lack a balanced spatio-temporal understanding of videos. To address
this, we propose a novel two-stream architecture, called Cross-Attention in
Space and Time (CAST), using only RGB input. In each layer of CAST, Bottleneck
Cross-Attention (B-CA) enables spatial and temporal experts to exchange
information and make synergistic predictions. For holistic video understanding,
we extend CAST by integrating an audio expert, forming Cross-Attention in
Visual and Audio (CAVA). We validate the CAST on benchmarks with different
characteristics, EPIC-KITCHENS-100, Something-Something-V2, and Kinetics-400,
consistently showing balanced performance. We also validate the CAVA on
audio-visual action recognition benchmarks, including UCF-101, VGG-Sound,
KineticsSound, and EPIC-SOUNDS. With a favorable performance of CAVA across
these datasets, we demonstrate the effective information exchange among
multiple experts within the B-CA module. In summary, CA^2ST combines CAST and
CAVA by employing spatial, temporal, and audio experts through cross-attention,
achieving balanced and holistic video understanding.
|
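A minimal PyTorch sketch of exchanging information between two expert streams through a small set of bottleneck tokens with cross-attention, in the spirit of the B-CA module described above. The dimensions, the number of bottleneck tokens, and the one-directional flow shown here are illustrative assumptions.

import torch
import torch.nn as nn

class BottleneckCrossAttention(nn.Module):
    def __init__(self, dim=256, n_bottleneck=4, heads=4):
        super().__init__()
        self.bottleneck = nn.Parameter(torch.randn(n_bottleneck, dim))
        self.collect = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.distribute = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, spatial_tokens, temporal_tokens):
        B = spatial_tokens.size(0)
        btl = self.bottleneck.unsqueeze(0).expand(B, -1, -1)
        # Bottleneck tokens gather information from the temporal expert ...
        btl, _ = self.collect(btl, temporal_tokens, temporal_tokens)
        # ... and distribute it into the spatial expert's tokens.
        out, _ = self.distribute(spatial_tokens, btl, btl)
        return spatial_tokens + out

x_s = torch.randn(2, 196, 256)   # spatial tokens
x_t = torch.randn(2, 8, 256)     # temporal tokens
print(BottleneckCrossAttention()(x_s, x_t).shape)   # torch.Size([2, 196, 256])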
2503.23448 | Leon Moonen | Max Hort and Linas Vidziunas and Leon Moonen | Semantic-Preserving Transformations as Mutation Operators: A Study on
Their Effectiveness in Defect Detection | Accepted for publication in Mutation 2025 at the 18th IEEE
International Conference on Software Testing, Verification and Validation
(ICST 2025) | null | null | null | cs.SE cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent advances in defect detection use language models. Existing works
enhanced the training data to improve the models' robustness when applied to
semantically identical code (i.e., predictions should be the same). However,
the use of semantically identical code has not been considered for improving
the tools during their application - a concept closely related to metamorphic
testing.
The goal of our study is to determine whether we can use semantic-preserving
transformations, analogous to mutation operators, to improve the performance of
defect detection tools in the testing stage. We first collect existing
publications which implemented semantic-preserving transformations and share
their implementation, such that we can reuse them. We empirically study the
effectiveness of three different ensemble strategies for enhancing defect
detection tools. We apply the collected transformations on the Devign dataset,
considering vulnerabilities as a type of defect, and two fine-tuned large
language models for defect detection (VulBERTa, PLBART). We found 28
publications with 94 different transformations.
We choose to implement 39 transformations from four of the publications, but
a manual check revealed that 23 out of 39 transformations change code semantics.
Using the 16 remaining, correct transformations and three ensemble strategies,
we were not able to increase the accuracy of the defect detection models. Our
results show that reusing shared semantic-preserving transformations is
difficult, sometimes even causing wrongful changes to the semantics.
Keywords: defect detection, language model, semantic-preserving
transformation, ensemble
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:00:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hort",
"Max",
""
],
[
"Vidziunas",
"Linas",
""
],
[
"Moonen",
"Leon",
""
]
] | TITLE: Semantic-Preserving Transformations as Mutation Operators: A Study on
Their Effectiveness in Defect Detection
ABSTRACT: Recent advances in defect detection use language models. Existing works
enhanced the training data to improve the models' robustness when applied to
semantically identical code (i.e., predictions should be the same). However,
the use of semantically identical code has not been considered for improving
the tools during their application - a concept closely related to metamorphic
testing.
The goal of our study is to determine whether we can use semantic-preserving
transformations, analogous to mutation operators, to improve the performance of
defect detection tools in the testing stage. We first collect existing
publications which implemented semantic-preserving transformations and share
their implementation, such that we can reuse them. We empirically study the
effectiveness of three different ensemble strategies for enhancing defect
detection tools. We apply the collected transformations on the Devign dataset,
considering vulnerabilities as a type of defect, and two fine-tuned large
language models for defect detection (VulBERTa, PLBART). We found 28
publications with 94 different transformations.
We choose to implement 39 transformations from four of the publications, but
a manual check revealed that 23 out of 39 transformations change code semantics.
Using the 16 remaining, correct transformations and three ensemble strategies,
we were not able to increase the accuracy of the defect detection models. Our
results show that reusing shared semantic-preserving transformations is
difficult, sometimes even causing wrongful changes to the semantics.
Keywords: defect detection, language model, semantic-preserving
transformation, ensemble
|
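A toy sketch of applying semantic-preserving transformations at test time and aggregating predictions by majority vote, one of the ensemble ideas discussed above. The regex-based identifier rename and the dummy classifier are simplified stand-ins; a real setup would transform code with a proper parser and query a fine-tuned defect-detection model.

import re
from collections import Counter

def rename_identifier(code: str, old: str, new: str) -> str:
    # Semantic-preserving for a local variable: whole-word rename.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def majority_vote(classify, code: str, transforms) -> int:
    # Classify the original and each transformed variant, return the majority label.
    variants = [code] + [t(code) for t in transforms]
    votes = Counter(classify(v) for v in variants)
    return votes.most_common(1)[0][0]

code = "int sum(int *buf, int n) { int acc = 0; for (int i = 0; i < n; i++) acc += buf[i]; return acc; }"
transforms = [
    lambda c: rename_identifier(c, "acc", "total"),
    lambda c: rename_identifier(c, "i", "idx"),
]
classify = lambda c: int("strcpy" in c)   # dummy "defect" detector
print(majority_vote(classify, code, transforms))   # 0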
2503.23450 | Bohao Xing | Bohao Xing, Kaishen Yuan, Zitong Yu, Xin Liu, Heikki K\"alvi\"ainen | AU-TTT: Vision Test-Time Training model for Facial Action Unit Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Facial Action Units (AUs) detection is a cornerstone of objective facial
expression analysis and a critical focus in affective computing. Despite its
importance, AU detection faces significant challenges, such as the high cost of
AU annotation and the limited availability of datasets. These constraints often
lead to overfitting in existing methods, resulting in substantial performance
degradation when applied across diverse datasets. Addressing these issues is
essential for improving the reliability and generalizability of AU detection
methods. Moreover, many current approaches leverage Transformers for their
effectiveness in long-context modeling, but they are hindered by the quadratic
complexity of self-attention. Recently, Test-Time Training (TTT) layers have
emerged as a promising solution for long-sequence modeling. Additionally, TTT
applies self-supervised learning for iterative updates during both training and
inference, offering a potential pathway to mitigate the generalization
challenges inherent in AU detection tasks. In this paper, we propose a novel
vision backbone tailored for AU detection, incorporating bidirectional TTT
blocks, named AU-TTT. Our approach introduces TTT Linear to the AU detection
task and optimizes image scanning mechanisms for enhanced performance.
Additionally, we design an AU-specific Region of Interest (RoI) scanning
mechanism to capture fine-grained facial features critical for AU detection.
Experimental results demonstrate that our method achieves competitive
performance in both within-domain and cross-domain scenarios.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:09:13 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xing",
"Bohao",
""
],
[
"Yuan",
"Kaishen",
""
],
[
"Yu",
"Zitong",
""
],
[
"Liu",
"Xin",
""
],
[
"Kälviäinen",
"Heikki",
""
]
] | TITLE: AU-TTT: Vision Test-Time Training model for Facial Action Unit Detection
ABSTRACT: Facial Action Units (AUs) detection is a cornerstone of objective facial
expression analysis and a critical focus in affective computing. Despite its
importance, AU detection faces significant challenges, such as the high cost of
AU annotation and the limited availability of datasets. These constraints often
lead to overfitting in existing methods, resulting in substantial performance
degradation when applied across diverse datasets. Addressing these issues is
essential for improving the reliability and generalizability of AU detection
methods. Moreover, many current approaches leverage Transformers for their
effectiveness in long-context modeling, but they are hindered by the quadratic
complexity of self-attention. Recently, Test-Time Training (TTT) layers have
emerged as a promising solution for long-sequence modeling. Additionally, TTT
applies self-supervised learning for iterative updates during both training and
inference, offering a potential pathway to mitigate the generalization
challenges inherent in AU detection tasks. In this paper, we propose a novel
vision backbone tailored for AU detection, incorporating bidirectional TTT
blocks, named AU-TTT. Our approach introduces TTT Linear to the AU detection
task and optimizes image scanning mechanisms for enhanced performance.
Additionally, we design an AU-specific Region of Interest (RoI) scanning
mechanism to capture fine-grained facial features critical for AU detection.
Experimental results demonstrate that our method achieves competitive
performance in both within-domain and cross-domain scenarios.
|
2503.23451 | Aimira Baitieva | Aimira Baitieva, Yacine Bouaouni, Alexandre Briot, Dick Ameln,
Souhaiel Khalfaoui, Samet Akcay | Beyond Academic Benchmarks: Critical Analysis and Best Practices for
Visual Industrial Anomaly Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection (AD) is essential for automating visual inspection in
manufacturing. This field of computer vision is rapidly evolving, with
increasing attention towards real-world applications. Meanwhile, popular
datasets are typically produced in controlled lab environments with
artificially created defects, unable to capture the diversity of real
production conditions. New methods often fail in production settings, showing
significant performance degradation or requiring impractical computational
resources. This disconnect between academic results and industrial viability
threatens to misdirect visual anomaly detection research. This paper makes
three key contributions: (1) we demonstrate the importance of real-world
datasets and establish benchmarks using actual production data, (2) we provide
a fair comparison of existing SOTA methods across diverse tasks by utilizing
metrics that are valuable for practical applications, and (3) we present a
comprehensive analysis of recent advancements in this field by discussing
important challenges and new perspectives for bridging the academia-industry
gap. The code is publicly available at
https://github.com/abc-125/viad-benchmark
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:11:46 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Baitieva",
"Aimira",
""
],
[
"Bouaouni",
"Yacine",
""
],
[
"Briot",
"Alexandre",
""
],
[
"Ameln",
"Dick",
""
],
[
"Khalfaoui",
"Souhaiel",
""
],
[
"Akcay",
"Samet",
""
]
] | TITLE: Beyond Academic Benchmarks: Critical Analysis and Best Practices for
Visual Industrial Anomaly Detection
ABSTRACT: Anomaly detection (AD) is essential for automating visual inspection in
manufacturing. This field of computer vision is rapidly evolving, with
increasing attention towards real-world applications. Meanwhile, popular
datasets are typically produced in controlled lab environments with
artificially created defects, unable to capture the diversity of real
production conditions. New methods often fail in production settings, showing
significant performance degradation or requiring impractical computational
resources. This disconnect between academic results and industrial viability
threatens to misdirect visual anomaly detection research. This paper makes
three key contributions: (1) we demonstrate the importance of real-world
datasets and establish benchmarks using actual production data, (2) we provide
a fair comparison of existing SOTA methods across diverse tasks by utilizing
metrics that are valuable for practical applications, and (3) we present a
comprehensive analysis of recent advancements in this field by discussing
important challenges and new perspectives for bridging the academia-industry
gap. The code is publicly available at
https://github.com/abc-125/viad-benchmark
|
2503.23453 | Jiahui Liu | Maofu Liu, Jiahui Liu, Xiaokang Zhang | Semantic-Spatial Feature Fusion with Dynamic Graph Refinement for Remote
Sensing Image Captioning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing image captioning aims to generate semantically accurate
descriptions that are closely linked to the visual features of remote sensing
images. Existing approaches typically emphasize fine-grained extraction of
visual features and capturing global information. However, they often overlook
the complementary role of textual information in enhancing visual semantics and
face challenges in precisely locating objects that are most relevant to the
image context. To address these challenges, this paper presents a
semantic-spatial feature fusion with dynamic graph refinement (SFDR) method,
which integrates the semantic-spatial feature fusion (SSFF) and dynamic graph
feature refinement (DGFR) modules. The SSFF module utilizes a multi-level
feature representation strategy by leveraging pre-trained CLIP features, grid
features, and ROI features to integrate rich semantic and spatial information.
In the DGFR module, a graph attention network captures the relationships
between feature nodes, while a dynamic weighting mechanism prioritizes objects
that are most relevant to the current scene and suppresses less significant
ones. Therefore, the proposed SFDR method significantly enhances the quality of
the generated descriptions. Experimental results on three benchmark datasets
demonstrate the effectiveness of the proposed method. The source code will be
available at https://github.com/zxk688.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:14:41 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Maofu",
""
],
[
"Liu",
"Jiahui",
""
],
[
"Zhang",
"Xiaokang",
""
]
] | TITLE: Semantic-Spatial Feature Fusion with Dynamic Graph Refinement for Remote
Sensing Image Captioning
ABSTRACT: Remote sensing image captioning aims to generate semantically accurate
descriptions that are closely linked to the visual features of remote sensing
images. Existing approaches typically emphasize fine-grained extraction of
visual features and capturing global information. However, they often overlook
the complementary role of textual information in enhancing visual semantics and
face challenges in precisely locating objects that are most relevant to the
image context. To address these challenges, this paper presents a
semantic-spatial feature fusion with dynamic graph refinement (SFDR) method,
which integrates the semantic-spatial feature fusion (SSFF) and dynamic graph
feature refinement (DGFR) modules. The SSFF module utilizes a multi-level
feature representation strategy by leveraging pre-trained CLIP features, grid
features, and ROI features to integrate rich semantic and spatial information.
In the DGFR module, a graph attention network captures the relationships
between feature nodes, while a dynamic weighting mechanism prioritizes objects
that are most relevant to the current scene and suppresses less significant
ones. Therefore, the proposed SFDR method significantly enhances the quality of
the generated descriptions. Experimental results on three benchmark datasets
demonstrate the effectiveness of the proposed method. The source code will be
available at https://github.com/zxk688.
|
2503.23455 | Yazhou Yao | Junzhu Mao, Yang Shen, Jinyang Guo, Yazhou Yao, and Xiansheng Hua | Efficient Token Compression for Vision Transformer with Spatial
Information Preserved | accepted by IEEE Transactions on Multimedia | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Token compression is essential for reducing the computational and memory
requirements of transformer models, enabling their deployment in
resource-constrained environments. In this work, we propose an efficient and
hardware-compatible token compression method called Prune and Merge. Our
approach integrates token pruning and merging operations within transformer
models to achieve layer-wise token compression. By introducing trainable merge
and reconstruct matrices and utilizing shortcut connections, we efficiently
merge tokens while preserving important information and enabling the
restoration of pruned tokens. Additionally, we introduce a novel
gradient-weighted attention scoring mechanism that computes token importance
scores during the training phase, eliminating the need for separate
computations during inference and enhancing compression efficiency. We also
leverage gradient information to capture the global impact of tokens and
automatically identify optimal compression structures. Extensive experiments on
the ImageNet-1k and ADE20K datasets validate the effectiveness of our approach,
achieving significant speed-ups with minimal accuracy degradation compared to
state-of-the-art methods. For instance, on DeiT-Small, we achieve a
1.64$\times$ speed-up with only a 0.2\% drop in accuracy on ImageNet-1k.
Moreover, by compressing segmenter models and comparing with existing methods,
we demonstrate the superior performance of our approach in terms of efficiency
and effectiveness. Code and models have been made available at
https://github.com/NUST-Machine-Intelligence-Laboratory/prune_and_merge.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:23:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Mao",
"Junzhu",
""
],
[
"Shen",
"Yang",
""
],
[
"Guo",
"Jinyang",
""
],
[
"Yao",
"Yazhou",
""
],
[
"Hua",
"Xiansheng",
""
]
] | TITLE: Efficient Token Compression for Vision Transformer with Spatial
Information Preserved
ABSTRACT: Token compression is essential for reducing the computational and memory
requirements of transformer models, enabling their deployment in
resource-constrained environments. In this work, we propose an efficient and
hardware-compatible token compression method called Prune and Merge. Our
approach integrates token pruning and merging operations within transformer
models to achieve layer-wise token compression. By introducing trainable merge
and reconstruct matrices and utilizing shortcut connections, we efficiently
merge tokens while preserving important information and enabling the
restoration of pruned tokens. Additionally, we introduce a novel
gradient-weighted attention scoring mechanism that computes token importance
scores during the training phase, eliminating the need for separate
computations during inference and enhancing compression efficiency. We also
leverage gradient information to capture the global impact of tokens and
automatically identify optimal compression structures. Extensive experiments on
the ImageNet-1k and ADE20K datasets validate the effectiveness of our approach,
achieving significant speed-ups with minimal accuracy degradation compared to
state-of-the-art methods. For instance, on DeiT-Small, we achieve a
1.64$\times$ speed-up with only a 0.2\% drop in accuracy on ImageNet-1k.
Moreover, by compressing segmenter models and comparing with existing methods,
we demonstrate the superior performance of our approach in terms of efficiency
and effectiveness. Code and models have been made available at
https://github.com/NUST-Machine-Intelligence-Laboratory/prune_and_merge.
|
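A NumPy sketch of the two ingredients named above: gradient-weighted attention scores for token importance, and merging pruned tokens into the kept ones. The uniform averaging stands in for the trainable merge matrices of the paper, and all shapes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, d = 16, 8                                     # tokens, embedding dim
tokens = rng.normal(size=(N, d))
attn = rng.random((N, N)); attn /= attn.sum(-1, keepdims=True)   # attention map
grad_attn = rng.normal(size=(N, N))              # d(loss)/d(attention) from backprop

# (i) gradient-weighted importance: how strongly each token is attended to,
# weighted by how much that attention actually matters for the loss.
importance = np.abs(attn * grad_attn).sum(axis=0)

k = 8                                            # tokens to keep
keep = np.sort(np.argsort(importance)[-k:])
prune = np.setdiff1d(np.arange(N), keep)

# (ii) merge each pruned token into the kept token it attends to most.
merged = tokens[keep].copy()
target = attn[np.ix_(prune, keep)].argmax(axis=1)
for p, t in zip(prune, target):
    merged[t] = 0.5 * (merged[t] + tokens[p])    # simple average in place of a learned merge

print(merged.shape)   # (8, 8)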
2503.23456 | Xin Jiang | Maofu Liu, Xin Jiang, Xiaokang Zhang | CADFormer: Fine-Grained Cross-modal Alignment and Decoding Transformer
for Referring Remote Sensing Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring Remote Sensing Image Segmentation (RRSIS) is a challenging task,
aiming to segment specific target objects in remote sensing (RS) images based
on a given language expression. Existing RRSIS methods typically employ
coarse-grained unidirectional alignment approaches to obtain multimodal
features, and they often overlook the critical role of language features as
contextual information during the decoding process. Consequently, these methods
exhibit weak object-level correspondence between visual and language features,
leading to incomplete or erroneous predicted masks, especially when handling
complex expressions and intricate RS image scenes. To address these challenges,
we propose a fine-grained cross-modal alignment and decoding Transformer,
CADFormer, for RRSIS. Specifically, we design a semantic mutual guidance
alignment module (SMGAM) to achieve both vision-to-language and
language-to-vision alignment, enabling comprehensive integration of visual and
textual features for fine-grained cross-modal alignment. Furthermore, a
textual-enhanced cross-modal decoder (TCMD) is introduced to incorporate
language features during decoding, using refined textual information as context
to enhance the relationship between cross-modal features. To thoroughly
evaluate the performance of CADFormer, especially for inconspicuous targets in
complex scenes, we constructed a new RRSIS dataset, called RRSIS-HR, which
includes larger high-resolution RS image patches and semantically richer
language expressions. Extensive experiments on the RRSIS-HR dataset and the
popular RRSIS-D dataset demonstrate the effectiveness and superiority of
CADFormer. Datasets and source codes will be available at
https://github.com/zxk688.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:24:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Maofu",
""
],
[
"Jiang",
"Xin",
""
],
[
"Zhang",
"Xiaokang",
""
]
] | TITLE: CADFormer: Fine-Grained Cross-modal Alignment and Decoding Transformer
for Referring Remote Sensing Image Segmentation
ABSTRACT: Referring Remote Sensing Image Segmentation (RRSIS) is a challenging task,
aiming to segment specific target objects in remote sensing (RS) images based
on a given language expression. Existing RRSIS methods typically employ
coarse-grained unidirectional alignment approaches to obtain multimodal
features, and they often overlook the critical role of language features as
contextual information during the decoding process. Consequently, these methods
exhibit weak object-level correspondence between visual and language features,
leading to incomplete or erroneous predicted masks, especially when handling
complex expressions and intricate RS image scenes. To address these challenges,
we propose a fine-grained cross-modal alignment and decoding Transformer,
CADFormer, for RRSIS. Specifically, we design a semantic mutual guidance
alignment module (SMGAM) to achieve both vision-to-language and
language-to-vision alignment, enabling comprehensive integration of visual and
textual features for fine-grained cross-modal alignment. Furthermore, a
textual-enhanced cross-modal decoder (TCMD) is introduced to incorporate
language features during decoding, using refined textual information as context
to enhance the relationship between cross-modal features. To thoroughly
evaluate the performance of CADFormer, especially for inconspicuous targets in
complex scenes, we constructed a new RRSIS dataset, called RRSIS-HR, which
includes larger high-resolution RS image patches and semantically richer
language expressions. Extensive experiments on the RRSIS-HR dataset and the
popular RRSIS-D dataset demonstrate the effectiveness and superiority of
CADFormer. Datasets and source codes will be available at
https://github.com/zxk688.
|
2503.23459 | Shen Liang | Chenglong Lu, Shen Liang, Xuewei Wang, Wei Wang | Reinforcement Learning-based Token Pruning in Vision Transformers: A
Markov Game Approach | Accepted by IEEE International Conference on Multimedia & Expo (ICME)
2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision Transformers (ViTs) have computational costs scaling quadratically
with the number of tokens, calling for effective token pruning policies. Most
existing policies are handcrafted, lacking adaptivity to varying inputs.
Moreover, they fail to consider the sequential nature of token pruning across
multiple layers. In this work, for the first time (as far as we know), we
exploit Reinforcement Learning (RL) to data-adaptively learn a pruning policy.
Formulating token pruning as a sequential decision-making problem, we model it
as a Markov Game and utilize Multi-Agent Proximal Policy Optimization (MAPPO)
where each agent makes an individualized pruning decision for a single token.
We also develop reward functions that enable simultaneous collaboration and
competition of these agents to balance efficiency and accuracy. On the
well-known ImageNet-1k dataset, our method improves the inference speed by up
to 44% while incurring only a negligible accuracy drop of 0.4%. The source code
is available at https://github.com/daashuai/rl4evit.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:34:28 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lu",
"Chenglong",
""
],
[
"Liang",
"Shen",
""
],
[
"Wang",
"Xuewei",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: Reinforcement Learning-based Token Pruning in Vision Transformers: A
Markov Game Approach
ABSTRACT: Vision Transformers (ViTs) have computational costs scaling quadratically
with the number of tokens, calling for effective token pruning policies. Most
existing policies are handcrafted, lacking adaptivity to varying inputs.
Moreover, they fail to consider the sequential nature of token pruning across
multiple layers. In this work, for the first time (as far as we know), we
exploit Reinforcement Learning (RL) to data-adaptively learn a pruning policy.
Formulating token pruning as a sequential decision-making problem, we model it
as a Markov Game and utilize Multi-Agent Proximal Policy Optimization (MAPPO)
where each agent makes an individualized pruning decision for a single token.
We also develop reward functions that enable simultaneous collaboration and
competition of these agents to balance efficiency and accuracy. On the
well-known ImageNet-1k dataset, our method improves the inference speed by up
to 44% while incurring only a negligible accuracy drop of 0.4%. The source code
is available at https://github.com/daashuai/rl4evit.
|
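A hedged sketch of a reward that lets pruning agents trade accuracy against efficiency, in the spirit of the Markov-game formulation above; the specific shaping and the coefficient are assumptions, not the paper's reward.

def pruning_reward(correct: bool, kept_tokens: int, total_tokens: int,
                   efficiency_weight: float = 0.5) -> float:
    accuracy_term = 1.0 if correct else -1.0               # shared, cooperative part
    efficiency_term = 1.0 - kept_tokens / total_tokens     # grows as more tokens are pruned
    return accuracy_term + efficiency_weight * efficiency_term

print(pruning_reward(correct=True, kept_tokens=98, total_tokens=196))   # 1.25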
2503.23463 | Xingcheng Zhou | Xingcheng Zhou, Xuyuan Han, Feng Yang, Yunpu Ma, Alois C. Knoll | OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision
Language Action Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present OpenDriveVLA, a Vision-Language Action (VLA) model designed for
end-to-end autonomous driving. OpenDriveVLA builds upon open-source pre-trained
large Vision-Language Models (VLMs) to generate reliable driving actions,
conditioned on 3D environmental perception, ego vehicle states, and driver
commands. To bridge the modality gap between driving visual representations and
language embeddings, we propose a hierarchical vision-language alignment
process, projecting both 2D and 3D structured visual tokens into a unified
semantic space. Besides, OpenDriveVLA models the dynamic relationships between
the ego vehicle, surrounding agents, and static road elements through an
autoregressive agent-env-ego interaction process, ensuring both spatially and
behaviorally informed trajectory planning. Extensive experiments on the
nuScenes dataset demonstrate that OpenDriveVLA achieves state-of-the-art
results across open-loop trajectory planning and driving-related
question-answering tasks. Qualitative analyses further illustrate
OpenDriveVLA's superior capability to follow high-level driving commands and
robustly generate trajectories under challenging scenarios, highlighting its
potential for next-generation end-to-end autonomous driving. We will release
our code to facilitate further research in this domain.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:45:54 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhou",
"Xingcheng",
""
],
[
"Han",
"Xuyuan",
""
],
[
"Yang",
"Feng",
""
],
[
"Ma",
"Yunpu",
""
],
[
"Knoll",
"Alois C.",
""
]
] | TITLE: OpenDriveVLA: Towards End-to-end Autonomous Driving with Large Vision
Language Action Model
ABSTRACT: We present OpenDriveVLA, a Vision-Language Action (VLA) model designed for
end-to-end autonomous driving. OpenDriveVLA builds upon open-source pre-trained
large Vision-Language Models (VLMs) to generate reliable driving actions,
conditioned on 3D environmental perception, ego vehicle states, and driver
commands. To bridge the modality gap between driving visual representations and
language embeddings, we propose a hierarchical vision-language alignment
process, projecting both 2D and 3D structured visual tokens into a unified
semantic space. Besides, OpenDriveVLA models the dynamic relationships between
the ego vehicle, surrounding agents, and static road elements through an
autoregressive agent-env-ego interaction process, ensuring both spatially and
behaviorally informed trajectory planning. Extensive experiments on the
nuScenes dataset demonstrate that OpenDriveVLA achieves state-of-the-art
results across open-loop trajectory planning and driving-related
question-answering tasks. Qualitative analyses further illustrate
OpenDriveVLA's superior capability to follow high-level driving commands and
robustly generate trajectories under challenging scenarios, highlighting its
potential for next-generation end-to-end autonomous driving. We will release
our code to facilitate further research in this domain.
|
2503.23466 | Leon Moonen | Max Hort and Leon Moonen | Codehacks: A Dataset of Adversarial Tests for Competitive Programming
Problems Obtained from Codeforces | Accepted for publication at the 18th IEEE International Conference on
Software Testing, Verification and Validation (ICST 2025) | null | null | null | cs.SE cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Software is used in critical applications in our day-to-day life and it is
important to ensure its correctness. One popular approach to assess correctness
is to evaluate software on tests. If a test fails, it indicates a fault in the
software under test; if all tests pass correctly, one may assume that the
software is correct. However, the reliability of these results depends on the
test suite considered, and there is a risk of false negatives (i.e. software
that passes all available tests but contains bugs because some cases are not
tested). Therefore, it is important to consider error-inducing test cases when
evaluating software.
To support data-driven creation of such a test-suite, which is especially of
interest for testing software synthesized from large language models, we curate
a dataset (Codehacks) of programming problems together with corresponding
error-inducing test cases (i.e., "hacks"). This dataset is collected from the
wild, in particular, from the Codeforces online judge platform. The dataset
comprises 288,617 hacks for 5,578 programming problems, each with a natural
language description, as well as the source code for 2,196 submitted solutions
to these problems that can be broken with their corresponding hacks.
Keywords: competitive programming, language model, dataset
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:50:03 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Hort",
"Max",
""
],
[
"Moonen",
"Leon",
""
]
] | TITLE: Codehacks: A Dataset of Adversarial Tests for Competitive Programming
Problems Obtained from Codeforces
ABSTRACT: Software is used in critical applications in our day-to-day life and it is
important to ensure its correctness. One popular approach to assess correctness
is to evaluate software on tests. If a test fails, it indicates a fault in the
software under test; if all tests pass correctly, one may assume that the
software is correct. However, the reliability of these results depends on the
test suite considered, and there is a risk of false negatives (i.e. software
that passes all available tests but contains bugs because some cases are not
tested). Therefore, it is important to consider error-inducing test cases when
evaluating software.
To support data-driven creation of such a test-suite, which is especially of
interest for testing software synthesized from large language models, we curate
a dataset (Codehacks) of programming problems together with corresponding
error-inducing test cases (i.e., "hacks"). This dataset is collected from the
wild, in particular, from the Codeforces online judge platform. The dataset
comprises 288,617 hacks for 5,578 programming problems, each with a natural
language description, as well as the source code for 2,196 submitted solutions
to these problems that can be broken with their corresponding hacks.
Keywords: competitive programming, language model, dataset
|
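
A minimal usage sketch for a dataset of this kind: run a candidate solution against error-inducing inputs ("hacks") and collect the ones that break it. The command, the "input"/"expected" fields, and the timeout below are illustrative assumptions, not the actual Codehacks schema.

import subprocess

def run_hacks(solution_cmd, hacks, timeout=2.0):
    """hacks: list of dicts assumed to hold 'input' and 'expected' strings."""
    failures = []
    for i, hack in enumerate(hacks):
        try:
            out = subprocess.run(solution_cmd, input=hack["input"], capture_output=True,
                                 text=True, timeout=timeout).stdout.strip()
            if out != hack["expected"].strip():
                failures.append((i, out))          # wrong answer on this hack
        except subprocess.TimeoutExpired:
            failures.append((i, "TIMEOUT"))        # hack triggered a time-limit failure
    return failures

# Hypothetical usage: failures = run_hacks(["python3", "solution.py"], hacks)
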
2503.23468 | Eytan Kats | Eytan Kats, Kai Gei{\ss}ler, Jochen G. Hirsch, Stefan Heldman, Mattias
P. Heinrich | Internal Organ Localization Using Depth Images | Accepted for German Conference on Medical Image Computing 2025 (BVM
2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Automated patient positioning is a crucial step in streamlining MRI workflows
and enhancing patient throughput. RGB-D camera-based systems offer a promising
approach to automate this process by leveraging depth information to estimate
internal organ positions. This paper investigates the feasibility of a
learning-based framework to infer approximate internal organ positions from the
body surface. Our approach utilizes a large-scale dataset of MRI scans to train
a deep learning model capable of accurately predicting organ positions and
shapes from depth images alone. We demonstrate the effectiveness of our method
in localization of multiple internal organs, including bones and soft tissues.
Our findings suggest that RGB-D camera-based systems integrated into MRI
workflows have the potential to streamline scanning procedures and improve
patient experience by enabling accurate and automated patient positioning.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 14:55:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kats",
"Eytan",
""
],
[
"Geißler",
"Kai",
""
],
[
"Hirsch",
"Jochen G.",
""
],
[
"Heldman",
"Stefan",
""
],
[
"Heinrich",
"Mattias P.",
""
]
] | TITLE: Internal Organ Localization Using Depth Images
ABSTRACT: Automated patient positioning is a crucial step in streamlining MRI workflows
and enhancing patient throughput. RGB-D camera-based systems offer a promising
approach to automate this process by leveraging depth information to estimate
internal organ positions. This paper investigates the feasibility of a
learning-based framework to infer approximate internal organ positions from the
body surface. Our approach utilizes a large-scale dataset of MRI scans to train
a deep learning model capable of accurately predicting organ positions and
shapes from depth images alone. We demonstrate the effectiveness of our method
in localization of multiple internal organs, including bones and soft tissues.
Our findings suggest that RGB-D camera-based systems integrated into MRI
workflows have the potential to streamline scanning procedures and improve
patient experience by enabling accurate and automated patient positioning.
|
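
To make the learning setup in the record above concrete, here is a toy sketch (not the authors' model) of a CNN that regresses approximate 3D organ centre positions from a single depth image; the architecture, organ count, and tensor shapes are all assumptions.

import torch
import torch.nn as nn

class DepthToOrgans(nn.Module):
    def __init__(self, num_organs=6):
        super().__init__()
        self.num_organs = num_organs
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, num_organs * 3)      # one (x, y, z) centre per organ
    def forward(self, depth):                          # depth: (B, 1, H, W)
        return self.head(self.encoder(depth)).view(-1, self.num_organs, 3)

print(DepthToOrgans()(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 6, 3])
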
2503.23470 | Dim Shaiakhmetov | Dim Shaiakhmetov, Gulnaz Gimaletdinova, Selcuk Cankurt, Kadyrmamat
Momunov | Evaluation of the Pronunciation of Tajweed Rules Based on DNN as a Step
Towards Interactive Recitation Learning | null | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proper recitation of the Quran, adhering to the rules of Tajweed, is crucial
for preventing mistakes during recitation and requires significant effort to
master. Traditional methods of teaching these rules are limited by the
availability of qualified instructors and time constraints. Automatic
evaluation of recitation can address these challenges by providing prompt
feedback and supporting independent practice. This study focuses on developing
a deep learning model to classify three Tajweed rules - separate stretching (Al
Mad), tight noon (Ghunnah), and hide (Ikhfaa) - using the publicly available
QDAT dataset, which contains over 1,500 audio recordings. The input data
consisted of audio recordings from this dataset, transformed into normalized
mel-spectrograms. For classification, the EfficientNet-B0 architecture was
used, enhanced with a Squeeze-and-Excitation attention mechanism. The developed
model achieved accuracy rates of 95.35%, 99.34%, and 97.01% for the respective
rules. An analysis of the learning curves confirmed the model's robustness and
absence of overfitting. The proposed approach demonstrates high efficiency and
paves the way for developing interactive educational systems for Tajweed study.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 15:03:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Shaiakhmetov",
"Dim",
""
],
[
"Gimaletdinova",
"Gulnaz",
""
],
[
"Cankurt",
"Selcuk",
""
],
[
"Momunov",
"Kadyrmamat",
""
]
] | TITLE: Evaluation of the Pronunciation of Tajweed Rules Based on DNN as a Step
Towards Interactive Recitation Learning
ABSTRACT: Proper recitation of the Quran, adhering to the rules of Tajweed, is crucial
for preventing mistakes during recitation and requires significant effort to
master. Traditional methods of teaching these rules are limited by the
availability of qualified instructors and time constraints. Automatic
evaluation of recitation can address these challenges by providing prompt
feedback and supporting independent practice. This study focuses on developing
a deep learning model to classify three Tajweed rules - separate stretching (Al
Mad), tight noon (Ghunnah), and hide (Ikhfaa) - using the publicly available
QDAT dataset, which contains over 1,500 audio recordings. The input data
consisted of audio recordings from this dataset, transformed into normalized
mel-spectrograms. For classification, the EfficientNet-B0 architecture was
used, enhanced with a Squeeze-and-Excitation attention mechanism. The developed
model achieved accuracy rates of 95.35%, 99.34%, and 97.01% for the respective
rules. An analysis of the learning curves confirmed the model's robustness and
absence of overfitting. The proposed approach demonstrates high efficiency and
paves the way for developing interactive educational systems for Tajweed study.
|
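
The pipeline described above (normalized mel-spectrograms fed to EfficientNet-B0 with a Squeeze-and-Excitation attention mechanism) can be sketched roughly as follows; the sample rate, mel settings, and exact placement of the SE layer are assumptions rather than the authors' configuration.

import torch
import torch.nn as nn
import torchaudio
from torchvision.models import efficientnet_b0

class SqueezeExcite(nn.Module):
    """Channel-wise SE attention: squeeze (global pool) then excite (gating)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))               # (B, C) channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

class TajweedClassifier(nn.Module):
    def __init__(self, num_rules=3):                  # Al Mad, Ghunnah, Ikhfaa
        super().__init__()
        self.backbone = efficientnet_b0(weights=None)
        # Single-channel spectrogram input instead of RGB.
        self.backbone.features[0][0] = nn.Conv2d(1, 32, 3, stride=2, padding=1, bias=False)
        self.se = SqueezeExcite(1280)                 # 1280 = EfficientNet-B0 feature dim
        self.backbone.classifier[-1] = nn.Linear(1280, num_rules)
    def forward(self, spec):                          # spec: (B, 1, n_mels, time)
        feats = self.se(self.backbone.features(spec))
        return self.backbone.classifier(feats.mean(dim=(2, 3)))

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
wave = torch.randn(1, 16000 * 3)                      # dummy 3-second recording
spec = torch.log1p(mel(wave)).unsqueeze(1)            # crude normalization for the sketch
print(TajweedClassifier()(spec).shape)                # torch.Size([1, 3])
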
2503.23472 | Guandong Li | Guandong Li, Mengxia Ye | Efficient Dynamic Attention 3D Convolution for Hyperspectral Image
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks face several challenges in hyperspectral image
classification, including insufficient utilization of joint spatial-spectral
information, gradient vanishing with increasing depth, and overfitting. To
enhance feature extraction efficiency while skipping redundant information,
this paper proposes a dynamic attention convolution design based on an improved
3D-DenseNet model. The design employs multiple parallel convolutional kernels
instead of a single kernel and assigns dynamic attention weights to these
parallel convolutions. This dynamic attention mechanism achieves adaptive
feature response based on spatial characteristics in the spatial dimension of
hyperspectral images, focusing more on key spatial structures. In the spectral
dimension, it enables dynamic discrimination of different bands, alleviating
information redundancy and computational complexity caused by high spectral
dimensionality. The DAC module enhances model representation capability by
attention-based aggregation of multiple convolutional kernels without
increasing network depth or width. The proposed method demonstrates superior
performance in both inference speed and accuracy, outperforming mainstream
hyperspectral image classification methods on the IN, UP, and KSC datasets.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 15:12:23 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Li",
"Guandong",
""
],
[
"Ye",
"Mengxia",
""
]
] | TITLE: Efficient Dynamic Attention 3D Convolution for Hyperspectral Image
Classification
ABSTRACT: Deep neural networks face several challenges in hyperspectral image
classification, including insufficient utilization of joint spatial-spectral
information, gradient vanishing with increasing depth, and overfitting. To
enhance feature extraction efficiency while skipping redundant information,
this paper proposes a dynamic attention convolution design based on an improved
3D-DenseNet model. The design employs multiple parallel convolutional kernels
instead of a single kernel and assigns dynamic attention weights to these
parallel convolutions. This dynamic attention mechanism achieves adaptive
feature response based on spatial characteristics in the spatial dimension of
hyperspectral images, focusing more on key spatial structures. In the spectral
dimension, it enables dynamic discrimination of different bands, alleviating
information redundancy and computational complexity caused by high spectral
dimensionality. The DAC module enhances model representation capability by
attention-based aggregation of multiple convolutional kernels without
increasing network depth or width. The proposed method demonstrates superior
performance in both inference speed and accuracy, outperforming mainstream
hyperspectral image classification methods on the IN, UP, and KSC datasets.
|
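
The dynamic attention convolution described above follows the general dynamic-convolution recipe: several parallel kernels whose outputs are mixed by input-dependent weights. The sketch below illustrates that recipe for 3D (spatial-spectral) inputs; the kernel count, channel sizes, and the omission of the 3D-DenseNet integration are simplifications, not the authors' design.

import torch
import torch.nn as nn

class DynamicAttentionConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, k_kernels=4, kernel_size=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(k_kernels))
        # Attention over the K kernels, conditioned on a global summary of x.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(in_ch, k_kernels), nn.Softmax(dim=1))
    def forward(self, x):                       # x: (B, C, bands, H, W)
        w = self.attn(x)                        # (B, K) per-sample kernel weights
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C', bands, H, W)
        return (w.view(*w.shape, 1, 1, 1, 1) * outs).sum(dim=1)

x = torch.randn(2, 8, 20, 9, 9)                 # e.g. 8 feature maps over 20 spectral bands
print(DynamicAttentionConv3d(8, 16)(x).shape)   # torch.Size([2, 16, 20, 9, 9])
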
2503.23480 | Haofei Kuang | Haofei Kuang, Yue Pan, Xingguang Zhong, Louis Wiesmann, Jens Behley
and Cyrill Stachniss | Improving Indoor Localization Accuracy by Using an Efficient Implicit
Neural Map Representation | 8 pages, 5 figures. Accepted to ICRA 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Globally localizing a mobile robot in a known map is often a foundation for
enabling robots to navigate and operate autonomously. In indoor environments,
traditional Monte Carlo localization based on occupancy grid maps is considered
the gold standard, but its accuracy is limited by the representation
capabilities of the occupancy grid map. In this paper, we address the problem
of building an effective map representation that allows us to accurately perform
probabilistic global localization. To this end, we propose an implicit neural
map representation that is able to capture positional and directional geometric
features from 2D LiDAR scans to efficiently represent the environment and learn
a neural network that is able to predict both the non-projective signed
distance and a direction-aware projective distance for an arbitrary point in
the mapped environment. This combination of neural map representation with a
light-weight neural network allows us to design an efficient observation model
within a conventional Monte Carlo localization framework for pose estimation of
a robot in real time. We evaluated our approach to indoor localization on a
publicly available dataset for global localization and the experimental results
indicate that our approach is able to more accurately localize a mobile robot
than other localization approaches employing occupancy or existing neural map
representations. In contrast to other approaches employing an implicit neural
map representation for 2D LiDAR localization, our approach allows us to perform
real-time pose tracking after convergence and near real-time global
localization. The code of our approach is available at:
https://github.com/PRBonn/enm-mcl.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 15:31:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kuang",
"Haofei",
""
],
[
"Pan",
"Yue",
""
],
[
"Zhong",
"Xingguang",
""
],
[
"Wiesmann",
"Louis",
""
],
[
"Behley",
"Jens",
""
],
[
"Stachniss",
"Cyrill",
""
]
] | TITLE: Improving Indoor Localization Accuracy by Using an Efficient Implicit
Neural Map Representation
ABSTRACT: Globally localizing a mobile robot in a known map is often a foundation for
enabling robots to navigate and operate autonomously. In indoor environments,
traditional Monte Carlo localization based on occupancy grid maps is considered
the gold standard, but its accuracy is limited by the representation
capabilities of the occupancy grid map. In this paper, we address the problem
of building an effective map representation that allows us to accurately perform
probabilistic global localization. To this end, we propose an implicit neural
map representation that is able to capture positional and directional geometric
features from 2D LiDAR scans to efficiently represent the environment and learn
a neural network that is able to predict both the non-projective signed
distance and a direction-aware projective distance for an arbitrary point in
the mapped environment. This combination of neural map representation with a
light-weight neural network allows us to design an efficient observation model
within a conventional Monte Carlo localization framework for pose estimation of
a robot in real time. We evaluated our approach to indoor localization on a
publicly available dataset for global localization and the experimental results
indicate that our approach is able to more accurately localize a mobile robot
than other localization approaches employing occupancy or existing neural map
representations. In contrast to other approaches employing an implicit neural
map representation for 2D LiDAR localization, our approach allows us to perform
real-time pose tracking after convergence and near real-time global
localization. The code of our approach is available at:
https://github.com/PRBonn/enm-mcl.
|
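
For readers unfamiliar with the Monte Carlo localization loop the record builds on, here is a toy measurement update in which the map model is abstracted as a predict_range(pose, beam_angle) callable; the Gaussian beam model and systematic resampling are standard textbook choices, not the paper's specific observation model.

import numpy as np

def mcl_step(particles, weights, scan, beam_angles, predict_range, sigma=0.2):
    """One measurement update + resampling over particles of shape (N, 3) = [x, y, theta]."""
    for i, (x, y, th) in enumerate(particles):
        expected = np.array([predict_range((x, y, th), a) for a in beam_angles])
        # Gaussian beam model: likelihood of the observed ranges under this pose.
        weights[i] *= np.exp(-0.5 * np.sum((scan - expected) ** 2) / sigma ** 2)
    weights = weights / (weights.sum() + 1e-12)
    # Systematic resampling keeps the particle set focused on likely poses.
    positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.clip(np.searchsorted(np.cumsum(weights), positions), 0, len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
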
2503.23491 | Jiaxin Xu | Jiaxin Xu, Gang Liu, Ruilan Guo, Meng Jiang, Tengfei Luo | POINT$^{2}$: A Polymer Informatics Training and Testing Database | null | null | null | null | cond-mat.mtrl-sci cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The advancement of polymer informatics has been significantly propelled by
the integration of machine learning (ML) techniques, enabling the rapid
prediction of polymer properties and expediting the discovery of
high-performance polymeric materials. However, the field lacks a standardized
workflow that encompasses prediction accuracy, uncertainty quantification, ML
interpretability, and polymer synthesizability. In this study, we introduce
POINT$^{2}$ (POlymer INformatics Training and Testing), a comprehensive
benchmark database and protocol designed to address these critical challenges.
Leveraging the existing labeled datasets and the unlabeled PI1M dataset, a
collection of approximately one million virtual polymers generated via a
recurrent neural network trained on the realistic polymers, we develop an
ensemble of ML models, including Quantile Random Forests, Multilayer
Perceptrons with dropout, Graph Neural Networks, and pretrained large language
models. These models are coupled with diverse polymer representations such as
Morgan, MACCS, RDKit, Topological, Atom Pair fingerprints, and graph-based
descriptors to achieve property predictions, uncertainty estimations, model
interpretability, and template-based polymerization synthesizability across a
spectrum of properties, including gas permeability, thermal conductivity, glass
transition temperature, melting temperature, fractional free volume, and
density. The POINT$^{2}$ database can serve as a valuable resource for the
polymer informatics community for polymer discovery and optimization.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 15:46:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Xu",
"Jiaxin",
""
],
[
"Liu",
"Gang",
""
],
[
"Guo",
"Ruilan",
""
],
[
"Jiang",
"Meng",
""
],
[
"Luo",
"Tengfei",
""
]
] | TITLE: POINT$^{2}$: A Polymer Informatics Training and Testing Database
ABSTRACT: The advancement of polymer informatics has been significantly propelled by
the integration of machine learning (ML) techniques, enabling the rapid
prediction of polymer properties and expediting the discovery of
high-performance polymeric materials. However, the field lacks a standardized
workflow that encompasses prediction accuracy, uncertainty quantification, ML
interpretability, and polymer synthesizability. In this study, we introduce
POINT$^{2}$ (POlymer INformatics Training and Testing), a comprehensive
benchmark database and protocol designed to address these critical challenges.
Leveraging the existing labeled datasets and the unlabeled PI1M dataset, a
collection of approximately one million virtual polymers generated via a
recurrent neural network trained on the realistic polymers, we develop an
ensemble of ML models, including Quantile Random Forests, Multilayer
Perceptrons with dropout, Graph Neural Networks, and pretrained large language
models. These models are coupled with diverse polymer representations such as
Morgan, MACCS, RDKit, Topological, Atom Pair fingerprints, and graph-based
descriptors to achieve property predictions, uncertainty estimations, model
interpretability, and template-based polymerization synthesizability across a
spectrum of properties, including gas permeability, thermal conductivity, glass
transition temperature, melting temperature, fractional free volume, and
density. The POINT$^{2}$ database can serve as a valuable resource for the
polymer informatics community for polymer discovery and optimization.
|
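
One ingredient mentioned above, fingerprint-based property prediction with an uncertainty estimate, can be illustrated with Morgan fingerprints and a random forest whose per-tree spread serves as a simple stand-in for the Quantile Random Forests in the record; the SMILES strings and target values below are invented toy data.

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def morgan_fp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]      # toy molecules, not real monomers
y = np.array([0.1, 0.5, 0.3, 0.2])                  # invented property values
X = np.stack([morgan_fp(s) for s in smiles])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
per_tree = np.stack([t.predict(X) for t in rf.estimators_])
print("mean:", per_tree.mean(axis=0))               # point prediction
print("std: ", per_tree.std(axis=0))                # crude uncertainty proxy
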
2503.23502 | Jannik Endres | Jannik Endres, Oliver Hahn, Charles Corbi\`ere, Simone Schaub-Meyer,
Stefan Roth, Alexandre Alahi | Boosting Omnidirectional Stereo Matching with a Pre-trained Depth
Foundation Model | Project page: https://vita-epfl.github.io/DFI-OmniStereo-website/ | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Omnidirectional depth perception is essential for mobile robotics
applications that require scene understanding across a full 360{\deg} field of
view. Camera-based setups offer a cost-effective option by using stereo depth
estimation to generate dense, high-resolution depth maps without relying on
expensive active sensing. However, existing omnidirectional stereo matching
approaches achieve only limited depth accuracy across diverse environments,
depth ranges, and lighting conditions, due to the scarcity of real-world data.
We present DFI-OmniStereo, a novel omnidirectional stereo matching method that
leverages a large-scale pre-trained foundation model for relative monocular
depth estimation within an iterative optimization-based stereo matching
architecture. We introduce a dedicated two-stage training strategy to utilize
the relative monocular depth features for our omnidirectional stereo matching
before scale-invariant fine-tuning. DFI-OmniStereo achieves state-of-the-art
results on the real-world Helvipad dataset, reducing disparity MAE by
approximately 16% compared to the previous best omnidirectional stereo method.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:24:22 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Endres",
"Jannik",
""
],
[
"Hahn",
"Oliver",
""
],
[
"Corbière",
"Charles",
""
],
[
"Schaub-Meyer",
"Simone",
""
],
[
"Roth",
"Stefan",
""
],
[
"Alahi",
"Alexandre",
""
]
] | TITLE: Boosting Omnidirectional Stereo Matching with a Pre-trained Depth
Foundation Model
ABSTRACT: Omnidirectional depth perception is essential for mobile robotics
applications that require scene understanding across a full 360{\deg} field of
view. Camera-based setups offer a cost-effective option by using stereo depth
estimation to generate dense, high-resolution depth maps without relying on
expensive active sensing. However, existing omnidirectional stereo matching
approaches achieve only limited depth accuracy across diverse environments,
depth ranges, and lighting conditions, due to the scarcity of real-world data.
We present DFI-OmniStereo, a novel omnidirectional stereo matching method that
leverages a large-scale pre-trained foundation model for relative monocular
depth estimation within an iterative optimization-based stereo matching
architecture. We introduce a dedicated two-stage training strategy to utilize
the relative monocular depth features for our omnidirectional stereo matching
before scale-invariant fine-tuning. DFI-OmniStereo achieves state-of-the-art
results on the real-world Helvipad dataset, reducing disparity MAE by
approximately 16% compared to the previous best omnidirectional stereo method.
|
2503.23503 | Katrina Brown | Sid Bharthulwar, John Rho, Katrina Brown | Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning
Strategies in Vision-Language Models | Published at ICLR 2025 Workshop on Reasoning and Planning for LLMs | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a framework for optimizing prompts in vision-language models to
elicit multimodal reasoning without model retraining. Using an evolutionary
algorithm to guide prompt updates downstream of visual tasks, our approach
improves upon baseline prompt-updating algorithms, which lack evolution-style
"survival of the fittest" iteration. Crucially, we find this approach enables
the language model to independently discover progressive problem-solving
techniques across several evolution generations. For example, the model reasons
that to "break down" visually complex spatial tasks, making a tool call to a
Python interpreter to perform tasks (such as cropping, image segmentation, or
saturation changes) would improve performance significantly. Our
experimentation shows that explicitly evoking this "tool calling" call, via
system-level XML $...\texttt{<tool>} ... \texttt{</tool>}...$ tags, can
effectively flag Python interpreter access for the same language model to
generate relevant programs, generating advanced multimodal functionality. This
functionality can be crystallized into a system-level prompt that induces
improved performance at inference time, and our experimentation suggests up to
$\approx 50\%$ relative improvement across select visual tasks. Downstream
performance is trained and evaluated across subtasks from MathVista, M3CoT, and
GeoBench-VLM datasets. Importantly, our approach shows that evolutionary prompt
optimization guides language models towards self-reasoning discoveries, which
result in improved zero-shot generalization across tasks.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:25:45 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Bharthulwar",
"Sid",
""
],
[
"Rho",
"John",
""
],
[
"Brown",
"Katrina",
""
]
] | TITLE: Evolutionary Prompt Optimization Discovers Emergent Multimodal Reasoning
Strategies in Vision-Language Models
ABSTRACT: We present a framework for optimizing prompts in vision-language models to
elicit multimodal reasoning without model retraining. Using an evolutionary
algorithm to guide prompt updates downstream of visual tasks, our approach
improves upon baseline prompt-updating algorithms, which lack evolution-style
"survival of the fittest" iteration. Crucially, we find this approach enables
the language model to independently discover progressive problem-solving
techniques across several evolution generations. For example, the model reasons
that to "break down" visually complex spatial tasks, making a tool call to a
Python interpreter to perform tasks (such as cropping, image segmentation, or
saturation changes) would improve performance significantly. Our
experimentation shows that explicitly evoking this "tool calling" behavior, via
system-level XML $...\texttt{<tool>} ... \texttt{</tool>}...$ tags, can
effectively flag Python interpreter access for the same language model to
generate relevant programs, generating advanced multimodal functionality. This
functionality can be crystallized into a system-level prompt that induces
improved performance at inference time, and our experimentation suggests up to
$\approx 50\%$ relative improvement across select visual tasks. Downstream
performance is trained and evaluated across subtasks from MathVista, M3CoT, and
GeoBench-VLM datasets. Importantly, our approach shows that evolutionary prompt
optimization guides language models towards self-reasoning discoveries, which
result in improved zero-shot generalization across tasks.
|
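
The evolutionary loop sketched below captures the "survival of the fittest" iteration described above in its simplest form; the score and mutate callables are placeholders the reader would back with task evaluation and LLM-driven rewriting, and nothing here reflects the authors' implementation details.

import random

def evolve_prompts(seed_prompts, score, mutate, generations=10, population=8, elite=2):
    pool = list(seed_prompts)
    for _ in range(generations):
        ranked = sorted(pool, key=score, reverse=True)     # keep the fittest prompts
        parents = ranked[:elite]
        children = [mutate(random.choice(parents)) for _ in range(population - elite)]
        pool = parents + children
    return max(pool, key=score)

# Example with trivial stand-ins: prefer longer prompts, mutate by appending a hint.
best = evolve_prompts(
    ["Solve the task."],
    score=len,
    mutate=lambda p: p + " Think step by step.",
)
print(best)
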
2503.23507 | Saumik Bhattacharya | Siladittya Manna, Suresh Das, Sayantari Ghosh and Saumik Bhattacharya | Federated Self-Supervised Learning for One-Shot Cross-Modal and
Cross-Imaging Technique Segmentation | null | null | null | null | cs.CV cs.LG eess.IV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized federated learning enables learning of data representations
from multiple sources without compromising the privacy of the clients. In
applications like medical image segmentation, where obtaining a large annotated
dataset from a single source is a distressing problem, federated
self-supervised learning can provide some solace. In this work, we push the
limits further by exploring a federated self-supervised one-shot segmentation
task representing a more data-scarce scenario. We adopt a pre-existing
self-supervised few-shot segmentation framework CoWPro and adapt it to the
federated learning scenario. To the best of our knowledge, this work is the
first to attempt a self-supervised few-shot segmentation task in the federated
learning domain. Moreover, we consider the clients to be constituted of data
from different modalities and imaging techniques like MR or CT, which makes the
problem even harder. Additionally, we reinforce and improve the baseline CoWPro
method using a fused dice loss which shows considerable improvement in
performance over the baseline CoWPro. Finally, we evaluate this novel framework
on a completely unseen held-out part of the local client dataset. We observe
that the proposed framework can achieve performance at par or better than the
FedAvg version of the CoWPro framework on the held-out validation dataset.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:40:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Manna",
"Siladittya",
""
],
[
"Das",
"Suresh",
""
],
[
"Ghosh",
"Sayantari",
""
],
[
"Bhattacharya",
"Saumik",
""
]
] | TITLE: Federated Self-Supervised Learning for One-Shot Cross-Modal and
Cross-Imaging Technique Segmentation
ABSTRACT: Decentralized federated learning enables learning of data representations
from multiple sources without compromising the privacy of the clients. In
applications like medical image segmentation, where obtaining a large annotated
dataset from a single source is a distressing problem, federated
self-supervised learning can provide some solace. In this work, we push the
limits further by exploring a federated self-supervised one-shot segmentation
task representing a more data-scarce scenario. We adopt a pre-existing
self-supervised few-shot segmentation framework CoWPro and adapt it to the
federated learning scenario. To the best of our knowledge, this work is the
first to attempt a self-supervised few-shot segmentation task in the federated
learning domain. Moreover, we consider the clients to be constituted of data
from different modalities and imaging techniques like MR or CT, which makes the
problem even harder. Additionally, we reinforce and improve the baseline CoWPro
method using a fused dice loss which shows considerable improvement in
performance over the baseline CoWPro. Finally, we evaluate this novel framework
on a completely unseen held-out part of the local client dataset. We observe
that the proposed framework can achieve performance at par or better than the
FedAvg version of the CoWPro framework on the held-out validation dataset.
|
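
Since the record compares against a FedAvg variant, a minimal FedAvg aggregation sketch may help readers unfamiliar with it: client weights are averaged, weighted by local dataset size. This is generic FedAvg, not the paper's federated self-supervised pipeline.

def fedavg(client_state_dicts, client_sizes):
    """Size-weighted average of client model parameters (generic FedAvg).
    Integer buffers (e.g. BatchNorm counters) would need separate handling."""
    total = float(sum(client_sizes))
    return {
        k: sum(sd[k].float() * (n / total)
               for sd, n in zip(client_state_dicts, client_sizes))
        for k in client_state_dicts[0]
    }

# Hypothetical usage:
# global_model.load_state_dict(fedavg([c.state_dict() for c in clients], sizes))
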
2503.23508 | Yuming Chen | Yuming Chen, Jiangyan Feng, Haodong Zhang, Lijun Gong, Feng Zhu, Rui
Zhao, Qibin Hou, Ming-Ming Cheng, Yibing Song | Re-Aligning Language to Visual Objects with an Agentic Workflow | 33 pages, 20 figures, 17 tables, ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language-based object detection (LOD) aims to align visual objects with
language expressions. A large amount of paired data is utilized to improve LOD
model generalizations. During the training process, recent studies leverage
vision-language models (VLMs) to automatically generate human-like expressions
for visual objects, facilitating training data scaling up. In this process, we
observe that VLM hallucinations introduce inaccurate object descriptions (e.g.,
object name, color, and shape) that deteriorate VL alignment quality. To reduce
VLM hallucinations, we propose an agentic workflow controlled by an LLM to
re-align language to visual objects via adaptively adjusting image and text
prompts. We name this workflow Real-LOD, which includes planning, tool use, and
reflection steps. Given an image with detected objects and VLM raw language
expressions, Real-LOD reasons its state automatically and arranges action based
on our neural symbolic designs (i.e., planning). The action will adaptively
adjust the image and text prompts and send them to VLMs for object
re-description (i.e., tool use). Then, we use another LLM to analyze these
refined expressions for feedback (i.e., reflection). These steps are conducted
in a cyclic form to gradually improve language descriptions for re-aligning to
visual objects. We construct a dataset that contains a tiny amount of 0.18M
images with re-aligned language expression and train a prevalent LOD model to
surpass existing LOD methods by around 50% on the standard benchmarks. Our
Real-LOD workflow, with automatic VL refinement, reveals a potential to
preserve data quality along with scaling up data quantity, which further
improves LOD performance from a data-alignment perspective.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:41:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Chen",
"Yuming",
""
],
[
"Feng",
"Jiangyan",
""
],
[
"Zhang",
"Haodong",
""
],
[
"Gong",
"Lijun",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Hou",
"Qibin",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Song",
"Yibing",
""
]
] | TITLE: Re-Aligning Language to Visual Objects with an Agentic Workflow
ABSTRACT: Language-based object detection (LOD) aims to align visual objects with
language expressions. A large amount of paired data is utilized to improve LOD
model generalizations. During the training process, recent studies leverage
vision-language models (VLMs) to automatically generate human-like expressions
for visual objects, facilitating training data scaling up. In this process, we
observe that VLM hallucinations introduce inaccurate object descriptions (e.g.,
object name, color, and shape) that deteriorate VL alignment quality. To reduce
VLM hallucinations, we propose an agentic workflow controlled by an LLM to
re-align language to visual objects via adaptively adjusting image and text
prompts. We name this workflow Real-LOD, which includes planning, tool use, and
reflection steps. Given an image with detected objects and VLM raw language
expressions, Real-LOD reasons its state automatically and arranges action based
on our neural symbolic designs (i.e., planning). The action will adaptively
adjust the image and text prompts and send them to VLMs for object
re-description (i.e., tool use). Then, we use another LLM to analyze these
refined expressions for feedback (i.e., reflection). These steps are conducted
in a cyclic form to gradually improve language descriptions for re-aligning to
visual objects. We construct a dataset that contains a tiny amount of 0.18M
images with re-aligned language expression and train a prevalent LOD model to
surpass existing LOD methods by around 50% on the standard benchmarks. Our
Real-LOD workflow, with automatic VL refinement, reveals a potential to
preserve data quality along with scaling up data quantity, which further
improves LOD performance from a data-alignment perspective.
|
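
The planning / tool-use / reflection cycle described above can be caricatured as the control loop below, where plan, describe, and reflect are placeholders standing in for the LLM controller, the VLM re-description call, and the reviewing LLM respectively; this is a schematic of the workflow shape only.

def realign_description(image, raw_expr, plan, describe, reflect, max_rounds=3):
    """Iteratively refine a VLM object description until a reviewer accepts it."""
    state = {"image": image, "expr": raw_expr}
    for _ in range(max_rounds):
        action = plan(state)                     # e.g. adjust the image crop or text prompt
        state["expr"] = describe(action)         # tool use: VLM re-describes the object
        if reflect(state["expr"]) == "accept":   # reflection: critique the new expression
            break
    return state["expr"]
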
2503.23510 | Xingyu Lyu | Xingyu Lyu, Mengya Zhang, Xiaokuan Zhang, Jianyu Niu, Yinqian Zhang,
Zhiqiang Lin | Demystifying Private Transactions and Their Impact in PoW and PoS
Ethereum | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Ethereum, private transactions, a specialized transaction type employed to
evade public Peer-to-Peer (P2P) network broadcasting, remain largely
unexplored, particularly in the context of the transition from Proof-of-Work
(PoW) to Proof-of-Stake (PoS) consensus mechanisms. To address this gap, we
investigate the transaction characteristics, (un)intended usages, and monetary
impacts by analyzing large-scale datasets comprising 14,810,392 private
transactions within a 15.5-month PoW dataset and 30,062,232 private
transactions within a 15.5-month PoS dataset. While originally designed for
security purposes, we find that private transactions predominantly serve three
distinct functions in both PoW and PoS Ethereum: extracting Maximum Extractable
Value (MEV), facilitating monetary transfers to distribute mining rewards, and
interacting with popular Decentralized Finance (DeFi) applications.
Furthermore, we find that private transactions are utilized in DeFi attacks to
circumvent surveillance by white hat monitors, with an increased prevalence
observed in PoS Ethereum compared to PoW Ethereum. Additionally, in PoS
Ethereum, there is a subtle uptick in the role of private transactions for MEV
extraction. This shift could be attributed to the decrease in transaction
costs. However, this reduction in transaction cost and the cancellation of
block rewards result in a significant decrease in mining profits for block
creators.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:45:18 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Lyu",
"Xingyu",
""
],
[
"Zhang",
"Mengya",
""
],
[
"Zhang",
"Xiaokuan",
""
],
[
"Niu",
"Jianyu",
""
],
[
"Zhang",
"Yinqian",
""
],
[
"Lin",
"Zhiqiang",
""
]
] | TITLE: Demystifying Private Transactions and Their Impact in PoW and PoS
Ethereum
ABSTRACT: In Ethereum, private transactions, a specialized transaction type employed to
evade public Peer-to-Peer (P2P) network broadcasting, remain largely
unexplored, particularly in the context of the transition from Proof-of-Work
(PoW) to Proof-of-Stake (PoS) consensus mechanisms. To address this gap, we
investigate the transaction characteristics, (un)intended usages, and monetary
impacts by analyzing large-scale datasets comprising 14,810,392 private
transactions within a 15.5-month PoW dataset and 30,062,232 private
transactions within a 15.5-month PoS dataset. While originally designed for
security purposes, we find that private transactions predominantly serve three
distinct functions in both PoW and PoS Ethereum: extracting Maximum Extractable
Value (MEV), facilitating monetary transfers to distribute mining rewards, and
interacting with popular Decentralized Finance (DeFi) applications.
Furthermore, we find that private transactions are utilized in DeFi attacks to
circumvent surveillance by white hat monitors, with an increased prevalence
observed in PoS Ethereum compared to PoW Ethereum. Additionally, in PoS
Ethereum, there is a subtle uptick in the role of private transactions for MEV
extraction. This shift could be attributed to the decrease in transaction
costs. However, this reduction in transaction cost and the cancellation of
block rewards result in a significant decrease in mining profits for block
creators.
|
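
The detection idea underlying such studies is usually simple: a mined transaction that was never observed in the public mempool was likely submitted through a private channel. The sketch below shows that check with web3.py; the RPC endpoint is a placeholder and seen_in_mempool would have to be filled by a separate, long-running mempool listener.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))   # placeholder endpoint
seen_in_mempool = set()   # tx hashes collected beforehand by a mempool watcher

def private_txs_in_block(block_number):
    """Return hashes of mined transactions never seen in the public mempool."""
    block = w3.eth.get_block(block_number, full_transactions=True)
    return [tx["hash"].hex() for tx in block["transactions"]
            if tx["hash"].hex() not in seen_in_mempool]
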
2503.23514 | Siqi Fan | Siqi Fan, Xiusheng Huang, Yiqun Yao, Xuezhi Fang, Kang Liu, Peng Han,
Shuo Shang, Aixin Sun, Yequan Wang | If an LLM Were a Character, Would It Know Its Own Story? Evaluating
Lifelong Learning in LLMs | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) can carry out human-like dialogue, but unlike
humans, they are stateless due to the superposition property. However, during
multi-turn, multi-agent interactions, LLMs begin to exhibit consistent,
character-like behaviors, hinting at a form of emergent lifelong learning.
Despite this, existing benchmarks often fail to capture these dynamics,
primarily focusing on static, open-ended evaluations. To address this gap, we
introduce LIFESTATE-BENCH, a benchmark designed to assess lifelong learning in
LLMs. It features two episodic datasets: Hamlet and a synthetic script
collection, rich in narrative structure and character interactions. Our fact
checking evaluation probes models' self-awareness, episodic memory retrieval,
and relationship tracking, across both parametric and non-parametric
approaches. In experiments on models like Llama3.1-8B, GPT-4-turbo, and DeepSeek
R1, we demonstrate that nonparametric methods significantly outperform
parametric ones in managing stateful learning. However, all models exhibit
challenges with catastrophic forgetting as interactions extend, highlighting
the need for further advancements in lifelong learning.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 16:50:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fan",
"Siqi",
""
],
[
"Huang",
"Xiusheng",
""
],
[
"Yao",
"Yiqun",
""
],
[
"Fang",
"Xuezhi",
""
],
[
"Liu",
"Kang",
""
],
[
"Han",
"Peng",
""
],
[
"Shang",
"Shuo",
""
],
[
"Sun",
"Aixin",
""
],
[
"Wang",
"Yequan",
""
]
] | TITLE: If an LLM Were a Character, Would It Know Its Own Story? Evaluating
Lifelong Learning in LLMs
ABSTRACT: Large language models (LLMs) can carry out human-like dialogue, but unlike
humans, they are stateless due to the superposition property. However, during
multi-turn, multi-agent interactions, LLMs begin to exhibit consistent,
character-like behaviors, hinting at a form of emergent lifelong learning.
Despite this, existing benchmarks often fail to capture these dynamics,
primarily focusing on static, open-ended evaluations. To address this gap, we
introduce LIFESTATE-BENCH, a benchmark designed to assess lifelong learning in
LLMs. It features two episodic datasets: Hamlet and a synthetic script
collection, rich in narrative structure and character interactions. Our fact
checking evaluation probes models' self-awareness, episodic memory retrieval,
and relationship tracking, across both parametric and non-parametric
approaches. In experiments on models like Llama3.1-8B, GPT-4-turbo, and DeepSeek
R1, we demonstrate that nonparametric methods significantly outperform
parametric ones in managing stateful learning. However, all models exhibit
challenges with catastrophic forgetting as interactions extend, highlighting
the need for further advancements in lifelong learning.
|
2503.23519 | Haruya Ishikawa | Haruya Ishikawa and Yoshimitsu Aoki | BoundMatch: Boundary detection applied to semi-supervised segmentation
for urban-driving scenes | 15 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised semantic segmentation (SS-SS) aims to mitigate the heavy
annotation burden of dense pixel labeling by leveraging abundant unlabeled
images alongside a small labeled set. While current teacher-student consistency
regularization methods achieve strong results, they often overlook a critical
challenge: the precise delineation of object boundaries. In this paper, we
propose BoundMatch, a novel multi-task SS-SS framework that explicitly
integrates semantic boundary detection into the consistency regularization
pipeline. Our core mechanism, Boundary Consistency Regularized Multi-Task
Learning (BCRM), enforces prediction agreement between teacher and student
models on both segmentation masks and detailed semantic boundaries. To further
enhance performance and sharpen contours, BoundMatch incorporates two
lightweight fusion modules: Boundary-Semantic Fusion (BSF) injects learned
boundary cues into the segmentation decoder, while Spatial Gradient Fusion
(SGF) refines boundary predictions using mask gradients, leading to
higher-quality boundary pseudo-labels. This framework is built upon SAMTH, a
strong teacher-student baseline featuring a Harmonious Batch Normalization
(HBN) update strategy for improved stability. Extensive experiments on diverse
datasets including Cityscapes, BDD100K, SYNTHIA, ADE20K, and Pascal VOC show
that BoundMatch achieves competitive performance against state-of-the-art
methods while significantly improving boundary-specific evaluation metrics. We
also demonstrate its effectiveness in realistic large-scale unlabeled data
scenarios and on lightweight architectures designed for mobile deployment.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 17:02:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ishikawa",
"Haruya",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] | TITLE: BoundMatch: Boundary detection applied to semi-supervised segmentation
for urban-driving scenes
ABSTRACT: Semi-supervised semantic segmentation (SS-SS) aims to mitigate the heavy
annotation burden of dense pixel labeling by leveraging abundant unlabeled
images alongside a small labeled set. While current teacher-student consistency
regularization methods achieve strong results, they often overlook a critical
challenge: the precise delineation of object boundaries. In this paper, we
propose BoundMatch, a novel multi-task SS-SS framework that explicitly
integrates semantic boundary detection into the consistency regularization
pipeline. Our core mechanism, Boundary Consistency Regularized Multi-Task
Learning (BCRM), enforces prediction agreement between teacher and student
models on both segmentation masks and detailed semantic boundaries. To further
enhance performance and sharpen contours, BoundMatch incorporates two
lightweight fusion modules: Boundary-Semantic Fusion (BSF) injects learned
boundary cues into the segmentation decoder, while Spatial Gradient Fusion
(SGF) refines boundary predictions using mask gradients, leading to
higher-quality boundary pseudo-labels. This framework is built upon SAMTH, a
strong teacher-student baseline featuring a Harmonious Batch Normalization
(HBN) update strategy for improved stability. Extensive experiments on diverse
datasets including Cityscapes, BDD100K, SYNTHIA, ADE20K, and Pascal VOC show
that BoundMatch achieves competitive performance against state-of-the-art
methods while significantly improving boundary-specific evaluation metrics. We
also demonstrate its effectiveness in realistic large-scale unlabeled data
scenarios and on lightweight architectures designed for mobile deployment.
|
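
A rough sketch of the two consistency terms the record describes, agreement on segmentation masks and on boundaries between teacher and student, is given below; boundaries are approximated from the mask probabilities with a spatial-gradient magnitude, and the loss form and weighting are assumptions rather than BoundMatch's exact objective.

import torch
import torch.nn.functional as F

def soft_boundaries(prob):                       # prob: (B, C, H, W) class probabilities
    dx = (prob[..., :, 1:] - prob[..., :, :-1]).abs()
    dy = (prob[..., 1:, :] - prob[..., :-1, :]).abs()
    return F.pad(dx, (0, 1)).sum(1) + F.pad(dy, (0, 0, 0, 1)).sum(1)   # (B, H, W)

def consistency_loss(student_logits, teacher_logits, lam=1.0):
    p_s = F.softmax(student_logits, dim=1)
    p_t = F.softmax(teacher_logits, dim=1).detach()      # teacher is not updated here
    mask_term = F.mse_loss(p_s, p_t)                     # mask-level agreement
    boundary_term = F.mse_loss(soft_boundaries(p_s), soft_boundaries(p_t))
    return mask_term + lam * boundary_term
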
2503.23523 | Haochen Liu | Haochen Liu, Song Wang, Chen Chen, Jundong Li | Question-Aware Knowledge Graph Prompting for Enhancing Large Language
Models | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) often struggle with tasks requiring external
knowledge, such as knowledge-intensive Multiple Choice Question Answering
(MCQA). Integrating Knowledge Graphs (KGs) can enhance reasoning; however,
existing methods typically demand costly fine-tuning or retrieve noisy KG
information. Recent approaches leverage Graph Neural Networks (GNNs) to
generate KG-based input embedding prefixes as soft prompts for LLMs but fail to
account for question relevance, resulting in noisy prompts. Moreover, in MCQA
tasks, the absence of relevant KG knowledge for certain answer options remains
a significant challenge. To address these issues, we propose Question-Aware
Knowledge Graph Prompting (QAP), which incorporates question embeddings into
GNN aggregation to dynamically assess KG relevance. QAP employs global
attention to capture inter-option relationships, enriching soft prompts with
inferred knowledge. Experimental results demonstrate that QAP outperforms
state-of-the-art methods across multiple datasets, highlighting its
effectiveness.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 17:09:11 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Haochen",
""
],
[
"Wang",
"Song",
""
],
[
"Chen",
"Chen",
""
],
[
"Li",
"Jundong",
""
]
] | TITLE: Question-Aware Knowledge Graph Prompting for Enhancing Large Language
Models
ABSTRACT: Large Language Models (LLMs) often struggle with tasks requiring external
knowledge, such as knowledge-intensive Multiple Choice Question Answering
(MCQA). Integrating Knowledge Graphs (KGs) can enhance reasoning; however,
existing methods typically demand costly fine-tuning or retrieve noisy KG
information. Recent approaches leverage Graph Neural Networks (GNNs) to
generate KG-based input embedding prefixes as soft prompts for LLMs but fail to
account for question relevance, resulting in noisy prompts. Moreover, in MCQA
tasks, the absence of relevant KG knowledge for certain answer options remains
a significant challenge. To address these issues, we propose Question-Aware
Knowledge Graph Prompting (QAP), which incorporates question embeddings into
GNN aggregation to dynamically assess KG relevance. QAP employs global
attention to capture inter-option relationships, enriching soft prompts with
inferred knowledge. Experimental results demonstrate that QAP outperforms
state-of-the-art methods across multiple datasets, highlighting its
effectiveness.
|
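
The core mechanism above, conditioning KG aggregation on the question, can be boiled down to attention scores computed jointly from node and question embeddings, as in the illustrative module below; the dimensions and scoring form are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class QuestionAwareAggregation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
    def forward(self, node_feats, question_emb):
        # node_feats: (N, d) KG node embeddings; question_emb: (d,)
        q = question_emb.unsqueeze(0).expand(node_feats.size(0), -1)
        scores = self.score(torch.cat([node_feats, q], dim=-1)).squeeze(-1)
        attn = torch.softmax(scores, dim=0)                   # question-conditioned relevance
        return (attn.unsqueeze(-1) * node_feats).sum(dim=0)   # weighted KG summary

agg = QuestionAwareAggregation(dim=64)
print(agg(torch.randn(10, 64), torch.randn(64)).shape)        # torch.Size([64])
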
2503.23529 | Shuhei Tarashima | Shuhei Tarashima, Xinqi Shu, Norio Tagawa | ViLAaD: Enhancing "Attracting and Dispersing'' Source-Free Domain
Adaptation with Vision-and-Language Model | 15 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model
to a target dataset from a different domain without access to the source data.
Conventional SFDA methods are limited by the information encoded in the
pre-trained source model and the unlabeled target data. Recently, approaches
leveraging auxiliary resources have emerged, yet remain in their early stages,
offering ample opportunities for research. In this work, we propose a novel
method that incorporates auxiliary information by extending an existing SFDA
framework using Vision-and-Language (ViL) models. Specifically, we build upon
Attracting and Dispersing (AaD), a widely adopted SFDA technique, and
generalize its core principle to naturally integrate ViL models as a powerful
initialization for target adaptation. Our approach, called ViL-enhanced AaD
(ViLAaD), preserves the simplicity and flexibility of the AaD framework, while
leveraging ViL models to significantly boost adaptation performance. We
validate our method through experiments using various ViL models, demonstrating
that ViLAaD consistently outperforms both AaD and zero-shot classification by
ViL models, especially when both the source model and ViL model provide strong
initializations. Moreover, the flexibility of ViLAaD allows it to be seamlessly
incorporated into an alternating optimization framework with ViL prompt tuning
and extended with additional objectives for target model adaptation. Extensive
experiments on four SFDA benchmarks show that this enhanced version, ViLAaD++,
achieves state-of-the-art performance across multiple SFDA scenarios, including
Closed-set SFDA, Partial-set SFDA, and Open-set SFDA.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 17:22:55 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tarashima",
"Shuhei",
""
],
[
"Shu",
"Xinqi",
""
],
[
"Tagawa",
"Norio",
""
]
] | TITLE: ViLAaD: Enhancing "Attracting and Dispersing'' Source-Free Domain
Adaptation with Vision-and-Language Model
ABSTRACT: Source-Free Domain Adaptation (SFDA) aims to adapt a pre-trained source model
to a target dataset from a different domain without access to the source data.
Conventional SFDA methods are limited by the information encoded in the
pre-trained source model and the unlabeled target data. Recently, approaches
leveraging auxiliary resources have emerged, yet remain in their early stages,
offering ample opportunities for research. In this work, we propose a novel
method that incorporates auxiliary information by extending an existing SFDA
framework using Vision-and-Language (ViL) models. Specifically, we build upon
Attracting and Dispersing (AaD), a widely adopted SFDA technique, and
generalize its core principle to naturally integrate ViL models as a powerful
initialization for target adaptation. Our approach, called ViL-enhanced AaD
(ViLAaD), preserves the simplicity and flexibility of the AaD framework, while
leveraging ViL models to significantly boost adaptation performance. We
validate our method through experiments using various ViL models, demonstrating
that ViLAaD consistently outperforms both AaD and zero-shot classification by
ViL models, especially when both the source model and ViL model provide strong
initializations. Moreover, the flexibility of ViLAaD allows it to be seamlessly
incorporated into an alternating optimization framework with ViL prompt tuning
and extended with additional objectives for target model adaptation. Extensive
experiments on four SFDA benchmarks show that this enhanced version, ViLAaD++,
achieves state-of-the-art performance across multiple SFDA scenarios, including
Closed-set SFDA, Partial-set SFDA, and Open-set SFDA.
|
2503.23537 | Xiaoyang Li | Hanyu Liu, Xiaoyang Li, Yixuan Jiang, Haotian Tang, Dongchen Wu,
Yameng Guo | Redundant feature screening method for human activity recognition based
on attention purification mechanism | 12 pages,7 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of sensor-based Human Activity Recognition (HAR), deep neural
networks provide advanced technical support. Many studies have proven that
recognition accuracy can be improved by increasing the depth or width of the
network. However, for wearable devices, the balance between network performance
and resource consumption is crucial. With minimum resource consumption as the
basic principle, we propose a universal attention feature purification
mechanism, called MSAP, which is suitable for multi-scale networks. The
mechanism effectively solves the feature redundancy caused by the superposition
of multi-scale features by means of inter-scale attention screening and
connection method. In addition, we have designed a network correction module
that integrates seamlessly between layers of individual network modules to
mitigate inherent problems in deep networks. We also built an embedded
deployment system that is in line with the current level of wearable technology
to test the practical feasibility of the HAR model, and further prove the
efficiency of the method. Extensive experiments on four public datasets show
that the proposed model effectively reduces redundant features in
filtered data and provides excellent performance with little resource
consumption.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 17:44:12 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Liu",
"Hanyu",
""
],
[
"Li",
"Xiaoyang",
""
],
[
"Jiang",
"Yixuan",
""
],
[
"Tang",
"Haotian",
""
],
[
"Wu",
"Dongchen",
""
],
[
"Guo",
"Yameng",
""
]
] | TITLE: Redundant feature screening method for human activity recognition based
on attention purification mechanism
ABSTRACT: In the field of sensor-based Human Activity Recognition (HAR), deep neural
networks provide advanced technical support. Many studies have proven that
recognition accuracy can be improved by increasing the depth or width of the
network. However, for wearable devices, the balance between network performance
and resource consumption is crucial. With minimum resource consumption as the
basic principle, we propose a universal attention feature purification
mechanism, called MSAP, which is suitable for multi-scale networks. The
mechanism effectively solves the feature redundancy caused by the superposition
of multi-scale features by means of inter-scale attention screening and
connection method. In addition, we have designed a network correction module
that integrates seamlessly between layers of individual network modules to
mitigate inherent problems in deep networks. We also built an embedded
deployment system that is in line with the current level of wearable technology
to test the practical feasibility of the HAR model, and further prove the
efficiency of the method. Extensive experiments on four public datasets show
that the proposed model effectively reduces redundant features in
filtered data and provides excellent performance with little resource
consumption.
|
2503.23542 | Xabier De Zuazo | Xabier de Zuazo, Eva Navas, Ibon Saratxaga and Inma Hern\'aez Rioja | Whisper-LM: Improving ASR Models with Language Models for Low-Resource
Languages | 26 pages, 6 figures, includes supplementary materials. Will be
submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Automatic speech recognition systems have undoubtedly advanced with the
integration of multilingual and multitask models such as Whisper, which have
shown a promising ability to understand and process speech across a wide range
of languages. Despite their robustness, these models often fall short in
handling the linguistic distinctions of minority languages. This study
addresses this gap by integrating traditional and novel language models with
fine-tuned Whisper models to raise their performance in less commonly studied
languages. Through rigorous fine-tuning and evaluation across multiple
datasets, we demonstrate substantial improvements in word error rate,
particularly in low-resource scenarios. Our approach not only takes
advantage of the extensive data Whisper was pre-trained on, but also
complements its linguistic adaptability by incorporating language models. We
obtained improvements up to 51\% for in-distribution datasets and up to 34\%
for out-of-distribution sentences using statistical language models, while
large language models provided moderate but consistently robust improvement
across diverse linguistic contexts. The findings reveal that, while the
integration reliably benefits all model sizes, the extent of improvement
varies, highlighting the importance of optimized language model parameters.
Finally, we emphasize the importance of selecting appropriate evaluation
parameters when reporting the results using transformer-based ASR models. In
summary, this research clears the way for more inclusive ASR technologies that
perform better across languages by enriching their linguistic knowledge. For
further implementation details of this study, the technical documentation and
source code are available at http://www.github.com/hitz-zentroa/whisper-lm.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 18:03:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"de Zuazo",
"Xabier",
""
],
[
"Navas",
"Eva",
""
],
[
"Saratxaga",
"Ibon",
""
],
[
"Rioja",
"Inma Hernáez",
""
]
] | TITLE: Whisper-LM: Improving ASR Models with Language Models for Low-Resource
Languages
ABSTRACT: Automatic speech recognition systems have undoubtedly advanced with the
integration of multilingual and multitask models such as Whisper, which have
shown a promising ability to understand and process speech across a wide range
of languages. Despite their robustness, these models often fall short in
handling the linguistic distinctions of minority languages. This study
addresses this gap by integrating traditional and novel language models with
fine-tuned Whisper models to raise their performance in less commonly studied
languages. Through rigorous fine-tuning and evaluation across multiple
datasets, we demonstrate substantial improvements in word error rate,
particularly in low-resource scenarios. Our approach not only takes
advantage of the extensive data Whisper was pre-trained on, but also
complements its linguistic adaptability by incorporating language models. We
obtained improvements up to 51\% for in-distribution datasets and up to 34\%
for out-of-distribution sentences using statistical language models, while
large language models provided moderate but consistently robust improvement
across diverse linguistic contexts. The findings reveal that, while the
integration reliably benefits all model sizes, the extent of improvement
varies, highlighting the importance of optimized language model parameters.
Finally, we emphasize the importance of selecting appropriate evaluation
parameters when reporting the results using transformer-based ASR models. In
summary, this research clears the way for more inclusive ASR technologies that
perform better across languages by enriching their linguistic knowledge. For
further implementation details of this study, the technical documentation and
source code are available at http://www.github.com/hitz-zentroa/whisper-lm.
|
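The abstract above does not include implementation details. As a minimal sketch of how a fine-tuned Whisper model could be combined with a statistical n-gram language model through n-best rescoring (not the authors' code; the model names, LM file, and interpolation weight below are assumptions), one might write:

# Illustrative sketch: rescoring Whisper n-best hypotheses with a KenLM n-gram LM.
# Model names, the LM file, and lm_weight are placeholders, not the paper's setup.
import kenlm
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
lm = kenlm.Model("lm.arpa")   # hypothetical n-gram language model file
lm_weight = 0.5               # assumed interpolation weight

def transcribe_with_lm(audio, sampling_rate=16000, num_beams=5):
    inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")
    out = model.generate(
        inputs.input_features,
        num_beams=num_beams,
        num_return_sequences=num_beams,
        output_scores=True,
        return_dict_in_generate=True,
    )
    texts = processor.batch_decode(out.sequences, skip_special_tokens=True)
    # Combine the beam-search score with the LM log-probability and keep the best hypothesis.
    scored = [
        (asr.item() + lm_weight * lm.score(text, bos=True, eos=True), text)
        for asr, text in zip(out.sequences_scores, texts)
    ]
    return max(scored)[1]

The same rescoring loop could swap the n-gram scorer for a large language model; only the scoring call changes.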
2503.23550 | Alexis Molina | Manel Gil-Sorribes, Alexis Molina | Addressing Model Overcomplexity in Drug-Drug Interaction Prediction With
Molecular Fingerprints | Accepted to the GEM Workshop at ICLR 2025 | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurately predicting drug-drug interactions (DDIs) is crucial for
pharmaceutical research and clinical safety. Recent deep learning models often
suffer from high computational costs and limited generalization across
datasets. In this study, we investigate a simpler yet effective approach using
molecular representations such as Morgan fingerprints (MFPS), graph-based
embeddings from graph convolutional networks (GCNs), and transformer-derived
embeddings from MoLFormer integrated into a straightforward neural network. We
benchmark our implementation on DrugBank DDI splits and a drug-drug affinity
(DDA) dataset from the Food and Drug Administration. MFPS along with MoLFormer
and GCN representations achieve competitive performance across tasks, even in
the more challenging leak-proof split, highlighting the sufficiency of simple
molecular representations. Moreover, we are able to identify key molecular
motifs and structural patterns relevant to drug interactions via gradient-based
analyses using the representations under study. Despite these results, dataset
limitations such as insufficient chemical diversity, limited dataset size, and
inconsistent labeling impact robust evaluation and challenge the need for more
complex approaches. Our work provides a meaningful baseline and emphasizes the
need for better dataset curation and progressive complexity scaling.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 18:27:01 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Gil-Sorribes",
"Manel",
""
],
[
"Molina",
"Alexis",
""
]
] | TITLE: Addressing Model Overcomplexity in Drug-Drug Interaction Prediction With
Molecular Fingerprints
ABSTRACT: Accurately predicting drug-drug interactions (DDIs) is crucial for
pharmaceutical research and clinical safety. Recent deep learning models often
suffer from high computational costs and limited generalization across
datasets. In this study, we investigate a simpler yet effective approach using
molecular representations such as Morgan fingerprints (MFPS), graph-based
embeddings from graph convolutional networks (GCNs), and transformer-derived
embeddings from MoLFormer integrated into a straightforward neural network. We
benchmark our implementation on DrugBank DDI splits and a drug-drug affinity
(DDA) dataset from the Food and Drug Administration. MFPS along with MoLFormer
and GCN representations achieve competitive performance across tasks, even in
the more challenging leak-proof split, highlighting the sufficiency of simple
molecular representations. Moreover, we are able to identify key molecular
motifs and structural patterns relevant to drug interactions via gradient-based
analyses using the representations under study. Despite these results, dataset
limitations such as insufficient chemical diversity, limited dataset size, and
inconsistent labeling impact robust evaluation and challenge the need for more
complex approaches. Our work provides a meaningful baseline and emphasizes the
need for better dataset curation and progressive complexity scaling.
|
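As a rough illustration of the kind of simple fingerprint baseline described above (not the authors' exact architecture; the radius, bit size, and hidden-layer width are assumptions), a drug pair can be encoded by concatenating two Morgan fingerprints and fed to a small classifier:

# Sketch of a Morgan-fingerprint DDI baseline; hyperparameters are assumed, not the paper's.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPClassifier

def morgan_fp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(fp, arr)   # bit vector -> numpy array
    return arr

def pair_features(smiles_a, smiles_b):
    # Concatenate the two drug fingerprints into a single feature vector.
    return np.concatenate([morgan_fp(smiles_a), morgan_fp(smiles_b)])

# Toy example: two (drug_a, drug_b) SMILES pairs with binary interaction labels.
pairs = [("CCO", "CC(=O)Oc1ccccc1C(=O)O"), ("CCN", "c1ccccc1")]
labels = [1, 0]
X = np.stack([pair_features(a, b) for a, b in pairs])
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300).fit(X, labels)
print(clf.predict(X))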
2503.23571 | Shutong Jin | Shutong Jin, Axel Kaliff, Ruiyu Wang, Muhammad Zahid and Florian T.
Pokorny | Can Visuo-motor Policies Benefit from Random Exploration Data? A Case
Study on Stacking | This work has been submitted to the IEEE for possible publication | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human demonstrations have been key to recent advancements in robotic
manipulation, but their scalability is hampered by the substantial cost of the
required human labor. In this paper, we focus on random exploration data (video
sequences and actions produced autonomously via motions to randomly sampled
positions in the workspace) as an often overlooked resource for training
visuo-motor policies in robotic manipulation. Within the scope of imitation
learning, we examine random exploration data through two paradigms: (a) by
investigating the use of random exploration video frames with three
self-supervised learning objectives (reconstruction, contrastive, and
distillation losses) and evaluating their applicability to visual pre-training;
and (b) by analyzing random motor commands in the context of a staged learning
framework to assess their effectiveness in autonomous data collection. Towards
this goal, we present a large-scale experimental study based on over 750 hours
of robot data collection, comprising 400 successful and 12,000 failed episodes.
Our results indicate that: (a) among the three self-supervised learning
objectives, contrastive loss appears most effective for visual pre-training
while leveraging random exploration video frames; (b) data collected with
random motor commands may play a crucial role in balancing the training data
distribution and improving success rates in autonomous data collection within
this study. The source code and dataset will be made publicly available at
https://cloudgripper.org.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 19:36:29 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Jin",
"Shutong",
""
],
[
"Kaliff",
"Axel",
""
],
[
"Wang",
"Ruiyu",
""
],
[
"Zahid",
"Muhammad",
""
],
[
"Pokorny",
"Florian T.",
""
]
] | TITLE: Can Visuo-motor Policies Benefit from Random Exploration Data? A Case
Study on Stacking
ABSTRACT: Human demonstrations have been key to recent advancements in robotic
manipulation, but their scalability is hampered by the substantial cost of the
required human labor. In this paper, we focus on random exploration data (video
sequences and actions produced autonomously via motions to randomly sampled
positions in the workspace) as an often overlooked resource for training
visuo-motor policies in robotic manipulation. Within the scope of imitation
learning, we examine random exploration data through two paradigms: (a) by
investigating the use of random exploration video frames with three
self-supervised learning objectives (reconstruction, contrastive, and
distillation losses) and evaluating their applicability to visual pre-training;
and (b) by analyzing random motor commands in the context of a staged learning
framework to assess their effectiveness in autonomous data collection. Towards
this goal, we present a large-scale experimental study based on over 750 hours
of robot data collection, comprising 400 successful and 12,000 failed episodes.
Our results indicate that: (a) among the three self-supervised learning
objectives, contrastive loss appears most effective for visual pre-training
while leveraging random exploration video frames; (b) data collected with
random motor commands may play a crucial role in balancing the training data
distribution and improving success rates in autonomous data collection within
this study. The source code and dataset will be made publicly available at
https://cloudgripper.org.
|
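Of the three self-supervised objectives compared above, the contrastive option is typically an InfoNCE-style loss over two augmented views of the same frame. The following is a generic sketch of that loss (not the paper's implementation; the batch size, embedding size, and temperature are assumptions):

# Minimal InfoNCE / NT-Xent-style contrastive loss over two views of the same frames.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # z1, z2: (N, D) embeddings of two augmented views of the same N video frames.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (N, N) similarity logits
    targets = torch.arange(z1.size(0))          # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with a hypothetical encoder that outputs 128-d embeddings for a batch of 32 frames:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce(z1, z2)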
2503.23573 | Yannic Neuhaus | Maximilian Augustin, Yannic Neuhaus, Matthias Hein | DASH: Detection and Assessment of Systematic Hallucinations of VLMs | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language models (VLMs) are prone to object hallucinations, where they
erroneously indicate the presence of certain objects in an image. Existing
benchmarks quantify hallucinations using relatively small, labeled datasets.
However, this approach is i) insufficient to assess hallucinations that arise
in open-world settings, where VLMs are widely used, and ii) inadequate for
detecting systematic errors in VLMs. We propose DASH (Detection and Assessment
of Systematic Hallucinations), an automatic, large-scale pipeline designed to
identify systematic hallucinations of VLMs on real-world images in an
open-world setting. A key component is DASH-OPT for image-based retrieval,
where we optimize over the ''natural image manifold'' to generate images that
mislead the VLM. The output of DASH consists of clusters of real and
semantically similar images for which the VLM hallucinates an object. We apply
DASH to PaliGemma and two LLaVA-NeXT models across 380 object classes and, in
total, find more than 19k clusters with 950k images. We study the transfer of
the identified systematic hallucinations to other VLMs and show that
fine-tuning PaliGemma with the model-specific images obtained with DASH
mitigates object hallucinations. Code and data are available at
https://YanNeu.github.io/DASH.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 19:45:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Augustin",
"Maximilian",
""
],
[
"Neuhaus",
"Yannic",
""
],
[
"Hein",
"Matthias",
""
]
] | TITLE: DASH: Detection and Assessment of Systematic Hallucinations of VLMs
ABSTRACT: Vision-language models (VLMs) are prone to object hallucinations, where they
erroneously indicate the presence of certain objects in an image. Existing
benchmarks quantify hallucinations using relatively small, labeled datasets.
However, this approach is i) insufficient to assess hallucinations that arise
in open-world settings, where VLMs are widely used, and ii) inadequate for
detecting systematic errors in VLMs. We propose DASH (Detection and Assessment
of Systematic Hallucinations), an automatic, large-scale pipeline designed to
identify systematic hallucinations of VLMs on real-world images in an
open-world setting. A key component is DASH-OPT for image-based retrieval,
where we optimize over the ''natural image manifold'' to generate images that
mislead the VLM. The output of DASH consists of clusters of real and
semantically similar images for which the VLM hallucinates an object. We apply
DASH to PaliGemma and two LLaVA-NeXT models across 380 object classes and, in
total, find more than 19k clusters with 950k images. We study the transfer of
the identified systematic hallucinations to other VLMs and show that
fine-tuning PaliGemma with the model-specific images obtained with DASH
mitigates object hallucinations. Code and data are available at
https://YanNeu.github.io/DASH.
|
2503.23577 | Cameron Fiore | Cameron Fiore, Hongyi Fan, Benjamin Kimia | Multiview Image-Based Localization | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The image retrieval (IR) approach to image localization has distinct
advantages over the 3D and the deep learning (DNN) approaches: it is
scene-agnostic, simpler to implement and use, has no privacy issues, and is
computationally efficient. The main drawback of this approach is relatively
poor localization in both position and orientation of the query camera when
compared to the competing approaches. This paper presents a hybrid approach
that stores only image features in the database like some IR methods, but
relies on a latent 3D reconstruction, like 3D methods but without retaining a
3D scene reconstruction. The approach is based on two ideas: {\em (i)} a novel
proposal where query camera center estimation relies only on relative
translation estimates but not relative rotation estimates through a decoupling
of the two, and {\em (ii)} a shift from computing optimal pose from estimated
relative pose to computing optimal pose from multiview correspondences, thus
cutting out the ``middle-man''. Our approach shows improved performance on the
7-Scenes and Cambridge Landmarks datasets while also improving on timing and
memory footprint as compared to state-of-the-art.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 20:00:31 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Fiore",
"Cameron",
""
],
[
"Fan",
"Hongyi",
""
],
[
"Kimia",
"Benjamin",
""
]
] | TITLE: Multiview Image-Based Localization
ABSTRACT: The image retrieval (IR) approach to image localization has distinct
advantages over the 3D and the deep learning (DNN) approaches: it is
scene-agnostic, simpler to implement and use, has no privacy issues, and is
computationally efficient. The main drawback of this approach is relatively
poor localization in both position and orientation of the query camera when
compared to the competing approaches. This paper presents a hybrid approach
that stores only image features in the database like some IR methods, but
relies on a latent 3D reconstruction, like 3D methods but without retaining a
3D scene reconstruction. The approach is based on two ideas: {\em (i)} a novel
proposal where query camera center estimation relies only on relative
translation estimates but not relative rotation estimates through a decoupling
of the two, and {\em (ii)} a shift from computing optimal pose from estimated
relative pose to computing optimal pose from multiview correspondences, thus
cutting out the ``middle-man''. Our approach shows improved performance on the
7-Scenes and Cambridge Landmarks datasets while also improving on timing and
memory footprint as compared to state-of-the-art.
|
2503.23587 | Vladim\'ir Petr\'ik | Martin Malenick\'y, Martin C\'ifka, M\'ed\'eric Fourmy, Louis Montaut,
Justin Carpentier, Josef Sivic, Vladimir Petrik | PhysPose: Refining 6D Object Poses with Physical Constraints | Project page: https://data.ciirc.cvut.cz/public/projects/2025PhysPose | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate 6D object pose estimation from images is a key problem in
object-centric scene understanding, enabling applications in robotics,
augmented reality, and scene reconstruction. Despite recent advances, existing
methods often produce physically inconsistent pose estimates, hindering their
deployment in real-world scenarios. We introduce PhysPose, a novel approach
that integrates physical reasoning into pose estimation through a
postprocessing optimization enforcing non-penetration and gravitational
constraints. By leveraging scene geometry, PhysPose refines pose estimates to
ensure physical plausibility. Our approach achieves state-of-the-art accuracy
on the YCB-Video dataset from the BOP benchmark and improves over the
state-of-the-art pose estimation methods on the HOPE-Video dataset.
Furthermore, we demonstrate its impact in robotics by significantly improving
success rates in a challenging pick-and-place task, highlighting the importance
of physical consistency in real-world applications.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 20:52:17 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Malenický",
"Martin",
""
],
[
"Cífka",
"Martin",
""
],
[
"Fourmy",
"Médéric",
""
],
[
"Montaut",
"Louis",
""
],
[
"Carpentier",
"Justin",
""
],
[
"Sivic",
"Josef",
""
],
[
"Petrik",
"Vladimir",
""
]
] | TITLE: PhysPose: Refining 6D Object Poses with Physical Constraints
ABSTRACT: Accurate 6D object pose estimation from images is a key problem in
object-centric scene understanding, enabling applications in robotics,
augmented reality, and scene reconstruction. Despite recent advances, existing
methods often produce physically inconsistent pose estimates, hindering their
deployment in real-world scenarios. We introduce PhysPose, a novel approach
that integrates physical reasoning into pose estimation through a
postprocessing optimization enforcing non-penetration and gravitational
constraints. By leveraging scene geometry, PhysPose refines pose estimates to
ensure physical plausibility. Our approach achieves state-of-the-art accuracy
on the YCB-Video dataset from the BOP benchmark and improves over the
state-of-the-art pose estimation methods on the HOPE-Video dataset.
Furthermore, we demonstrate its impact in robotics by significantly improving
success rates in a challenging pick-and-place task, highlighting the importance
of physical consistency in real-world applications.
|
2503.23598 | Kalliopi Basioti | Kalliopi Basioti, Pritish Sahu, Qingze Tony Liu, Zihao Xu, Hao Wang,
Vladimir Pavlovic | GenVP: Generating Visual Puzzles with Contrastive Hierarchical VAEs | Accepted to ICLR 2025 | null | null | null | cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Raven's Progressive Matrices (RPMs) is an established benchmark to examine
the ability to perform high-level abstract visual reasoning (AVR). Despite the
current success of algorithms that solve this task, humans can generalize
beyond a given puzzle and create new puzzles given a set of rules, whereas
machines remain locked in solving a fixed puzzle from a curated choice list. We
propose Generative Visual Puzzles (GenVP), a framework to model the entire RPM
generation process, a substantially more challenging task. Our model's
capability spans from generating multiple solutions for one specific problem
prompt to creating complete new puzzles out of the desired set of rules.
Experiments on five different datasets indicate that GenVP achieves
state-of-the-art (SOTA) performance both in puzzle-solving accuracy and
out-of-distribution (OOD) generalization in 22 OOD scenarios. Compared to SOTA
generative approaches, which struggle to solve RPMs when the feasible solution
space increases, GenVP efficiently generalizes to these challenging setups.
Moreover, our model demonstrates the ability to produce a wide range of
complete RPMs given a set of abstract rules by effectively capturing the
relationships between abstract rules and visual object properties.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 21:35:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Basioti",
"Kalliopi",
""
],
[
"Sahu",
"Pritish",
""
],
[
"Liu",
"Qingze Tony",
""
],
[
"Xu",
"Zihao",
""
],
[
"Wang",
"Hao",
""
],
[
"Pavlovic",
"Vladimir",
""
]
] | TITLE: GenVP: Generating Visual Puzzles with Contrastive Hierarchical VAEs
ABSTRACT: Raven's Progressive Matrices (RPMs) is an established benchmark to examine
the ability to perform high-level abstract visual reasoning (AVR). Despite the
current success of algorithms that solve this task, humans can generalize
beyond a given puzzle and create new puzzles given a set of rules, whereas
machines remain locked in solving a fixed puzzle from a curated choice list. We
propose Generative Visual Puzzles (GenVP), a framework to model the entire RPM
generation process, a substantially more challenging task. Our model's
capability spans from generating multiple solutions for one specific problem
prompt to creating complete new puzzles out of the desired set of rules.
Experiments on five different datasets indicate that GenVP achieves
state-of-the-art (SOTA) performance both in puzzle-solving accuracy and
out-of-distribution (OOD) generalization in 22 OOD scenarios. Compared to SOTA
generative approaches, which struggle to solve RPMs when the feasible solution
space increases, GenVP efficiently generalizes to these challenging setups.
Moreover, our model demonstrates the ability to produce a wide range of
complete RPMs given a set of abstract rules by effectively capturing the
relationships between abstract rules and visual object properties.
|
2503.23602 | Emanuela Merelli | Marco Caputo, Michele Russo, Emanuela Merelli | Space of Data through the Lens of Multilevel Graph | 18 pages, 11 figures, ITADATA 2024 conference | null | null | ITADATA/2024/17 | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work seeks to tackle the inherent complexity of dataspaces by
introducing a novel data structure that can represent datasets across multiple
levels of abstraction, ranging from local to global. We propose the concept of
a multilevel graph, which is equipped with two fundamental operations:
contraction and expansion of its topology. This multilevel graph is
specifically designed to fulfil the requirements for incremental abstraction
and flexibility, as outlined in existing definitions of dataspaces.
Furthermore, we provide a comprehensive suite of methods for manipulating this
graph structure, establishing a robust framework for data analysis. While its
effectiveness has been empirically validated for unstructured data, its
application to structured data is also inherently viable. Preliminary results
are presented through a real-world scenario based on a collection of dream
reports.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 21:54:07 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Caputo",
"Marco",
""
],
[
"Russo",
"Michele",
""
],
[
"Merelli",
"Emanuela",
""
]
] | TITLE: Space of Data through the Lens of Multilevel Graph
ABSTRACT: This work seeks to tackle the inherent complexity of dataspaces by
introducing a novel data structure that can represent datasets across multiple
levels of abstraction, ranging from local to global. We propose the concept of
a multilevel graph, which is equipped with two fundamental operations:
contraction and expansion of its topology. This multilevel graph is
specifically designed to fulfil the requirements for incremental abstraction
and flexibility, as outlined in existing definitions of dataspaces.
Furthermore, we provide a comprehensive suite of methods for manipulating this
graph structure, establishing a robust framework for data analysis. While its
effectiveness has been empirically validated for unstructured data, its
application to structured data is also inherently viable. Preliminary results
are presented through a real-world scenario based on a collection of dream
reports.
|
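To make the contraction/expansion pair concrete, the toy sketch below merges groups of nodes into super-nodes while keeping the mapping needed to expand them again; it illustrates the general idea only, not the paper's multilevel data structure:

# Toy contraction of node groups into super-nodes with networkx; the stored mapping
# is what a later "expansion" step would use to restore the finer level.
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])

def contract(graph, groups):
    coarse = graph.copy()
    mapping = {}
    for rep, *rest in groups:
        for node in rest:
            coarse = nx.contracted_nodes(coarse, rep, node, self_loops=False)
        mapping[rep] = [rep, *rest]
    return coarse, mapping

coarse, mapping = contract(G, [("a", "b"), ("c", "d")])
print(list(coarse.edges()), mapping)   # one edge left between the two super-nodes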
2503.23612 | Samuel Belkadi | Samuel Belkadi, Steve Hong, Marian Chen | Make Autoregressive Great Again: Diffusion-Free Graph Generation with
Next-Scale Prediction | Draft #1 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Autoregressive models are popular generative models due to their speed and
properties. However, they require an explicit sequence order, which contradicts
the unordered nature of graphs. In contrast, diffusion models maintain
permutation invariance and enable one-shot generation but require up to
thousands of denoising steps and additional features, leading to high
computational costs. Inspired by recent breakthroughs in image
generation, especially the success of visual autoregressive methods, we propose
MAG, a novel diffusion-free graph generation framework based on next-scale
prediction. By leveraging a hierarchy of latent representations, the model
progressively generates scales of the entire graph without the need for
explicit node ordering. Extensive experiments on both generic and molecular
graph datasets demonstrate that MAG delivers competitive performance compared
to state-of-the-art methods, achieving up to three orders of magnitude in
speedup during inference.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 22:30:34 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Belkadi",
"Samuel",
""
],
[
"Hong",
"Steve",
""
],
[
"Chen",
"Marian",
""
]
] | TITLE: Make Autoregressive Great Again: Diffusion-Free Graph Generation with
Next-Scale Prediction
ABSTRACT: Autoregressive models are popular generative models due to their speed and
properties. However, they require an explicit sequence order, which contradicts
the unordered nature of graphs. In contrast, diffusion models maintain
permutation invariance and enable one-shot generation but require up to
thousands of denoising steps and additional features, leading to high
computational costs. Inspired by recent breakthroughs in image
generation, especially the success of visual autoregressive methods, we propose
MAG, a novel diffusion-free graph generation framework based on next-scale
prediction. By leveraging a hierarchy of latent representations, the model
progressively generates scales of the entire graph without the need for
explicit node ordering. Extensive experiments on both generic and molecular
graph datasets demonstrate that MAG delivers competitive performance compared
to state-of-the-art methods, achieving up to three orders of magnitude in
speedup during inference.
|
2503.23617 | Nisal Ranasinghe | Nisal Ranasinghe, Damith Senanayake, Saman Halgamuge | Graph-Eq: Discovering Mathematical Equations using Graph Generative
Models | 8 pages, 4 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The ability to discover meaningful, accurate, and concise mathematical
equations that describe datasets is valuable across various domains. Equations
offer explicit relationships between variables, enabling deeper insights into
underlying data patterns. Most existing equation discovery methods rely on
genetic programming, which iteratively searches the equation space but is often
slow and prone to overfitting. By representing equations as directed acyclic
graphs, we leverage the use of graph neural networks to learn the underlying
semantics of equations, and generate new, previously unseen equations. Although
graph generative models have been shown to be successful in discovering new
types of graphs in many fields, their application in discovering equations
remains largely unexplored. In this work, we propose Graph-EQ, a deep graph
generative model designed for efficient equation discovery. Graph-EQ uses a
conditional variational autoencoder (CVAE) to learn a rich latent
representation of the equation space by training it on a large corpus of
equations in an unsupervised manner. Instead of directly searching the equation
space, we employ Bayesian optimization to efficiently explore this learned
latent space. We show that the encoder-decoder architecture of Graph-Eq is able
to accurately reconstruct input equations. Moreover, we show that the learned
latent representation can be sampled and decoded into valid equations,
including new and previously unseen equations in the training data. Finally, we
assess Graph-Eq's ability to discover equations that best fit a dataset by
exploring the latent space using Bayesian optimization. Latent space
exploration is done on 20 datasets with known ground-truth equations, and
Graph-Eq is shown to successfully discover the ground-truth equation in the
majority of datasets.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 22:47:57 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Ranasinghe",
"Nisal",
""
],
[
"Senanayake",
"Damith",
""
],
[
"Halgamuge",
"Saman",
""
]
] | TITLE: Graph-Eq: Discovering Mathematical Equations using Graph Generative
Models
ABSTRACT: The ability to discover meaningful, accurate, and concise mathematical
equations that describe datasets is valuable across various domains. Equations
offer explicit relationships between variables, enabling deeper insights into
underlying data patterns. Most existing equation discovery methods rely on
genetic programming, which iteratively searches the equation space but is often
slow and prone to overfitting. By representing equations as directed acyclic
graphs, we leverage the use of graph neural networks to learn the underlying
semantics of equations, and generate new, previously unseen equations. Although
graph generative models have been shown to be successful in discovering new
types of graphs in many fields, their application in discovering equations
remains largely unexplored. In this work, we propose Graph-EQ, a deep graph
generative model designed for efficient equation discovery. Graph-EQ uses a
conditional variational autoencoder (CVAE) to learn a rich latent
representation of the equation space by training it on a large corpus of
equations in an unsupervised manner. Instead of directly searching the equation
space, we employ Bayesian optimization to efficiently explore this learned
latent space. We show that the encoder-decoder architecture of Graph-Eq is able
to accurately reconstruct input equations. Moreover, we show that the learned
latent representation can be sampled and decoded into valid equations,
including new and previously unseen equations in the training data. Finally, we
assess Graph-Eq's ability to discover equations that best fit a dataset by
exploring the latent space using Bayesian optimization. Latent space
exploration is done on 20 datasets with known ground-truth equations, and
Graph-Eq is shown to successfully discover the ground-truth equation in the
majority of datasets.
|
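The latent-space search described above can be sketched as a Bayesian-optimization loop over a low-dimensional latent vector. In the toy sketch below, decode_to_equation is a hypothetical stand-in for the trained CVAE decoder, and the latent dimensionality and bounds are assumptions:

# Sketch of Bayesian optimization over a learned latent space (outer loop only).
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

x_data = np.linspace(-1, 1, 100)
y_data = 2 * x_data ** 2 + 1            # toy dataset with a known ground-truth equation

def decode_to_equation(z):
    # Placeholder: a real system would decode z into an equation with the CVAE decoder.
    a, b = z
    return lambda x: a * x ** 2 + b

def objective(z):
    f = decode_to_equation(z)
    return float(np.mean((f(x_data) - y_data) ** 2))   # fit error of the decoded equation

space = [Real(-5.0, 5.0), Real(-5.0, 5.0)]             # assumed 2-d latent space
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, result.fun)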
2503.23618 | Amar Kumar | Amar Kumar, Anita Kriz, Barak Pertzov, Tal Arbel | Leveraging Vision-Language Foundation Models to Reveal Hidden
Image-Attribute Relationships in Medical Imaging | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-language foundation models (VLMs) have shown impressive performance in
guiding image generation through text, with emerging applications in medical
imaging. In this work, we are the first to investigate the question: 'Can
fine-tuned foundation models help identify critical, and possibly unknown, data
properties?' By evaluating our proposed method on a chest x-ray dataset, we
show that these models can generate high-resolution, precisely edited images
compared to methods that rely on Structural Causal Models (SCMs) according to
numerous metrics. For the first time, we demonstrate that fine-tuned VLMs can
reveal hidden data relationships that were previously obscured due to available
metadata granularity and model capacity limitations. Our experiments
demonstrate both the potential of these models to reveal underlying dataset
properties while also exposing the limitations of fine-tuned VLMs for accurate
image editing and susceptibility to biases and spurious correlations.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 22:49:26 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Kumar",
"Amar",
""
],
[
"Kriz",
"Anita",
""
],
[
"Pertzov",
"Barak",
""
],
[
"Arbel",
"Tal",
""
]
] | TITLE: Leveraging Vision-Language Foundation Models to Reveal Hidden
Image-Attribute Relationships in Medical Imaging
ABSTRACT: Vision-language foundation models (VLMs) have shown impressive performance in
guiding image generation through text, with emerging applications in medical
imaging. In this work, we are the first to investigate the question: 'Can
fine-tuned foundation models help identify critical, and possibly unknown, data
properties?' By evaluating our proposed method on a chest x-ray dataset, we
show that these models can generate high-resolution, precisely edited images
compared to methods that rely on Structural Causal Models (SCMs) according to
numerous metrics. For the first time, we demonstrate that fine-tuned VLMs can
reveal hidden data relationships that were previously obscured due to available
metadata granularity and model capacity limitations. Our experiments
demonstrate both the potential of these models to reveal underlying dataset
properties while also exposing the limitations of fine-tuned VLMs for accurate
image editing and susceptibility to biases and spurious correlations.
|
2503.23623 | Zahra TehraniNasab | Zahra TehraniNasab, Amar Kumar, Tal Arbel | Language-Guided Trajectory Traversal in Disentangled Stable Diffusion
Latent Space for Factorized Medical Image Generation | 10 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-image diffusion models have demonstrated a remarkable ability to
generate photorealistic images from natural language prompts. These
high-resolution, language-guided synthesized images are essential for the
explainability of disease or exploring causal relationships. However, their
potential for disentangling and controlling latent factors of variation in
specialized domains like medical imaging remains under-explored. In this work,
we present the first investigation of the power of pre-trained vision-language
foundation models, once fine-tuned on medical image datasets, to perform latent
disentanglement for factorized medical image generation and interpolation.
Through extensive experiments on chest X-ray and skin datasets, we illustrate
that fine-tuned, language-guided Stable Diffusion inherently learns to
factorize key attributes for image generation, such as the patient's anatomical
structures or disease diagnostic features. We devise a framework to identify,
isolate, and manipulate key attributes through latent space trajectory
traversal of generative models, facilitating precise control over medical image
synthesis.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 23:15:52 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"TehraniNasab",
"Zahra",
""
],
[
"Kumar",
"Amar",
""
],
[
"Arbel",
"Tal",
""
]
] | TITLE: Language-Guided Trajectory Traversal in Disentangled Stable Diffusion
Latent Space for Factorized Medical Image Generation
ABSTRACT: Text-to-image diffusion models have demonstrated a remarkable ability to
generate photorealistic images from natural language prompts. These
high-resolution, language-guided synthesized images are essential for the
explainability of disease or exploring causal relationships. However, their
potential for disentangling and controlling latent factors of variation in
specialized domains like medical imaging remains under-explored. In this work,
we present the first investigation of the power of pre-trained vision-language
foundation models, once fine-tuned on medical image datasets, to perform latent
disentanglement for factorized medical image generation and interpolation.
Through extensive experiments on chest X-ray and skin datasets, we illustrate
that fine-tuned, language-guided Stable Diffusion inherently learns to
factorize key attributes for image generation, such as the patient's anatomical
structures or disease diagnostic features. We devise a framework to identify,
isolate, and manipulate key attributes through latent space trajectory
traversal of generative models, facilitating precise control over medical image
synthesis.
|
2503.23626 | Anirudh Satheesh | Anirudh Satheesh and Keenan Powell | A Constrained Multi-Agent Reinforcement Learning Approach to Autonomous
Traffic Signal Control | Submitted to ACM Journal for Autonomous Transportation Systems | null | null | null | cs.MA cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic congestion in modern cities is exacerbated by the limitations of
traditional fixed-time traffic signal systems, which fail to adapt to dynamic
traffic patterns. Adaptive Traffic Signal Control (ATSC) algorithms have
emerged as a solution by dynamically adjusting signal timing based on real-time
traffic conditions. However, the main limitation of such methods is that they
are not transferable to environments under real-world constraints, such as
balancing efficiency, minimizing collisions, and ensuring fairness across
intersections. In this paper, we view the ATSC problem as a constrained
multi-agent reinforcement learning (MARL) problem and propose a novel algorithm
named Multi-Agent Proximal Policy Optimization with Lagrange Cost Estimator
(MAPPO-LCE) to produce effective traffic signal control policies. Our approach
integrates the Lagrange multipliers method to balance rewards and constraints,
with a cost estimator for stable adjustment. We also introduce three
constraints on the traffic network: GreenTime, GreenSkip, and PhaseSkip, which
penalize traffic policies that do not conform to real-world scenarios. Our
experimental results on three real-world datasets demonstrate that MAPPO-LCE
outperforms three baseline MARL algorithms across all environments and
traffic constraints (improving on MAPPO by 12.60%, IPPO by 10.29%, and QTRAN by
13.10%). Our results show that constrained MARL is a valuable tool for traffic
planners to deploy scalable and efficient ATSC methods in real-world traffic
networks. We provide code at https://github.com/Asatheesh6561/MAPPO-LCE.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 23:29:48 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Satheesh",
"Anirudh",
""
],
[
"Powell",
"Keenan",
""
]
] | TITLE: A Constrained Multi-Agent Reinforcement Learning Approach to Autonomous
Traffic Signal Control
ABSTRACT: Traffic congestion in modern cities is exacerbated by the limitations of
traditional fixed-time traffic signal systems, which fail to adapt to dynamic
traffic patterns. Adaptive Traffic Signal Control (ATSC) algorithms have
emerged as a solution by dynamically adjusting signal timing based on real-time
traffic conditions. However, the main limitation of such methods is that they
are not transferable to environments under real-world constraints, such as
balancing efficiency, minimizing collisions, and ensuring fairness across
intersections. In this paper, we view the ATSC problem as a constrained
multi-agent reinforcement learning (MARL) problem and propose a novel algorithm
named Multi-Agent Proximal Policy Optimization with Lagrange Cost Estimator
(MAPPO-LCE) to produce effective traffic signal control policies. Our approach
integrates the Lagrange multipliers method to balance rewards and constraints,
with a cost estimator for stable adjustment. We also introduce three
constraints on the traffic network: GreenTime, GreenSkip, and PhaseSkip, which
penalize traffic policies that do not conform to real-world scenarios. Our
experimental results on three real-world datasets demonstrate that MAPPO-LCE
outperforms three baseline MARL algorithms across all environments and
traffic constraints (improving on MAPPO by 12.60%, IPPO by 10.29%, and QTRAN by
13.10%). Our results show that constrained MARL is a valuable tool for traffic
planners to deploy scalable and efficient ATSC methods in real-world traffic
networks. We provide code at https://github.com/Asatheesh6561/MAPPO-LCE.
|
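The Lagrange-multiplier mechanism mentioned above typically reduces to a simple dual-ascent update run alongside policy optimization. A generic sketch (not MAPPO-LCE itself; the learning rate and cost limit are assumptions) is:

# Generic dual-ascent update for a constrained RL objective: the policy is trained on
# reward - lambda * cost, while lambda grows whenever the estimated cost exceeds its limit.
def update_lagrange_multiplier(lmbda, estimated_cost, cost_limit, lr=0.01):
    return max(0.0, lmbda + lr * (estimated_cost - cost_limit))

lmbda = 0.0
for epoch in range(100):
    estimated_cost = 1.2 - 0.01 * epoch          # stand-in for the cost estimator's output
    lmbda = update_lagrange_multiplier(lmbda, estimated_cost, cost_limit=1.0)
    # policy_loss = -(reward_advantage - lmbda * cost_advantage)  # used inside the PPO step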
2503.23660 | Xinhan Di | Junjie Zheng, Zihao Chen, Chaofan Ding, Xinhan Di | DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue
Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning Guidance | 11 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current movie dubbing technology can generate the desired voice from a given
speech prompt, ensuring good synchronization between speech and visuals while
accurately conveying the intended emotions. However, in movie dubbing, key
aspects such as adapting to different dubbing styles, handling dialogue,
narration, and monologue effectively, and understanding subtle details like the
age and gender of speakers, have not been well studied. To address this
challenge, we propose a multi-modal large language model framework. First,
it utilizes multimodal Chain-of-Thought (CoT) reasoning methods on visual
inputs to understand dubbing styles and fine-grained attributes. Second, it
generates high-quality dubbing through large speech generation models, guided
by multimodal conditions. Additionally, we have developed a movie dubbing
dataset with CoT annotations. The evaluation results demonstrate a performance
improvement over state-of-the-art methods across multiple datasets. In
particular, for the evaluation metrics, SPK-SIM and EMO-SIM increase from
82.48% to 89.74% and from 66.24% to 78.88% for dubbing setting 2.0 on the V2C Animation
dataset, LSE-D and MCD-SL decrease from 14.79 to 14.63 and from 5.24 to 4.74 for
dubbing setting 2.0 on the Grid dataset, and SPK-SIM increases from 64.03 to 83.42 and
WER decreases from 52.69% to 23.20% for the initial reasoning setting on the proposed
CoT-Movie-Dubbing dataset, in comparison with the state-of-the-art models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 01:51:09 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zheng",
"Junjie",
""
],
[
"Chen",
"Zihao",
""
],
[
"Ding",
"Chaofan",
""
],
[
"Di",
"Xinhan",
""
]
] | TITLE: DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue
Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning Guidance
ABSTRACT: Current movie dubbing technology can generate the desired voice from a given
speech prompt, ensuring good synchronization between speech and visuals while
accurately conveying the intended emotions. However, in movie dubbing, key
aspects such as adapting to different dubbing styles, handling dialogue,
narration, and monologue effectively, and understanding subtle details like the
age and gender of speakers, have not been well studied. To address this
challenge, we propose a multi-modal large language model framework. First,
it utilizes multimodal Chain-of-Thought (CoT) reasoning methods on visual
inputs to understand dubbing styles and fine-grained attributes. Second, it
generates high-quality dubbing through large speech generation models, guided
by multimodal conditions. Additionally, we have developed a movie dubbing
dataset with CoT annotations. The evaluation results demonstrate a performance
improvement over state-of-the-art methods across multiple datasets. In
particular, for the evaluation metrics, SPK-SIM and EMO-SIM increase from
82.48% to 89.74% and from 66.24% to 78.88% for dubbing setting 2.0 on the V2C Animation
dataset, LSE-D and MCD-SL decrease from 14.79 to 14.63 and from 5.24 to 4.74 for
dubbing setting 2.0 on the Grid dataset, and SPK-SIM increases from 64.03 to 83.42 and
WER decreases from 52.69% to 23.20% for the initial reasoning setting on the proposed
CoT-Movie-Dubbing dataset, in comparison with the state-of-the-art models.
|
2503.23664 | Masahiko Tsuji | Masahiko Tsuji, Hitoshi Niigaki, Ryuichi Tanida | LiM-Loc: Visual Localization with Dense and Accurate 3D Reference Maps
Directly Corresponding 2D Keypoints to 3D LiDAR Point Clouds | 8 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual localization is the task of estimating the 6-DOF camera pose of a query image in
a 3D reference map. We extract keypoints from the reference image and generate
a 3D reference map with 3D reconstruction of the keypoints in advance. We
emphasize that the more keypoints in the 3D reference map and the smaller the
error of the 3D positions of the keypoints, the higher the accuracy of the
camera pose estimation. However, previous image-only methods require a huge
number of images, and it is difficult to 3D-reconstruct keypoints without error
due to inevitable mismatches and failures in feature matching. As a result, the
3D reference map is sparse and inaccurate. In contrast, accurate 3D reference
maps can be generated by combining images and 3D sensors. Recently, 3D-LiDAR
has been widely used around the world. LiDAR, which measures a large space with
high density, has become inexpensive. In addition, accurately calibrated
cameras are also widely used, so images that record the external parameters of
the camera without errors can be easily obtained. In this paper, we propose a
method to directly assign 3D LiDAR point clouds to keypoints to generate dense
and accurate 3D reference maps. The proposed method avoids feature matching and
achieves accurate 3D reconstruction for almost all keypoints. To estimate
camera pose over a wide area, we use the wide-area LiDAR point cloud to remove
points that are not visible to the camera and reduce 2D-3D correspondence
errors. Using indoor and outdoor datasets, we apply the proposed method to
several state-of-the-art local features and confirm that it improves the
accuracy of camera pose estimation.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 02:01:39 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Tsuji",
"Masahiko",
""
],
[
"Niigaki",
"Hitoshi",
""
],
[
"Tanida",
"Ryuichi",
""
]
] | TITLE: LiM-Loc: Visual Localization with Dense and Accurate 3D Reference Maps
Directly Corresponding 2D Keypoints to 3D LiDAR Point Clouds
ABSTRACT: Visual localization is the task of estimating the 6-DOF camera pose of a query image in
a 3D reference map. We extract keypoints from the reference image and generate
a 3D reference map with 3D reconstruction of the keypoints in advance. We
emphasize that the more keypoints in the 3D reference map and the smaller the
error of the 3D positions of the keypoints, the higher the accuracy of the
camera pose estimation. However, previous image-only methods require a huge
number of images, and it is difficult to 3D-reconstruct keypoints without error
due to inevitable mismatches and failures in feature matching. As a result, the
3D reference map is sparse and inaccurate. In contrast, accurate 3D reference
maps can be generated by combining images and 3D sensors. Recently, 3D-LiDAR
has been widely used around the world. LiDAR, which measures a large space with
high density, has become inexpensive. In addition, accurately calibrated
cameras are also widely used, so images that record the external parameters of
the camera without errors can be easily obtained. In this paper, we propose a
method to directly assign 3D LiDAR point clouds to keypoints to generate dense
and accurate 3D reference maps. The proposed method avoids feature matching and
achieves accurate 3D reconstruction for almost all keypoints. To estimate
camera pose over a wide area, we use the wide-area LiDAR point cloud to remove
points that are not visible to the camera and reduce 2D-3D correspondence
errors. Using indoor and outdoor datasets, we apply the proposed method to
several state-of-the-art local features and confirm that it improves the
accuracy of camera pose estimation.
|
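The core assignment step described above (attaching a 3D LiDAR point to each 2D keypoint through a calibrated camera) can be sketched with a plain pinhole projection; the intrinsics, extrinsics, and pixel threshold below are placeholders, and the paper's wide-area visibility filtering is not reproduced:

# Sketch: project LiDAR points into a calibrated camera and attach the nearest
# projected point to each 2D keypoint.
import numpy as np

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
R, t = np.eye(3), np.zeros(3)                                              # assumed extrinsics

def assign_lidar_to_keypoints(keypoints_2d, lidar_points, max_pixel_dist=2.0):
    cam = lidar_points @ R.T + t                 # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]                     # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division -> pixel coordinates
    matches = {}
    for i, kp in enumerate(keypoints_2d):
        d = np.linalg.norm(uv - kp, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_pixel_dist:
            matches[i] = cam[j]                  # 3D point (camera frame) assigned to keypoint i
    return matches

keypoints = np.array([[320.0, 240.0], [100.0, 50.0]])
cloud = np.random.rand(1000, 3) * 4.0 + np.array([0.0, 0.0, 2.0])
print(assign_lidar_to_keypoints(keypoints, cloud))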
2503.23670 | Takeshi Noda | Takeshi Noda and Chao Chen and Junsheng Zhou and Weiqi Zhang and
Yu-Shen Liu and Zhizhong Han | Learning Bijective Surface Parameterization for Inferring Signed
Distance Functions from Sparse Point Clouds with Grid Deformation | Accepted by Conference on Computer Vision and Pattern Recognition
(CVPR) 2025. Project page:https://takeshie.github.io/Bijective-SDF | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Inferring signed distance functions (SDFs) from sparse point clouds remains a
challenge in surface reconstruction. The key lies in the lack of detailed
geometric information in sparse point clouds, which is essential for learning a
continuous field. To resolve this issue, we present a novel approach that
learns a dynamic deformation network to predict SDFs in an end-to-end manner.
To parameterize a continuous surface from sparse points, we propose a bijective
surface parameterization (BSP) that learns the global shape from local patches.
Specifically, we construct a bijective mapping for sparse points from the
parametric domain to 3D local patches, integrating patches into the global
surface. Meanwhile, we introduce grid deformation optimization (GDO) into the
surface approximation to optimize the deformation of grid points and further
refine the parametric surfaces. Experimental results on synthetic and real
scanned datasets demonstrate that our method significantly outperforms the
current state-of-the-art methods. Project page:
https://takeshie.github.io/Bijective-SDF
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 02:27:02 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Noda",
"Takeshi",
""
],
[
"Chen",
"Chao",
""
],
[
"Zhou",
"Junsheng",
""
],
[
"Zhang",
"Weiqi",
""
],
[
"Liu",
"Yu-Shen",
""
],
[
"Han",
"Zhizhong",
""
]
] | TITLE: Learning Bijective Surface Parameterization for Inferring Signed
Distance Functions from Sparse Point Clouds with Grid Deformation
ABSTRACT: Inferring signed distance functions (SDFs) from sparse point clouds remains a
challenge in surface reconstruction. The key lies in the lack of detailed
geometric information in sparse point clouds, which is essential for learning a
continuous field. To resolve this issue, we present a novel approach that
learns a dynamic deformation network to predict SDFs in an end-to-end manner.
To parameterize a continuous surface from sparse points, we propose a bijective
surface parameterization (BSP) that learns the global shape from local patches.
Specifically, we construct a bijective mapping for sparse points from the
parametric domain to 3D local patches, integrating patches into the global
surface. Meanwhile, we introduce grid deformation optimization (GDO) into the
surface approximation to optimize the deformation of grid points and further
refine the parametric surfaces. Experimental results on synthetic and real
scanned datasets demonstrate that our method significantly outperforms the
current state-of-the-art methods. Project page:
https://takeshie.github.io/Bijective-SDF
|
2503.23673 | Zhengyi Zhao | Zhengyi Zhao, Shubo Zhang, Bin Liang, Binyang Li, Kam-Fai Wong | WHERE and WHICH: Iterative Debate for Biomedical Synthetic Data
Augmentation | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In Biomedical Natural Language Processing (BioNLP) tasks, such as Relation
Extraction, Named Entity Recognition, and Text Classification, the scarcity of
high-quality data remains a significant challenge. This limitation prevents
large language models from correctly understanding relationships between biological
entities, such as molecules and diseases, or drug interactions, and further
results in potential misinterpretation of biomedical documents. To address this
issue, current approaches generally adopt the Synthetic Data Augmentation
method which involves similarity computation followed by word replacement, but
counterfactual data are usually generated. As a result, these methods disrupt
meaningful word sets or produce sentences with meanings that deviate
substantially from the original context, rendering them ineffective in
improving model performance. To this end, this paper proposes a
biomedical-dedicated rationale-based synthetic data augmentation method. Beyond
the naive lexicon similarity, specific bio-relation similarity is measured to
ensure that the augmented instance retains a strong correlation with the bio-relation
instead of simply increasing the diversity of augmented data. Moreover, a
multi-agent reflection mechanism helps the model iteratively
distinguish different usages of similar entities and avoid falling into the
mis-replace trap. We evaluate our method on the BLURB and BigBIO benchmark,
which includes 9 common datasets spanning four major BioNLP tasks. Our
experimental results demonstrate consistent performance improvements across all
tasks, highlighting the effectiveness of our approach in addressing the
challenges associated with data scarcity and enhancing the overall performance
of biomedical NLP models.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 02:36:30 GMT"
}
] | 2025-04-01T00:00:00 | [
[
"Zhao",
"Zhengyi",
""
],
[
"Zhang",
"Shubo",
""
],
[
"Liang",
"Bin",
""
],
[
"Li",
"Binyang",
""
],
[
"Wong",
"Kam-Fai",
""
]
] | TITLE: WHERE and WHICH: Iterative Debate for Biomedical Synthetic Data
Augmentation
ABSTRACT: In Biomedical Natural Language Processing (BioNLP) tasks, such as Relation
Extraction, Named Entity Recognition, and Text Classification, the scarcity of
high-quality data remains a significant challenge. This limitation prevents
large language models from correctly understanding relationships between biological
entities, such as molecules and diseases, or drug interactions, and further
results in potential misinterpretation of biomedical documents. To address this
issue, current approaches generally adopt the Synthetic Data Augmentation
method which involves similarity computation followed by word replacement, but
counterfactual data are usually generated. As a result, these methods disrupt
meaningful word sets or produce sentences with meanings that deviate
substantially from the original context, rendering them ineffective in
improving model performance. To this end, this paper proposes a
biomedical-dedicated rationale-based synthetic data augmentation method. Beyond
the naive lexicon similarity, specific bio-relation similarity is measured to
ensure that the augmented instance retains a strong correlation with the bio-relation
instead of simply increasing the diversity of augmented data. Moreover, a
multi-agent reflection mechanism helps the model iteratively
distinguish different usages of similar entities and avoid falling into the
mis-replace trap. We evaluate our method on the BLURB and BigBIO benchmark,
which includes 9 common datasets spanning four major BioNLP tasks. Our
experimental results demonstrate consistent performance improvements across all
tasks, highlighting the effectiveness of our approach in addressing the
challenges associated with data scarcity and enhancing the overall performance
of biomedical NLP models.
|