id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
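The rows below are raw arXiv metadata records, one paper per row; the final `prompt` column repeats each title and abstract. A minimal sketch of loading such a dump for analysis, assuming the records have been exported as a JSON-lines file (the file path is a placeholder):

```python
import pandas as pd

# "arxiv_metadata.jsonl" is a placeholder path for an export of this dataset
# in JSON-lines form (one record per line).
df = pd.read_json("arxiv_metadata.jsonl", lines=True)

# The schema mirrors the header above: strings for bibliographic fields, a
# list of {version, created} dicts in `versions`, a timestamp in
# `update_date`, and nested name tuples in `authors_parsed`.
print(df.dtypes)

# Example: papers per primary category, and how many records were revised.
df["primary_category"] = df["categories"].str.split().str[0]
print(df["primary_category"].value_counts().head())
revised = df[df["versions"].apply(len) > 1]
print(f"{len(revised)} of {len(df)} records have more than one version")
```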
2503.14862 | TengQi Ye | Ying Liu, Yijing Hua, Haojiang Chai, Yanbo Wang, TengQi Ye | Fine-Grained Open-Vocabulary Object Detection with Fined-Grained
Prompts: Task, Dataset and Benchmark | 8 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary detectors are proposed to locate and recognize objects in
novel classes. However, variations in vision-aware language vocabulary data
used for open-vocabulary learning can lead to unfair and unreliable
evaluations. Recent evaluation methods have attempted to address this issue by
incorporating object properties or adding locations and characteristics to the
captions. Nevertheless, since these properties and locations depend on the
specific details of the images instead of classes, detectors cannot make
accurate predictions without precise descriptions provided through human
annotation. This paper introduces 3F-OVD, a novel task that extends supervised
fine-grained object detection to the open-vocabulary setting. Our task is
intuitive and challenging, requiring a deep understanding of fine-grained
captions and careful attention to fine-grained details in images in order to
accurately detect fine-grained objects. Additionally, due to the scarcity of
qualified fine-grained object detection datasets, we have created a new
dataset, NEU-171K, tailored for both supervised and open-vocabulary settings.
We benchmark state-of-the-art object detectors on our dataset for both
settings. Furthermore, we propose a simple yet effective post-processing
technique.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 03:41:46 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 04:44:21 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Ying",
""
],
[
"Hua",
"Yijing",
""
],
[
"Chai",
"Haojiang",
""
],
[
"Wang",
"Yanbo",
""
],
[
"Ye",
"TengQi",
""
]
] | TITLE: Fine-Grained Open-Vocabulary Object Detection with Fined-Grained
Prompts: Task, Dataset and Benchmark
ABSTRACT: Open-vocabulary detectors are proposed to locate and recognize objects in
novel classes. However, variations in vision-aware language vocabulary data
used for open-vocabulary learning can lead to unfair and unreliable
evaluations. Recent evaluation methods have attempted to address this issue by
incorporating object properties or adding locations and characteristics to the
captions. Nevertheless, since these properties and locations depend on the
specific details of the images instead of classes, detectors cannot make
accurate predictions without precise descriptions provided through human
annotation. This paper introduces 3F-OVD, a novel task that extends supervised
fine-grained object detection to the open-vocabulary setting. Our task is
intuitive and challenging, requiring a deep understanding of fine-grained
captions and careful attention to fine-grained details in images in order to
accurately detect fine-grained objects. Additionally, due to the scarcity of
qualified fine-grained object detection datasets, we have created a new
dataset, NEU-171K, tailored for both supervised and open-vocabulary settings.
We benchmark state-of-the-art object detectors on our dataset for both
settings. Furthermore, we propose a simple yet effective post-processing
technique.
|
2503.15106 | Amir Hamza | Amir Hamza, Andrea Caraffa, Davide Boscaini, Fabio Poiesi | Distilling 3D distinctive local descriptors for 6D pose estimation | Project Website: https://tev-fbk.github.io/dGeDi/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Three-dimensional local descriptors are crucial for encoding geometric
surface properties, making them essential for various point cloud understanding
tasks. Among these descriptors, GeDi has demonstrated strong zero-shot 6D pose
estimation capabilities but remains computationally impractical for real-world
applications due to its expensive inference process. Can we retain GeDi's
effectiveness while significantly improving its efficiency? In this paper, we
explore this question by introducing a knowledge distillation framework that
trains an efficient student model to regress local descriptors from a GeDi
teacher. Our key contributions include: an efficient large-scale training
procedure that ensures robustness to occlusions and partial observations while
operating under compute and storage constraints, and a novel loss formulation
that handles weak supervision from non-distinctive teacher descriptors. We
validate our approach on five BOP Benchmark datasets and demonstrate a
significant reduction in inference time while maintaining competitive
performance with existing methods, bringing zero-shot 6D pose estimation closer
to real-time feasibility. Project Website: https://tev-fbk.github.io/dGeDi/
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:04:37 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 08:27:13 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hamza",
"Amir",
""
],
[
"Caraffa",
"Andrea",
""
],
[
"Boscaini",
"Davide",
""
],
[
"Poiesi",
"Fabio",
""
]
] | TITLE: Distilling 3D distinctive local descriptors for 6D pose estimation
ABSTRACT: Three-dimensional local descriptors are crucial for encoding geometric
surface properties, making them essential for various point cloud understanding
tasks. Among these descriptors, GeDi has demonstrated strong zero-shot 6D pose
estimation capabilities but remains computationally impractical for real-world
applications due to its expensive inference process. Can we retain GeDi's
effectiveness while significantly improving its efficiency? In this paper, we
explore this question by introducing a knowledge distillation framework that
trains an efficient student model to regress local descriptors from a GeDi
teacher. Our key contributions include: an efficient large-scale training
procedure that ensures robustness to occlusions and partial observations while
operating under compute and storage constraints, and a novel loss formulation
that handles weak supervision from non-distinctive teacher descriptors. We
validate our approach on five BOP Benchmark datasets and demonstrate a
significant reduction in inference time while maintaining competitive
performance with existing methods, bringing zero-shot 6D pose estimation closer
to real-time feasibility. Project Website: https://tev-fbk.github.io/dGeDi/
|
2503.15110 | Ziqin Huang | Zinqin Huang, Gu Wang, Chenyangguang Zhang, Ruida Zhang, Xiu Li,
Xiangyang Ji | GIVEPose: Gradual Intra-class Variation Elimination for RGB-based
Category-Level Object Pose Estimation | CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in RGBD-based category-level object pose estimation have been
limited by their reliance on precise depth information, restricting their
broader applicability. In response, RGB-based methods have been developed.
Among these methods, geometry-guided pose regression that originated from
instance-level tasks has demonstrated strong performance. However, we argue
that the NOCS map is an inadequate intermediate representation for
geometry-guided pose regression methods, as its many-to-one correspondence with
category-level pose introduces redundant instance-specific information,
resulting in suboptimal results. This paper identifies the intra-class
variation problem inherent in pose regression based solely on the NOCS map and
proposes the Intra-class Variation-Free Consensus (IVFC) map, a novel
coordinate representation generated from the category-level consensus model. By
leveraging the complementary strengths of the NOCS map and the IVFC map, we
introduce GIVEPose, a framework that implements Gradual Intra-class Variation
Elimination for category-level object pose estimation. Extensive evaluations on
both synthetic and real-world datasets demonstrate that GIVEPose significantly
outperforms existing state-of-the-art RGB-based approaches, achieving
substantial improvements in category-level object pose estimation. Our code is
available at https://github.com/ziqin-h/GIVEPose.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:07:01 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 10:15:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Huang",
"Zinqin",
""
],
[
"Wang",
"Gu",
""
],
[
"Zhang",
"Chenyangguang",
""
],
[
"Zhang",
"Ruida",
""
],
[
"Li",
"Xiu",
""
],
[
"Ji",
"Xiangyang",
""
]
] | TITLE: GIVEPose: Gradual Intra-class Variation Elimination for RGB-based
Category-Level Object Pose Estimation
ABSTRACT: Recent advances in RGBD-based category-level object pose estimation have been
limited by their reliance on precise depth information, restricting their
broader applicability. In response, RGB-based methods have been developed.
Among these methods, geometry-guided pose regression that originated from
instance-level tasks has demonstrated strong performance. However, we argue
that the NOCS map is an inadequate intermediate representation for
geometry-guided pose regression methods, as its many-to-one correspondence with
category-level pose introduces redundant instance-specific information,
resulting in suboptimal results. This paper identifies the intra-class
variation problem inherent in pose regression based solely on the NOCS map and
proposes the Intra-class Variation-Free Consensus (IVFC) map, a novel
coordinate representation generated from the category-level consensus model. By
leveraging the complementary strengths of the NOCS map and the IVFC map, we
introduce GIVEPose, a framework that implements Gradual Intra-class Variation
Elimination for category-level object pose estimation. Extensive evaluations on
both synthetic and real-world datasets demonstrate that GIVEPose significantly
outperforms existing state-of-the-art RGB-based approaches, achieving
substantial improvements in category-level object pose estimation. Our code is
available at https://github.com/ziqin-h/GIVEPose.
|
2503.15195 | Giorgia Crosilla | Giorgia Crosilla, Lukas Klic and Giovanni Colavizza | Benchmarking Large Language Models for Handwritten Text Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Traditional machine learning models for Handwritten Text Recognition (HTR)
rely on supervised training, requiring extensive manual annotations, and often
produce errors due to the separation between layout and text processing. In
contrast, Multimodal Large Language Models (MLLMs) offer a general approach to
recognizing diverse handwriting styles without the need for model-specific
training. The study benchmarks various proprietary and open-source LLMs against
Transkribus models, evaluating their performance on both modern and historical
datasets written in English, French, German, and Italian. In addition, emphasis
is placed on testing the models' ability to autonomously correct previously
generated outputs. Findings indicate that proprietary models, especially Claude
3.5 Sonnet, outperform open-source alternatives in zero-shot settings. MLLMs
achieve excellent results in recognizing modern handwriting and exhibit a
preference for the English language due to their pre-training dataset
composition. Comparisons with Transkribus show no consistent advantage for
either approach. Moreover, LLMs demonstrate limited ability to autonomously
correct errors in zero-shot transcriptions.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:33:29 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 15:49:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Crosilla",
"Giorgia",
""
],
[
"Klic",
"Lukas",
""
],
[
"Colavizza",
"Giovanni",
""
]
] | TITLE: Benchmarking Large Language Models for Handwritten Text Recognition
ABSTRACT: Traditional machine learning models for Handwritten Text Recognition (HTR)
rely on supervised training, requiring extensive manual annotations, and often
produce errors due to the separation between layout and text processing. In
contrast, Multimodal Large Language Models (MLLMs) offer a general approach to
recognizing diverse handwriting styles without the need for model-specific
training. The study benchmarks various proprietary and open-source LLMs against
Transkribus models, evaluating their performance on both modern and historical
datasets written in English, French, German, and Italian. In addition, emphasis
is placed on testing the models' ability to autonomously correct previously
generated outputs. Findings indicate that proprietary models, especially Claude
3.5 Sonnet, outperform open-source alternatives in zero-shot settings. MLLMs
achieve excellent results in recognizing modern handwriting and exhibit a
preference for the English language due to their pre-training dataset
composition. Comparisons with Transkribus show no consistent advantage for
either approach. Moreover, LLMs demonstrate limited ability to autonomously
correct errors in zero-shot transcriptions.
|
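The benchmark above compares transcription quality across models. As a hedged illustration, the sketch below computes character error rate (CER) via edit distance, a standard HTR metric; the abstract does not name its metrics, so treating CER as the measure, and the sample strings, are assumptions:

```python
# Character error rate = edit distance / reference length.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

ref = "the quick brown fox"
hyp = "the quikc brown fax"   # a simulated zero-shot transcription
print(f"CER = {cer(ref, hyp):.3f}")
```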
2503.15220 | Rrubaa Panchendrarajan | Rrubaa Panchendrarajan and Arkaitz Zubiaga | Entity-aware Cross-lingual Claim Detection for Automated Fact-checking | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Identifying claims requiring verification is a critical task in automated
fact-checking, especially given the proliferation of misinformation on social
media platforms. Despite significant progress in the task, there remain open
challenges such as dealing with multilingual and multimodal data prevalent in
online discourse. Addressing the multilingual challenge, recent efforts have
focused on fine-tuning pre-trained multilingual language models. While these
models can handle multiple languages, their ability to effectively transfer
cross-lingual knowledge for detecting claims spreading on social media remains
under-explored. In this paper, we introduce EX-Claim, an entity-aware
cross-lingual claim detection model that generalizes well to handle claims
written in any language. The model leverages entity information derived from
named entity recognition and entity linking techniques to improve the
language-level performance of both seen and unseen languages during training.
Extensive experiments conducted on three datasets from different social media
platforms demonstrate that our proposed model significantly outperforms the
baselines, across 27 languages, and achieves the highest rate of knowledge
transfer, even with limited training data.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:00:55 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Mar 2025 11:33:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Panchendrarajan",
"Rrubaa",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
] | TITLE: Entity-aware Cross-lingual Claim Detection for Automated Fact-checking
ABSTRACT: Identifying claims requiring verification is a critical task in automated
fact-checking, especially given the proliferation of misinformation on social
media platforms. Despite significant progress in the task, there remain open
challenges such as dealing with multilingual and multimodal data prevalent in
online discourse. Addressing the multilingual challenge, recent efforts have
focused on fine-tuning pre-trained multilingual language models. While these
models can handle multiple languages, their ability to effectively transfer
cross-lingual knowledge for detecting claims spreading on social media remains
under-explored. In this paper, we introduce EX-Claim, an entity-aware
cross-lingual claim detection model that generalizes well to handle claims
written in any language. The model leverages entity information derived from
named entity recognition and entity linking techniques to improve the
language-level performance of both seen and unseen languages during training.
Extensive experiments conducted on three datasets from different social media
platforms demonstrate that our proposed model significantly outperforms the
baselines, across 27 languages, and achieves the highest rate of knowledge
transfer, even with limited training data.
|
2503.15491 | Kazuhiro Sasabuchi | Kazuhiro Sasabuchi, Naoki Wake, Atsushi Kanehira, Jun Takamatsu, and
Katsushi Ikeuchi | Agreeing to Interact in Human-Robot Interaction using Large Language
Models and Vision Language Models | null | null | null | null | cs.HC cs.CL cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | In human-robot interaction (HRI), the beginning of an interaction is often
complex. Whether the robot should communicate with the human is dependent on
several situational factors (e.g., the current human's activity, urgency of the
interaction, etc.). We test whether large language models (LLM) and vision
language models (VLM) can provide solutions to this problem. We compare four
different system-design patterns using LLMs and VLMs, and test on a test set
containing 84 human-robot situations. The test set mixes several publicly
available datasets and also includes situations where the appropriate action to
take is open-ended. Our results using the GPT-4o and Phi-3 Vision model
indicate that LLMs and VLMs are capable of handling interaction beginnings when
the desired actions are clear; however, challenges remain in open-ended
situations where the model must balance the human's situation against the robot's.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2025 07:26:49 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sasabuchi",
"Kazuhiro",
""
],
[
"Wake",
"Naoki",
""
],
[
"Kanehira",
"Atsushi",
""
],
[
"Takamatsu",
"Jun",
""
],
[
"Ikeuchi",
"Katsushi",
""
]
] | TITLE: Agreeing to Interact in Human-Robot Interaction using Large Language
Models and Vision Language Models
ABSTRACT: In human-robot interaction (HRI), the beginning of an interaction is often
complex. Whether the robot should communicate with the human is dependent on
several situational factors (e.g., the current human's activity, urgency of the
interaction, etc.). We test whether large language models (LLM) and vision
language models (VLM) can provide solutions to this problem. We compare four
different system-design patterns using LLMs and VLMs, and test on a test set
containing 84 human-robot situations. The test set mixes several publicly
available datasets and also includes situations where the appropriate action to
take is open-ended. Our results using the GPT-4o and Phi-3 Vision model
indicate that LLMs and VLMs are capable of handling interaction beginnings when
the desired actions are clear; however, challenges remain in open-ended
situations where the model must balance the human's situation against the robot's.
|
2503.15507 | Shi Qiu | Yue Qiu, Yuqi Tong, Yu Zhang, Qixuan Liu, Jialun Pei, Shi Qiu,
Pheng-Ann Heng, Chi-Wing Fu | CvhSlicer 2.0: Immersive and Interactive Visualization of Chinese
Visible Human Data in XR Environments | IEEE VR 2025 Posters | null | null | null | cs.HC cs.GR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of human anatomy through advanced visualization techniques is
crucial for medical research and education. In this work, we introduce
CvhSlicer 2.0, an innovative XR system designed for immersive and interactive
visualization of the Chinese Visible Human (CVH) dataset. Particularly, our
proposed system operates entirely on a commercial XR headset, offering a range
of visualization and interaction tools for dynamic 2D and 3D data exploration.
By conducting comprehensive evaluations, our CvhSlicer 2.0 demonstrates strong
capabilities in visualizing anatomical data, enhancing user engagement and
improving educational effectiveness. A demo video is available at
https://youtu.be/CfR72S_0N-4
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 15:38:08 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Qiu",
"Yue",
""
],
[
"Tong",
"Yuqi",
""
],
[
"Zhang",
"Yu",
""
],
[
"Liu",
"Qixuan",
""
],
[
"Pei",
"Jialun",
""
],
[
"Qiu",
"Shi",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Fu",
"Chi-Wing",
""
]
] | TITLE: CvhSlicer 2.0: Immersive and Interactive Visualization of Chinese
Visible Human Data in XR Environments
ABSTRACT: The study of human anatomy through advanced visualization techniques is
crucial for medical research and education. In this work, we introduce
CvhSlicer 2.0, an innovative XR system designed for immersive and interactive
visualization of the Chinese Visible Human (CVH) dataset. Particularly, our
proposed system operates entirely on a commercial XR headset, offering a range
of visualization and interaction tools for dynamic 2D and 3D data exploration.
By conducting comprehensive evaluations, our CvhSlicer 2.0 demonstrates strong
capabilities in visualizing anatomical data, enhancing user engagement and
improving educational effectiveness. A demo video is available at
https://youtu.be/CfR72S_0N-4
|
2503.15509 | David Sumpter | Amandine M. Caut, Amy Rouillard, Beimnet Zenebe, Matthias Green,
Ágúst Pálmason Morthens and David J. T. Sumpter | Representing data in words | null | null | null | null | cs.HC cs.CL | http://creativecommons.org/licenses/by/4.0/ | An important part of data science is the use of visualisations to display
data in a way that is easy to digest. Visualisations often rely on underlying
statistical or machine learning models -- ranging from basic calculations like
category means to advanced methods such as principal component analysis of
multidimensional datasets -- to convey insights. We introduce an analogous
concept for word descriptions of data, which we call wordalisations.
Wordalisations describe data in easy to digest words, without necessarily
reporting numerical values from the data. We show how to create wordalisations
using large language models, through prompt templates engineered according to a
task-agnostic structure which can be used to automatically generate prompts
from data. We show how to produce reliable and engaging texts on three
application areas: scouting football players, personality tests, and
international survey data. Using the model cards framework, we emphasise the
importance of clearly stating the model we are imposing on the data when
creating the wordalisation, detailing how numerical values are translated into
words, incorporating background information into prompts for the large language
model, and documenting the limitations of the wordalisations. We argue that our
model cards approach is a more appropriate framework for setting best practices
in wordalisation of data than performance tests on benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 16:04:40 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Caut",
"Amandine M.",
""
],
[
"Rouillard",
"Amy",
""
],
[
"Zenebe",
"Beimnet",
""
],
[
"Green",
"Matthias",
""
],
[
"Morthens",
"Ágúst Pálmason",
""
],
[
"Sumpter",
"David J. T.",
""
]
] | TITLE: Representing data in words
ABSTRACT: An important part of data science is the use of visualisations to display
data in a way that is easy to digest. Visualisations often rely on underlying
statistical or machine learning models -- ranging from basic calculations like
category means to advanced methods such as principal component analysis of
multidimensional datasets -- to convey insights. We introduce an analogous
concept for word descriptions of data, which we call wordalisations.
Wordalisations describe data in easy to digest words, without necessarily
reporting numerical values from the data. We show how to create wordalisations
using large language models, through prompt templates engineered according to a
task-agnostic structure which can be used to automatically generate prompts
from data. We show how to produce reliable and engaging texts on three
application areas: scouting football players, personality tests, and
international survey data. Using the model cards framework, we emphasise the
importance of clearly stating the model we are imposing on the data when
creating the wordalisation, detailing how numerical values are translated into
words, incorporating background information into prompts for the large language
model, and documenting the limitations of the wordalisations. We argue that our
model cards approach is a more appropriate framework for setting best practices
in wordalisation of data than performance tests on benchmark datasets.
|
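To make the prompt-template idea above concrete, here is a minimal sketch of a task-agnostic template that turns a data row into a wordalisation prompt; the template wording, the statistics, and the comparison rule are invented placeholders, not the authors' templates:

```python
# The imposed "model" here is a simple comparison to population means, stated
# explicitly in the prompt, in the spirit of the model-cards framing above.
def wordalisation_prompt(name: str, stats: dict, population_means: dict) -> str:
    lines = [f"Describe {name} in plain words, without quoting raw numbers."]
    lines.append("Context (comparison of each metric to the population mean):")
    for metric, value in stats.items():
        direction = "above" if value > population_means[metric] else "below"
        lines.append(f"- {metric}: {direction} average")
    lines.append("Write two engaging sentences and note the limitations of this summary.")
    return "\n".join(lines)

prompt = wordalisation_prompt(
    "Player A",
    {"passes per match": 62.0, "tackles per match": 1.1},
    {"passes per match": 48.5, "tackles per match": 2.3},
)
print(prompt)  # this text would then be sent to an LLM of choice
```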
2503.15528 | Sarah Seifi | Sarah Seifi, Tobias Sukianto, Cecilia Carbonelli, Lorenzo Servadei,
Robert Wille | Complying with the EU AI Act: Innovations in Explainable and
User-Centric Hand Gesture Recognition | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | The EU AI Act underscores the importance of transparency, user-centricity,
and robustness in AI systems, particularly for high-risk systems. In response,
we present advancements in XentricAI, an explainable hand gesture recognition
(HGR) system designed to meet these regulatory requirements. XentricAI addresses
fundamental challenges in HGR, such as the opacity of black-box models using
explainable AI methods and the handling of distributional shifts in real-world
data through transfer learning techniques. We extend an existing radar-based
HGR dataset by adding 28,000 new gestures, with contributions from multiple
users across varied locations, including 24,000 out-of-distribution gestures.
Leveraging this real-world dataset, we enhance XentricAI's capabilities by
integrating a variational autoencoder module for improved gesture anomaly
detection, incorporating user-specific thresholding. This integration enables
the identification of 11.50% more anomalous gestures. Our extensive evaluations
demonstrate a 97.5% success rate in characterizing these anomalies,
significantly improving system explainability. Furthermore, the implementation
of transfer learning techniques has shown a substantial increase in user
adaptability, with an average improvement of at least 15.17%. This work
contributes to the development of trustworthy AI systems by providing both
technical advancements and regulatory compliance, offering a commercially
viable solution that aligns with the EU AI Act requirements.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 15:50:03 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Seifi",
"Sarah",
""
],
[
"Sukianto",
"Tobias",
""
],
[
"Carbonelli",
"Cecilia",
""
],
[
"Servadei",
"Lorenzo",
""
],
[
"Wille",
"Robert",
""
]
] | TITLE: Complying with the EU AI Act: Innovations in Explainable and
User-Centric Hand Gesture Recognition
ABSTRACT: The EU AI Act underscores the importance of transparency, user-centricity,
and robustness in AI systems, particularly for high-risk systems. In response,
we present advancements in XentricAI, an explainable hand gesture recognition
(HGR) system designed to meet these regulatory requirements. XentricAI addresses
fundamental challenges in HGR, such as the opacity of black-box models using
explainable AI methods and the handling of distributional shifts in real-world
data through transfer learning techniques. We extend an existing radar-based
HGR dataset by adding 28,000 new gestures, with contributions from multiple
users across varied locations, including 24,000 out-of-distribution gestures.
Leveraging this real-world dataset, we enhance XentricAI's capabilities by
integrating a variational autoencoder module for improved gesture anomaly
detection, incorporating user-specific thresholding. This integration enables
the identification of 11.50% more anomalous gestures. Our extensive evaluations
demonstrate a 97.5% success rate in characterizing these anomalies,
significantly improving system explainability. Furthermore, the implementation
of transfer learning techniques has shown a substantial increase in user
adaptability, with an average improvement of at least 15.17%. This work
contributes to the development of trustworthy AI systems by providing both
technical advancements and regulatory compliance, offering a commercially
viable solution that aligns with the EU AI Act requirements.
|
2503.15542 | Joshua Ellul | Cyrus Malik, Josef Bajada, Joshua Ellul | Identifying Likely-Reputable Blockchain Projects on Ethereum | null | null | null | null | cs.CR cs.AI cs.ET | http://creativecommons.org/licenses/by/4.0/ | Identifying reputable Ethereum projects remains a critical challenge within
the expanding blockchain ecosystem. The ability to distinguish between
legitimate initiatives and potentially fraudulent schemes is non-trivial. This
work presents a systematic approach that integrates multiple data sources with
advanced analytics to evaluate credibility, transparency, and overall
trustworthiness. The methodology applies machine learning techniques to analyse
transaction histories on the Ethereum blockchain.
The study classifies accounts based on a dataset comprising 2,179 entities
linked to illicit activities and 3,977 associated with reputable projects.
Using the LightGBM algorithm, the approach achieves an average accuracy of
0.984 and an average AUC of 0.999, validated through 10-fold cross-validation.
Key influential factors include time differences between transactions and
received_tnx.
The proposed methodology provides a robust mechanism for identifying
reputable Ethereum projects, fostering a more secure and transparent investment
environment. By equipping stakeholders with data-driven insights, this research
enables more informed decision-making, risk mitigation, and the promotion of
legitimate blockchain initiatives. Furthermore, it lays the foundation for
future advancements in trust assessment methodologies, contributing to the
continued development and maturity of the Ethereum ecosystem.
| [
{
"version": "v1",
"created": "Fri, 14 Mar 2025 21:43:25 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Malik",
"Cyrus",
""
],
[
"Bajada",
"Josef",
""
],
[
"Ellul",
"Joshua",
""
]
] | TITLE: Identifying Likely-Reputable Blockchain Projects on Ethereum
ABSTRACT: Identifying reputable Ethereum projects remains a critical challenge within
the expanding blockchain ecosystem. The ability to distinguish between
legitimate initiatives and potentially fraudulent schemes is non-trivial. This
work presents a systematic approach that integrates multiple data sources with
advanced analytics to evaluate credibility, transparency, and overall
trustworthiness. The methodology applies machine learning techniques to analyse
transaction histories on the Ethereum blockchain.
The study classifies accounts based on a dataset comprising 2,179 entities
linked to illicit activities and 3,977 associated with reputable projects.
Using the LightGBM algorithm, the approach achieves an average accuracy of
0.984 and an average AUC of 0.999, validated through 10-fold cross-validation.
Key influential factors include time differences between transactions and
received_tnx.
The proposed methodology provides a robust mechanism for identifying
reputable Ethereum projects, fostering a more secure and transparent investment
environment. By equipping stakeholders with data-driven insights, this research
enables more informed decision-making, risk mitigation, and the promotion of
legitimate blockchain initiatives. Furthermore, it lays the foundation for
future advancements in trust assessment methodologies, contributing to the
continued development and maturity of the Ethereum ecosystem.
|
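As a rough illustration of the classification protocol described above (LightGBM with 10-fold cross-validation, reporting accuracy and AUC), the sketch below uses synthetic placeholder features; the real study derives features such as inter-transaction time differences and received_tnx from Ethereum transaction histories, so the scores printed here are not meaningful:

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_validate

# Placeholder feature matrix; the class sizes mirror the abstract
# (2,179 illicit accounts, 3,977 reputable-project accounts).
rng = np.random.default_rng(0)
n = 2179 + 3977
X = rng.normal(size=(n, 8))
y = np.concatenate([np.zeros(2179), np.ones(3977)]).astype(int)

clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
scores = cross_validate(clf, X, y, cv=10, scoring=("accuracy", "roc_auc"))
print("mean accuracy:", scores["test_accuracy"].mean())
print("mean AUC:     ", scores["test_roc_auc"].mean())
```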
2503.15545 | Wei-Chang Yeh | Wei-Chang Yeh | Data-Driven Approximation of Binary-State Network Reliability Function:
Algorithm Selection and Reliability Thresholds for Large-Scale Systems | null | null | null | null | cs.LG cs.NA math.NA stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | Network reliability assessment is pivotal for ensuring the robustness of
modern infrastructure systems, from power grids to communication networks.
While exact reliability computation for binary-state networks is NP-hard,
existing approximation methods face critical tradeoffs between accuracy,
scalability, and data efficiency. This study evaluates 20 machine learning
methods across three reliability regimes full range (0.0-1.0), high reliability
(0.9-1.0), and ultra high reliability (0.99-1.0) to address these gaps. We
demonstrate that large-scale networks with arc reliability larger than or equal
to 0.9 exhibit near-unity system reliability, enabling computational
simplifications. Further, we establish a dataset-scale-driven paradigm for
algorithm selection: Artificial Neural Networks (ANN) excel with limited data,
while Polynomial Regression (PR) achieves superior accuracy in data-rich
environments. Our findings reveal ANN's Test-MSE of 7.24E-05 at 30,000 samples
and PR's optimal performance (5.61E-05) at 40,000 samples, outperforming
traditional Monte Carlo simulations. These insights provide actionable
guidelines for balancing accuracy, interpretability, and computational
efficiency in reliability engineering, with implications for infrastructure
resilience and system optimization.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 13:51:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yeh",
"Wei-Chang",
""
]
] | TITLE: Data-Driven Approximation of Binary-State Network Reliability Function:
Algorithm Selection and Reliability Thresholds for Large-Scale Systems
ABSTRACT: Network reliability assessment is pivotal for ensuring the robustness of
modern infrastructure systems, from power grids to communication networks.
While exact reliability computation for binary-state networks is NP-hard,
existing approximation methods face critical tradeoffs between accuracy,
scalability, and data efficiency. This study evaluates 20 machine learning
methods across three reliability regimes -- full range (0.0-1.0), high reliability
(0.9-1.0), and ultra-high reliability (0.99-1.0) -- to address these gaps. We
demonstrate that large-scale networks with arc reliability larger than or equal
to 0.9 exhibit near-unity system reliability, enabling computational
simplifications. Further, we establish a dataset-scale-driven paradigm for
algorithm selection: Artificial Neural Networks (ANN) excel with limited data,
while Polynomial Regression (PR) achieves superior accuracy in data-rich
environments. Our findings reveal ANN's Test-MSE of 7.24E-05 at 30,000 samples
and PR's optimal performance (5.61E-05) at 40,000 samples, outperforming
traditional Monte Carlo simulations. These insights provide actionable
guidelines for balancing accuracy, interpretability, and computational
efficiency in reliability engineering, with implications for infrastructure
resilience and system optimization.
|
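To illustrate the algorithm-selection comparison above, this sketch fits Polynomial Regression and a small neural network to a toy four-arc network whose exact reliability is known in closed form; the topology, sample size, and hyperparameters are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Toy binary-state network: two parallel paths, each with two arcs in series.
# Exact system reliability: R = 1 - (1 - p1*p2) * (1 - p3*p4).
def system_reliability(p):
    return 1.0 - (1.0 - p[:, 0] * p[:, 1]) * (1.0 - p[:, 2] * p[:, 3])

rng = np.random.default_rng(0)
P = rng.uniform(0.9, 1.0, size=(30_000, 4))   # high-reliability regime
R = system_reliability(P)
P_tr, P_te, R_tr, R_te = train_test_split(P, R, test_size=0.2, random_state=0)

models = {
    "PR":  make_pipeline(PolynomialFeatures(degree=4), LinearRegression()),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(P_tr, R_tr)
    print(name, "Test-MSE:", mean_squared_error(R_te, model.predict(P_te)))
```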
2503.15549 | Andy Gray | Andy Gray, Alma Rahat, Stephen Lindsay, Jen Pearson, Tom Crick | Rendering Transparency to Ranking in Educational Assessment via Bayesian
Comparative Judgement | null | null | null | null | cs.CY cs.AI cs.HC cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Ensuring transparency in educational assessment is increasingly critical,
particularly post-pandemic, as demand grows for fairer and more reliable
evaluation methods. Comparative Judgement (CJ) offers a promising alternative
to traditional assessments, yet concerns remain about its perceived opacity.
This paper examines how Bayesian Comparative Judgement (BCJ) enhances
transparency by integrating prior information into the judgement process,
providing a structured, data-driven approach that improves interpretability and
accountability.
BCJ assigns probabilities to judgement outcomes, offering quantifiable
measures of uncertainty and deeper insights into decision confidence. By
systematically tracking how prior data and successive judgements inform final
rankings, BCJ clarifies the assessment process and helps identify assessor
disagreements. Multi-criteria BCJ extends this by evaluating multiple learning
outcomes (LOs) independently, preserving the richness of CJ while producing
transparent, granular rankings aligned with specific assessment goals. It also
enables a holistic ranking derived from individual LOs, ensuring comprehensive
evaluations without compromising detailed feedback.
Using a real higher education dataset with professional markers in the UK, we
demonstrate BCJ's quantitative rigour and ability to clarify ranking
rationales. Through qualitative analysis and discussions with experienced CJ
practitioners, we explore its effectiveness in contexts where transparency is
crucial, such as high-stakes national assessments. We highlight the benefits
and limitations of BCJ, offering insights into its real-world application
across various educational settings.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 20:56:55 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gray",
"Andy",
""
],
[
"Rahat",
"Alma",
""
],
[
"Lindsay",
"Stephen",
""
],
[
"Pearson",
"Jen",
""
],
[
"Crick",
"Tom",
""
]
] | TITLE: Rendering Transparency to Ranking in Educational Assessment via Bayesian
Comparative Judgement
ABSTRACT: Ensuring transparency in educational assessment is increasingly critical,
particularly post-pandemic, as demand grows for fairer and more reliable
evaluation methods. Comparative Judgement (CJ) offers a promising alternative
to traditional assessments, yet concerns remain about its perceived opacity.
This paper examines how Bayesian Comparative Judgement (BCJ) enhances
transparency by integrating prior information into the judgement process,
providing a structured, data-driven approach that improves interpretability and
accountability.
BCJ assigns probabilities to judgement outcomes, offering quantifiable
measures of uncertainty and deeper insights into decision confidence. By
systematically tracking how prior data and successive judgements inform final
rankings, BCJ clarifies the assessment process and helps identify assessor
disagreements. Multi-criteria BCJ extends this by evaluating multiple learning
outcomes (LOs) independently, preserving the richness of CJ while producing
transparent, granular rankings aligned with specific assessment goals. It also
enables a holistic ranking derived from individual LOs, ensuring comprehensive
evaluations without compromising detailed feedback.
Using a real higher education dataset with professional markers in the UK, we
demonstrate BCJ's quantitative rigour and ability to clarify ranking
rationales. Through qualitative analysis and discussions with experienced CJ
practitioners, we explore its effectiveness in contexts where transparency is
crucial, such as high-stakes national assessments. We highlight the benefits
and limitations of BCJ, offering insights into its real-world application
across various educational settings.
|
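As a hedged, two-item illustration of how Bayesian Comparative Judgement assigns probabilities to judgement outcomes, the sketch below updates a Gaussian prior on the quality difference between two pieces of work from a handful of pairwise judgements; it is a toy Bradley-Terry variant, not the authors' multi-criteria BCJ implementation:

```python
import numpy as np

# d = quality(A) - quality(B); each judgement "A preferred" has probability
# sigmoid(d). The N(0, 1) prior and the grid approximation are assumptions.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

grid = np.linspace(-6, 6, 2001)
prior = np.exp(-0.5 * grid ** 2)
prior /= prior.sum()

judgements = [1, 1, 0, 1, 1]          # 1 = A preferred, 0 = B preferred
posterior = prior.copy()
for j in judgements:
    likelihood = sigmoid(grid) if j == 1 else 1.0 - sigmoid(grid)
    posterior *= likelihood
    posterior /= posterior.sum()

p_a_better = posterior[grid > 0].sum()          # ranking probability
p_next_a = (posterior * sigmoid(grid)).sum()    # predictive prob. of next outcome
print(f"P(A ranked above B | data) = {p_a_better:.3f}")
print(f"P(next judgement prefers A) = {p_next_a:.3f}")
```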
2503.15552 | Tharindu Kumarage | Tharindu Kumarage, Cameron Johnson, Jadie Adams, Lin Ai, Matthias
Kirchner, Anthony Hoogs, Joshua Garland, Julia Hirschberg, Arslan Basharat,
Huan Liu | Personalized Attacks of Social Engineering in Multi-turn Conversations
-- LLM Agents for Simulation and Detection | null | null | null | null | cs.CR cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rapid advancement of conversational agents, particularly chatbots powered
by Large Language Models (LLMs), poses a significant risk of social engineering
(SE) attacks on social media platforms. SE detection in multi-turn, chat-based
interactions is considerably more complex than single-instance detection due to
the dynamic nature of these conversations. A critical factor in mitigating this
threat is understanding the mechanisms through which SE attacks operate,
specifically how attackers exploit vulnerabilities and how victims' personality
traits contribute to their susceptibility. In this work, we propose an
LLM-agentic framework, SE-VSim, to simulate SE attack mechanisms by generating
multi-turn conversations. We model victim agents with varying personality
traits to assess how psychological profiles influence susceptibility to
manipulation. Using a dataset of over 1000 simulated conversations, we examine
attack scenarios in which adversaries, posing as recruiters, funding agencies,
and journalists, attempt to extract sensitive information. Based on this
analysis, we present a proof of concept, SE-OmniGuard, to offer personalized
protection to users by leveraging prior knowledge of the victim's personality,
evaluating attack strategies, and monitoring information exchanges in
conversations to identify potential SE attempts.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 19:14:44 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kumarage",
"Tharindu",
""
],
[
"Johnson",
"Cameron",
""
],
[
"Adams",
"Jadie",
""
],
[
"Ai",
"Lin",
""
],
[
"Kirchner",
"Matthias",
""
],
[
"Hoogs",
"Anthony",
""
],
[
"Garland",
"Joshua",
""
],
[
"Hirschberg",
"Julia",
""
],
[
"Basharat",
"Arslan",
""
],
[
"Liu",
"Huan",
""
]
] | TITLE: Personalized Attacks of Social Engineering in Multi-turn Conversations
-- LLM Agents for Simulation and Detection
ABSTRACT: The rapid advancement of conversational agents, particularly chatbots powered
by Large Language Models (LLMs), poses a significant risk of social engineering
(SE) attacks on social media platforms. SE detection in multi-turn, chat-based
interactions is considerably more complex than single-instance detection due to
the dynamic nature of these conversations. A critical factor in mitigating this
threat is understanding the mechanisms through which SE attacks operate,
specifically how attackers exploit vulnerabilities and how victims' personality
traits contribute to their susceptibility. In this work, we propose an
LLM-agentic framework, SE-VSim, to simulate SE attack mechanisms by generating
multi-turn conversations. We model victim agents with varying personality
traits to assess how psychological profiles influence susceptibility to
manipulation. Using a dataset of over 1000 simulated conversations, we examine
attack scenarios in which adversaries, posing as recruiters, funding agencies,
and journalists, attempt to extract sensitive information. Based on this
analysis, we present a proof of concept, SE-OmniGuard, to offer personalized
protection to users by leveraging prior knowledge of the victim's personality,
evaluating attack strategies, and monitoring information exchanges in
conversations to identify potential SE attempts.
|
2503.15554 | Shih-Chieh Dai | Shih-Chieh Dai, Jun Xu, Guanhong Tao | A Comprehensive Study of LLM Secure Code Generation | null | null | null | null | cs.CR cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | LLMs are widely used in software development. However, the code generated by
LLMs often contains vulnerabilities. Several secure code generation methods
have been proposed to address this issue, but their current evaluation schemes
leave several concerns unaddressed. Specifically, most existing studies
evaluate security and functional correctness separately, using different
datasets. That is, they assess vulnerabilities using security-related code
datasets while validating functionality with general code datasets. In
addition, prior research primarily relies on a single static analyzer, CodeQL,
to detect vulnerabilities in generated code, which limits the scope of security
evaluation.
In this work, we conduct a comprehensive study to systematically assess the
improvements introduced by four state-of-the-art secure code generation
techniques. Specifically, we apply both security inspection and functionality
validation to the same generated code and evaluate these two aspects together.
We also employ three popular static analyzers and two LLMs to identify
potential vulnerabilities in the generated code. Our study reveals that
existing techniques often compromise the functionality of generated code to
enhance security. Their overall performance remains limited when evaluating
security and functionality together. In fact, many techniques even degrade the
performance of the base LLM. Our further inspection reveals that these
techniques often either remove vulnerable lines of code entirely or generate
"garbage code" that is unrelated to the intended task. Moreover, the commonly
used static analyzer CodeQL fails to detect several vulnerabilities, further
obscuring the actual security improvements achieved by existing techniques. Our
study serves as a guideline for a more rigorous and comprehensive evaluation of
secure code generation performance in future work.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 20:12:50 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Dai",
"Shih-Chieh",
""
],
[
"Xu",
"Jun",
""
],
[
"Tao",
"Guanhong",
""
]
] | TITLE: A Comprehensive Study of LLM Secure Code Generation
ABSTRACT: LLMs are widely used in software development. However, the code generated by
LLMs often contains vulnerabilities. Several secure code generation methods
have been proposed to address this issue, but their current evaluation schemes
leave several concerns unaddressed. Specifically, most existing studies
evaluate security and functional correctness separately, using different
datasets. That is, they assess vulnerabilities using security-related code
datasets while validating functionality with general code datasets. In
addition, prior research primarily relies on a single static analyzer, CodeQL,
to detect vulnerabilities in generated code, which limits the scope of security
evaluation.
In this work, we conduct a comprehensive study to systematically assess the
improvements introduced by four state-of-the-art secure code generation
techniques. Specifically, we apply both security inspection and functionality
validation to the same generated code and evaluate these two aspects together.
We also employ three popular static analyzers and two LLMs to identify
potential vulnerabilities in the generated code. Our study reveals that
existing techniques often compromise the functionality of generated code to
enhance security. Their overall performance remains limited when evaluating
security and functionality together. In fact, many techniques even degrade the
performance of the base LLM. Our further inspection reveals that these
techniques often either remove vulnerable lines of code entirely or generate
"garbage code" that is unrelated to the intended task. Moreover, the commonly
used static analyzer CodeQL fails to detect several vulnerabilities, further
obscuring the actual security improvements achieved by existing techniques. Our
study serves as a guideline for a more rigorous and comprehensive evaluation of
secure code generation performance in future work.
|
2503.15557 | Inwoo Hwang | Inwoo Hwang, Jinseok Bae, Donggeun Lim, Young Min Kim | Motion Synthesis with Sparse and Flexible Keyjoint Control | 11 pages, Project Page: http://inwoohwang.me/SFControl | null | null | null | cs.GR cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Creating expressive character animations is labor-intensive, requiring
intricate manual adjustment by animators across space and time. Previous works
on controllable motion generation often rely on a predefined set of dense
spatio-temporal specifications (e.g., dense pelvis trajectories with exact
per-frame timing), limiting practicality for animators. To process high-level
intent and intuitive control in diverse scenarios, we propose a practical
controllable motion synthesis framework that respects sparse and flexible
keyjoint signals. Our approach employs a decomposed diffusion-based motion
synthesis framework that first synthesizes keyjoint movements from sparse input
control signals and then synthesizes full-body motion based on the completed
keyjoint trajectories. The low-dimensional keyjoint movements can easily adapt
to various control signal types, such as end-effector position for diverse
goal-driven motion synthesis, or incorporate functional constraints on a subset
of keyjoints. Additionally, we introduce a time-agnostic control formulation,
eliminating the need for frame-specific timing annotations and enhancing
control flexibility. Then, the shared second stage can synthesize a natural
whole-body motion that precisely satisfies the task requirement from dense
keyjoint movements. We demonstrate the effectiveness of sparse and flexible
keyjoint control through comprehensive experiments on diverse datasets and
scenarios.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 21:21:15 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hwang",
"Inwoo",
""
],
[
"Bae",
"Jinseok",
""
],
[
"Lim",
"Donggeun",
""
],
[
"Kim",
"Young Min",
""
]
] | TITLE: Motion Synthesis with Sparse and Flexible Keyjoint Control
ABSTRACT: Creating expressive character animations is labor-intensive, requiring
intricate manual adjustment by animators across space and time. Previous works
on controllable motion generation often rely on a predefined set of dense
spatio-temporal specifications (e.g., dense pelvis trajectories with exact
per-frame timing), limiting practicality for animators. To process high-level
intent and intuitive control in diverse scenarios, we propose a practical
controllable motion synthesis framework that respects sparse and flexible
keyjoint signals. Our approach employs a decomposed diffusion-based motion
synthesis framework that first synthesizes keyjoint movements from sparse input
control signals and then synthesizes full-body motion based on the completed
keyjoint trajectories. The low-dimensional keyjoint movements can easily adapt
to various control signal types, such as end-effector position for diverse
goal-driven motion synthesis, or incorporate functional constraints on a subset
of keyjoints. Additionally, we introduce a time-agnostic control formulation,
eliminating the need for frame-specific timing annotations and enhancing
control flexibility. Then, the shared second stage can synthesize a natural
whole-body motion that precisely satisfies the task requirement from dense
keyjoint movements. We demonstrate the effectiveness of sparse and flexible
keyjoint control through comprehensive experiments on diverse datasets and
scenarios.
|
2503.15562 | Melissa Robles | Nicolás Laverde, Melissa Robles, Johan Rodríguez | Shap-MeD | null | null | null | null | cs.GR cs.CE cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present Shap-MeD, a text-to-3D object generative model specialized in the
biomedical domain. The objective of this study is to develop an assistant that
facilitates the 3D modeling of medical objects, thereby reducing development
time. 3D modeling in medicine has various applications, including surgical
procedure simulation and planning, the design of personalized prosthetic
implants, medical education, the creation of anatomical models, and the
development of research prototypes. To achieve this, we leverage Shap-e, an
open-source text-to-3D generative model developed by OpenAI, and fine-tune it
using a dataset of biomedical objects. Our model achieved a mean squared error
(MSE) of 0.089 in latent generation on the evaluation set, compared to Shap-e's
MSE of 0.147. Additionally, we conducted a qualitative evaluation, comparing
our model with others in the generation of biomedical objects. Our results
indicate that Shap-MeD demonstrates higher structural accuracy in biomedical
object generation.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 00:40:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Laverde",
"Nicolás",
""
],
[
"Robles",
"Melissa",
""
],
[
"Rodríguez",
"Johan",
""
]
] | TITLE: Shap-MeD
ABSTRACT: We present Shap-MeD, a text-to-3D object generative model specialized in the
biomedical domain. The objective of this study is to develop an assistant that
facilitates the 3D modeling of medical objects, thereby reducing development
time. 3D modeling in medicine has various applications, including surgical
procedure simulation and planning, the design of personalized prosthetic
implants, medical education, the creation of anatomical models, and the
development of research prototypes. To achieve this, we leverage Shap-e, an
open-source text-to-3D generative model developed by OpenAI, and fine-tune it
using a dataset of biomedical objects. Our model achieved a mean squared error
(MSE) of 0.089 in latent generation on the evaluation set, compared to Shap-e's
MSE of 0.147. Additionally, we conducted a qualitative evaluation, comparing
our model with others in the generation of biomedical objects. Our results
indicate that Shap-MeD demonstrates higher structural accuracy in biomedical
object generation.
|
2503.15564 | Tung Sum Thomas Kwok | Tung Sum Thomas Kwok and Chi-Hua Wang and Guang Cheng | GReaTER: Generate Realistic Tabular data after data Enhancement and
Reduction | Accepted by Data Engineering Meets Large Language Models: Challenges
and Opportunities Workshop at ICDE 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Tabular data synthesis involves not only multi-table synthesis but also
generating multi-modal data (e.g., strings and categories), which enables
diverse knowledge synthesis. However, separating numerical and categorical data
has limited the effectiveness of tabular data generation. The GReaT (Generate
Realistic Tabular Data) framework uses Large Language Models (LLMs) to encode
entire rows, eliminating the need to partition data types. Despite this, the
framework's performance is constrained by two issues: (1) tabular data entries
lack sufficient semantic meaning, limiting LLM's ability to leverage
pre-trained knowledge for in-context learning, and (2) complex multi-table
datasets struggle to establish effective relationships for collaboration. To
address these, we propose GReaTER (Generate Realistic Tabular Data after data
Enhancement and Reduction), which includes: (1) a data semantic enhancement
system that improves LLM's understanding of tabular data through mapping,
enabling better in-context learning, and (2) a cross-table connecting method to
establish efficient relationships across complex tables. Experimental results
show that GReaTER outperforms the GReaT framework.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 04:16:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kwok",
"Tung Sum Thomas",
""
],
[
"Wang",
"Chi-Hua",
""
],
[
"Cheng",
"Guang",
""
]
] | TITLE: GReaTER: Generate Realistic Tabular data after data Enhancement and
Reduction
ABSTRACT: Tabular data synthesis involves not only multi-table synthesis but also
generating multi-modal data (e.g., strings and categories), which enables
diverse knowledge synthesis. However, separating numerical and categorical data
has limited the effectiveness of tabular data generation. The GReaT (Generate
Realistic Tabular Data) framework uses Large Language Models (LLMs) to encode
entire rows, eliminating the need to partition data types. Despite this, the
framework's performance is constrained by two issues: (1) tabular data entries
lack sufficient semantic meaning, limiting LLM's ability to leverage
pre-trained knowledge for in-context learning, and (2) complex multi-table
datasets struggle to establish effective relationships for collaboration. To
address these, we propose GReaTER (Generate Realistic Tabular Data after data
Enhancement and Reduction), which includes: (1) a data semantic enhancement
system that improves LLM's understanding of tabular data through mapping,
enabling better in-context learning, and (2) a cross-table connecting method to
establish efficient relationships across complex tables. Experimental results
show that GReaTER outperforms the GReaT framework.
|
2503.15568 | EL-MEHDI EL ARAR | El-Mehdi El Arar, Silviu-Ioan Filip (TARAN), Theo Mary (PEQUAN), Elisa
Riccietti (ENS de Lyon) | Mixed precision accumulation for neural network inference guided by
componentwise forward error analysis | null | null | null | null | cs.LG cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes a mathematically founded mixed precision accumulation
strategy for the inference of neural networks. Our strategy is based on a new
componentwise forward error analysis that explains the propagation of errors in
the forward pass of neural networks. Specifically, our analysis shows that the
error in each component of the output of a layer is proportional to the
condition number of the inner product between the weights and the input,
multiplied by the condition number of the activation function. These condition
numbers can vary widely from one component to the other, thus creating a
significant opportunity to introduce mixed precision: each component should be
accumulated in a precision inversely proportional to the product of these
condition numbers. We propose a practical algorithm that exploits this
observation: it first computes all components in low precision, uses this
output to estimate the condition numbers, and recomputes in higher precision
only the components associated with large condition numbers. We test our
algorithm on various networks and datasets and confirm experimentally that it
can significantly improve the cost--accuracy tradeoff compared with uniform
precision accumulation baselines.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 09:19:11 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Arar",
"El-Mehdi El",
"",
"TARAN"
],
[
"Filip",
"Silviu-Ioan",
"",
"TARAN"
],
[
"Mary",
"Theo",
"",
"PEQUAN"
],
[
"Riccietti",
"Elisa",
"",
"ENS de Lyon"
]
] | TITLE: Mixed precision accumulation for neural network inference guided by
componentwise forward error analysis
ABSTRACT: This work proposes a mathematically founded mixed precision accumulation
strategy for the inference of neural networks. Our strategy is based on a new
componentwise forward error analysis that explains the propagation of errors in
the forward pass of neural networks. Specifically, our analysis shows that the
error in each component of the output of a layer is proportional to the
condition number of the inner product between the weights and the input,
multiplied by the condition number of the activation function. These condition
numbers can vary widely from one component to the other, thus creating a
significant opportunity to introduce mixed precision: each component should be
accumulated in a precision inversely proportional to the product of these
condition numbers. We propose a practical algorithm that exploits this
observation: it first computes all components in low precision, uses this
output to estimate the condition numbers, and recomputes in higher precision
only the components associated with large condition numbers. We test our
algorithm on various networks and datasets and confirm experimentally that it
can significantly improve the cost--accuracy tradeoff compared with uniform
precision accumulation baselines.
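A minimal numerical sketch of this two-pass idea, using NumPy dtypes as stand-ins for hardware precisions; the condition-number estimate follows the inner-product kappa described above, but the threshold and dtypes are illustrative assumptions rather than the paper's tuned settings.

```python
# Sketch (not the paper's exact algorithm): mixed precision accumulation guided
# by an estimate of the inner-product condition number per output component.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))    # layer weights
x = rng.standard_normal(256)          # layer input

# Pass 1: accumulate everything in low precision (float16 as a stand-in).
W16, x16 = W.astype(np.float16), x.astype(np.float16)
y_low = (W16 * x16).sum(axis=1, dtype=np.float16)

# Estimate the condition number of each inner product:
# kappa_i = sum_j |w_ij x_j| / |sum_j w_ij x_j|
abs_sum = np.abs(W16.astype(np.float32) * x16.astype(np.float32)).sum(axis=1)
kappa = abs_sum / np.maximum(np.abs(y_low.astype(np.float32)), 1e-12)

# Pass 2: recompute only badly conditioned components in higher precision.
threshold = 1e2                       # illustrative cut-off
bad = kappa > threshold
y = y_low.astype(np.float32)
y[bad] = (W[bad].astype(np.float32) * x.astype(np.float32)).sum(axis=1)

print(f"recomputed {bad.sum()} of {len(y)} components in higher precision")
```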
|
2503.15571 | Pankaj Thorat | Pankaj Thorat, Adnan Qidwai, Adrija Dhar, Aishwariya Chakraborty,
Anand Eswaran, Hima Patel, Praveen Jayachandran | LLM-Aided Customizable Profiling of Code Data Based On Programming
Language Concepts | 21 pages | null | null | null | cs.SE cs.ET cs.IR cs.LG cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data profiling is critical in machine learning for generating descriptive
statistics, supporting both deeper understanding and downstream tasks like data
valuation and curation. This work addresses profiling specifically in the
context of code datasets for Large Language Models (code-LLMs), where data
quality directly influences tasks such as code generation and summarization.
Characterizing code datasets in terms of programming language concepts enables
better insights and targeted data curation. Our proposed methodology decomposes
code data profiling into two phases: (1) an offline phase where LLMs are
leveraged to derive and learn rules for extracting syntactic and semantic
concepts across various programming languages, including previously unseen or
low-resource languages, and (2) an online deterministic phase applying these
derived rules for efficient real-time analysis. This hybrid approach is
customizable, extensible to new syntactic and semantic constructs, and scalable
to multiple languages. Experimentally, our LLM-aided method achieves a mean
accuracy of 90.33% for syntactic extraction rules and semantic classification
accuracies averaging 80% and 77% across languages and semantic concepts,
respectively.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:01:00 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Thorat",
"Pankaj",
""
],
[
"Qidwai",
"Adnan",
""
],
[
"Dhar",
"Adrija",
""
],
[
"Chakraborty",
"Aishwariya",
""
],
[
"Eswaran",
"Anand",
""
],
[
"Patel",
"Hima",
""
],
[
"Jayachandran",
"Praveen",
""
]
] | TITLE: LLM-Aided Customizable Profiling of Code Data Based On Programming
Language Concepts
ABSTRACT: Data profiling is critical in machine learning for generating descriptive
statistics, supporting both deeper understanding and downstream tasks like data
valuation and curation. This work addresses profiling specifically in the
context of code datasets for Large Language Models (code-LLMs), where data
quality directly influences tasks such as code generation and summarization.
Characterizing code datasets in terms of programming language concepts enables
better insights and targeted data curation. Our proposed methodology decomposes
code data profiling into two phases: (1) an offline phase where LLMs are
leveraged to derive and learn rules for extracting syntactic and semantic
concepts across various programming languages, including previously unseen or
low-resource languages, and (2) an online deterministic phase applying these
derived rules for efficient real-time analysis. This hybrid approach is
customizable, extensible to new syntactic and semantic constructs, and scalable
to multiple languages. Experimentally, our LLM-aided method achieves a mean
accuracy of 90.33% for syntactic extraction rules and semantic classification
accuracies averaging 80% and 77% across languages and semantic concepts,
respectively.
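To make the online deterministic phase concrete, the sketch below applies a small set of hypothetical regex rules (standing in for rules an LLM could derive offline) to count syntactic concepts in a Python snippet; the rule set and concept names are assumptions for illustration only.

```python
# Sketch of the online deterministic phase: apply (hypothetical) extraction rules
# -- here simple regexes standing in for rules an LLM could derive offline --
# to profile code snippets by programming-language concepts.
import re
from collections import Counter

# Assumed rule set; the real system would learn these per language offline.
RULES = {
    "python": {
        "function_def": re.compile(r"^\s*def\s+\w+\s*\(", re.M),
        "class_def": re.compile(r"^\s*class\s+\w+", re.M),
        "loop": re.compile(r"^\s*(for|while)\b", re.M),
        "exception_handling": re.compile(r"^\s*(try|except)\b", re.M),
    },
}

def profile_snippet(code: str, language: str) -> Counter:
    """Count occurrences of each syntactic concept in one snippet."""
    counts = Counter()
    for concept, pattern in RULES.get(language, {}).items():
        counts[concept] = len(pattern.findall(code))
    return counts

snippet = """
class Greeter:
    def greet(self, name):
        for ch in name:
            print(ch)
"""
print(profile_snippet(snippet, "python"))
# Counter({'function_def': 1, 'class_def': 1, 'loop': 1, 'exception_handling': 0})
```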
|
2503.15573 | Da Ma | Da Ma and Gonghu Shang and Zhi Chen and Libo Qin and Yijie Luo and Lei
Pan and Shuai Fan and Lu Chen and Kai Yu | Neuronal Activation States as Sample Embeddings for Data Selection in
Task-Specific Instruction Tuning | preprint | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Task-specific instruction tuning enhances the performance of large language
models (LLMs) on specialized tasks, yet efficiently selecting relevant data for
this purpose remains a challenge. Inspired by neural coactivation in the human
brain, we propose a novel data selection method called NAS, which leverages
neuronal activation states as embeddings for samples in the feature space.
Extensive experiments show that NAS outperforms classical data selection
methods in terms of both effectiveness and robustness across different models,
datasets, and selection ratios.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:35:57 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ma",
"Da",
""
],
[
"Shang",
"Gonghu",
""
],
[
"Chen",
"Zhi",
""
],
[
"Qin",
"Libo",
""
],
[
"Luo",
"Yijie",
""
],
[
"Pan",
"Lei",
""
],
[
"Fan",
"Shuai",
""
],
[
"Chen",
"Lu",
""
],
[
"Yu",
"Kai",
""
]
] | TITLE: Neuronal Activation States as Sample Embeddings for Data Selection in
Task-Specific Instruction Tuning
ABSTRACT: Task-specific instruction tuning enhances the performance of large language
models (LLMs) on specialized tasks, yet efficiently selecting relevant data for
this purpose remains a challenge. Inspired by neural coactivation in the human
brain, we propose a novel data selection method called NAS, which leverages
neuronal activation states as embeddings for samples in the feature space.
Extensive experiments show that NAS outperforms classical data selection
methods in terms of both effectiveness and robustness across different models,
datasets, and selection ratios.
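A toy sketch of the underlying idea, shown on a small MLP rather than an LLM: represent each sample by the binary activation state of hidden neurons and keep the samples whose states are closest to a handful of task exemplars. The model, hook placement, and centroid-distance criterion are assumptions, not the paper's recipe.

```python
# Toy sketch of NAS-style selection: activation states as sample embeddings.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU())

captured = []
def hook(_module, _inp, out):
    captured.append(out.detach())

# Record post-ReLU activations of the last hidden layer.
model[-1].register_forward_hook(hook)

def activation_embedding(x: torch.Tensor) -> torch.Tensor:
    captured.clear()
    with torch.no_grad():
        model(x)
    act = captured[0]
    return (act > 0).float()          # binary "activation state" per neuron

pool = torch.randn(100, 16)           # candidate training samples
task = torch.randn(5, 16)             # a few task-specific exemplars

pool_emb = activation_embedding(pool)
task_centroid = activation_embedding(task).mean(dim=0)

scores = torch.cdist(pool_emb, task_centroid.unsqueeze(0)).squeeze(1)
selected = torch.topk(-scores, k=10).indices   # 10 most similar activation states
print(selected.tolist())
```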
|
2503.15574 | Wenjia Xie | Wenjia Xie, Jinhui Li, Kai Zong and Luis Seco | Machine Learning Techniques for Multifactor Analysis of National Carbon
Dioxide Emissions | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a comprehensive study leveraging Support Vector Machine
(SVM) regression and Principal Component Regression (PCR) to analyze carbon
dioxide emissions in a global dataset of 62 countries and their dependence on
idiosyncratic, country-specific parameters. The objective is to understand the
factors contributing to carbon dioxide emissions and identify the most
predictive elements. The analysis provides country-specific emission estimates,
highlighting diverse national trajectories and pinpointing areas for targeted
interventions in climate change mitigation, sustainable development, and the
growing carbon credit markets and green finance sector. The study aims to
support policymaking with accurate representations of carbon dioxide emissions,
offering nuanced information for formulating effective strategies to address
climate change while informing initiatives related to carbon trading and
environmentally sustainable investments.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 11:36:08 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Xie",
"Wenjia",
""
],
[
"Li",
"Jinhui",
""
],
[
"Zong",
"Kai",
""
],
[
"Seco",
"Luis",
""
]
] | TITLE: Machine Learning Techniques for Multifactor Analysis of National Carbon
Dioxide Emissions
ABSTRACT: This paper presents a comprehensive study leveraging Support Vector Machine
(SVM) regression and Principal Component Regression (PCR) to analyze carbon
dioxide emissions in a global dataset of 62 countries and their dependence on
idiosyncratic, country-specific parameters. The objective is to understand the
factors contributing to carbon dioxide emissions and identify the most
predictive elements. The analysis provides country-specific emission estimates,
highlighting diverse national trajectories and pinpointing areas for targeted
interventions in climate change mitigation, sustainable development, and the
growing carbon credit markets and green finance sector. The study aims to
support policymaking with accurate representations of carbon dioxide emissions,
offering nuanced information for formulating effective strategies to address
climate change while informing initiatives related to carbon trading and
environmentally sustainable investments.
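The two regression approaches named above can be sketched in a few lines of scikit-learn; the synthetic data here merely stands in for the 62-country indicator matrix.

```python
# Sketch of SVM regression and principal component regression (PCR) on synthetic
# data standing in for the country-level emissions dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((62, 8))                                  # 62 countries, 8 indicators
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(62)    # synthetic CO2 proxy

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())

for name, model in [("SVR", svr), ("PCR", pcr)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.2f}")
```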
|
2503.15578 | Jiexia Ye | Jiexia Ye, Weiqi Zhang, Ziyue Li, Jia Li, Fugee Tsung | Sparseformer: a Transferable Transformer with Multi-granularity Token
Sparsification for Medical Time Series Classification | 3 figures, 16 pages, 5 tables | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical time series (MedTS) classification is crucial for improved diagnosis
in healthcare, and yet it is challenging due to the varying granularity of
patterns, intricate inter-channel correlation, information redundancy, and
label scarcity. While existing transformer-based models have shown promise in
time series analysis, they mainly focus on forecasting and fail to fully
exploit the distinctive characteristics of MedTS data. In this paper, we
introduce Sparseformer, a transformer specifically designed for MedTS
classification. We propose a sparse token-based dual-attention mechanism that
enables global modeling and token compression, allowing dynamic focus on the
most informative tokens while distilling redundant features. This mechanism is
then applied to the multi-granularity, cross-channel encoding of medical
signals, capturing intra- and inter-granularity correlations and inter-channel
connections. The sparsification design allows our model to handle heterogeneous
inputs of varying lengths and channels directly. Further, we introduce an
adaptive label encoder to address label space misalignment across datasets,
equipping our model with cross-dataset transferability to alleviate the medical
label scarcity issue. Our model outperforms 12 baselines across seven medical
datasets under supervised learning. In the few-shot learning experiments, our
model also achieves superior average results. In addition, the in-domain and
cross-domain experiments among three diagnostic scenarios demonstrate our
model's zero-shot learning capability. Collectively, these findings underscore
the robustness and transferability of our model in various medical
applications.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 13:22:42 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ye",
"Jiexia",
""
],
[
"Zhang",
"Weiqi",
""
],
[
"Li",
"Ziyue",
""
],
[
"Li",
"Jia",
""
],
[
"Tsung",
"Fugee",
""
]
] | TITLE: Sparseformer: a Transferable Transformer with Multi-granularity Token
Sparsification for Medical Time Series Classification
ABSTRACT: Medical time series (MedTS) classification is crucial for improved diagnosis
in healthcare, and yet it is challenging due to the varying granularity of
patterns, intricate inter-channel correlation, information redundancy, and
label scarcity. While existing transformer-based models have shown promise in
time series analysis, they mainly focus on forecasting and fail to fully
exploit the distinctive characteristics of MedTS data. In this paper, we
introduce Sparseformer, a transformer specifically designed for MedTS
classification. We propose a sparse token-based dual-attention mechanism that
enables global modeling and token compression, allowing dynamic focus on the
most informative tokens while distilling redundant features. This mechanism is
then applied to the multi-granularity, cross-channel encoding of medical
signals, capturing intra- and inter-granularity correlations and inter-channel
connections. The sparsification design allows our model to handle heterogeneous
inputs of varying lengths and channels directly. Further, we introduce an
adaptive label encoder to address label space misalignment across datasets,
equipping our model with cross-dataset transferability to alleviate the medical
label scarcity issue. Our model outperforms 12 baselines across seven medical
datasets under supervised learning. In the few-shot learning experiments, our
model also achieves superior average results. In addition, the in-domain and
cross-domain experiments among three diagnostic scenarios demonstrate our
model's zero-shot learning capability. Collectively, these findings underscore
the robustness and transferability of our model in various medical
applications.
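A generic sketch of score-based token sparsification, the kind of token compression the abstract describes; the learned scorer and top-k selection below are illustrative and do not reproduce Sparseformer's dual-attention mechanism.

```python
# Generic sketch: keep only the top-k most informative tokens before (or between)
# attention blocks. Illustrates token compression, not Sparseformer's exact design.
import torch
import torch.nn as nn

class TokenSparsifier(nn.Module):
    def __init__(self, dim: int, keep: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # learned per-token informativeness score
        self.keep = keep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        scores = self.scorer(x).squeeze(-1)                    # (batch, tokens)
        idx = scores.topk(self.keep, dim=1).indices            # (batch, keep)
        idx = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))     # (batch, keep, dim)
        return torch.gather(x, dim=1, index=idx)               # compressed tokens

tokens = torch.randn(2, 128, 64)      # e.g. patches of a medical time series
sparsifier = TokenSparsifier(dim=64, keep=32)
print(sparsifier(tokens).shape)       # torch.Size([2, 32, 64])
```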
|
2503.15581 | Songqiao Hu | Songqiao Hu, Zeyi Liu, Xiao He | Performance-bounded Online Ensemble Learning Method Based on Multi-armed
bandits and Its Applications in Real-time Safety Assessment | 14 pages, 9 figures | null | null | null | cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble learning plays a crucial role in practical applications of online
learning due to its enhanced classification performance and adaptable
adjustment mechanisms. However, most weight allocation strategies in ensemble
learning are heuristic, making it challenging to theoretically guarantee that
the ensemble classifier outperforms its base classifiers. To address this
issue, a performance-bounded online ensemble learning method based on
multi-armed bandits, named PB-OEL, is proposed in this paper. Specifically,
a multi-armed bandit with expert advice is incorporated into online ensemble
learning, aiming to update the weights of base classifiers and make
predictions. A theoretical framework is established to bound the performance of
the ensemble classifier relative to base classifiers. By setting expert advice
of bandits, the bound exceeds the performance of any base classifier when the
length of the data stream is sufficiently large. Additionally, performance bounds
for scenarios with limited annotations are also derived. Numerous experiments
on benchmark datasets and a dataset of real-time safety assessment tasks are
conducted. The experimental results validate the theoretical bound to a certain
extent and demonstrate that the proposed method outperforms existing
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:57:53 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hu",
"Songqiao",
""
],
[
"Liu",
"Zeyi",
""
],
[
"He",
"Xiao",
""
]
] | TITLE: Performance-bounded Online Ensemble Learning Method Based on Multi-armed
bandits and Its Applications in Real-time Safety Assessment
ABSTRACT: Ensemble learning plays a crucial role in practical applications of online
learning due to its enhanced classification performance and adaptable
adjustment mechanisms. However, most weight allocation strategies in ensemble
learning are heuristic, making it challenging to theoretically guarantee that
the ensemble classifier outperforms its base classifiers. To address this
issue, a performance-bounded online ensemble learning method based on
multi-armed bandits, named PB-OEL, is proposed in this paper. Specifically,
a multi-armed bandit with expert advice is incorporated into online ensemble
learning, aiming to update the weights of base classifiers and make
predictions. A theoretical framework is established to bound the performance of
the ensemble classifier relative to base classifiers. By setting expert advice
of bandits, the bound exceeds the performance of any base classifier when the
length of the data stream is sufficiently large. Additionally, performance bounds
for scenarios with limited annotations are also derived. Numerous experiments
on benchmark datasets and a dataset of real-time safety assessment tasks are
conducted. The experimental results validate the theoretical bound to a certain
extent and demonstrate that the proposed method outperforms existing
state-of-the-art methods.
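As a rough illustration of weight updates over base classifiers, the sketch below runs a Hedge-style exponentially weighted ensemble on a simulated stream; it conveys the expert-advice flavor only and is not PB-OEL's algorithm or its performance bound.

```python
# Sketch of an exponentially weighted online ensemble (Hedge-style weights over
# base classifiers) on a simulated binary stream; stand-in data and update rule.
import numpy as np

rng = np.random.default_rng(0)
n_base, T, eta = 3, 500, 0.1
weights = np.ones(n_base)

# Simulated base-classifier accuracies on a binary stream.
base_acc = np.array([0.6, 0.75, 0.55])

ensemble_correct = 0
for t in range(T):
    y = rng.integers(0, 2)
    # Each base classifier is correct with its own probability.
    preds = np.where(rng.random(n_base) < base_acc, y, 1 - y)
    p = weights / weights.sum()
    y_hat = int(np.round(p @ preds))          # weighted vote
    ensemble_correct += int(y_hat == y)
    losses = (preds != y).astype(float)
    weights *= np.exp(-eta * losses)          # exponential weight update

print(f"ensemble accuracy: {ensemble_correct / T:.2f}")
print(f"final normalized weights: {np.round(weights / weights.sum(), 2)}")
```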
|
2503.15582 | Polina Turishcheva | Martin Ritzert, Polina Turishcheva, Laura Hansel, Paul Wollenhaupt,
Marissa Weis, Alexander Ecker | Hierarchical clustering with maximum density paths and mixture models | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Hierarchical clustering is an effective and interpretable technique for
analyzing structure in data, offering a nuanced understanding by revealing
insights at multiple scales and resolutions. It is particularly helpful in
settings where the exact number of clusters is unknown, and provides a robust
framework for exploring complex datasets. Additionally, hierarchical clustering
can uncover inner structures within clusters, capturing subtle relationships
and nested patterns that may be obscured by traditional flat clustering
methods. However, existing hierarchical clustering methods struggle with
high-dimensional data, especially when there are no clear density gaps between
modes. Our method addresses this limitation by leveraging a two-stage approach,
first employing a Gaussian or Student's t mixture model to overcluster the
data, and then hierarchically merging clusters based on the induced density
landscape. This approach yields state-of-the-art clustering performance while
also providing a meaningful hierarchy, making it a valuable tool for
exploratory data analysis. Code is available at
https://github.com/ecker-lab/tneb clustering.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:37:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ritzert",
"Martin",
""
],
[
"Turishcheva",
"Polina",
""
],
[
"Hansel",
"Laura",
""
],
[
"Wollenhaupt",
"Paul",
""
],
[
"Weis",
"Marissa",
""
],
[
"Ecker",
"Alexander",
""
]
] | TITLE: Hierarchical clustering with maximum density paths and mixture models
ABSTRACT: Hierarchical clustering is an effective and interpretable technique for
analyzing structure in data, offering a nuanced understanding by revealing
insights at multiple scales and resolutions. It is particularly helpful in
settings where the exact number of clusters is unknown, and provides a robust
framework for exploring complex datasets. Additionally, hierarchical clustering
can uncover inner structures within clusters, capturing subtle relationships
and nested patterns that may be obscured by traditional flat clustering
methods. However, existing hierarchical clustering methods struggle with
high-dimensional data, especially when there are no clear density gaps between
modes. Our method addresses this limitation by leveraging a two-stage approach,
first employing a Gaussian or Student's t mixture model to overcluster the
data, and then hierarchically merging clusters based on the induced density
landscape. This approach yields state-of-the-art clustering performance while
also providing a meaningful hierarchy, making it a valuable tool for
exploratory data analysis. Code is available at
https://github.com/ecker-lab/tneb clustering.
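A simplified sketch of the two-stage idea: overcluster with a Gaussian mixture, then merge components hierarchically using the mixture density along the segment between component means as a proxy for the maximum-density path. The straight-segment approximation and the -log(density) distance are simplifications, not the paper's method.

```python
# Simplified sketch: (1) overcluster with a Gaussian mixture, (2) merge components
# whose connecting segment passes through high mixture density first.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=600, centers=4, random_state=0)
gmm = GaussianMixture(n_components=12, random_state=0).fit(X)   # deliberate overclustering
means = gmm.means_

def min_density_between(a, b, steps=25):
    """Minimum mixture density along the straight segment from a to b."""
    ts = np.linspace(0.0, 1.0, steps)[:, None]
    pts = (1 - ts) * a + ts * b
    return np.exp(gmm.score_samples(pts)).min()

k = len(means)
dist = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        d = -np.log(min_density_between(means[i], means[j]) + 1e-300)
        dist[i, j] = dist[j, i] = d

Z = linkage(squareform(dist), method="single")       # merge high-density pairs first
merged = fcluster(Z, t=4, criterion="maxclust")      # cut hierarchy into 4 clusters
labels = merged[gmm.predict(X)]                      # map component -> merged cluster
print(np.bincount(labels)[1:])                       # sizes of merged clusters
```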
|
2503.15586 | Zeqi Gu | Zeqi Gu, Difan Liu, Timothy Langlois, Matthew Fisher, Abe Davis | How to Train Your Dragon: Automatic Diffusion-Based Rigging for
Characters with Diverse Topologies | Accepted to Eurographics 2025 | null | null | null | cs.GR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent diffusion-based methods have achieved impressive results on animating
images of human subjects. However, most of that success has built on
human-specific body pose representations and extensive training with labeled
real videos. In this work, we extend the ability of such models to animate
images of characters with more diverse skeletal topologies. Given a small
number (3-5) of example frames showing the character in different poses with
corresponding skeletal information, our model quickly infers a rig for that
character that can generate images corresponding to new skeleton poses. We
propose a procedural data generation pipeline that efficiently samples training
data with diverse topologies on the fly. We use it, along with a novel skeleton
representation, to train our model on articulated shapes spanning a large space
of textures and topologies. Then during fine-tuning, our model rapidly adapts
to unseen target characters and generalizes well to rendering new poses, both
for realistic and more stylized cartoon appearances. To better evaluate
performance on this novel and challenging task, we create the first 2D video
dataset that contains both humanoid and non-humanoid subjects with per-frame
keypoint annotations. With extensive experiments, we demonstrate the superior
quality of our results. Project page: https://traindragondiffusion.github.io/
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 17:46:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gu",
"Zeqi",
""
],
[
"Liu",
"Difan",
""
],
[
"Langlois",
"Timothy",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Davis",
"Abe",
""
]
] | TITLE: How to Train Your Dragon: Automatic Diffusion-Based Rigging for
Characters with Diverse Topologies
ABSTRACT: Recent diffusion-based methods have achieved impressive results on animating
images of human subjects. However, most of that success has built on
human-specific body pose representations and extensive training with labeled
real videos. In this work, we extend the ability of such models to animate
images of characters with more diverse skeletal topologies. Given a small
number (3-5) of example frames showing the character in different poses with
corresponding skeletal information, our model quickly infers a rig for that
character that can generate images corresponding to new skeleton poses. We
propose a procedural data generation pipeline that efficiently samples training
data with diverse topologies on the fly. We use it, along with a novel skeleton
representation, to train our model on articulated shapes spanning a large space
of textures and topologies. Then during fine-tuning, our model rapidly adapts
to unseen target characters and generalizes well to rendering new poses, both
for realistic and more stylized cartoon appearances. To better evaluate
performance on this novel and challenging task, we create the first 2D video
dataset that contains both humanoid and non-humanoid subjects with per-frame
keypoint annotations. With extensive experiments, we demonstrate the superior
quality of our results. Project page: https://traindragondiffusion.github.io/
|
2503.15617 | Masud Ahmed | Masud Ahmed, Zahid Hasan, Syed Arefinul Haque, Abu Zaher Md Faridee,
Sanjay Purushotham, Suya You, Nirmalya Roy | CAM-Seg: A Continuous-valued Embedding Approach for Semantic Image
Generation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional transformer-based semantic segmentation relies on quantized
embeddings. However, our analysis reveals that autoencoder accuracy on
segmentation masks using quantized embeddings (e.g., VQ-VAE) is 8% lower than
with continuous-valued embeddings (e.g., KL-VAE). Motivated by this, we propose a
continuous-valued embedding framework for semantic segmentation. By
reformulating semantic mask generation as a continuous image-to-embedding
diffusion process, our approach eliminates the need for discrete latent
representations while preserving fine-grained spatial and semantic details. Our
key contribution includes a diffusion-guided autoregressive transformer that
learns a continuous semantic embedding space by modeling long-range
dependencies in image features. Our framework contains a unified architecture
combining a VAE encoder for continuous feature extraction, a diffusion-guided
transformer for conditioned embedding generation, and a VAE decoder for
semantic mask reconstruction. Our setting facilitates zero-shot domain
adaptation capabilities enabled by the continuity of the embedding space.
Experiments across diverse datasets (e.g., Cityscapes and domain-shifted
variants) demonstrate state-of-the-art robustness to distribution shifts,
including adverse weather (e.g., fog, snow) and viewpoint variations. Our model
also exhibits strong noise resilience, achieving robust performance ($\approx$
95% AP compared to baseline) under Gaussian noise, moderate motion blur, and
moderate brightness/contrast variations, while experiencing only a moderate
impact ($\approx$ 90% AP compared to baseline) from 50% salt and pepper noise,
saturation and hue shifts. Code available:
https://github.com/mahmed10/CAMSS.git
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 18:06:54 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ahmed",
"Masud",
""
],
[
"Hasan",
"Zahid",
""
],
[
"Haque",
"Syed Arefinul",
""
],
[
"Faridee",
"Abu Zaher Md",
""
],
[
"Purushotham",
"Sanjay",
""
],
[
"You",
"Suya",
""
],
[
"Roy",
"Nirmalya",
""
]
] | TITLE: CAM-Seg: A Continuous-valued Embedding Approach for Semantic Image
Generation
ABSTRACT: Traditional transformer-based semantic segmentation relies on quantized
embeddings. However, our analysis reveals that autoencoder accuracy on
segmentation masks using quantized embeddings (e.g., VQ-VAE) is 8% lower than
with continuous-valued embeddings (e.g., KL-VAE). Motivated by this, we propose a
continuous-valued embedding framework for semantic segmentation. By
reformulating semantic mask generation as a continuous image-to-embedding
diffusion process, our approach eliminates the need for discrete latent
representations while preserving fine-grained spatial and semantic details. Our
key contribution includes a diffusion-guided autoregressive transformer that
learns a continuous semantic embedding space by modeling long-range
dependencies in image features. Our framework contains a unified architecture
combining a VAE encoder for continuous feature extraction, a diffusion-guided
transformer for conditioned embedding generation, and a VAE decoder for
semantic mask reconstruction. Our setting facilitates zero-shot domain
adaptation capabilities enabled by the continuity of the embedding space.
Experiments across diverse datasets (e.g., Cityscapes and domain-shifted
variants) demonstrate state-of-the-art robustness to distribution shifts,
including adverse weather (e.g., fog, snow) and viewpoint variations. Our model
also exhibits strong noise resilience, achieving robust performance ($\approx$
95% AP compared to baseline) under Gaussian noise, moderate motion blur, and
moderate brightness/contrast variations, while experiencing only a moderate
impact ($\approx$ 90% AP compared to baseline) from 50% salt and pepper noise,
saturation and hue shifts. Code available:
https://github.com/mahmed10/CAMSS.git
|
2503.15621 | Sara Sarto | Federico Cocchi, Nicholas Moratelli, Davide Caffagni, Sara Sarto,
Lorenzo Baraldi, Marcella Cornia, Rita Cucchiara | LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for
Enhanced Visual Instruction Tuning | null | null | null | null | cs.CV cs.AI cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in Multimodal Large Language Models (MLLMs) has highlighted
the critical roles of both the visual backbone and the underlying language
model. While prior work has primarily focused on scaling these components to
billions of parameters, the trade-offs between model size, architecture, and
performance remain underexplored. Additionally, inconsistencies in training
data and evaluation protocols have hindered direct comparisons, making it
difficult to derive optimal design choices. In this paper, we introduce
LLaVA-MORE, a new family of MLLMs that integrates recent language models with
diverse visual backbones. To ensure fair comparisons, we employ a unified
training protocol applied consistently across all architectures. Our analysis
systematically explores both small- and medium-scale LLMs -- including Phi-4,
LLaMA-3.1, and Gemma-2 -- to evaluate multimodal reasoning, generation, and
instruction following, while examining the relationship between model size and
performance. Beyond evaluating the LLM impact on final results, we conduct a
comprehensive study of various visual encoders, ranging from CLIP-based
architectures to alternatives such as DINOv2, SigLIP, and SigLIP2. Additional
experiments investigate the effects of increased image resolution and
variations in pre-training datasets. Overall, our results provide insights into
the design of more effective MLLMs, offering a reproducible evaluation
framework that facilitates direct comparisons and can guide future model
development. Our source code and trained models are publicly available at:
https://github.com/aimagelab/LLaVA-MORE.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 18:10:12 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cocchi",
"Federico",
""
],
[
"Moratelli",
"Nicholas",
""
],
[
"Caffagni",
"Davide",
""
],
[
"Sarto",
"Sara",
""
],
[
"Baraldi",
"Lorenzo",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Cucchiara",
"Rita",
""
]
] | TITLE: LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for
Enhanced Visual Instruction Tuning
ABSTRACT: Recent progress in Multimodal Large Language Models (MLLMs) has highlighted
the critical roles of both the visual backbone and the underlying language
model. While prior work has primarily focused on scaling these components to
billions of parameters, the trade-offs between model size, architecture, and
performance remain underexplored. Additionally, inconsistencies in training
data and evaluation protocols have hindered direct comparisons, making it
difficult to derive optimal design choices. In this paper, we introduce
LLaVA-MORE, a new family of MLLMs that integrates recent language models with
diverse visual backbones. To ensure fair comparisons, we employ a unified
training protocol applied consistently across all architectures. Our analysis
systematically explores both small- and medium-scale LLMs -- including Phi-4,
LLaMA-3.1, and Gemma-2 -- to evaluate multimodal reasoning, generation, and
instruction following, while examining the relationship between model size and
performance. Beyond evaluating the LLM impact on final results, we conduct a
comprehensive study of various visual encoders, ranging from CLIP-based
architectures to alternatives such as DINOv2, SigLIP, and SigLIP2. Additional
experiments investigate the effects of increased image resolution and
variations in pre-training datasets. Overall, our results provide insights into
the design of more effective MLLMs, offering a reproducible evaluation
framework that facilitates direct comparisons and can guide future model
development. Our source code and trained models are publicly available at:
https://github.com/aimagelab/LLaVA-MORE.
|
2503.15625 | Matthew Massey | Matthew Massey and Abdullah-Al-Zubaer Imran | EarthScape: A Multimodal Dataset for Surficial Geologic Mapping and
Earth Surface Analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surficial geologic mapping is essential for understanding Earth surface
processes, addressing modern challenges such as climate change and national
security, and supporting common applications in engineering and resource
management. However, traditional mapping methods are labor-intensive, limiting
spatial coverage and introducing potential biases. To address these
limitations, we introduce EarthScape, a novel, AI-ready multimodal dataset
specifically designed for surficial geologic mapping and Earth surface
analysis. EarthScape integrates high-resolution aerial RGB and near-infrared
(NIR) imagery, digital elevation models (DEM), multi-scale DEM-derived terrain
features, and hydrologic and infrastructure vector data. The dataset provides
detailed annotations for seven distinct surficial geologic classes encompassing
various geological processes. We present a comprehensive data processing
pipeline using open-sourced raw data and establish baseline benchmarks using
different spatial modalities to demonstrate the utility of EarthScape. As a
living dataset with a vision for expansion, EarthScape bridges the gap between
computer vision and Earth sciences, offering a valuable resource for advancing
research in multimodal learning, geospatial analysis, and geological mapping.
Our code is available at https://github.com/masseygeo/earthscape.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 18:23:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Massey",
"Matthew",
""
],
[
"Imran",
"Abdullah-Al-Zubaer",
""
]
] | TITLE: EarthScape: A Multimodal Dataset for Surficial Geologic Mapping and
Earth Surface Analysis
ABSTRACT: Surficial geologic mapping is essential for understanding Earth surface
processes, addressing modern challenges such as climate change and national
security, and supporting common applications in engineering and resource
management. However, traditional mapping methods are labor-intensive, limiting
spatial coverage and introducing potential biases. To address these
limitations, we introduce EarthScape, a novel, AI-ready multimodal dataset
specifically designed for surficial geologic mapping and Earth surface
analysis. EarthScape integrates high-resolution aerial RGB and near-infrared
(NIR) imagery, digital elevation models (DEM), multi-scale DEM-derived terrain
features, and hydrologic and infrastructure vector data. The dataset provides
detailed annotations for seven distinct surficial geologic classes encompassing
various geological processes. We present a comprehensive data processing
pipeline using open-sourced raw data and establish baseline benchmarks using
different spatial modalities to demonstrate the utility of EarthScape. As a
living dataset with a vision for expansion, EarthScape bridges the gap between
computer vision and Earth sciences, offering a valuable resource for advancing
research in multimodal learning, geospatial analysis, and geological mapping.
Our code is available at https://github.com/masseygeo/earthscape.
|
2503.15633 | Moritz B\"ohle | Am\'elie Royer, Moritz B\"ohle, Gabriel de Marmiesse, Laurent
Mazar\'e, Neil Zeghidour, Alexandre D\'efossez, Patrick P\'erez | Vision-Speech Models: Teaching Speech Models to Converse about Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The recent successes of Vision-Language models raise the question of how to
equivalently imbue a pretrained speech model with vision understanding, an
important milestone towards building a multimodal speech model able to freely
converse about images. Building such a conversational Vision-Speech model
brings its unique challenges: (i) paired image-speech datasets are much scarcer
than their image-text counterparts, (ii) ensuring real-time latency at
inference is crucial thus bringing compute and memory constraints, and (iii)
the model should preserve prosodic features (e.g., speaker tone) which cannot
be inferred from text alone. In this work, we introduce MoshiVis, augmenting a
recent dialogue speech LLM, Moshi, with visual inputs through lightweight
adaptation modules. An additional dynamic gating mechanism enables the model to
more easily switch between the visual inputs and unrelated conversation topics.
To reduce training costs, we design a simple one-stage, parameter-efficient
fine-tuning pipeline in which we leverage a mixture of image-text (i.e.,
"speechless") and image-speech samples. We evaluate the model on downstream
visual understanding tasks with both audio and text prompts, and report
qualitative samples of interactions with MoshiVis. Our inference code will be
made available, as well as the image-speech data used for audio evaluation.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 18:40:45 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Royer",
"Amélie",
""
],
[
"Böhle",
"Moritz",
""
],
[
"de Marmiesse",
"Gabriel",
""
],
[
"Mazaré",
"Laurent",
""
],
[
"Zeghidour",
"Neil",
""
],
[
"Défossez",
"Alexandre",
""
],
[
"Pérez",
"Patrick",
""
]
] | TITLE: Vision-Speech Models: Teaching Speech Models to Converse about Images
ABSTRACT: The recent successes of Vision-Language models raise the question of how to
equivalently imbue a pretrained speech model with vision understanding, an
important milestone towards building a multimodal speech model able to freely
converse about images. Building such a conversational Vision-Speech model
brings its unique challenges: (i) paired image-speech datasets are much scarcer
than their image-text counterparts, (ii) ensuring real-time latency at
inference is crucial thus bringing compute and memory constraints, and (iii)
the model should preserve prosodic features (e.g., speaker tone) which cannot
be inferred from text alone. In this work, we introduce MoshiVis, augmenting a
recent dialogue speech LLM, Moshi, with visual inputs through lightweight
adaptation modules. An additional dynamic gating mechanism enables the model to
more easily switch between the visual inputs and unrelated conversation topics.
To reduce training costs, we design a simple one-stage, parameter-efficient
fine-tuning pipeline in which we leverage a mixture of image-text (i.e.,
"speechless") and image-speech samples. We evaluate the model on downstream
visual understanding tasks with both audio and text prompts, and report
qualitative samples of interactions with MoshiVis. Our inference code will be
made available, as well as the image-speech data used for audio evaluation.
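A generic sketch of a gated cross-attention adapter of the kind described above: speech-LM hidden states attend to visual features and a learned gate controls how much is injected. Dimensions, the tanh gate, and the module layout are assumptions, not MoshiVis's actual architecture.

```python
# Generic sketch of a lightweight gated cross-attention adapter.
import torch
import torch.nn as nn

class GatedVisualAdapter(nn.Module):
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))   # starts closed: no visual influence

    def forward(self, hidden: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, dim) speech-LM states; visual: (batch, patches, dim)
        ctx, _ = self.attn(self.norm(hidden), visual, visual, need_weights=False)
        return hidden + torch.tanh(self.gate) * ctx   # gated residual injection

adapter = GatedVisualAdapter(dim=512)
speech_states = torch.randn(2, 50, 512)
visual_feats = torch.randn(2, 196, 512)
print(adapter(speech_states, visual_feats).shape)   # torch.Size([2, 50, 512])
```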
|
2503.15647 | Jumanh Atoum | Jumanh Atoum and Garrison L.H. Johnston and Nabil Simaan and Jie Ying
Wu | Multi-Modal Gesture Recognition from Video and Surgical Tool Pose
Information via Motion Invariants | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing surgical gestures in real-time is a stepping stone towards
automated activity recognition, skill assessment, intra-operative assistance,
and eventually surgical automation. The current robotic surgical systems
provide us with rich multi-modal data such as video and kinematics. While some
recent works in multi-modal neural networks learn the relationships between
vision and kinematics data, current approaches treat kinematics information as
independent signals, with no underlying relation between tool-tip poses.
However, instrument poses are geometrically related, and the underlying
geometry can aid neural networks in learning gesture representation. Therefore,
we propose combining motion invariant measures (curvature and torsion) with
vision and kinematics data using a relational graph network to capture the
underlying relations between different data streams. We show that gesture
recognition improves when combining invariant signals with tool position,
achieving 90.3\% frame-wise accuracy on the JIGSAWS suturing dataset. Our
results show that motion invariant signals coupled with position are better
representations of gesture motion compared to traditional position and
quaternion representations. Our results highlight the need for geometric-aware
modeling of kinematics for gesture recognition.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 19:02:58 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Atoum",
"Jumanh",
""
],
[
"Johnston",
"Garrison L. H.",
""
],
[
"Simaan",
"Nabil",
""
],
[
"Wu",
"Jie Ying",
""
]
] | TITLE: Multi-Modal Gesture Recognition from Video and Surgical Tool Pose
Information via Motion Invariants
ABSTRACT: Recognizing surgical gestures in real-time is a stepping stone towards
automated activity recognition, skill assessment, intra-operative assistance,
and eventually surgical automation. The current robotic surgical systems
provide us with rich multi-modal data such as video and kinematics. While some
recent works in multi-modal neural networks learn the relationships between
vision and kinematics data, current approaches treat kinematics information as
independent signals, with no underlying relation between tool-tip poses.
However, instrument poses are geometrically related, and the underlying
geometry can aid neural networks in learning gesture representation. Therefore,
we propose combining motion invariant measures (curvature and torsion) with
vision and kinematics data using a relational graph network to capture the
underlying relations between different data streams. We show that gesture
recognition improves when combining invariant signals with tool position,
achieving 90.3\% frame-wise accuracy on the JIGSAWS suturing dataset. Our
results show that motion invariant signals coupled with position are better
representations of gesture motion compared to traditional position and
quaternion representations. Our results highlight the need for geometric-aware
modeling of kinematics for gesture recognition.
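The motion invariants mentioned above can be computed from a tool-tip trajectory with standard finite-difference formulas; the sketch below does so for a helix, where curvature and torsion are constant. Smoothing, time normalization, and degenerate-case handling are simplified for illustration.

```python
# Curvature and torsion of a 3D tool-tip trajectory via finite differences.
import numpy as np

def curvature_torsion(traj: np.ndarray, dt: float = 1.0):
    """traj: (T, 3) positions sampled at interval dt; returns per-sample kappa, tau."""
    r1 = np.gradient(traj, dt, axis=0)          # velocity
    r2 = np.gradient(r1, dt, axis=0)            # acceleration
    r3 = np.gradient(r2, dt, axis=0)            # jerk
    cross = np.cross(r1, r2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(r1, axis=1)
    kappa = cross_norm / np.maximum(speed ** 3, 1e-12)
    tau = np.einsum("ij,ij->i", cross, r3) / np.maximum(cross_norm ** 2, 1e-12)
    return kappa, tau

# Example: a helix has constant curvature and torsion.
t = np.linspace(0, 4 * np.pi, 400)
helix = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)
kappa, tau = curvature_torsion(helix, dt=t[1] - t[0])
print(kappa[200], tau[200])   # approx. 0.8 and 0.4 for this helix
```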
|
2503.15653 | Miguel Ure\~na Pliego | Miguel Ure\~na Pliego, Rub\'en Mart\'inez Mar\'in, Nianfang Shi,
Takeru Shibayama, Ulrich Leth, Miguel Marchamalo Sacrist\'an | Transport-Related Surface Detection with Machine Learning: Analyzing
Temporal Trends in Madrid and Vienna | Preprint | null | 10.1016/j.rsase.2025.101503 | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | This study explores the integration of machine learning into urban aerial
image analysis, with a focus on identifying infrastructure surfaces for cars
and pedestrians and analyzing historical trends. It emphasizes the transition
from convolutional architectures to transformer-based pre-trained models,
underscoring their potential in global geospatial analysis. A workflow is
presented for automatically generating geospatial datasets, enabling the
creation of semantic segmentation datasets from various sources, including
WMS/WMTS links, vectorial cartography, and OpenStreetMap (OSM) overpass-turbo
requests. The developed code allows a fast dataset generation process for
training machine learning models using openly available data without manual
labelling. Using aerial imagery and vectorial data from the respective
geographical offices of Madrid and Vienna, two datasets were generated for car
and pedestrian surface detection. A transformer-based model was trained and
evaluated for each city, demonstrating good accuracy values. The historical
trend analysis involved applying the trained model to earlier images predating
the availability of vectorial data 10 to 20 years, successfully identifying
temporal trends in infrastructure for pedestrians and cars across different
city areas. This technique is applicable for municipal governments to gather
valuable data at a minimal cost.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 19:09:02 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Pliego",
"Miguel Ureña",
""
],
[
"Marín",
"Rubén Martínez",
""
],
[
"Shi",
"Nianfang",
""
],
[
"Shibayama",
"Takeru",
""
],
[
"Leth",
"Ulrich",
""
],
[
"Sacristán",
"Miguel Marchamalo",
""
]
] | TITLE: Transport-Related Surface Detection with Machine Learning: Analyzing
Temporal Trends in Madrid and Vienna
ABSTRACT: This study explores the integration of machine learning into urban aerial
image analysis, with a focus on identifying infrastructure surfaces for cars
and pedestrians and analyzing historical trends. It emphasizes the transition
from convolutional architectures to transformer-based pre-trained models,
underscoring their potential in global geospatial analysis. A workflow is
presented for automatically generating geospatial datasets, enabling the
creation of semantic segmentation datasets from various sources, including
WMS/WMTS links, vectorial cartography, and OpenStreetMap (OSM) overpass-turbo
requests. The developed code allows a fast dataset generation process for
training machine learning models using openly available data without manual
labelling. Using aerial imagery and vectorial data from the respective
geographical offices of Madrid and Vienna, two datasets were generated for car
and pedestrian surface detection. A transformer-based model was trained and
evaluated for each city, demonstrating good accuracy values. The historical
trend analysis involved applying the trained model to earlier images predating
the availability of vectorial data 10 to 20 years, successfully identifying
temporal trends in infrastructure for pedestrians and cars across different
city areas. This technique is applicable for municipal governments to gather
valuable data at a minimal cost.
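To illustrate the OSM side of such a workflow, the sketch below queries pedestrian ways in a bounding box from the public Overpass API; rasterizing the result into masks aligned with WMS/WMTS imagery is omitted, and the endpoint, tag filter, and bounding box are assumptions based on common Overpass usage rather than the paper's code.

```python
# Illustrative Overpass API request for pedestrian ways in a bounding box.
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Bounding box (south, west, north, east) around central Vienna, for illustration.
bbox = (48.20, 16.36, 48.21, 16.38)
query = f"""
[out:json][timeout:25];
way["highway"~"footway|pedestrian|path"]({bbox[0]},{bbox[1]},{bbox[2]},{bbox[3]});
out geom;
"""

response = requests.post(OVERPASS_URL, data={"data": query}, timeout=60)
response.raise_for_status()
ways = response.json().get("elements", [])
print(f"retrieved {len(ways)} pedestrian ways")
```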
|
2503.15676 | Taehyoung Kim | C\'edric Vincent, Taehyoung Kim, Henri Mee{\ss} | High Temporal Consistency through Semantic Similarity Propagation in
Semi-Supervised Video Semantic Segmentation for Autonomous Flight | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semantic segmentation from RGB cameras is essential to the perception of
autonomous flying vehicles. The stability of predictions through the captured
videos is paramount to their reliability and, by extension, to the
trustworthiness of the agents. In this paper, we propose a lightweight video
semantic segmentation approach, suited to onboard real-time inference, achieving
high temporal consistency on aerial data through Semantic Similarity
Propagation across frames. SSP temporally propagates the predictions of an
efficient image segmentation model with global registration alignment to
compensate for camera movements. It combines the current estimation and the
prior prediction with linear interpolation using weights computed from the
feature similarities of the two frames. Because data availability is a
challenge in this domain, we propose a consistency-aware Knowledge Distillation
training procedure for sparsely labeled datasets with few annotations. Using a
large image segmentation model as a teacher to train the efficient SSP, we
leverage the strong correlations between labeled and unlabeled frames in the
same training videos to obtain high-quality supervision on all frames. KD-SSP
obtains a significant temporal consistency increase over the base image
segmentation model of 12.5% and 6.7% TC on UAVid and RuralScapes respectively,
with higher accuracy and comparable inference speed. On these aerial datasets,
KD-SSP provides a better segmentation quality and inference speed trade-off
than other video methods proposed for general applications and shows
considerably higher consistency. The code will be made publicly available upon
acceptance.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 20:12:07 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Vincent",
"Cédric",
""
],
[
"Kim",
"Taehyoung",
""
],
[
"Meeß",
"Henri",
""
]
] | TITLE: High Temporal Consistency through Semantic Similarity Propagation in
Semi-Supervised Video Semantic Segmentation for Autonomous Flight
ABSTRACT: Semantic segmentation from RGB cameras is essential to the perception of
autonomous flying vehicles. The stability of predictions through the captured
videos is paramount to their reliability and, by extension, to the
trustworthiness of the agents. In this paper, we propose a lightweight video
semantic segmentation approach, suited to onboard real-time inference, achieving
high temporal consistency on aerial data through Semantic Similarity
Propagation across frames. SSP temporally propagates the predictions of an
efficient image segmentation model with global registration alignment to
compensate for camera movements. It combines the current estimation and the
prior prediction with linear interpolation using weights computed from the
feature similarities of the two frames. Because data availability is a
challenge in this domain, we propose a consistency-aware Knowledge Distillation
training procedure for sparsely labeled datasets with few annotations. Using a
large image segmentation model as a teacher to train the efficient SSP, we
leverage the strong correlations between labeled and unlabeled frames in the
same training videos to obtain high-quality supervision on all frames. KD-SSP
obtains a significant temporal consistency increase over the base image
segmentation model of 12.5% and 6.7% TC on UAVid and RuralScapes respectively,
with higher accuracy and comparable inference speed. On these aerial datasets,
KD-SSP provides a better segmentation quality and inference speed trade-off
than other video methods proposed for general applications and shows
considerably higher consistency. The code will be made publicly available upon
acceptance.
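A simplified per-pixel sketch of similarity-weighted propagation: blend the current frame's class probabilities with the previous, already registration-aligned prediction, using cosine similarity between frame features as the interpolation weight. Global registration and the exact weighting scheme used by SSP are omitted or assumed.

```python
# Simplified similarity-weighted propagation of per-pixel class probabilities.
import torch
import torch.nn.functional as F

def propagate(prev_probs, cur_probs, prev_feats, cur_feats):
    # prev_probs/cur_probs: (C, H, W) class probabilities; *_feats: (D, H, W)
    sim = F.cosine_similarity(prev_feats, cur_feats, dim=0)     # (H, W) in [-1, 1]
    w = sim.clamp(min=0.0).unsqueeze(0)                         # trust prior only if similar
    return w * prev_probs + (1.0 - w) * cur_probs               # linear interpolation

C, D, H, W = 5, 16, 64, 64
prev_probs = torch.softmax(torch.randn(C, H, W), dim=0)
cur_probs = torch.softmax(torch.randn(C, H, W), dim=0)
prev_feats, cur_feats = torch.randn(D, H, W), torch.randn(D, H, W)

out = propagate(prev_probs, cur_probs, prev_feats, cur_feats)
print(out.shape, torch.allclose(out.sum(dim=0), torch.ones(H, W)))  # (5,64,64) True
```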
|
2503.15681 | Fausto German | Fausto German, Brian Keith, Chris North | Narrative Trails: A Method for Coherent Storyline Extraction via Maximum
Capacity Path Optimization | Eighth Text2Story Workshop at the 47th European Conference on
Information Retrieval (ECIR 2025). The code for our algorithm, evaluations,
and examples are available at
https://github.com/faustogerman/narrative-trails | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Traditional information retrieval is primarily concerned with finding
relevant information from large datasets without imposing a structure within
the retrieved pieces of data. However, structuring information in the form of
narratives--ordered sets of documents that form coherent storylines--allows us
to identify, interpret, and share insights about the connections and
relationships between the ideas presented in the data. Despite their
significance, current approaches for algorithmically extracting storylines from
data are scarce, with existing methods primarily relying on intricate
word-based heuristics and auxiliary document structures. Moreover, many of
these methods are difficult to scale to large datasets and general contexts, as
they are designed to extract storylines for narrow tasks. In this paper, we
propose Narrative Trails, an efficient, general-purpose method for extracting
coherent storylines in large text corpora. Specifically, our method uses the
semantic-level information embedded in the latent space of deep learning models
to build a sparse coherence graph and extract narratives that maximize the
minimum coherence of the storylines. By quantitatively evaluating our proposed
methods on two distinct narrative extraction tasks, we show the
generalizability and scalability of Narrative Trails in multiple contexts while
also simplifying the extraction pipeline.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 20:25:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"German",
"Fausto",
""
],
[
"Keith",
"Brian",
""
],
[
"North",
"Chris",
""
]
] | TITLE: Narrative Trails: A Method for Coherent Storyline Extraction via Maximum
Capacity Path Optimization
ABSTRACT: Traditional information retrieval is primarily concerned with finding
relevant information from large datasets without imposing a structure within
the retrieved pieces of data. However, structuring information in the form of
narratives--ordered sets of documents that form coherent storylines--allows us
to identify, interpret, and share insights about the connections and
relationships between the ideas presented in the data. Despite their
significance, current approaches for algorithmically extracting storylines from
data are scarce, with existing methods primarily relying on intricate
word-based heuristics and auxiliary document structures. Moreover, many of
these methods are difficult to scale to large datasets and general contexts, as
they are designed to extract storylines for narrow tasks. In this paper, we
propose Narrative Trails, an efficient, general-purpose method for extracting
coherent storylines in large text corpora. Specifically, our method uses the
semantic-level information embedded in the latent space of deep learning models
to build a sparse coherence graph and extract narratives that maximize the
minimum coherence of the storylines. By quantitatively evaluating our proposed
methods on two distinct narrative extraction tasks, we show the
generalizability and scalability of Narrative Trails in multiple contexts while
also simplifying the extraction pipeline.
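Extracting a storyline that maximizes the minimum coherence along the path is the classic maximum capacity (widest) path problem; the sketch below solves it with a modified Dijkstra on a made-up toy coherence graph standing in for the sparse graph built from embeddings.

```python
# Maximum capacity path: the path whose weakest edge (coherence) is as strong as possible.
import heapq

def max_capacity_path(graph, src, dst):
    """graph: {u: {v: coherence}}. Returns (bottleneck, path) maximizing the min edge."""
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]                 # max-heap on bottleneck capacity
    while heap:
        neg_cap, u = heapq.heappop(heap)
        cap = -neg_cap
        if u == dst:
            path = [u]
            while u != src:
                u = prev[u]
                path.append(u)
            return cap, path[::-1]
        for v, w in graph.get(u, {}).items():
            new_cap = min(cap, w)
            if new_cap > best.get(v, 0.0):
                best[v] = new_cap
                prev[v] = u
                heapq.heappush(heap, (-new_cap, v))
    return 0.0, []

coherence_graph = {
    "A": {"B": 0.9, "C": 0.4},
    "B": {"D": 0.7},
    "C": {"D": 0.8},
    "D": {},
}
print(max_capacity_path(coherence_graph, "A", "D"))  # (0.7, ['A', 'B', 'D'])
```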
|
2503.15708 | Sam Narimani | Sam Narimani, Solveig Roth Hoff, Kathinka Dahli Kurz, Kjell-Inge
Gjesdal, Jurgen Geisler, Endre Grovik | Sustainable Deep Learning-Based Breast Lesion Segmentation: Impact of
Breast Region Segmentation on Performance | null | null | null | null | cs.CV physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Purpose: Segmentation of the breast lesion in dynamic contrast-enhanced
magnetic resonance imaging (DCE-MRI) is an essential step to accurately
diagnose and plan treatment and monitor progress. This study aims to highlight
the impact of breast region segmentation (BRS) on deep learning-based breast
lesion segmentation (BLS) in breast DCE-MRI.
Methods: Using the Stavanger Dataset, containing primarily 59 DCE-MRI scans, and
UNet++ as the deep learning model, four different processes were conducted to
compare the effect of BRS on BLS. These four approaches included the whole volume
without BRS, the whole volume with BRS, BRS with the selected lesion slices, and
lastly an optimal volume with BRS. Preprocessing methods such as augmentation and
oversampling were used to enlarge the small dataset, enforce uniform data shapes,
and improve model performance. The optimal volume size was determined by a precise
process ensuring that all lesions were contained in the selected slices. To evaluate
the model, a hybrid loss function combining dice, focal, and cross-entropy terms was
used together with 5-fold cross validation, and lastly a randomly split test dataset
was used to evaluate model performance on unseen data for each of the four
approaches.
Results: The results demonstrate that using BRS considerably improved model
performance and validation. The last approach -- the optimal volume with BRS --
improved performance by around 50 percent compared to the approach without BRS,
demonstrating how effective BRS is for BLS. Moreover, a large reduction in energy
consumption, by up to 450 percent, introduces a green solution toward a more
environmentally sustainable approach for future work on large datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 21:42:33 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Narimani",
"Sam",
""
],
[
"Hoff",
"Solveig Roth",
""
],
[
"Kurz",
"Kathinka Dahli",
""
],
[
"Gjesdal",
"Kjell-Inge",
""
],
[
"Geisler",
"Jurgen",
""
],
[
"Grovik",
"Endre",
""
]
] | TITLE: Sustainable Deep Learning-Based Breast Lesion Segmentation: Impact of
Breast Region Segmentation on Performance
ABSTRACT: Purpose: Segmentation of the breast lesion in dynamic contrast-enhanced
magnetic resonance imaging (DCE-MRI) is an essential step to accurately
diagnose and plan treatment and monitor progress. This study aims to highlight
the impact of breast region segmentation (BRS) on deep learning-based breast
lesion segmentation (BLS) in breast DCE-MRI.
Methods: Using the Stavanger Dataset, containing primarily 59 DCE-MRI scans, and
UNet++ as the deep learning model, four different processes were conducted to
compare the effect of BRS on BLS. These four approaches included the whole volume
without BRS, the whole volume with BRS, BRS with the selected lesion slices, and
lastly an optimal volume with BRS. Preprocessing methods such as augmentation and
oversampling were used to enlarge the small dataset, enforce uniform data shapes,
and improve model performance. The optimal volume size was determined by a precise
process ensuring that all lesions were contained in the selected slices. To evaluate
the model, a hybrid loss function combining dice, focal, and cross-entropy terms was
used together with 5-fold cross validation, and lastly a randomly split test dataset
was used to evaluate model performance on unseen data for each of the four
approaches.
Results: The results demonstrate that using BRS considerably improved model
performance and validation. The last approach -- the optimal volume with BRS --
improved performance by around 50 percent compared to the approach without BRS,
demonstrating how effective BRS is for BLS. Moreover, a large reduction in energy
consumption, by up to 450 percent, introduces a green solution toward a more
environmentally sustainable approach for future work on large datasets.
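A common formulation of the hybrid loss mentioned above, combining dice, focal, and cross-entropy terms for binary lesion masks; the equal term weights and gamma=2 are assumptions, since the study's exact weighting is not given here.

```python
# Sketch of a hybrid segmentation loss (dice + focal + binary cross-entropy).
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, gamma=2.0, eps=1e-6):
    """logits, target: (N, 1, H, W); target is a binary mask."""
    probs = torch.sigmoid(logits)

    # Dice loss
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)

    # Binary cross-entropy (per pixel)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")

    # Focal loss: down-weight easy pixels
    p_t = probs * target + (1.0 - probs) * (1.0 - target)
    focal = ((1.0 - p_t) ** gamma) * bce

    return dice.mean() + bce.mean() + focal.mean()

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
loss = hybrid_loss(logits, mask)
loss.backward()
print(float(loss))
```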
|
2503.15711 | Yitong Yang | Yitong Yang, Muhammad Naeem, Marly Van Assen, Jerome Yerly, Davide
Piccini, Matthias Stuber, John Oshinski, Matthias Chung | 5D free-running, reconstruction, variable projection, ADMM, VPAL | null | null | null | null | physics.med-ph cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Ferumoxytol-enhanced 5D free-running whole heart CMR provides image
quality comparable to CTA, but requires hours-long reconstruction time,
preventing clinical usage. This study developed a variable projection augmented
Lagrangian (VPAL) method for 5D motion-resolved image reconstruction and
compared it with alternating direction method of multipliers (ADMM) in five
numerical simulations and 15 in-vivo pediatric data sets.
Approach: Relative error of the reconstructed images against the ground-truth
images was assessed in numerical simulations. In-vivo analysis compared
reconstruction time, mid-short axis (SA) blood-myocardium sharpness, left
ventricular ejection fraction (LVEF), and a radiologist's image quality ratings
between VPAL and ADMM. A paired t-test (p<0.05) was used to determine
statistical significance, while linear regression and Bland-Altman analysis for
agreement assessments.
Results: VPAL and ADMM had similar relative errors compared to the ground
truth, p = 0.07. In in-vivo datasets, VPAL reduced the reconstruction time from
16.3 +/- 3.6 hours (ADMM) to 4.7 +/- 1.1 hours (VPAL), p=1e-10.
Blood-myocardium border sharpness in VPAL correlates closely with that of ADMM,
R^2 = 0.97. The LVEF values measured by VPAL and ADMM reconstructions are largely
similar, 56 +/- 6 % in ADMM and 56 +/- 6 % in VPAL, p=0.55. Both VPAL and ADMM
reconstructions have good to excellent diagnostic ratings (VPAL vs. ADMM: 3.9
+/- 0.3 vs. 3.8 +/- 0.4 in 2-chamber; 3.9 +/- 0.4 vs. 3.9 +/- in 4-chamber; 3.7
+/- 0.5 vs. 3.7 +/- 0.5 in mid-SA reformatted views. Conclusion: VPAL enables
faster reconstruction than ADMM while maintaining equivalent image quality for
functional assessments, supporting its potential for clinical use.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 21:44:45 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yang",
"Yitong",
""
],
[
"Naeem",
"Muhammad",
""
],
[
"Van Assen",
"Marly",
""
],
[
"Yerly",
"Jerome",
""
],
[
"Piccini",
"Davide",
""
],
[
"Stuber",
"Matthias",
""
],
[
"Oshinski",
"John",
""
],
[
"Chung",
"Matthias",
""
]
] | TITLE: 5D free-running, reconstruction, variable projection, ADMM, VPAL
ABSTRACT: Purpose: Ferumoxytal-enhanced 5D free-running whole heart CMR provides image
quality comparable to CTA, but requires hours-long reconstruction time,
preventing clinical usage. This study developed a variable projection augmented
Lagrangian (VPAL) method for 5D motion-resolved image reconstruction and
compared it with alternating direction method of multipliers (ADMM) in five
numerical simulations and 15 in-vivo pediatric data sets.
Approach: Relative error of the reconstructed images against the ground-truth
images was assessed in numerical simulations. In-vivo analysis compared
reconstruction time, mid-short axis (SA) blood-myocardium sharpness, left
ventricular ejection fraction (LVEF), and a radiologist's image quality ratings
between VPAL and ADMM. A paired t-test (p<0.05) was used to determine
statistical significance, while linear regression and Bland-Altman analysis for
agreement assessments.
Results: VPAL and ADMM had similar relative errors compared to the ground
truth, p = 0.07. In in-vivo datasets, VPAL reduced the reconstruction time from
16.3 +/- 3.6 hours (ADMM) to 4.7 +/- 1.1 hours (VPAL), p=1e-10.
Blood-myocardium border sharpness in VPAL correlates closely with that of ADMM,
R^2 = 0.97. The LVEF values measured by VPAL and ADMM reconstructions are largely
similar, 56 +/- 6 % in ADMM and 56 +/- 6 % in VPAL, p=0.55. Both VPAL and ADMM
reconstructions have good to excellent diagnostic ratings (VPAL vs. ADMM: 3.9
+/- 0.3 vs. 3.8 +/- 0.4 in 2-chamber; 3.9 +/- 0.4 vs. 3.9 +/- in 4-chamber; 3.7
+/- 0.5 vs. 3.7 +/- 0.5 in mid-SA reformatted views). Conclusion: VPAL enables
faster reconstruction than ADMM while maintaining equivalent image quality for
functional assessments, supporting its potential for clinical use.
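The comparison above relies on a paired t-test and Bland-Altman analysis; the sketch below shows the generic form of both with SciPy and NumPy, using synthetic placeholder values rather than the study's measurements.

```python
# Generic sketch of the paired t-test and Bland-Altman limits of agreement used
# to compare two reconstructions (values below are synthetic placeholders).
import numpy as np
from scipy import stats

lvef_admm = np.array([55.0, 58.0, 61.0, 54.0, 57.0])  # hypothetical LVEF (%)
lvef_vpal = np.array([56.0, 57.5, 60.0, 54.5, 57.5])

# Paired t-test (significance threshold p < 0.05, as in the abstract)
t_stat, p_value = stats.ttest_rel(lvef_admm, lvef_vpal)

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = lvef_vpal - lvef_admm
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"p={p_value:.3f}, bias={bias:.2f}, LoA=({bias - loa:.2f}, {bias + loa:.2f})")
```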
|
2503.15712 | Weiwen Hu | Weiwen Hu, Niccol\`o Parodi, Marcus Zepp, Ingo Feldmann, Oliver
Schreer, Peter Eisert | SPNeRF: Open Vocabulary 3D Neural Scene Segmentation with Superpoints | In Proceedings of the 20th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications (2025) | null | 10.5220/0013255100003912 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-vocabulary segmentation, powered by large visual-language models like
CLIP, has expanded 2D segmentation capabilities beyond fixed classes predefined
by the dataset, enabling zero-shot understanding across diverse scenes.
Extending these capabilities to 3D segmentation introduces challenges, as
CLIP's image-based embeddings often lack the geometric detail necessary for 3D
scene segmentation. Recent methods tend to address this by introducing
additional segmentation models or replacing CLIP with variations trained on
segmentation data, which leads to redundancy or a loss of CLIP's general language
capabilities. To overcome this limitation, we introduce SPNeRF, a NeRF-based
zero-shot 3D segmentation approach that leverages geometric priors. We
integrate geometric primitives derived from the 3D scene into NeRF training to
produce primitive-wise CLIP features, avoiding the ambiguity of point-wise
features. Additionally, we propose a primitive-based merging mechanism enhanced
with affinity scores. Without relying on additional segmentation models, our
method further explores CLIP's capability for 3D segmentation and achieves
notable improvements over the original LERF.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 21:45:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hu",
"Weiwen",
""
],
[
"Parodi",
"Niccolò",
""
],
[
"Zepp",
"Marcus",
""
],
[
"Feldmann",
"Ingo",
""
],
[
"Schreer",
"Oliver",
""
],
[
"Eisert",
"Peter",
""
]
] | TITLE: SPNeRF: Open Vocabulary 3D Neural Scene Segmentation with Superpoints
ABSTRACT: Open-vocabulary segmentation, powered by large visual-language models like
CLIP, has expanded 2D segmentation capabilities beyond fixed classes predefined
by the dataset, enabling zero-shot understanding across diverse scenes.
Extending these capabilities to 3D segmentation introduces challenges, as
CLIP's image-based embeddings often lack the geometric detail necessary for 3D
scene segmentation. Recent methods tend to address this by introducing
additional segmentation models or replacing CLIP with variations trained on
segmentation data, which leads to redundancy or a loss of CLIP's general language
capabilities. To overcome this limitation, we introduce SPNeRF, a NeRF-based
zero-shot 3D segmentation approach that leverages geometric priors. We
integrate geometric primitives derived from the 3D scene into NeRF training to
produce primitive-wise CLIP features, avoiding the ambiguity of point-wise
features. Additionally, we propose a primitive-based merging mechanism enhanced
with affinity scores. Without relying on additional segmentation models, our
method further explores CLIP's capability for 3D segmentation and achieves
notable improvements over the original LERF.
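The primitive-wise features described above amount to pooling per-point features within each geometric primitive (superpoint); the sketch below shows one plain way to do that pooling, with the array names and the choice of mean pooling as assumptions rather than the paper's exact design.

```python
# Minimal sketch of pooling per-point features into primitive-wise (superpoint)
# features; shapes and names are illustrative assumptions.
import numpy as np

def pool_features_by_primitive(point_features, primitive_ids):
    """point_features: (N, D) array; primitive_ids: (N,) integer labels."""
    num_primitives = int(primitive_ids.max()) + 1
    dim = point_features.shape[1]
    pooled = np.zeros((num_primitives, dim), dtype=point_features.dtype)
    counts = np.bincount(primitive_ids, minlength=num_primitives)
    # Scatter-add each point's feature into its primitive, then normalize
    np.add.at(pooled, primitive_ids, point_features)
    return pooled / np.maximum(counts, 1)[:, None]
```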
|
2503.15715 | Ryota Takamido | Ryota Takamido and Jun Ota | Experience-based Optimal Motion Planning Algorithm for Solving Difficult
Planning Problems Using a Limited Dataset | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study aims to address the key challenge of obtaining a high-quality
solution path within a short calculation time by generalizing a limited
dataset. In the informed experience-driven random trees connect star (IERTC*)
process, the algorithm flexibly explores the search trees by morphing the micro
paths generated from a single experience while reducing the path cost by
introducing a re-wiring process and an informed sampling process. The core idea
of this algorithm is to apply different strategies depending on the complexity
of the local environment; for example, it adopts a more complex curved
trajectory if obstacles are densely arranged near the search tree, and it
adopts a simpler straight line if the local environment is sparse. The results
of experiments using a general motion benchmark test revealed that IERTC*
significantly improved the planning success rate in difficult problems in the
cluttered environment (an average improvement of 49.3% compared to the
state-of-the-art algorithm) while also significantly reducing the solution cost
(a reduction of 56.3%) when using one hundred experiences. Furthermore, the
results demonstrated outstanding planning performance even when only one
experience was available (a 43.8% improvement in success rate and a 57.8%
reduction in solution cost).
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 21:52:18 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Takamido",
"Ryota",
""
],
[
"Ota",
"Jun",
""
]
] | TITLE: Experience-based Optimal Motion Planning Algorithm for Solving Difficult
Planning Problems Using a Limited Dataset
ABSTRACT: This study aims to address the key challenge of obtaining a high-quality
solution path within a short calculation time by generalizing a limited
dataset. In the informed experience-driven random trees connect star (IERTC*)
process, the algorithm flexibly explores the search trees by morphing the micro
paths generated from a single experience while reducing the path cost by
introducing a re-wiring process and an informed sampling process. The core idea
of this algorithm is to apply different strategies depending on the complexity
of the local environment; for example, it adopts a more complex curved
trajectory if obstacles are densely arranged near the search tree, and it
adopts a simpler straight line if the local environment is sparse. The results
of experiments using a general motion benchmark test revealed that IERTC*
significantly improved the planning success rate in difficult problems in the
cluttered environment (an average improvement of 49.3% compared to the
state-of-the-art algorithm) while also significantly reducing the solution cost
(a reduction of 56.3%) when using one hundred experiences. Furthermore, the
results demonstrated outstanding planning performance even when only one
experience was available (a 43.8% improvement in success rate and a 57.8%
reduction in solution cost).
|
2503.15718 | Mathilde Aguiar | Mathilde Aguiar, Pierre Zweigenbaum, Nona Naderi | Am I eligible? Natural Language Inference for Clinical Trial Patient
Recruitment: the Patient's Point of View | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recruiting patients to participate in clinical trials can be challenging and
time-consuming. Usually, participation in a clinical trial is initiated by a
healthcare professional and proposed to the patient. Promoting clinical trials
directly to patients via online recruitment might help to reach them more
efficiently. In this study, we address the case where a patient is initiating
their own recruitment process and wants to determine whether they are eligible
for a given clinical trial, using their own language to describe their medical
profile. To study whether this creates difficulties in the patient trial
matching process, we design a new dataset and task, Natural Language Inference
for Patient Recruitment (NLI4PR), in which patient language profiles must be
matched to clinical trials. We create it by adapting the TREC 2022 Clinical
Trial Track dataset, which provides patients' medical profiles, and rephrasing
them manually using patient language. We also use the associated clinical trial
reports where the patients are either eligible or excluded. We prompt several
open-source Large Language Models on our task and achieve F1 scores from 56.5 to
71.8 using patient language, against 64.7 to 73.1 for the same task using
medical language. When using patient language, we observe only a small loss in
performance for the best model, suggesting that having the patient as a
starting point could be adopted to help recruit patients for clinical trials.
The corpus and code bases are all freely available on our GitHub and
HuggingFace repositories.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 22:07:19 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Aguiar",
"Mathilde",
""
],
[
"Zweigenbaum",
"Pierre",
""
],
[
"Naderi",
"Nona",
""
]
] | TITLE: Am I eligible? Natural Language Inference for Clinical Trial Patient
Recruitment: the Patient's Point of View
ABSTRACT: Recruiting patients to participate in clinical trials can be challenging and
time-consuming. Usually, participation in a clinical trial is initiated by a
healthcare professional and proposed to the patient. Promoting clinical trials
directly to patients via online recruitment might help to reach them more
efficiently. In this study, we address the case where a patient is initiating
their own recruitment process and wants to determine whether they are eligible
for a given clinical trial, using their own language to describe their medical
profile. To study whether this creates difficulties in the patient trial
matching process, we design a new dataset and task, Natural Language Inference
for Patient Recruitment (NLI4PR), in which patient language profiles must be
matched to clinical trials. We create it by adapting the TREC 2022 Clinical
Trial Track dataset, which provides patients' medical profiles, and rephrasing
them manually using patient language. We also use the associated clinical trial
reports where the patients are either eligible or excluded. We prompt several
open-source Large Language Models on our task and achieve F1 scores from 56.5 to
71.8 using patient language, against 64.7 to 73.1 for the same task using
medical language. When using patient language, we observe only a small loss in
performance for the best model, suggesting that having the patient as a
starting point could be adopted to help recruit patients for clinical trials.
The corpus and code bases are all freely available on our GitHub and
HuggingFace repositories.
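Framing eligibility as natural language inference and scoring with F1 can be sketched as below; the prompt wording, the binary label set, and the use of scikit-learn's f1_score are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of patient-trial eligibility framed as natural language
# inference, with F1 scoring; prompt wording and labels are assumptions.
from sklearn.metrics import f1_score

def build_prompt(patient_profile: str, trial_criteria: str) -> str:
    return (
        "Patient description (in the patient's own words):\n"
        f"{patient_profile}\n\n"
        "Clinical trial eligibility criteria:\n"
        f"{trial_criteria}\n\n"
        "Is the patient eligible for this trial? Answer 'eligible' or 'excluded'."
    )

# Hypothetical gold labels and model predictions (1 = eligible, 0 = excluded)
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
print("F1:", f1_score(gold, pred))
```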
|
2503.15731 | Kun Zhan | Yuqing Zhang, Qi Han, Ligeng Wang, Kai Cheng, Bo Wang, Kun Zhan | Graph-Weighted Contrastive Learning for Semi-Supervised Hyperspectral
Image Classification | Journal of Electronic Imaging, 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Most existing graph-based semi-supervised hyperspectral image classification
methods rely on superpixel partitioning techniques. However, they suffer from
misclassification of certain pixels due to inaccuracies in superpixel
boundaries, i.e., the initial inaccuracies in superpixel partitioning limit
overall classification performance. In this paper, we propose a novel
graph-weighted contrastive learning approach that avoids the use of superpixel
partitioning and directly employs neural networks to learn hyperspectral image
representation. Furthermore, while many approaches require all graph nodes to
be available during training, our approach supports mini-batch training by
processing only a subset of nodes at a time, reducing computational complexity
and improving generalization to unseen nodes. Experimental results on three
widely-used datasets demonstrate the effectiveness of the proposed approach
compared to baselines relying on superpixel partitioning.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 22:55:52 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Yuqing",
""
],
[
"Han",
"Qi",
""
],
[
"Wang",
"Ligeng",
""
],
[
"Cheng",
"Kai",
""
],
[
"Wang",
"Bo",
""
],
[
"Zhan",
"Kun",
""
]
] | TITLE: Graph-Weighted Contrastive Learning for Semi-Supervised Hyperspectral
Image Classification
ABSTRACT: Most existing graph-based semi-supervised hyperspectral image classification
methods rely on superpixel partitioning techniques. However, they suffer from
misclassification of certain pixels due to inaccuracies in superpixel
boundaries, i.e., the initial inaccuracies in superpixel partitioning limit
overall classification performance. In this paper, we propose a novel
graph-weighted contrastive learning approach that avoids the use of superpixel
partitioning and directly employs neural networks to learn hyperspectral image
representation. Furthermore, while many approaches require all graph nodes to
be available during training, our approach supports mini-batch training by
processing only a subset of nodes at a time, reducing computational complexity
and improving generalization to unseen nodes. Experimental results on three
widely-used datasets demonstrate the effectiveness of the proposed approach
compared to baselines relying on superpixel partitioning.
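The mini-batch training described above rests on a contrastive objective over node embeddings; the sketch below shows only the generic InfoNCE backbone over a mini-batch, leaving out the paper's graph weighting, which would further modulate these terms.

```python
# Generic InfoNCE-style contrastive loss over a mini-batch of node embeddings
# from two augmented views; the graph weighting of the paper is not shown.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (B, D) embeddings of the same mini-batch of nodes under two views;
    matching rows are positives, all other rows serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```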
|
2503.15737 | Heming Zhang | Heming Zhang, Wenyu Li, Di Huang, Yinjie Tang, Yixin Chen, Philip
Payne, Fuhai Li | KoGNER: A Novel Framework for Knowledge Graph Distillation on Biomedical
Named Entity Recognition | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Named Entity Recognition (NER) is a fundamental task in Natural Language
Processing (NLP) that plays a crucial role in information extraction, question
answering, and knowledge-based systems. Traditional deep learning-based NER
models often struggle with domain-specific generalization and suffer from data
sparsity issues. In this work, we introduce Knowledge Graph distilled for Named
Entity Recognition (KoGNER), a novel approach that integrates Knowledge Graph
(KG) distillation into NER models to enhance entity recognition performance.
Our framework leverages structured knowledge representations from KGs to enrich
contextual embeddings, thereby improving entity classification and reducing
ambiguity in entity detection. KoGNER employs a two-step process: (1) Knowledge
Distillation, where external knowledge sources are distilled into a lightweight
representation for seamless integration with NER models, and (2) Entity-Aware
Augmentation, which integrates contextual embeddings that have been enriched
with knowledge graph information directly into a GNN, thereby improving the
model's ability to understand and represent entity relationships. Experimental
results on benchmark datasets demonstrate that KoGNER achieves state-of-the-art
performance, outperforming finetuned NER models and LLMs by a significant
margin. These findings suggest that leveraging knowledge graphs as auxiliary
information can significantly improve NER accuracy, making KoGNER a promising
direction for future research in knowledge-aware NLP.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 22:59:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Heming",
""
],
[
"Li",
"Wenyu",
""
],
[
"Huang",
"Di",
""
],
[
"Tang",
"Yinjie",
""
],
[
"Chen",
"Yixin",
""
],
[
"Payne",
"Philip",
""
],
[
"Li",
"Fuhai",
""
]
] | TITLE: KoGNER: A Novel Framework for Knowledge Graph Distillation on Biomedical
Named Entity Recognition
ABSTRACT: Named Entity Recognition (NER) is a fundamental task in Natural Language
Processing (NLP) that plays a crucial role in information extraction, question
answering, and knowledge-based systems. Traditional deep learning-based NER
models often struggle with domain-specific generalization and suffer from data
sparsity issues. In this work, we introduce Knowledge Graph distilled for Named
Entity Recognition (KoGNER), a novel approach that integrates Knowledge Graph
(KG) distillation into NER models to enhance entity recognition performance.
Our framework leverages structured knowledge representations from KGs to enrich
contextual embeddings, thereby improving entity classification and reducing
ambiguity in entity detection. KoGNER employs a two-step process: (1) Knowledge
Distillation, where external knowledge sources are distilled into a lightweight
representation for seamless integration with NER models, and (2) Entity-Aware
Augmentation, which integrates contextual embeddings that have been enriched
with knowledge graph information directly into a GNN, thereby improving the
model's ability to understand and represent entity relationships. Experimental
results on benchmark datasets demonstrate that KoGNER achieves state-of-the-art
performance, outperforming finetuned NER models and LLMs by a significant
margin. These findings suggest that leveraging knowledge graphs as auxiliary
information can significantly improve NER accuracy, making KoGNER a promising
direction for future research in knowledge-aware NLP.
|
2503.15742 | Sarosij Bose | Sarosij Bose, Arindam Dutta, Sayak Nag, Junge Zhang, Jiachen Li,
Konstantinos Karydis, Amit K. Roy Chowdhury | Uncertainty-Aware Diffusion Guided Refinement of 3D Scenes | 13 pages, 7 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Reconstructing 3D scenes from a single image is a fundamentally ill-posed
task due to the severely under-constrained nature of the problem. Consequently,
when the scene is rendered from novel camera views, existing single image to 3D
reconstruction methods render incoherent and blurry views. This problem is
exacerbated when the unseen regions are far away from the input camera. In this
work, we address these inherent limitations in existing single image-to-3D
scene feedforward networks. To alleviate the poor performance due to
insufficient information beyond the input image's view, we leverage a strong
generative prior in the form of a pre-trained latent video diffusion model, for
iterative refinement of a coarse scene represented by optimizable Gaussian
parameters. To ensure that the style and texture of the generated images align
with that of the input image, we incorporate on-the-fly Fourier-style transfer
between the generated images and the input image. Additionally, we design a
semantic uncertainty quantification module that calculates the per-pixel
entropy and yields uncertainty maps used to guide the refinement process from
the most confident pixels while discarding the remaining highly uncertain ones.
We conduct extensive experiments on real-world scene datasets, including
in-domain RealEstate-10K and out-of-domain KITTI-v2, showing that our approach
can provide more realistic and high-fidelity novel view synthesis results
compared to existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 23:14:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Bose",
"Sarosij",
""
],
[
"Dutta",
"Arindam",
""
],
[
"Nag",
"Sayak",
""
],
[
"Zhang",
"Junge",
""
],
[
"Li",
"Jiachen",
""
],
[
"Karydis",
"Konstantinos",
""
],
[
"Chowdhury",
"Amit K. Roy",
""
]
] | TITLE: Uncertainty-Aware Diffusion Guided Refinement of 3D Scenes
ABSTRACT: Reconstructing 3D scenes from a single image is a fundamentally ill-posed
task due to the severely under-constrained nature of the problem. Consequently,
when the scene is rendered from novel camera views, existing single image to 3D
reconstruction methods render incoherent and blurry views. This problem is
exacerbated when the unseen regions are far away from the input camera. In this
work, we address these inherent limitations in existing single image-to-3D
scene feedforward networks. To alleviate the poor performance due to
insufficient information beyond the input image's view, we leverage a strong
generative prior in the form of a pre-trained latent video diffusion model, for
iterative refinement of a coarse scene represented by optimizable Gaussian
parameters. To ensure that the style and texture of the generated images align
with that of the input image, we incorporate on-the-fly Fourier-style transfer
between the generated images and the input image. Additionally, we design a
semantic uncertainty quantification module that calculates the per-pixel
entropy and yields uncertainty maps used to guide the refinement process from
the most confident pixels while discarding the remaining highly uncertain ones.
We conduct extensive experiments on real-world scene datasets, including
in-domain RealEstate-10K and out-of-domain KITTI-v2, showing that our approach
can provide more realistic and high-fidelity novel view synthesis results
compared to existing state-of-the-art methods.
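The semantic uncertainty module described above reduces to a per-pixel entropy map with a confidence mask; the sketch below shows that computation, with the quantile-based threshold as an illustrative assumption.

```python
# Minimal sketch of a per-pixel entropy map used to keep only confident pixels;
# the confidence threshold is an illustrative assumption.
import torch

def entropy_uncertainty_map(logits, keep_fraction=0.7):
    """logits: (C, H, W) per-pixel class logits; returns the entropy map and a
    boolean mask selecting the most confident pixels."""
    probs = torch.softmax(logits, dim=0)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=0)  # (H, W)
    threshold = torch.quantile(entropy.flatten(), keep_fraction)
    confident_mask = entropy <= threshold
    return entropy, confident_mask
```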
|
2503.15761 | Mir Mohammad Khaleghi | Mir Mohammad Khaleghi, Mehran Safayani, Abdolreza Mirzaei | GraPLUS: Graph-based Placement Using Semantics for Image Composition | 17 pages, 3 figures, 6 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present GraPLUS (Graph-based Placement Using Semantics), a novel framework
for plausible object placement in images that leverages scene graphs and large
language models. Our approach uniquely combines graph-structured scene
representation with semantic understanding to determine contextually
appropriate object positions. The framework employs GPT-2 to transform
categorical node and edge labels into rich semantic embeddings that capture
both definitional characteristics and typical spatial contexts, enabling
nuanced understanding of object relationships and placement patterns. GraPLUS
achieves placement accuracy of 92.1% and an FID score of 28.83 on the OPA
dataset, outperforming state-of-the-art methods by 8.1% while maintaining
competitive visual quality. In human evaluation studies involving 964 samples
assessed by 19 participants, our method was preferred in 52.1% of cases,
significantly outperforming previous approaches. The framework's key
innovations include: (i) leveraging pre-trained scene graph models that
transfer knowledge from other domains, (ii) edge-aware graph neural networks
that process scene semantics through structured relationships, (iii) a
cross-modal attention mechanism that aligns categorical embeddings with
enhanced scene features, and (iv) a multiobjective training strategy
incorporating semantic consistency constraints.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 00:43:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Khaleghi",
"Mir Mohammad",
""
],
[
"Safayani",
"Mehran",
""
],
[
"Mirzaei",
"Abdolreza",
""
]
] | TITLE: GraPLUS: Graph-based Placement Using Semantics for Image Composition
ABSTRACT: We present GraPLUS (Graph-based Placement Using Semantics), a novel framework
for plausible object placement in images that leverages scene graphs and large
language models. Our approach uniquely combines graph-structured scene
representation with semantic understanding to determine contextually
appropriate object positions. The framework employs GPT-2 to transform
categorical node and edge labels into rich semantic embeddings that capture
both definitional characteristics and typical spatial contexts, enabling
nuanced understanding of object relationships and placement patterns. GraPLUS
achieves placement accuracy of 92.1% and an FID score of 28.83 on the OPA
dataset, outperforming state-of-the-art methods by 8.1% while maintaining
competitive visual quality. In human evaluation studies involving 964 samples
assessed by 19 participants, our method was preferred in 52.1% of cases,
significantly outperforming previous approaches. The framework's key
innovations include: (i) leveraging pre-trained scene graph models that
transfer knowledge from other domains, (ii) edge-aware graph neural networks
that process scene semantics through structured relationships, (iii) a
cross-modal attention mechanism that aligns categorical embeddings with
enhanced scene features, and (iv) a multiobjective training strategy
incorporating semantic consistency constraints.
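Turning categorical labels into GPT-2 embeddings, as described above, can be sketched with the Hugging Face transformers library; mean-pooling the last hidden state is an assumption about the pooling, not necessarily the paper's choice.

```python
# Minimal sketch of embedding a categorical label with GPT-2 via transformers;
# mean-pooling the last hidden state is an illustrative assumption.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def embed_label(label: str) -> torch.Tensor:
    inputs = tokenizer(label, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, T, 768)
    return hidden.mean(dim=1).squeeze(0)             # (768,)

vec = embed_label("a person sitting on a park bench")
print(vec.shape)  # torch.Size([768])
```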
|
2503.15766 | Peter Sharpe | Peter Sharpe, Rishikesh Ranade, Sanjay Choudhry | Accelerating Transient CFD through Machine Learning-Based Flow
Initialization | 17 pages, 8 figures | null | null | null | cs.LG physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Transient computational fluid dynamics (CFD) simulations are essential for
many industrial applications, but a significant portion of their computational
cost stems from the time needed to reach statistical steadiness from initial
conditions. We present a novel machine learning-based initialization method
that reduces the cost of this subsequent transient solve substantially,
achieving a 50% reduction in time-to-convergence compared to traditional
uniform and potential flow-based initializations. Through a case study in
automotive aerodynamics using a 16.7M-cell unsteady RANS simulation, we
evaluate three ML-based initialization strategies. Two of these strategies are
recommended for general use: (1) a physics-informed hybrid method combining ML
predictions with potential flow solutions, and (2) a more versatile approach
integrating ML predictions with uniform flow. Both strategies enable CFD
solvers to achieve convergence times comparable to computationally expensive
steady RANS initializations, while requiring only seconds of computation. We
develop a robust statistical convergence metric based on windowed
time-averaging for performance comparison between initialization strategies.
Notably, these improvements are achieved using an ML model trained on a
different dataset of automotive geometries, demonstrating strong generalization
capabilities. The proposed methods integrate seamlessly with existing CFD
workflows without requiring modifications to the underlying flow solver,
providing a practical approach to accelerating industrial CFD simulations
through improved ML-based initialization strategies.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 00:51:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Sharpe",
"Peter",
""
],
[
"Ranade",
"Rishikesh",
""
],
[
"Choudhry",
"Sanjay",
""
]
] | TITLE: Accelerating Transient CFD through Machine Learning-Based Flow
Initialization
ABSTRACT: Transient computational fluid dynamics (CFD) simulations are essential for
many industrial applications, but a significant portion of their computational
cost stems from the time needed to reach statistical steadiness from initial
conditions. We present a novel machine learning-based initialization method
that reduces the cost of this subsequent transient solve substantially,
achieving a 50% reduction in time-to-convergence compared to traditional
uniform and potential flow-based initializations. Through a case study in
automotive aerodynamics using a 16.7M-cell unsteady RANS simulation, we
evaluate three ML-based initialization strategies. Two of these strategies are
recommended for general use: (1) a physics-informed hybrid method combining ML
predictions with potential flow solutions, and (2) a more versatile approach
integrating ML predictions with uniform flow. Both strategies enable CFD
solvers to achieve convergence times comparable to computationally expensive
steady RANS initializations, while requiring only seconds of computation. We
develop a robust statistical convergence metric based on windowed
time-averaging for performance comparison between initialization strategies.
Notably, these improvements are achieved using an ML model trained on a
different dataset of automotive geometries, demonstrating strong generalization
capabilities. The proposed methods integrate seamlessly with existing CFD
workflows without requiring modifications to the underlying flow solver,
providing a practical approach to accelerating industrial CFD simulations
through improved ML-based initialization strategies.
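A windowed time-averaging convergence check of the kind described above can be sketched as follows; the monitored signal (e.g., a drag-coefficient history), window length, and tolerance are illustrative assumptions.

```python
# Generic sketch of a windowed time-averaging convergence check on a monitored
# signal; window length and tolerance are illustrative assumptions.
import numpy as np

def time_to_convergence(signal, window=200, rel_tol=0.01):
    """Return the first index at which the means of two consecutive windows
    agree within rel_tol, or None if the signal never converges."""
    for end in range(2 * window, len(signal) + 1):
        recent = np.mean(signal[end - window:end])
        previous = np.mean(signal[end - 2 * window:end - window])
        if abs(recent - previous) <= rel_tol * max(abs(previous), 1e-12):
            return end
    return None
```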
|
2503.15777 | Joanikij Chulev | Joanikij Chulev, Angela Mladenovska | Line Space Clustering (LSC): Feature-Based Clustering using K-medians
and Dynamic Time Warping for Versatility | 8 pages, 5 figures, 3 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Clustering high-dimensional data is a critical challenge in machine learning
due to the curse of dimensionality and the presence of noise. Traditional
clustering algorithms often fail to capture the intrinsic structures in such
data. This paper explores a combination of clustering methods, which we called
Line Space Clustering (LSC), a representation that transforms data points into
lines in a newly defined feature space, enabling clustering based on the
similarity of feature value patterns, essentially treating features as
sequences. LSC employs a combined distance metric that uses Euclidean and
Dynamic Time Warping (DTW) distances, weighted by a parameter {\alpha},
allowing flexibility in emphasizing shape or magnitude similarities. We delve
deeply into the mechanics of DTW and the Savitzky Golay filter, explaining
their roles in the algorithm. Extensive experiments demonstrate the efficacy of
LSC on synthetic and real-world datasets, showing that randomly experimenting
with time-series optimized methods sometimes might surprisingly work on a
complex dataset, particularly in noisy environments.
Source code and experiments are available at:
https://github.com/JoanikijChulev/LSC.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 01:27:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chulev",
"Joanikij",
""
],
[
"Mladenovska",
"Angela",
""
]
] | TITLE: Line Space Clustering (LSC): Feature-Based Clustering using K-medians
and Dynamic Time Warping for Versatility
ABSTRACT: Clustering high-dimensional data is a critical challenge in machine learning
due to the curse of dimensionality and the presence of noise. Traditional
clustering algorithms often fail to capture the intrinsic structures in such
data. This paper explores a combination of clustering methods, which we call
Line Space Clustering (LSC), a representation that transforms data points into
lines in a newly defined feature space, enabling clustering based on the
similarity of feature value patterns, essentially treating features as
sequences. LSC employs a combined distance metric that uses Euclidean and
Dynamic Time Warping (DTW) distances, weighted by a parameter {\alpha},
allowing flexibility in emphasizing shape or magnitude similarities. We delve
deeply into the mechanics of DTW and the Savitzky-Golay filter, explaining
their roles in the algorithm. Extensive experiments demonstrate the efficacy of
LSC on synthetic and real-world datasets, showing that randomly experimenting
with time-series optimized methods sometimes might surprisingly work on a
complex dataset, particularly in noisy environments.
Source code and experiments are available at:
https://github.com/JoanikijChulev/LSC.
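The combined distance described above, an alpha-weighted blend of Euclidean and DTW distances over feature sequences, can be sketched as below; the plain dynamic-programming DTW and the default alpha are illustrative, not the repository's implementation.

```python
# Minimal sketch of an alpha-weighted combination of Euclidean and DTW distances,
# treating each sample's feature vector as a sequence (illustrative only).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def lsc_distance(x, y, alpha=0.5):
    """Blend magnitude (Euclidean) and shape (DTW) similarity between samples."""
    euclid = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return alpha * euclid + (1.0 - alpha) * dtw_distance(x, y)
```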
|
2503.15778 | Boshra Khalili | Boshra Khalili, Andrew W.Smyth | AutoDrive-QA- Automated Generation of Multiple-Choice Questions for
Autonomous Driving Datasets Using Large Vision-Language Models | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In autonomous driving, open-ended question answering often suffers from
unreliable evaluations because freeform responses require either complex
metrics or subjective human judgment. To address this challenge, we introduce
AutoDrive-QA, an automatic pipeline that converts existing driving QA datasets
(including DriveLM, NuScenes-QA, and LingoQA) into a structured multiple-choice
question (MCQ) format. This benchmark systematically assesses perception,
prediction, and planning tasks, providing a standardized and objective
evaluation framework. AutoDrive-QA employs an automated pipeline that leverages
large language models (LLMs) to generate high-quality, contextually relevant
distractors based on domain-specific error patterns commonly found in
autonomous driving scenarios. To evaluate both general capabilities and
generalization performance, we test the benchmark on three public datasets and
conduct zero-shot experiments on an unseen dataset. The zero-shot evaluations
reveal that GPT-4V leads with 69.57% accuracy -- achieving 74.94% in
Perception, 65.33% in Prediction, and 68.45% in Planning -- demonstrating that
while all models excel in Perception, they struggle in Prediction.
Consequently, AutoDrive-QA establishes a rigorous, unbiased standard for
integrating and evaluating different vision-language models across various
autonomous driving datasets, thereby improving generalization in this field. We
release all the codes in the AutoDrive-QA GitHub Repository.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 01:32:00 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Khalili",
"Boshra",
""
],
[
"Smyth",
"Andrew W.",
""
]
] | TITLE: AutoDrive-QA- Automated Generation of Multiple-Choice Questions for
Autonomous Driving Datasets Using Large Vision-Language Models
ABSTRACT: In autonomous driving, open-ended question answering often suffers from
unreliable evaluations because freeform responses require either complex
metrics or subjective human judgment. To address this challenge, we introduce
AutoDrive-QA, an automatic pipeline that converts existing driving QA datasets
(including DriveLM, NuScenes-QA, and LingoQA) into a structured multiple-choice
question (MCQ) format. This benchmark systematically assesses perception,
prediction, and planning tasks, providing a standardized and objective
evaluation framework. AutoDrive-QA employs an automated pipeline that leverages
large language models (LLMs) to generate high-quality, contextually relevant
distractors based on domain-specific error patterns commonly found in
autonomous driving scenarios. To evaluate both general capabilities and
generalization performance, we test the benchmark on three public datasets and
conduct zero-shot experiments on an unseen dataset. The zero-shot evaluations
reveal that GPT-4V leads with 69.57% accuracy -- achieving 74.94% in
Perception, 65.33% in Prediction, and 68.45% in Planning -- demonstrating that
while all models excel in Perception, they struggle in Prediction.
Consequently, AutoDrive-QA establishes a rigorous, unbiased standard for
integrating and evaluating different vision-language models across various
autonomous driving datasets, thereby improving generalization in this field. We
release all the codes in the AutoDrive-QA GitHub Repository.
|
2503.15779 | Haoxuan Ma | Haoxuan Ma, Xishun Liao, Yifan Liu, Qinhua Jiang, Chris Stanford,
Shangqing Cao, Jiaqi Ma | MobiFuse: Learning Universal Human Mobility Patterns through
Cross-domain Data Fusion | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human mobility modeling is critical for urban planning and transportation
management, yet existing datasets often lack the resolution and semantic
richness required for comprehensive analysis. To address this, we propose a
cross-domain data fusion framework that integrates multi-modal data of distinct
nature and spatio-temporal resolution, including geographical, mobility,
socio-demographic, and traffic information, to construct a privacy-preserving
and semantically enriched human travel trajectory dataset. This framework is
demonstrated through two case studies in Los Angeles (LA) and Egypt, where a
domain adaptation algorithm ensures its transferability across diverse urban
contexts. Quantitative evaluation shows that the generated synthetic dataset
accurately reproduces mobility patterns observed in empirical data. Moreover,
large-scale traffic simulations for LA County based on the generated synthetic
demand align well with observed traffic. On California's I-405 corridor, the
simulation yields a Mean Absolute Percentage Error of 5.85% for traffic volume
and 4.36% for speed compared to Caltrans PeMS observations.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 01:41:28 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ma",
"Haoxuan",
""
],
[
"Liao",
"Xishun",
""
],
[
"Liu",
"Yifan",
""
],
[
"Jiang",
"Qinhua",
""
],
[
"Stanford",
"Chris",
""
],
[
"Cao",
"Shangqing",
""
],
[
"Ma",
"Jiaqi",
""
]
] | TITLE: MobiFuse: Learning Universal Human Mobility Patterns through
Cross-domain Data Fusion
ABSTRACT: Human mobility modeling is critical for urban planning and transportation
management, yet existing datasets often lack the resolution and semantic
richness required for comprehensive analysis. To address this, we propose a
cross-domain data fusion framework that integrates multi-modal data of distinct
nature and spatio-temporal resolution, including geographical, mobility,
socio-demographic, and traffic information, to construct a privacy-preserving
and semantically enriched human travel trajectory dataset. This framework is
demonstrated through two case studies in Los Angeles (LA) and Egypt, where a
domain adaptation algorithm ensures its transferability across diverse urban
contexts. Quantitative evaluation shows that the generated synthetic dataset
accurately reproduces mobility patterns observed in empirical data. Moreover,
large-scale traffic simulations for LA County based on the generated synthetic
demand align well with observed traffic. On California's I-405 corridor, the
simulation yields a Mean Absolute Percentage Error of 5.85% for traffic volume
and 4.36% for speed compared to Caltrans PeMS observations.
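The Mean Absolute Percentage Error reported above has the standard form sketched below; the example arrays are made up.

```python
# Sketch of the Mean Absolute Percentage Error used to compare simulated and
# observed traffic; inputs are hypothetical arrays of matched observations.
import numpy as np

def mape(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 100.0 * np.mean(np.abs((simulated - observed) / observed))

# Example with made-up hourly traffic volumes
print(f"MAPE: {mape([1200, 1500, 980], [1150, 1580, 1010]):.2f}%")
```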
|
2503.15784 | Parham Saremi | Parham Saremi, Amar Kumar, Mohammed Mohammed, Zahra TehraniNasab, Tal
Arbel | RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards
Diverse Medical Image Generation using Vision-Language Foundation Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Foundation Models (VLFM) have shown a tremendous increase in
performance in terms of generating high-resolution, photorealistic natural
images. While VLFMs show a rich understanding of semantic content across
modalities, they often struggle with fine-grained alignment tasks that require
precise correspondence between image regions and textual descriptions, a
limitation in medical imaging, where accurate localization and detection of
clinical features are essential for diagnosis and analysis. To address this
issue, we propose a multi-stage architecture where a pre-trained VLFM provides
a cursory semantic understanding, while a reinforcement learning (RL) algorithm
refines the alignment through an iterative process that optimizes for
understanding semantic context. The reward signal is designed to align the
semantic information of the text with synthesized images. We demonstrate the
effectiveness of our method on a medical imaging skin dataset where the
generated images exhibit improved generation quality and alignment with prompt
over the fine-tuned Stable Diffusion. We also show that the synthesized samples
could be used to improve disease classifier performance for underrepresented
subgroups through augmentation.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 01:51:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Saremi",
"Parham",
""
],
[
"Kumar",
"Amar",
""
],
[
"Mohammed",
"Mohammed",
""
],
[
"TehraniNasab",
"Zahra",
""
],
[
"Arbel",
"Tal",
""
]
] | TITLE: RL4Med-DDPO: Reinforcement Learning for Controlled Guidance Towards
Diverse Medical Image Generation using Vision-Language Foundation Models
ABSTRACT: Vision-Language Foundation Models (VLFM) have shown a tremendous increase in
performance in terms of generating high-resolution, photorealistic natural
images. While VLFMs show a rich understanding of semantic content across
modalities, they often struggle with fine-grained alignment tasks that require
precise correspondence between image regions and textual descriptions, a
limitation in medical imaging, where accurate localization and detection of
clinical features are essential for diagnosis and analysis. To address this
issue, we propose a multi-stage architecture where a pre-trained VLFM provides
a cursory semantic understanding, while a reinforcement learning (RL) algorithm
refines the alignment through an iterative process that optimizes for
understanding semantic context. The reward signal is designed to align the
semantic information of the text with synthesized images. We demonstrate the
effectiveness of our method on a medical imaging skin dataset where the
generated images exhibit improved generation quality and alignment with prompt
over the fine-tuned Stable Diffusion. We also show that the synthesized samples
could be used to improve disease classifier performance for underrepresented
subgroups through augmentation.
|
2503.15788 | Yanmei Hu | Yanmei Hu and Yihang Wu and Biao Cai | A two-stage model leveraging friendship network for community evolution
prediction in interactive networks | 15 pages, 5 figures | null | null | null | cs.SI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactive networks representing user participation and interactions in
specific "events" are highly dynamic, with communities reflecting collective
behaviors that evolve over time. Predicting these community evolutions is
crucial for forecasting the trajectory of the related "event". Some models for
community evolution prediction have been proposed, but they primarily focused
on coarse-grained evolution types (e.g., expand, dissolve, merge, split), often
neglecting fine-grained evolution extents (e.g., the extent of community
expansion). Furthermore, these models typically utilize only one network data
(here is interactive network data) for dynamic community featurization,
overlooking the more stable friendship network that represents the friendships
between people to enrich community representations. To address these
limitations, we propose a two-stage model that predicts both the type and
extent of community evolution. Our model unifies multi-class classification for
evolution type and regression for evolution extent within a single framework
and fuses data from both interactive and friendship networks for a
comprehensive community featurization. We also introduce a hybrid strategy to
differentiate between evolution types that are difficult to distinguish.
Experimental results on three datasets show the significant superiority of the
proposed model over other models, confirming its efficacy in predicting
community evolution in interactive networks.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 02:05:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Hu",
"Yanmei",
""
],
[
"Wu",
"Yihang",
""
],
[
"Cai",
"Biao",
""
]
] | TITLE: A two-stage model leveraging friendship network for community evolution
prediction in interactive networks
ABSTRACT: Interactive networks representing user participation and interactions in
specific "events" are highly dynamic, with communities reflecting collective
behaviors that evolve over time. Predicting these community evolutions is
crucial for forecasting the trajectory of the related "event". Some models for
community evolution prediction have been proposed, but they primarily focused
on coarse-grained evolution types (e.g., expand, dissolve, merge, split), often
neglecting fine-grained evolution extents (e.g., the extent of community
expansion). Furthermore, these models typically utilize only one type of network
data (here, interactive network data) for dynamic community featurization,
overlooking the more stable friendship network that represents the friendships
between people to enrich community representations. To address these
limitations, we propose a two-stage model that predicts both the type and
extent of community evolution. Our model unifies multi-class classification for
evolution type and regression for evolution extent within a single framework
and fuses data from both interactive and friendship networks for a
comprehensive community featurization. We also introduce a hybrid strategy to
differentiate between evolution types that are difficult to distinguish.
Experimental results on three datasets show the significant superiority of the
proposed model over other models, confirming its efficacy in predicting
community evolution in interactive networks.
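Unifying type classification and extent regression in a single framework, as described above, can be sketched as a shared encoder with two heads and a combined loss; all dimensions and the loss weighting are assumptions, not the paper's architecture.

```python
# Minimal sketch of a shared encoder with a classification head (evolution type)
# and a regression head (evolution extent); sizes and weighting are assumptions.
import torch
import torch.nn as nn

class TypeExtentModel(nn.Module):
    def __init__(self, in_dim=64, hidden=128, num_types=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.type_head = nn.Linear(hidden, num_types)
        self.extent_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.type_head(h), self.extent_head(h).squeeze(-1)

def joint_loss(type_logits, extent_pred, type_labels, extent_targets, lam=1.0):
    ce = nn.functional.cross_entropy(type_logits, type_labels)
    mse = nn.functional.mse_loss(extent_pred, extent_targets)
    return ce + lam * mse
```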
|
2503.15796 | Xinlong Zhai | Xinlong Zhai, Chunchen Wang, Ruijia Wang, Jiazheng Kang, Shujie Li,
Boyu Chen, Tengfei Ma, Zikai Zhou, Cheng Yang, Chuan Shi | Blend the Separated: Mixture of Synergistic Experts for Data-Scarcity
Drug-Target Interaction Prediction | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Drug-target interaction prediction (DTI) is essential in various applications
including drug discovery and clinical application. There are two perspectives
of input data widely used in DTI prediction: Intrinsic data represents how
drugs or targets are constructed, and extrinsic data represents how drugs or
targets are related to other biological entities. However, any of the two
perspectives of input data can be scarce for some drugs or targets, especially
for those unpopular or newly discovered. Furthermore, ground-truth labels for
specific interaction types can also be scarce. Therefore, we propose the first
method to tackle DTI prediction under input data and/or label scarcity. To make
our model functional when only one perspective of input data is available, we
design two separate experts to process intrinsic and extrinsic data
respectively and fuse them adaptively according to different samples.
Furthermore, to make the two perspectives complement each other and remedy
label scarcity, two experts synergize with each other in a mutually supervised
way to exploit the enormous unlabeled data. Extensive experiments on 3
real-world datasets under different extents of input data scarcity and/or label
scarcity demonstrate our model outperforms states of the art significantly and
steadily, with a maximum improvement of 53.53%. We also test our model without
any data scarcity and it still outperforms current methods.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 02:27:16 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhai",
"Xinlong",
""
],
[
"Wang",
"Chunchen",
""
],
[
"Wang",
"Ruijia",
""
],
[
"Kang",
"Jiazheng",
""
],
[
"Li",
"Shujie",
""
],
[
"Chen",
"Boyu",
""
],
[
"Ma",
"Tengfei",
""
],
[
"Zhou",
"Zikai",
""
],
[
"Yang",
"Cheng",
""
],
[
"Shi",
"Chuan",
""
]
] | TITLE: Blend the Separated: Mixture of Synergistic Experts for Data-Scarcity
Drug-Target Interaction Prediction
ABSTRACT: Drug-target interaction prediction (DTI) is essential in various applications
including drug discovery and clinical application. There are two perspectives
of input data widely used in DTI prediction: Intrinsic data represents how
drugs or targets are constructed, and extrinsic data represents how drugs or
targets are related to other biological entities. However, any of the two
perspectives of input data can be scarce for some drugs or targets, especially
for those unpopular or newly discovered. Furthermore, ground-truth labels for
specific interaction types can also be scarce. Therefore, we propose the first
method to tackle DTI prediction under input data and/or label scarcity. To make
our model functional when only one perspective of input data is available, we
design two separate experts to process intrinsic and extrinsic data
respectively and fuse them adaptively according to different samples.
Furthermore, to make the two perspectives complement each other and remedy
label scarcity, two experts synergize with each other in a mutually supervised
way to exploit the enormous unlabeled data. Extensive experiments on 3
real-world datasets under different extents of input data scarcity and/or label
scarcity demonstrate that our model outperforms the state of the art significantly
and consistently, with a maximum improvement of 53.53%. We also test our model without
any data scarcity and it still outperforms current methods.
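The per-sample adaptive fusion of the two experts described above can be sketched as a learned gate over their representations, with a fallback when one view is missing; the gating form and dimensions are assumptions, not the paper's architecture.

```python
# Minimal sketch of per-sample adaptive fusion of an "intrinsic" expert and an
# "extrinsic" expert via a learned gate, with a fallback when one input is
# missing; dimensions and the gating form are assumptions.
import torch
import torch.nn as nn

class GatedExpertFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, h_intrinsic, h_extrinsic):
        """h_*: (B, dim) expert representations, or None if that view is absent."""
        if h_intrinsic is None:
            return h_extrinsic
        if h_extrinsic is None:
            return h_intrinsic
        g = self.gate(torch.cat([h_intrinsic, h_extrinsic], dim=1))  # (B, 1)
        return g * h_intrinsic + (1.0 - g) * h_extrinsic
```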
|
2503.15800 | Jingyun Liu | Jingyun Liu, Daiqin Yang, Zhenzhong Chen | Frequency Enhancement for Image Demosaicking | 14 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering high-frequency textures in image demosaicking remains a
challenging issue. While existing methods introduced elaborate spatial learning
methods, they still exhibit limited performance. To address this issue, a
frequency enhancement approach is proposed. Based on the frequency analysis of
color filter array (CFA)/demosaicked/ground truth images, we propose Dual-path
Frequency Enhancement Network (DFENet), which reconstructs RGB images in a
divide-and-conquer manner through Fourier-domain frequency selection. In
DFENet, two frequency selectors are employed, each selecting a set of frequency
components for processing along separate paths. One path focuses on generating
missing information through detail refinement in spatial domain, while the
other aims at suppressing undesirable frequencies with the guidance of CFA
images in frequency domain. Multi-level frequency supervision with a stagewise
training strategy is employed to further improve the reconstruction
performance. With these designs, the proposed DFENet outperforms other
state-of-the-art algorithms on different datasets and demonstrates significant
advantages on hard cases. Moreover, to better assess algorithms' ability to
reconstruct high-frequency textures, a new dataset, LineSet37, is contributed,
which consists of 37 artificially designed and generated images. These images
feature complex line patterns and are prone to severe visual artifacts like
color moir\'e after demosaicking. Experiments on LineSet37 offer a more
targeted evaluation of performance on challenging cases. The code and dataset
are available at https://github.com/VelvetReverie/DFENet-demosaicking.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 02:37:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Jingyun",
""
],
[
"Yang",
"Daiqin",
""
],
[
"Chen",
"Zhenzhong",
""
]
] | TITLE: Frequency Enhancement for Image Demosaicking
ABSTRACT: Recovering high-frequency textures in image demosaicking remains a
challenging issue. While existing methods introduced elaborate spatial learning
methods, they still exhibit limited performance. To address this issue, a
frequency enhancement approach is proposed. Based on the frequency analysis of
color filter array (CFA)/demosaicked/ground truth images, we propose Dual-path
Frequency Enhancement Network (DFENet), which reconstructs RGB images in a
divide-and-conquer manner through Fourier-domain frequency selection. In
DFENet, two frequency selectors are employed, each selecting a set of frequency
components for processing along separate paths. One path focuses on generating
missing information through detail refinement in spatial domain, while the
other aims at suppressing undesirable frequencies with the guidance of CFA
images in frequency domain. Multi-level frequency supervision with a stagewise
training strategy is employed to further improve the reconstruction
performance. With these designs, the proposed DFENet outperforms other
state-of-the-art algorithms on different datasets and demonstrates significant
advantages on hard cases. Moreover, to better assess algorithms' ability to
reconstruct high-frequency textures, a new dataset, LineSet37, is contributed,
which consists of 37 artificially designed and generated images. These images
feature complex line patterns and are prone to severe visual artifacts like
color moir\'e after demosaicking. Experiments on LineSet37 offer a more
targeted evaluation of performance on challenging cases. The code and dataset
are available at https://github.com/VelvetReverie/DFENet-demosaicking.
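The Fourier-domain frequency selection described above can be illustrated by splitting an image into low- and high-frequency components with a radial mask; the fixed circular mask below stands in for the paper's learned frequency selectors.

```python
# Minimal sketch of Fourier-domain frequency selection: split an image into
# low- and high-frequency components with a simple radial mask (illustrative
# assumption, not the learned selectors of the paper).
import numpy as np

def split_frequencies(image, radius_ratio=0.1):
    """image: (H, W) array; returns (low_freq, high_freq) spatial images."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius_ratio * min(h, w)
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(spectrum * (~low_mask))).real
    return low, high
```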
|
2503.15801 | Zhiyu An | Zhiyu An, Zhibo Hou, Wan Du | Disentangling Uncertainties by Learning Compressed Data Representation | Accepted by the 7th Annual Learning for Dynamics & Control Conference
(L4DC) 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study aleatoric and epistemic uncertainty estimation in a learned
regressive system dynamics model. Disentangling aleatoric uncertainty (the
inherent randomness of the system) from epistemic uncertainty (the lack of
data) is crucial for downstream tasks such as risk-aware control and
reinforcement learning, efficient exploration, and robust policy transfer.
While existing approaches like Gaussian Processes, Bayesian networks, and model
ensembles are widely adopted, they suffer from either high computational
complexity or inaccurate uncertainty estimation. To address these limitations,
we propose the Compressed Data Representation Model (CDRM), a framework that
learns a neural network encoding of the data distribution and enables direct
sampling from the output distribution. Our approach incorporates a novel
inference procedure based on Langevin dynamics sampling, allowing CDRM to
predict arbitrary output distributions rather than being constrained to a
Gaussian prior. Theoretical analysis provides the conditions where CDRM
achieves better memory and computational complexity compared to bin-based
compression methods. Empirical evaluations show that CDRM demonstrates a
superior capability to identify aleatoric and epistemic uncertainties
separately, achieving AUROCs of 0.8876 and 0.9981 on a single test set
containing a mixture of both uncertainties. Qualitative results further show
that CDRM's capability extends to datasets with multimodal output
distributions, a challenging scenario where existing methods consistently fail.
Code and supplementary materials are available at
https://github.com/ryeii/CDRM.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 02:37:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"An",
"Zhiyu",
""
],
[
"Hou",
"Zhibo",
""
],
[
"Du",
"Wan",
""
]
] | TITLE: Disentangling Uncertainties by Learning Compressed Data Representation
ABSTRACT: We study aleatoric and epistemic uncertainty estimation in a learned
regressive system dynamics model. Disentangling aleatoric uncertainty (the
inherent randomness of the system) from epistemic uncertainty (the lack of
data) is crucial for downstream tasks such as risk-aware control and
reinforcement learning, efficient exploration, and robust policy transfer.
While existing approaches like Gaussian Processes, Bayesian networks, and model
ensembles are widely adopted, they suffer from either high computational
complexity or inaccurate uncertainty estimation. To address these limitations,
we propose the Compressed Data Representation Model (CDRM), a framework that
learns a neural network encoding of the data distribution and enables direct
sampling from the output distribution. Our approach incorporates a novel
inference procedure based on Langevin dynamics sampling, allowing CDRM to
predict arbitrary output distributions rather than being constrained to a
Gaussian prior. Theoretical analysis provides the conditions where CDRM
achieves better memory and computational complexity compared to bin-based
compression methods. Empirical evaluations show that CDRM demonstrates a
superior capability to identify aleatoric and epistemic uncertainties
separately, achieving AUROCs of 0.8876 and 0.9981 on a single test set
containing a mixture of both uncertainties. Qualitative results further show
that CDRM's capability extends to datasets with multimodal output
distributions, a challenging scenario where existing methods consistently fail.
Code and supplementary materials are available at
https://github.com/ryeii/CDRM.
|
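The CDRM abstract above relies on an inference procedure based on Langevin dynamics sampling from a learned data representation. The following minimal sketch runs unadjusted Langevin dynamics against a toy analytic log-density that stands in for the learned model; the bimodal mixture, step size, and iteration count are assumptions, not details from the paper.

```python
import numpy as np

def grad_log_p(x: np.ndarray) -> np.ndarray:
    """Gradient of a toy bimodal log-density (equal-weight Gaussian mixture).

    In CDRM this role would be played by the learned compressed data
    representation; the analytic mixture here is only a stand-in.
    """
    centres = np.array([[-2.0, 0.0], [2.0, 0.0]])
    diffs = x[None, :] - centres                          # (2, d)
    resp = np.exp(-0.5 * np.sum(diffs ** 2, axis=1))
    resp = resp / (resp.sum() + 1e-12)                    # mixture responsibilities
    return -(resp[:, None] * diffs).sum(axis=0)

def langevin_sample(n_steps: int = 1000, step: float = 0.05, seed: int = 0):
    """Unadjusted Langevin dynamics: x <- x + (step/2) * grad log p(x) + noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(size=2) * np.sqrt(step)
        x = x + 0.5 * step * grad_log_p(x) + noise
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    chain = langevin_sample()
    print("mean of sampled x-coordinate:", chain[:, 0].mean())
```

The same loop works for arbitrary (including multimodal) target densities, which is why a Langevin-style sampler is a natural fit for the non-Gaussian output distributions the abstract emphasises.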
2503.15809 | Xuan Gao | Xuan Gao, Jingtao Zhou, Dongyu Liu, Yuqi Zhou, Juyong Zhang | Controlling Avatar Diffusion with Learnable Gaussian Embedding | Project Page: https://ustc3dv.github.io/Learn2Control/ | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in diffusion models have made significant progress in digital
human generation. However, most existing models still struggle to maintain 3D
consistency, temporal coherence, and motion accuracy. A key reason for these
shortcomings is the limited representation ability of commonly used control
signals (e.g., landmarks, depth maps, etc.). In addition, the lack of diversity
in identity and pose variations in public datasets further hinders progress in
this area. In this paper, we analyze the shortcomings of current control
signals and introduce a novel control signal representation that is
optimizable, dense, expressive, and 3D consistent. Our method embeds a
learnable neural Gaussian onto a parametric head surface, which greatly
enhances the consistency and expressiveness of diffusion-based head models.
Regarding the dataset, we synthesize a large-scale dataset with multiple poses
and identities. In addition, we use real/synthetic labels to effectively
distinguish real and synthetic data, minimizing the impact of imperfections in
synthetic data on the generated head images. Extensive experiments show that
our model outperforms existing methods in terms of realism, expressiveness, and
3D consistency. Our code, synthetic datasets, and pre-trained models will be
released on our project page: https://ustc3dv.github.io/Learn2Control/
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 02:52:01 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gao",
"Xuan",
""
],
[
"Zhou",
"Jingtao",
""
],
[
"Liu",
"Dongyu",
""
],
[
"Zhou",
"Yuqi",
""
],
[
"Zhang",
"Juyong",
""
]
] | TITLE: Controlling Avatar Diffusion with Learnable Gaussian Embedding
ABSTRACT: Recent advances in diffusion models have made significant progress in digital
human generation. However, most existing models still struggle to maintain 3D
consistency, temporal coherence, and motion accuracy. A key reason for these
shortcomings is the limited representation ability of commonly used control
signals (e.g., landmarks, depth maps, etc.). In addition, the lack of diversity
in identity and pose variations in public datasets further hinders progress in
this area. In this paper, we analyze the shortcomings of current control
signals and introduce a novel control signal representation that is
optimizable, dense, expressive, and 3D consistent. Our method embeds a
learnable neural Gaussian onto a parametric head surface, which greatly
enhances the consistency and expressiveness of diffusion-based head models.
Regarding the dataset, we synthesize a large-scale dataset with multiple poses
and identities. In addition, we use real/synthetic labels to effectively
distinguish real and synthetic data, minimizing the impact of imperfections in
synthetic data on the generated head images. Extensive experiments show that
our model outperforms existing methods in terms of realism, expressiveness, and
3D consistency. Our code, synthetic datasets, and pre-trained models will be
released on our project page: https://ustc3dv.github.io/Learn2Control/
|
2503.15815 | Vishnu Dasu | Vishnu Asutosh Dasu, Md Rafi ur Rashid, Vipul Gupta, Saeid
Tizpaz-Niari, Gang Tan | Attention Pruning: Automated Fairness Repair of Language Models via
Surrogate Simulated Annealing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper explores pruning attention heads as a post-processing bias
mitigation method for large language models (LLMs). Modern AI systems such as
LLMs are expanding into sensitive social contexts where fairness concerns
become especially crucial. Since LLMs develop decision-making patterns by
training on massive datasets of human-generated content, they naturally encode
and perpetuate societal biases. While modifying training datasets and
algorithms is expensive and requires significant resources, post-processing
techniques, such as selectively deactivating neurons and attention heads in
pre-trained LLMs, can provide feasible and effective approaches to improve
fairness. However, identifying the optimal subset of parameters to prune
presents a combinatorial challenge within LLMs' immense parameter space,
requiring solutions that efficiently balance competing objectives across the
frontiers of model fairness and utility.
To address the computational challenges, we explore a search-based program
repair approach via randomized simulated annealing. Given the prohibitive
evaluation costs in billion-parameter LLMs, we develop surrogate deep neural
networks that efficiently model the relationship between attention head states
(active/inactive) and their corresponding fairness/utility metrics. This allows
us to perform optimization over the surrogate models and efficiently identify
optimal subsets of attention heads for selective pruning rather than directly
searching through the LLM parameter space. This paper introduces Attention
Pruning, a fairness-aware surrogate simulated annealing approach to prune
attention heads in LLMs that disproportionately contribute to bias while
minimally impacting overall model utility. Our experiments show that Attention
Pruning achieves up to $40\%$ reduction in gender bias and outperforms the
state-of-the-art bias mitigation strategies.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 03:02:32 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Dasu",
"Vishnu Asutosh",
""
],
[
"Rashid",
"Md Rafi ur",
""
],
[
"Gupta",
"Vipul",
""
],
[
"Tizpaz-Niari",
"Saeid",
""
],
[
"Tan",
"Gang",
""
]
] | TITLE: Attention Pruning: Automated Fairness Repair of Language Models via
Surrogate Simulated Annealing
ABSTRACT: This paper explores pruning attention heads as a post-processing bias
mitigation method for large language models (LLMs). Modern AI systems such as
LLMs are expanding into sensitive social contexts where fairness concerns
become especially crucial. Since LLMs develop decision-making patterns by
training on massive datasets of human-generated content, they naturally encode
and perpetuate societal biases. While modifying training datasets and
algorithms is expensive and requires significant resources, post-processing
techniques, such as selectively deactivating neurons and attention heads in
pre-trained LLMs, can provide feasible and effective approaches to improve
fairness. However, identifying the optimal subset of parameters to prune
presents a combinatorial challenge within LLMs' immense parameter space,
requiring solutions that efficiently balance competing objectives across the
frontiers of model fairness and utility.
To address the computational challenges, we explore a search-based program
repair approach via randomized simulated annealing. Given the prohibitive
evaluation costs in billion-parameter LLMs, we develop surrogate deep neural
networks that efficiently model the relationship between attention head states
(active/inactive) and their corresponding fairness/utility metrics. This allows
us to perform optimization over the surrogate models and efficiently identify
optimal subsets of attention heads for selective pruning rather than directly
searching through the LLM parameter space. This paper introduces Attention
Pruning, a fairness-aware surrogate simulated annealing approach to prune
attention heads in LLMs that disproportionately contribute to bias while
minimally impacting overall model utility. Our experiments show that Attention
Pruning achieves up to $40\%$ reduction in gender bias and outperforms the
state-of-the-art bias mitigation strategies.
|
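The Attention Pruning abstract above searches over binary attention-head states with simulated annealing guided by a surrogate model of fairness and utility. The sketch below illustrates that search loop with a synthetic surrogate_score standing in for the trained surrogate network; the head count, cooling schedule, flip proposal, and scoring trade-off are all assumptions.

```python
import math
import random

NUM_HEADS = 32  # hypothetical number of attention heads

def surrogate_score(mask):
    """Stand-in for the trained surrogate network: higher is better.

    A synthetic trade-off is assumed in which pruning a fixed set of 'biased'
    heads improves the fairness term, while keeping heads active preserves the
    utility term.
    """
    biased_heads = set(range(0, NUM_HEADS, 4))
    fairness = sum(1 for h in biased_heads if mask[h] == 0) / len(biased_heads)
    utility = sum(mask) / NUM_HEADS
    return 0.5 * fairness + 0.5 * utility

def simulated_annealing(steps=2000, t_start=1.0, t_end=0.01, seed=0):
    """Search over binary head masks with geometric cooling."""
    rng = random.Random(seed)
    mask = [1] * NUM_HEADS                        # start with every head active
    current = surrogate_score(mask)
    best_mask, best = mask[:], current
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = mask[:]
        candidate[rng.randrange(NUM_HEADS)] ^= 1  # flip one head on/off
        score = surrogate_score(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if score >= current or rng.random() < math.exp((score - current) / t):
            mask, current = candidate, score
            if score > best:
                best_mask, best = candidate[:], score
    return best_mask, best

if __name__ == "__main__":
    mask, score = simulated_annealing()
    pruned = [h for h, keep in enumerate(mask) if keep == 0]
    print("pruned heads:", pruned, "surrogate score:", round(score, 3))
```

The key point of the surrogate is that each candidate mask is scored by a cheap learned predictor instead of re-evaluating a billion-parameter LLM, which is what makes the annealing search tractable.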
2503.15817 | Suryani Lim | Suryani Lim, Henri Prade, Gilles Richard | Ranking Counterfactual Explanations | 15 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | AI-driven outcomes can be challenging for end-users to understand.
Explanations can address two key questions: "Why this outcome?" (factual) and
"Why not another?" (counterfactual). While substantial efforts have been made
to formalize factual explanations, a precise and comprehensive study of
counterfactual explanations is still lacking. This paper proposes a formal
definition of counterfactual explanations, proving some properties they
satisfy, and examining the relationship with factual explanations. Given that
multiple counterfactual explanations generally exist for a specific case, we
also introduce a rigorous method to rank these counterfactual explanations,
going beyond a simple minimality condition, and to identify the optimal ones.
Our experiments with 12 real-world datasets highlight that, in most cases, a
single optimal counterfactual explanation emerges. We also demonstrate, via
three metrics, that the selected optimal explanation exhibits higher
representativeness and can explain a broader range of elements than a random
minimal counterfactual. This result highlights the effectiveness of our
approach in identifying more robust and comprehensive counterfactual
explanations.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 03:04:05 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lim",
"Suryani",
""
],
[
"Prade",
"Henri",
""
],
[
"Richard",
"Gilles",
""
]
] | TITLE: Ranking Counterfactual Explanations
ABSTRACT: AI-driven outcomes can be challenging for end-users to understand.
Explanations can address two key questions: "Why this outcome?" (factual) and
"Why not another?" (counterfactual). While substantial efforts have been made
to formalize factual explanations, a precise and comprehensive study of
counterfactual explanations is still lacking. This paper proposes a formal
definition of counterfactual explanations, proving some properties they
satisfy, and examining the relationship with factual explanations. Given that
multiple counterfactual explanations generally exist for a specific case, we
also introduce a rigorous method to rank these counterfactual explanations,
going beyond a simple minimality condition, and to identify the optimal ones.
Our experiments with 12 real-world datasets highlight that, in most cases, a
single optimal counterfactual explanation emerges. We also demonstrate, via
three metrics, that the selected optimal explanation exhibits higher
representativeness and can explain a broader range of elements than a random
minimal counterfactual. This result highlights the effectiveness of our
approach in identifying more robust and comprehensive counterfactual
explanations.
|
2503.15835 | Yiren Lu | Yiren Lu, Yunlai Zhou, Disheng Liu, Tuo Liang, Yu Yin | BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian
Splatting | CVPR2025. Project page at https://vulab-ai.github.io/BARD-GS/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene
reconstruction, and recent advancements have extended its application to
dynamic scenes. However, the quality of reconstructions depends heavily on
high-quality input images and precise camera poses, which are not trivial to
obtain in real-world scenarios. Capturing dynamic scenes with handheld
monocular cameras, for instance, typically involves simultaneous movement of
both the camera and objects within a single exposure. This combined motion
frequently results in image blur that existing methods cannot adequately
handle. To address these challenges, we introduce BARD-GS, a novel approach for
robust dynamic scene reconstruction that effectively handles blurry inputs and
imprecise camera poses. Our method comprises two main components: 1) camera
motion deblurring and 2) object motion deblurring. By explicitly decomposing
motion blur into camera motion blur and object motion blur and modeling them
separately, we achieve significantly improved rendering results in dynamic
regions. In addition, we collect a real-world motion blur dataset of dynamic
scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS
effectively reconstructs high-quality dynamic scenes under realistic
conditions, significantly outperforming existing methods.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 04:23:52 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lu",
"Yiren",
""
],
[
"Zhou",
"Yunlai",
""
],
[
"Liu",
"Disheng",
""
],
[
"Liang",
"Tuo",
""
],
[
"Yin",
"Yu",
""
]
] | TITLE: BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian
Splatting
ABSTRACT: 3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene
reconstruction, and recent advancements have extended its application to
dynamic scenes. However, the quality of reconstructions depends heavily on
high-quality input images and precise camera poses, which are not trivial to
obtain in real-world scenarios. Capturing dynamic scenes with handheld
monocular cameras, for instance, typically involves simultaneous movement of
both the camera and objects within a single exposure. This combined motion
frequently results in image blur that existing methods cannot adequately
handle. To address these challenges, we introduce BARD-GS, a novel approach for
robust dynamic scene reconstruction that effectively handles blurry inputs and
imprecise camera poses. Our method comprises two main components: 1) camera
motion deblurring and 2) object motion deblurring. By explicitly decomposing
motion blur into camera motion blur and object motion blur and modeling them
separately, we achieve significantly improved rendering results in dynamic
regions. In addition, we collect a real-world motion blur dataset of dynamic
scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS
effectively reconstructs high-quality dynamic scenes under realistic
conditions, significantly outperforming existing methods.
|
2503.15838 | Tarek Mahmud | Tarek Mahmud, Bin Duan, Corina Pasareanu, Guowei Yang | Enhancing LLM Code Generation with Ensembles: A Similarity-Based
Selection Approach | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble learning has been widely used in machine learning to improve model
robustness, accuracy, and generalization, but has not yet been applied to code
generation tasks with large language models (LLMs). We propose an ensemble
approach for LLMs in code generation. Instead of relying on the output of a
single model, we generate multiple candidate programs from different LLMs and
apply a structured voting mechanism to select the most reliable solution. For
voting, we compute syntactic and semantic similarity using CodeBLEU and
behavioral equivalence using CrossHair's differential behavior analysis. By
aggregating these similarity scores, we select the program that best aligns
with the consensus among the candidates. We show through experiments that our
ensemble approach consistently outperforms standalone LLMs on the well-known
HumanEval and the more challenging LiveCodeBench datasets, achieving an
accuracy of 90.2% and 50.2%, respectively, on the two datasets. In comparison,
the best-performing LLM (GPT-4o) has an accuracy of 83.5% and 43.4%,
respectively. Furthermore, even when restricted to free open-source models, our
method achieves an accuracy of 80.5% and 41.6%, respectively, demonstrating the
viability of our approach in resource-constrained settings.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 04:38:56 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Mahmud",
"Tarek",
""
],
[
"Duan",
"Bin",
""
],
[
"Pasareanu",
"Corina",
""
],
[
"Yang",
"Guowei",
""
]
] | TITLE: Enhancing LLM Code Generation with Ensembles: A Similarity-Based
Selection Approach
ABSTRACT: Ensemble learning has been widely used in machine learning to improve model
robustness, accuracy, and generalization, but has not yet been applied to code
generation tasks with large language models (LLMs). We propose an ensemble
approach for LLMs in code generation. Instead of relying on the output of a
single model, we generate multiple candidate programs from different LLMs and
apply a structured voting mechanism to select the most reliable solution. For
voting, we compute syntactic and semantic similarity using CodeBLEU and
behavioral equivalence using CrossHair's differential behavior analysis. By
aggregating these similarity scores, we select the program that best aligns
with the consensus among the candidates. We show through experiments that our
ensemble approach consistently outperforms standalone LLMs on the well-known
HumanEval and the more challenging LiveCodeBench datasets, achieving an
accuracy of 90.2% and 50.2%, respectively, on the two datasets. In comparison,
the best-performing LLM (GPT-4o) has an accuracy of 83.5% and 43.4%,
respectively. Furthermore, even when restricted to free open-source models, our
method achieves an accuracy of 80.5% and 41.6%, respectively, demonstrating the
viability of our approach in resource-constrained settings.
|
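The ensemble code-generation abstract above selects the candidate program that best agrees with the others under syntactic, semantic, and behavioral similarity. The sketch below shows only the consensus-voting step, substituting difflib's SequenceMatcher for the CodeBLEU and CrossHair scores used in the paper; that substitution and the example programs are assumptions, and a purely textual proxy cannot capture behavioral equivalence the way the original method does.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]; a stand-in for the CodeBLEU and
    behavioral-equivalence scores used in the actual approach."""
    return SequenceMatcher(None, a, b).ratio()

def select_by_consensus(candidates):
    """Return the candidate whose summed similarity to all other candidates is
    highest, i.e. the program that best represents the ensemble consensus."""
    totals = []
    for i, cand in enumerate(candidates):
        totals.append(sum(similarity(cand, other)
                          for j, other in enumerate(candidates) if j != i))
    return candidates[totals.index(max(totals))]

if __name__ == "__main__":
    # Hypothetical candidates produced by different LLMs for the same task.
    programs = [
        "def add(a, b):\n    return a + b\n",
        "def add(x, y):\n    return x + y\n",
        "def add(a, b):\n    return a - b\n",   # behavioral outlier
    ]
    print(select_by_consensus(programs))
```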
2503.15842 | Changlong Shi | Changlong Shi, He Zhao, Bingjie Zhang, Mingyuan Zhou, Dandan Guo, Yi
Chang | FedAWA: Adaptive Optimization of Aggregation Weights in Federated
Learning Using Client Vectors | Accepted in CVPR 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) has emerged as a promising framework for distributed
machine learning, enabling collaborative model training without sharing local
data, thereby preserving privacy and enhancing security. However, data
heterogeneity resulting from differences across user behaviors, preferences,
and device characteristics poses a significant challenge for federated
learning. Most previous works overlook the adjustment of aggregation weights,
relying solely on dataset size for weight assignment, which often leads to
unstable convergence and reduced model performance. Recently, several studies
have sought to refine aggregation strategies by incorporating dataset
characteristics and model alignment. However, adaptively adjusting aggregation
weights while ensuring data security-without requiring additional proxy
data-remains a significant challenge. In this work, we propose Federated
learning with Adaptive Weight Aggregation (FedAWA), a novel method that
adaptively adjusts aggregation weights based on client vectors during the
learning process. The client vector captures the direction of model updates,
reflecting local data variations, and is used to optimize the aggregation
weight without requiring additional datasets or violating privacy. By assigning
higher aggregation weights to local models whose updates align closely with the
global optimization direction, FedAWA enhances the stability and generalization
of the global model. Extensive experiments under diverse scenarios demonstrate
the superiority of our method, providing a promising solution to the challenges
of data heterogeneity in federated learning.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 04:49:40 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Shi",
"Changlong",
""
],
[
"Zhao",
"He",
""
],
[
"Zhang",
"Bingjie",
""
],
[
"Zhou",
"Mingyuan",
""
],
[
"Guo",
"Dandan",
""
],
[
"Chang",
"Yi",
""
]
] | TITLE: FedAWA: Adaptive Optimization of Aggregation Weights in Federated
Learning Using Client Vectors
ABSTRACT: Federated Learning (FL) has emerged as a promising framework for distributed
machine learning, enabling collaborative model training without sharing local
data, thereby preserving privacy and enhancing security. However, data
heterogeneity resulting from differences across user behaviors, preferences,
and device characteristics poses a significant challenge for federated
learning. Most previous works overlook the adjustment of aggregation weights,
relying solely on dataset size for weight assignment, which often leads to
unstable convergence and reduced model performance. Recently, several studies
have sought to refine aggregation strategies by incorporating dataset
characteristics and model alignment. However, adaptively adjusting aggregation
weights while ensuring data security-without requiring additional proxy
data-remains a significant challenge. In this work, we propose Federated
learning with Adaptive Weight Aggregation (FedAWA), a novel method that
adaptively adjusts aggregation weights based on client vectors during the
learning process. The client vector captures the direction of model updates,
reflecting local data variations, and is used to optimize the aggregation
weight without requiring additional datasets or violating privacy. By assigning
higher aggregation weights to local models whose updates align closely with the
global optimization direction, FedAWA enhances the stability and generalization
of the global model. Extensive experiments under diverse scenarios demonstrate
the superiority of our method, providing a promising solution to the challenges
of data heterogeneity in federated learning.
|
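The FedAWA abstract above adjusts aggregation weights from client update vectors rather than dataset sizes. The sketch below weights clients by the cosine alignment of their flattened updates with the mean update direction and normalizes the weights with a softmax; this specific weighting rule and the temperature parameter are illustrative assumptions, not the exact FedAWA formulation.

```python
import numpy as np

def adaptive_aggregate(client_updates, temperature=1.0):
    """Aggregate client model updates with alignment-based weights.

    client_updates: list of 1-D numpy arrays (flattened local updates).
    The rule below (cosine similarity to the mean update, passed through a
    softmax) is an assumed stand-in for the paper's client-vector weighting.
    """
    updates = np.stack(client_updates)                    # (n_clients, dim)
    global_dir = updates.mean(axis=0)
    global_dir = global_dir / (np.linalg.norm(global_dir) + 1e-12)

    norms = np.linalg.norm(updates, axis=1) + 1e-12
    cosine = updates @ global_dir / norms                 # alignment per client

    weights = np.exp(cosine / temperature)
    weights = weights / weights.sum()
    return weights @ updates, weights                     # aggregated update, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=100)
    clients = [base + 0.1 * rng.normal(size=100) for _ in range(4)]
    clients.append(-base)                                 # one misaligned client
    agg, w = adaptive_aggregate(clients)
    print("aggregation weights:", np.round(w, 3))
```

Note that no raw data or proxy dataset is needed: only the update vectors that clients already transmit are used, which is the privacy argument made in the abstract.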
2503.15845 | Qishen Zhou | Qishen Zhou, Yifan Zhang, Michail A. Makridis, Anastasios Kouvelas,
Yibing Wang, and Simon Hu | Network-wide Freeway Traffic Estimation Using Sparse Sensor Data: A
Dirichlet Graph Auto-Encoder Approach | This work has been submitted to the IEEE for possible publication | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network-wide Traffic State Estimation (TSE), which aims to infer a complete
image of network traffic states with sparsely deployed sensors, plays a vital
role in intelligent transportation systems. With the development of data-driven
methods, traffic dynamics modeling has advanced significantly. However, TSE
poses fundamental challenges for data-driven approaches, since historical
patterns cannot be learned locally at sensor-free segments. Although inductive
graph learning shows promise in estimating states at locations without sensors,
existing methods typically handle unobserved locations by filling them with
zeros, introducing bias to the sensitive graph message propagation. The
recently proposed Dirichlet Energy-based Feature Propagation (DEFP) method
achieves State-Of-The-Art (SOTA) performance in unobserved node classification
by eliminating the need for zero-filling. However, applying it to TSE faces
three key challenges: inability to handle directed traffic networks, strong
assumptions in traffic spatial correlation modeling, and neglect of distinct
propagation rules for different patterns (e.g., congestion and free flow). We
propose DGAE, a novel inductive graph representation model that addresses these
challenges through theoretically derived DEFP for Directed graph (DEFP4D),
enhanced spatial representation learning via DEFP4D-guided latent space
encoding, and physics-guided propagation mechanisms that separately handle
congested and free-flow patterns. Experiments on three traffic datasets
demonstrate that DGAE outperforms existing SOTA methods and exhibits strong
cross-city transferability. Furthermore, DEFP4D can serve as a standalone
lightweight solution, showing superior performance under extremely sparse
sensor conditions.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 04:58:50 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhou",
"Qishen",
""
],
[
"Zhang",
"Yifan",
""
],
[
"Makridis",
"Michail A.",
""
],
[
"Kouvelas",
"Anastasios",
""
],
[
"Wang",
"Yibing",
""
],
[
"Hu",
"Simon",
""
]
] | TITLE: Network-wide Freeway Traffic Estimation Using Sparse Sensor Data: A
Dirichlet Graph Auto-Encoder Approach
ABSTRACT: Network-wide Traffic State Estimation (TSE), which aims to infer a complete
image of network traffic states with sparsely deployed sensors, plays a vital
role in intelligent transportation systems. With the development of data-driven
methods, traffic dynamics modeling has advanced significantly. However, TSE
poses fundamental challenges for data-driven approaches, since historical
patterns cannot be learned locally at sensor-free segments. Although inductive
graph learning shows promise in estimating states at locations without sensors,
existing methods typically handle unobserved locations by filling them with
zeros, introducing bias to the sensitive graph message propagation. The
recently proposed Dirichlet Energy-based Feature Propagation (DEFP) method
achieves State-Of-The-Art (SOTA) performance in unobserved node classification
by eliminating the need for zero-filling. However, applying it to TSE faces
three key challenges: inability to handle directed traffic networks, strong
assumptions in traffic spatial correlation modeling, and neglect of distinct
propagation rules for different patterns (e.g., congestion and free flow). We
propose DGAE, a novel inductive graph representation model that addresses these
challenges through theoretically derived DEFP for Directed graph (DEFP4D),
enhanced spatial representation learning via DEFP4D-guided latent space
encoding, and physics-guided propagation mechanisms that separately handle
congested and free-flow patterns. Experiments on three traffic datasets
demonstrate that DGAE outperforms existing SOTA methods and exhibits strong
cross-city transferability. Furthermore, DEFP4D can serve as a standalone
lightweight solution, showing superior performance under extremely sparse
sensor conditions.
|
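The DGAE abstract above builds on Dirichlet Energy-based Feature Propagation, which avoids zero-filling by iteratively averaging neighbor features into unobserved nodes while clamping observed nodes. The sketch below shows that propagation on a small undirected toy graph; the directed-graph extension (DEFP4D) and the physics-guided mechanisms are the paper's contributions and are not reproduced here.

```python
import numpy as np

def propagate_features(adj, features, observed_mask, n_iters=100):
    """Fill unobserved node features by iterative neighbour averaging.

    adj: (n, n) symmetric adjacency matrix of a toy undirected graph.
    features: (n, d) array; rows of unobserved nodes may start at zero.
    observed_mask: boolean (n,) array, True where a sensor reading exists.
    Observed rows are clamped every iteration, so the result approaches the
    minimiser of a Dirichlet-energy-style smoothness objective.
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    transition = adj / deg                           # row-normalised averaging
    x = features.copy()
    for _ in range(n_iters):
        x = transition @ x                           # average neighbour features
        x[observed_mask] = features[observed_mask]   # clamp observed nodes
    return x

if __name__ == "__main__":
    # 4-node path graph 0 - 1 - 2 - 3, with sensors only at the two ends.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    feats = np.array([[10.0], [0.0], [0.0], [40.0]])
    observed = np.array([True, False, False, True])
    print(propagate_features(adj, feats, observed).ravel())
```

On this path graph the unobserved nodes converge to the harmonic interpolation between the two sensed endpoints (roughly 20 and 30), which is the behaviour that makes the approach attractive under extremely sparse sensing.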
2503.15848 | Jinghan Zhang | Jinghan Zhang, Xiting Wang, Fengran Mo, Yeyang Zhou, Wanfu Gao,
Kunpeng Liu | Entropy-based Exploration Conduction for Multi-step Reasoning | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large language model (LLM) reasoning, multi-step processes have proven
effective for solving complex tasks. However, the depth of exploration can
significantly affect the reasoning performance. Existing methods to
automatically decide the depth often bring high costs and lack flexibility, and
thus undermine the model's reasoning accuracy. To address these issues, we
propose Entropy-based Exploration Depth Conduction (Entro-duction), a novel
method that dynamically adjusts the exploration depth during multi-step
reasoning by monitoring the LLM's output entropy and variance entropy. We employ
these two metrics to capture the model's current uncertainty and the
fluctuation of uncertainty across consecutive reasoning steps. Based on the
observed changes, the LLM selects whether to deepen, expand, or stop exploration
according to the resulting probabilities. In this way, we balance the reasoning accuracy
and exploration effectiveness. Experimental results across four benchmark
datasets demonstrate the efficacy of Entro-duction. We further conduct
experiments and analysis on the components of Entro-duction to discuss their
contributions to reasoning performance.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:03:26 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zhang",
"Jinghan",
""
],
[
"Wang",
"Xiting",
""
],
[
"Mo",
"Fengran",
""
],
[
"Zhou",
"Yeyang",
""
],
[
"Gao",
"Wanfu",
""
],
[
"Liu",
"Kunpeng",
""
]
] | TITLE: Entropy-based Exploration Conduction for Multi-step Reasoning
ABSTRACT: In large language model (LLM) reasoning, multi-step processes have proven
effective for solving complex tasks. However, the depth of exploration can
significantly affect the reasoning performance. Existing methods to
automatically decide the depth often bring high costs and lack flexibility, and
thus undermine the model's reasoning accuracy. To address these issues, we
propose Entropy-based Exploration Depth Conduction (Entro-duction), a novel
method that dynamically adjusts the exploration depth during multi-step
reasoning by monitoring the LLM's output entropy and variance entropy. We employ
these two metrics to capture the model's current uncertainty and the
fluctuation of uncertainty across consecutive reasoning steps. Based on the
observed changes, the LLM selects whether to deepen, expand, or stop exploration
according to the resulting probabilities. In this way, we balance the reasoning accuracy
and exploration effectiveness. Experimental results across four benchmark
datasets demonstrate the efficacy of Entro-duction. We further conduct
experiments and analysis on the components of Entro-duction to discuss their
contributions to reasoning performance.
|
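The Entro-duction abstract above decides whether to deepen, expand, or stop exploration from the model's output entropy and the fluctuation of that entropy across steps. The sketch below computes a per-step mean token entropy and the variance over the entropy trace, then applies simple thresholds; the thresholds, the deterministic decision rule, and the synthetic token distributions are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def step_entropy(token_probs: np.ndarray) -> float:
    """Mean Shannon entropy (in nats) over the token distributions of one step."""
    p = np.clip(token_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())

def choose_action(entropy_trace, high_entropy=2.0, high_variance=0.25):
    """Map the entropy trace to an exploration move.

    The thresholds and the mapping (high uncertainty -> deepen, fluctuating
    uncertainty -> expand, otherwise stop) are illustrative assumptions.
    """
    current = entropy_trace[-1]
    variance = float(np.var(entropy_trace)) if len(entropy_trace) > 1 else 0.0
    if current > high_entropy:
        return "deepen"
    if variance > high_variance:
        return "expand"
    return "stop"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = []
    for step in range(3):
        # Hypothetical per-token distributions over a 50-symbol vocabulary;
        # later steps get sharper logits and hence lower entropy.
        logits = rng.normal(size=(8, 50)) * (step + 1)
        probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
        trace.append(step_entropy(probs))
        print(f"step {step}: entropy={trace[-1]:.2f} -> {choose_action(trace)}")
```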
2503.15855 | Hyojun Go | Hyojun Go, Byeongjun Park, Hyelin Nam, Byung-Hoon Kim, Hyungjin Chung,
Changick Kim | VideoRFSplat: Direct Scene-Level Text-to-3D Gaussian Splatting
Generation with Flexible Pose and Multi-View Joint Modeling | Project page: https://gohyojun15.github.io/VideoRFSplat/ | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose VideoRFSplat, a direct text-to-3D model leveraging a video
generation model to generate realistic 3D Gaussian Splatting (3DGS) for
unbounded real-world scenes. To generate diverse camera poses and unbounded
spatial extent of real-world scenes, while ensuring generalization to arbitrary
text prompts, previous methods fine-tune 2D generative models to jointly model
camera poses and multi-view images. However, these methods suffer from
instability when extending 2D generative models to joint modeling due to the
modality gap, which necessitates additional models to stabilize training and
inference. In this work, we propose an architecture and a sampling strategy to
jointly model multi-view images and camera poses when fine-tuning a video
generation model. Our core idea is a dual-stream architecture that attaches a
dedicated pose generation model alongside a pre-trained video generation model
via communication blocks, generating multi-view images and camera poses through
separate streams. This design reduces interference between the pose and image
modalities. Additionally, we propose an asynchronous sampling strategy that
denoises camera poses faster than multi-view images, allowing rapidly denoised
poses to condition multi-view generation, reducing mutual ambiguity and
enhancing cross-modal consistency. Trained on multiple large-scale real-world
datasets (RealEstate10K, MVImgNet, DL3DV-10K, ACID), VideoRFSplat outperforms
existing text-to-3D direct generation methods that heavily depend on post-hoc
refinement via score distillation sampling, achieving superior results without
such refinement.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:26:09 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Go",
"Hyojun",
""
],
[
"Park",
"Byeongjun",
""
],
[
"Nam",
"Hyelin",
""
],
[
"Kim",
"Byung-Hoon",
""
],
[
"Chung",
"Hyungjin",
""
],
[
"Kim",
"Changick",
""
]
] | TITLE: VideoRFSplat: Direct Scene-Level Text-to-3D Gaussian Splatting
Generation with Flexible Pose and Multi-View Joint Modeling
ABSTRACT: We propose VideoRFSplat, a direct text-to-3D model leveraging a video
generation model to generate realistic 3D Gaussian Splatting (3DGS) for
unbounded real-world scenes. To generate diverse camera poses and unbounded
spatial extent of real-world scenes, while ensuring generalization to arbitrary
text prompts, previous methods fine-tune 2D generative models to jointly model
camera poses and multi-view images. However, these methods suffer from
instability when extending 2D generative models to joint modeling due to the
modality gap, which necessitates additional models to stabilize training and
inference. In this work, we propose an architecture and a sampling strategy to
jointly model multi-view images and camera poses when fine-tuning a video
generation model. Our core idea is a dual-stream architecture that attaches a
dedicated pose generation model alongside a pre-trained video generation model
via communication blocks, generating multi-view images and camera poses through
separate streams. This design reduces interference between the pose and image
modalities. Additionally, we propose an asynchronous sampling strategy that
denoises camera poses faster than multi-view images, allowing rapidly denoised
poses to condition multi-view generation, reducing mutual ambiguity and
enhancing cross-modal consistency. Trained on multiple large-scale real-world
datasets (RealEstate10K, MVImgNet, DL3DV-10K, ACID), VideoRFSplat outperforms
existing text-to-3D direct generation methods that heavily depend on post-hoc
refinement via score distillation sampling, achieving superior results without
such refinement.
|
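The VideoRFSplat abstract above uses an asynchronous sampling strategy in which camera poses are denoised faster than multi-view images, so that nearly clean poses can condition image generation. The sketch below only constructs two per-step noise-level schedules with the pose stream finishing early; the linear shape of the schedules and the speed-up factor are assumptions about the general idea, not the paper's actual scheduler.

```python
import numpy as np

def asynchronous_schedules(n_image_steps=50, pose_speedup=2.0, t_max=1000):
    """Build per-step noise levels for two streams denoised together.

    The pose stream reaches t = 0 after n_image_steps / pose_speedup iterations,
    so for the remaining iterations fully denoised poses condition the image
    stream. Linear schedules and the speed-up factor are assumptions.
    """
    image_ts = np.linspace(t_max, 0, n_image_steps + 1)
    n_pose_steps = int(n_image_steps / pose_speedup)
    pose_ts = np.concatenate([
        np.linspace(t_max, 0, n_pose_steps + 1),
        np.zeros(n_image_steps - n_pose_steps),      # poses stay clean afterwards
    ])
    return image_ts, pose_ts

if __name__ == "__main__":
    image_ts, pose_ts = asynchronous_schedules(n_image_steps=10, pose_speedup=2.0)
    for step, (ti, tp) in enumerate(zip(image_ts, pose_ts)):
        print(f"step {step:2d}: image t={ti:6.1f}  pose t={tp:6.1f}")
```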
2503.15861 | Zhuonan Liang | Jie Gan, Zhuonan Liang, Jianan Fan, Lisa Mcguire, Caterina Watson,
Jacqueline Spurway, Jillian Clarke, Weidong Cai | Sequential Spatial-Temporal Network for Interpretable Automatic
Ultrasonic Assessment of Fetal Head during labor | This work has been accepted to 2025 IEEE 22nd International Symposium
on Biomedical Imaging (ISBI) | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intrapartum ultrasound guideline established by ISUOG highlights the
Angle of Progression (AoP) and Head Symphysis Distance (HSD) as pivotal metrics
for assessing fetal head descent and predicting delivery outcomes. Accurate
measurement of the AoP and HSD requires a structured process. This begins with
identifying standardized ultrasound planes, followed by the detection of
specific anatomical landmarks within the regions of the pubic symphysis and
fetal head that correlate with the delivery parameters AoP and HSD. Finally,
these measurements are derived based on the identified anatomical landmarks.
Addressing the clinical demands and standard operation process outlined in the
ISUOG guideline, we introduce the Sequential Spatial-Temporal Network (SSTN),
the first interpretable model specifically designed for intrapartum
ultrasound video analysis. The SSTN operates by first identifying
ultrasound planes, then segmenting anatomical structures such as the pubic
symphysis and fetal head, and finally detecting key landmarks for precise
measurement of HSD and AoP. Furthermore, the cohesive framework leverages
task-related information to improve accuracy and reliability. Experimental
evaluations on clinical datasets demonstrate that SSTN significantly surpasses
existing models, reducing the mean absolute error by 18% for AoP and 22% for
HSD.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:33:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gan",
"Jie",
""
],
[
"Liang",
"Zhuonan",
""
],
[
"Fan",
"Jianan",
""
],
[
"Mcguire",
"Lisa",
""
],
[
"Watson",
"Caterina",
""
],
[
"Spurway",
"Jacqueline",
""
],
[
"Clarke",
"Jillian",
""
],
[
"Cai",
"Weidong",
""
]
] | TITLE: Sequential Spatial-Temporal Network for Interpretable Automatic
Ultrasonic Assessment of Fetal Head during labor
ABSTRACT: The intrapartum ultrasound guideline established by ISUOG highlights the
Angle of Progression (AoP) and Head Symphysis Distance (HSD) as pivotal metrics
for assessing fetal head descent and predicting delivery outcomes. Accurate
measurement of the AoP and HSD requires a structured process. This begins with
identifying standardized ultrasound planes, followed by the detection of
specific anatomical landmarks within the regions of the pubic symphysis and
fetal head that correlate with the delivery parameters AoP and HSD. Finally,
these measurements are derived based on the identified anatomical landmarks.
Addressing the clinical demands and standard operation process outlined in the
ISUOG guideline, we introduce the Sequential Spatial-Temporal Network (SSTN),
the first interpretable model specifically designed for intrapartum
ultrasound video analysis. The SSTN operates by first identifying
ultrasound planes, then segmenting anatomical structures such as the pubic
symphysis and fetal head, and finally detecting key landmarks for precise
measurement of HSD and AoP. Furthermore, the cohesive framework leverages
task-related information to improve accuracy and reliability. Experimental
evaluations on clinical datasets demonstrate that SSTN significantly surpasses
existing models, reducing the mean absolute error by 18% for AoP and 22% for
HSD.
|
2503.15866 | Vinod Puthuvath | Dincy R Arikkat, Vinod P., Rafidha Rehiman K. A., Serena Nicolazzo,
Marco Arazzi, Antonino Nocera, Mauro Conti | DroidTTP: Mapping Android Applications with TTP for Cyber Threat
Intelligence | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread adoption of Android devices for sensitive operations like
banking and communication has made them prime targets for cyber threats,
particularly Advanced Persistent Threats (APT) and sophisticated malware
attacks. Traditional malware detection methods rely on binary classification,
failing to provide insights into adversarial Tactics, Techniques, and
Procedures (TTPs). Understanding malware behavior is crucial for enhancing
cybersecurity defenses. To address this gap, we introduce DroidTTP, a framework
mapping Android malware behaviors to TTPs based on the MITRE ATT&CK framework.
Our curated dataset explicitly links MITRE TTPs to Android applications. We
developed an automated solution leveraging the Problem Transformation Approach
(PTA) and Large Language Models (LLMs) to map applications to both Tactics and
Techniques. Additionally, we employed Retrieval-Augmented Generation (RAG) with
prompt engineering and LLM fine-tuning for TTP predictions. Our structured
pipeline includes dataset creation, hyperparameter tuning, data augmentation,
feature selection, model development, and SHAP-based model interpretability.
Among LLMs, Llama achieved the highest performance in Tactic classification
with a Jaccard Similarity of 0.9583 and Hamming Loss of 0.0182, and in
Technique classification with a Jaccard Similarity of 0.9348 and Hamming Loss
of 0.0127. However, the Label Powerset XGBoost model outperformed LLMs,
achieving a Jaccard Similarity of 0.9893 for Tactic classification and 0.9753
for Technique classification, with a Hamming Loss of 0.0054 and 0.0050,
respectively. While XGBoost showed superior performance, the narrow margin
highlights the potential of LLM-based approaches in TTP classification.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:38:24 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Arikkat",
"Dincy R",
""
],
[
"P.",
"Vinod",
""
],
[
"A.",
"Rafidha Rehiman K.",
""
],
[
"Nicolazzo",
"Serena",
""
],
[
"Arazzi",
"Marco",
""
],
[
"Nocera",
"Antonino",
""
],
[
"Conti",
"Mauro",
""
]
] | TITLE: DroidTTP: Mapping Android Applications with TTP for Cyber Threat
Intelligence
ABSTRACT: The widespread adoption of Android devices for sensitive operations like
banking and communication has made them prime targets for cyber threats,
particularly Advanced Persistent Threats (APT) and sophisticated malware
attacks. Traditional malware detection methods rely on binary classification,
failing to provide insights into adversarial Tactics, Techniques, and
Procedures (TTPs). Understanding malware behavior is crucial for enhancing
cybersecurity defenses. To address this gap, we introduce DroidTTP, a framework
mapping Android malware behaviors to TTPs based on the MITRE ATT&CK framework.
Our curated dataset explicitly links MITRE TTPs to Android applications. We
developed an automated solution leveraging the Problem Transformation Approach
(PTA) and Large Language Models (LLMs) to map applications to both Tactics and
Techniques. Additionally, we employed Retrieval-Augmented Generation (RAG) with
prompt engineering and LLM fine-tuning for TTP predictions. Our structured
pipeline includes dataset creation, hyperparameter tuning, data augmentation,
feature selection, model development, and SHAP-based model interpretability.
Among LLMs, Llama achieved the highest performance in Tactic classification
with a Jaccard Similarity of 0.9583 and Hamming Loss of 0.0182, and in
Technique classification with a Jaccard Similarity of 0.9348 and Hamming Loss
of 0.0127. However, the Label Powerset XGBoost model outperformed LLMs,
achieving a Jaccard Similarity of 0.9893 for Tactic classification and 0.9753
for Technique classification, with a Hamming Loss of 0.0054 and 0.0050,
respectively. While XGBoost showed superior performance, the narrow margin
highlights the potential of LLM-based approaches in TTP classification.
|
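The DroidTTP abstract above reports multi-label Jaccard similarity and Hamming loss for Tactic and Technique prediction. The sketch below computes both metrics from binary label matrices using their standard multi-label definitions; the tiny example labels are hypothetical.

```python
import numpy as np

def jaccard_similarity(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Sample-wise Jaccard similarity for multi-label predictions.

    Both inputs are binary matrices of shape (n_samples, n_labels).
    For each sample: |intersection| / |union|, defined as 1.0 when both the
    true and predicted label sets are empty.
    """
    inter = np.logical_and(y_true, y_pred).sum(axis=1)
    union = np.logical_or(y_true, y_pred).sum(axis=1)
    per_sample = np.where(union == 0, 1.0, inter / np.maximum(union, 1))
    return float(per_sample.mean())

def hamming_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of individual label assignments that are wrong."""
    return float(np.not_equal(y_true, y_pred).mean())

if __name__ == "__main__":
    # Hypothetical 3 apps x 4 MITRE tactics (1 = tactic present).
    y_true = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 0],
                       [1, 1, 0, 1]])
    y_pred = np.array([[1, 0, 1, 0],
                       [0, 1, 1, 0],
                       [1, 0, 0, 1]])
    print("Jaccard:", round(jaccard_similarity(y_true, y_pred), 4))
    print("Hamming loss:", round(hamming_loss(y_true, y_pred), 4))
```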
2503.15867 | Rohit Kundu | Rohit Kundu, Athula Balachandran, Amit K. Roy-Chowdhury | TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully
Synthetic Data | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Detecting DeepFakes has become a crucial research area as the widespread use
of AI image generators enables the effortless creation of face-manipulated and
fully synthetic content, yet existing methods are often limited to binary
classification (real vs. fake) and lack interpretability. To address these
challenges, we propose TruthLens, a novel and highly generalizable framework
for DeepFake detection that not only determines whether an image is real or
fake but also provides detailed textual reasoning for its predictions. Unlike
traditional methods, TruthLens effectively handles both face-manipulated
DeepFakes and fully AI-generated content while addressing fine-grained queries
such as "Does the eyes/nose/mouth look real or fake?"
The architecture of TruthLens combines the global contextual understanding of
multimodal large language models like PaliGemma2 with the localized feature
extraction capabilities of vision-only models like DINOv2. This hybrid design
leverages the complementary strengths of both models, enabling robust detection
of subtle manipulations while maintaining interpretability. Extensive
experiments on diverse datasets demonstrate that TruthLens outperforms
state-of-the-art methods in detection accuracy (by 2-14%) and explainability,
in both in-domain and cross-data settings, generalizing effectively across
traditional and emerging manipulation techniques.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:40:42 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kundu",
"Rohit",
""
],
[
"Balachandran",
"Athula",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully
Synthetic Data
ABSTRACT: Detecting DeepFakes has become a crucial research area as the widespread use
of AI image generators enables the effortless creation of face-manipulated and
fully synthetic content, yet existing methods are often limited to binary
classification (real vs. fake) and lack interpretability. To address these
challenges, we propose TruthLens, a novel and highly generalizable framework
for DeepFake detection that not only determines whether an image is real or
fake but also provides detailed textual reasoning for its predictions. Unlike
traditional methods, TruthLens effectively handles both face-manipulated
DeepFakes and fully AI-generated content while addressing fine-grained queries
such as "Does the eyes/nose/mouth look real or fake?"
The architecture of TruthLens combines the global contextual understanding of
multimodal large language models like PaliGemma2 with the localized feature
extraction capabilities of vision-only models like DINOv2. This hybrid design
leverages the complementary strengths of both models, enabling robust detection
of subtle manipulations while maintaining interpretability. Extensive
experiments on diverse datasets demonstrate that TruthLens outperforms
state-of-the-art methods in detection accuracy (by 2-14%) and explainability,
in both in-domain and cross-data settings, generalizing effectively across
traditional and emerging manipulation techniques.
|
2503.15870 | Ali Anaissi | Yuxin Miao, Xinyuan Yang, Hongda Fan, Yichun Li, Yishu Hong, Xiechen
Guo, Ali Braytee, Weidong Huang, Ali Anaissi | FedSAF: A Federated Learning Framework for Enhanced Gastric Cancer
Detection and Privacy Preservation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Gastric cancer is one of the most commonly diagnosed cancers and has a high
mortality rate. Due to limited medical resources, developing machine learning
models for gastric cancer recognition provides an efficient solution for
medical institutions. However, such models typically require large sample sizes
for training and testing, which can challenge patient privacy. Federated
learning offers an effective alternative by enabling model training across
multiple institutions without sharing sensitive patient data. This paper
addresses the limited sample size of publicly available gastric cancer data
with a modified data processing method. This paper introduces FedSAF, a novel
federated learning algorithm designed to improve the performance of existing
methods, particularly in non-independent and identically distributed (non-IID)
data scenarios. FedSAF incorporates attention-based message passing and the
Fisher Information Matrix to enhance model accuracy, while a model splitting
function reduces computation and transmission costs. Hyperparameter tuning and
ablation studies demonstrate the effectiveness of this new algorithm, showing
improvements in test accuracy on gastric cancer datasets, with FedSAF
outperforming existing federated learning methods like FedAMP, FedAvg, and
FedProx. The framework's robustness and generalization ability were further
validated across additional datasets (SEED, BOT, FashionMNIST, and CIFAR-10),
achieving high performance in diverse environments.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:48:48 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Miao",
"Yuxin",
""
],
[
"Yang",
"Xinyuan",
""
],
[
"Fan",
"Hongda",
""
],
[
"Li",
"Yichun",
""
],
[
"Hong",
"Yishu",
""
],
[
"Guo",
"Xiechen",
""
],
[
"Braytee",
"Ali",
""
],
[
"Huang",
"Weidong",
""
],
[
"Anaissi",
"Ali",
""
]
] | TITLE: FedSAF: A Federated Learning Framework for Enhanced Gastric Cancer
Detection and Privacy Preservation
ABSTRACT: Gastric cancer is one of the most commonly diagnosed cancers and has a high
mortality rate. Due to limited medical resources, developing machine learning
models for gastric cancer recognition provides an efficient solution for
medical institutions. However, such models typically require large sample sizes
for training and testing, which can challenge patient privacy. Federated
learning offers an effective alternative by enabling model training across
multiple institutions without sharing sensitive patient data. This paper
addresses the limited sample size of publicly available gastric cancer data
with a modified data processing method. This paper introduces FedSAF, a novel
federated learning algorithm designed to improve the performance of existing
methods, particularly in non-independent and identically distributed (non-IID)
data scenarios. FedSAF incorporates attention-based message passing and the
Fisher Information Matrix to enhance model accuracy, while a model splitting
function reduces computation and transmission costs. Hyperparameter tuning and
ablation studies demonstrate the effectiveness of this new algorithm, showing
improvements in test accuracy on gastric cancer datasets, with FedSAF
outperforming existing federated learning methods like FedAMP, FedAvg, and
FedProx. The framework's robustness and generalization ability were further
validated across additional datasets (SEED, BOT, FashionMNIST, and CIFAR-10),
achieving high performance in diverse environments.
|
2503.15875 | Daqi Liu | Haiguang Wang, Daqi Liu, Hongwei Xie, Haisong Liu, Enhui Ma, Kaicheng
Yu, Limin Wang, Bing Wang | MiLA: Multi-view Intensive-fidelity Long-term Video Generation World
Model for Autonomous Driving | project website: https://github.com/xiaomi-mlab/mila.github.io | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, data-driven techniques have greatly advanced autonomous
driving systems, but the need for rare and diverse training data remains a
challenge, requiring significant investment in equipment and labor. World
models, which predict and generate future environmental states, offer a
promising solution by synthesizing annotated video data for training. However,
existing methods struggle to generate long, consistent videos without
accumulating errors, especially in dynamic scenes. To address this, we propose
MiLA, a novel framework for generating high-fidelity, long-duration videos up
to one minute. MiLA utilizes a Coarse-to-Re(fine) approach to both stabilize
video generation and correct distortion of dynamic objects. Additionally, we
introduce a Temporal Progressive Denoising Scheduler and Joint Denoising and
Correcting Flow modules to improve the quality of generated videos. Extensive
experiments on the nuScenes dataset show that MiLA achieves state-of-the-art
performance in video generation quality. For more information, visit the
project website: https://github.com/xiaomi-mlab/mila.github.io.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:58:32 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Haiguang",
""
],
[
"Liu",
"Daqi",
""
],
[
"Xie",
"Hongwei",
""
],
[
"Liu",
"Haisong",
""
],
[
"Ma",
"Enhui",
""
],
[
"Yu",
"Kaicheng",
""
],
[
"Wang",
"Limin",
""
],
[
"Wang",
"Bing",
""
]
] | TITLE: MiLA: Multi-view Intensive-fidelity Long-term Video Generation World
Model for Autonomous Driving
ABSTRACT: In recent years, data-driven techniques have greatly advanced autonomous
driving systems, but the need for rare and diverse training data remains a
challenge, requiring significant investment in equipment and labor. World
models, which predict and generate future environmental states, offer a
promising solution by synthesizing annotated video data for training. However,
existing methods struggle to generate long, consistent videos without
accumulating errors, especially in dynamic scenes. To address this, we propose
MiLA, a novel framework for generating high-fidelity, long-duration videos up
to one minute. MiLA utilizes a Coarse-to-Re(fine) approach to both stabilize
video generation and correct distortion of dynamic objects. Additionally, we
introduce a Temporal Progressive Denoising Scheduler and Joint Denoising and
Correcting Flow modules to improve the quality of generated videos. Extensive
experiments on the nuScenes dataset show that MiLA achieves state-of-the-art
performance in video generation quality. For more information, visit the
project website: https://github.com/xiaomi-mlab/mila.github.io.
|
2503.15876 | Kai Chen | Kai Chen, Zebing Sun | DeepPsy-Agent: A Stage-Aware and Deep-Thinking Emotional Support Agent
System | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces DeepPsy-Agent, an innovative psychological support
system that combines the three-stage helping theory in psychology with deep
learning techniques. The system consists of two core components: (1) a
multi-stage response-capable dialogue model (\textit{deeppsy-chat}), which
enhances reasoning capabilities through stage-awareness and deep-thinking
analysis to generate high-quality responses; and (2) a real-time stage
transition detection model that identifies contextual shifts to guide the
dialogue towards more effective intervention stages. Based on 30,000 real
psychological hotline conversations, we employ AI-simulated dialogues and
expert re-annotation strategies to construct a high-quality multi-turn dialogue
dataset. Experimental results demonstrate that DeepPsy-Agent outperforms
general-purpose large language models (LLMs) in key metrics such as problem
exposure completeness, cognitive restructuring success rate, and action
adoption rate. Ablation studies further validate the effectiveness of
stage-awareness and deep-thinking modules, showing that stage information
contributes 42.3\% to performance, while the deep-thinking module increases
root-cause identification by 58.3\% and reduces ineffective suggestions by
72.1\%. This system addresses critical challenges in AI-based psychological
support through dynamic dialogue management and deep reasoning, advancing
intelligent mental health services.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:59:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Kai",
""
],
[
"Sun",
"Zebing",
""
]
] | TITLE: DeepPsy-Agent: A Stage-Aware and Deep-Thinking Emotional Support Agent
System
ABSTRACT: This paper introduces DeepPsy-Agent, an innovative psychological support
system that combines the three-stage helping theory in psychology with deep
learning techniques. The system consists of two core components: (1) a
multi-stage response-capable dialogue model (\textit{deeppsy-chat}), which
enhances reasoning capabilities through stage-awareness and deep-thinking
analysis to generate high-quality responses; and (2) a real-time stage
transition detection model that identifies contextual shifts to guide the
dialogue towards more effective intervention stages. Based on 30,000 real
psychological hotline conversations, we employ AI-simulated dialogues and
expert re-annotation strategies to construct a high-quality multi-turn dialogue
dataset. Experimental results demonstrate that DeepPsy-Agent outperforms
general-purpose large language models (LLMs) in key metrics such as problem
exposure completeness, cognitive restructuring success rate, and action
adoption rate. Ablation studies further validate the effectiveness of
stage-awareness and deep-thinking modules, showing that stage information
contributes 42.3\% to performance, while the deep-thinking module increases
root-cause identification by 58.3\% and reduces ineffective suggestions by
72.1\%. This system addresses critical challenges in AI-based psychological
support through dynamic dialogue management and deep reasoning, advancing
intelligent mental health services.
|
2503.15877 | Tiange Xiang | Tiange Xiang, Kai Li, Chengjiang Long, Christian H\"ane, Peihong Guo,
Scott Delp, Ehsan Adeli, Li Fei-Fei | Repurposing 2D Diffusion Models with Gaussian Atlas for 3D Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in text-to-image diffusion models have been driven by the
increasing availability of paired 2D data. However, the development of 3D
diffusion models has been hindered by the scarcity of high-quality 3D data,
resulting in less competitive performance compared to their 2D counterparts. To
address this challenge, we propose repurposing pre-trained 2D diffusion models
for 3D object generation. We introduce Gaussian Atlas, a novel representation
that utilizes dense 2D grids, enabling the fine-tuning of 2D diffusion models
to generate 3D Gaussians. Our approach demonstrates successful transfer
learning from a pre-trained 2D diffusion model to a 2D manifold flattened from
3D structures. To support model training, we compile GaussianVerse, a
large-scale dataset comprising 205K high-quality 3D Gaussian fittings of
various 3D objects. Our experimental results show that text-to-image diffusion
models can be effectively adapted for 3D content generation, bridging the gap
between 2D and 3D modeling.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 05:59:41 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Xiang",
"Tiange",
""
],
[
"Li",
"Kai",
""
],
[
"Long",
"Chengjiang",
""
],
[
"Häne",
"Christian",
""
],
[
"Guo",
"Peihong",
""
],
[
"Delp",
"Scott",
""
],
[
"Adeli",
"Ehsan",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: Repurposing 2D Diffusion Models with Gaussian Atlas for 3D Generation
ABSTRACT: Recent advances in text-to-image diffusion models have been driven by the
increasing availability of paired 2D data. However, the development of 3D
diffusion models has been hindered by the scarcity of high-quality 3D data,
resulting in less competitive performance compared to their 2D counterparts. To
address this challenge, we propose repurposing pre-trained 2D diffusion models
for 3D object generation. We introduce Gaussian Atlas, a novel representation
that utilizes dense 2D grids, enabling the fine-tuning of 2D diffusion models
to generate 3D Gaussians. Our approach demonstrates successful transfer
learning from a pre-trained 2D diffusion model to a 2D manifold flattened from
3D structures. To support model training, we compile GaussianVerse, a
large-scale dataset comprising 205K high-quality 3D Gaussian fittings of
various 3D objects. Our experimental results show that text-to-image diffusion
models can be effectively adapted for 3D content generation, bridging the gap
between 2D and 3D modeling.
|
2503.15887 | Haochen Wang | Haochen Wang and Kai Hu and Liangcai Gao | DocVideoQA: Towards Comprehensive Understanding of Document-Centric
Videos through Question Answering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote work and online courses have become important methods of knowledge
dissemination, leading to a large number of document-based instructional
videos. Unlike traditional video datasets, these videos mainly feature
rich-text images and audio that are densely packed with information closely
tied to the visual content, requiring advanced multimodal understanding
capabilities. However, this domain remains underexplored due to dataset
availability and its inherent complexity. In this paper, we introduce the
DocVideoQA task and dataset for the first time, comprising 1454 videos across
23 categories with a total duration of about 828 hours. The dataset is
annotated with 154k question-answer pairs generated manually and via GPT,
assessing models' comprehension, temporal awareness, and modality integration
capabilities. Initially, we establish a baseline using open-source MLLMs.
Recognizing the challenges in modality comprehension for document-centric
videos, we present DV-LLaMA, a robust video MLLM baseline. Our method enhances
unimodal feature extraction with diverse instruction-tuning data and employs
contrastive learning to strengthen modality integration. Through fine-tuning,
the LLM is equipped with audio-visual capabilities, leading to significant
improvements in document-centric video understanding. Extensive testing on the
DocVideoQA dataset shows that DV-LLaMA significantly outperforms existing
models. We'll release the code and dataset to facilitate future research.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 06:21:25 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Haochen",
""
],
[
"Hu",
"Kai",
""
],
[
"Gao",
"Liangcai",
""
]
] | TITLE: DocVideoQA: Towards Comprehensive Understanding of Document-Centric
Videos through Question Answering
ABSTRACT: Remote work and online courses have become important methods of knowledge
dissemination, leading to a large number of document-based instructional
videos. Unlike traditional video datasets, these videos mainly feature
rich-text images and audio that are densely packed with information closely
tied to the visual content, requiring advanced multimodal understanding
capabilities. However, this domain remains underexplored due to dataset
availability and its inherent complexity. In this paper, we introduce the
DocVideoQA task and dataset for the first time, comprising 1454 videos across
23 categories with a total duration of about 828 hours. The dataset is
annotated with 154k question-answer pairs generated manually and via GPT,
assessing models' comprehension, temporal awareness, and modality integration
capabilities. Initially, we establish a baseline using open-source MLLMs.
Recognizing the challenges in modality comprehension for document-centric
videos, we present DV-LLaMA, a robust video MLLM baseline. Our method enhances
unimodal feature extraction with diverse instruction-tuning data and employs
contrastive learning to strengthen modality integration. Through fine-tuning,
the LLM is equipped with audio-visual capabilities, leading to significant
improvements in document-centric video understanding. Extensive testing on the
DocVideoQA dataset shows that DV-LLaMA significantly outperforms existing
models. We'll release the code and dataset to facilitate future research.
|
2503.15892 | Haiyang Yu | Haiyang Yu, Siyang Yi, Ke Niu, Minghan Zhuo, Bin Li | UMIT: Unifying Medical Imaging Tasks via Vision-Language Models | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | With the rapid advancement of deep learning, particularly in the field of
medical image analysis, an increasing number of Vision-Language Models (VLMs)
are being widely applied to solve complex health and biomedical challenges.
However, existing research has primarily focused on specific tasks or single
modalities, which limits their applicability and generalization across diverse
medical scenarios. To address this challenge, we propose UMIT, a unified
multi-modal, multi-task VLM designed specifically for medical imaging tasks.
UMIT is able to solve various tasks, including visual question answering,
disease detection, and medical report generation. In addition, it is applicable
to multiple imaging modalities (e.g., X-ray, CT and PET), covering a wide range
of applications from basic diagnostics to complex lesion analysis. Moreover,
UMIT supports both English and Chinese, expanding its applicability globally
and ensuring accessibility to healthcare services in different linguistic
contexts. To enhance the model's adaptability and task-handling capability, we
design a unique two-stage training strategy and fine-tune UMIT with designed
instruction templates. Through extensive empirical evaluation, UMIT outperforms
previous methods in five tasks across multiple datasets. The performance of
UMIT indicates that it can significantly enhance diagnostic accuracy and
workflow efficiency, thus providing effective solutions for medical imaging
applications.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 06:43:36 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yu",
"Haiyang",
""
],
[
"Yi",
"Siyang",
""
],
[
"Niu",
"Ke",
""
],
[
"Zhuo",
"Minghan",
""
],
[
"Li",
"Bin",
""
]
] | TITLE: UMIT: Unifying Medical Imaging Tasks via Vision-Language Models
ABSTRACT: With the rapid advancement of deep learning, particularly in the field of
medical image analysis, an increasing number of Vision-Language Models (VLMs)
are being widely applied to solve complex health and biomedical challenges.
However, existing research has primarily focused on specific tasks or single
modalities, which limits their applicability and generalization across diverse
medical scenarios. To address this challenge, we propose UMIT, a unified
multi-modal, multi-task VLM designed specifically for medical imaging tasks.
UMIT is able to solve various tasks, including visual question answering,
disease detection, and medical report generation. In addition, it is applicable
to multiple imaging modalities (e.g., X-ray, CT and PET), covering a wide range
of applications from basic diagnostics to complex lesion analysis. Moreover,
UMIT supports both English and Chinese, expanding its applicability globally
and ensuring accessibility to healthcare services in different linguistic
contexts. To enhance the model's adaptability and task-handling capability, we
design a unique two-stage training strategy and fine-tune UMIT with designed
instruction templates. Through extensive empirical evaluation, UMIT outperforms
previous methods in five tasks across multiple datasets. The performance of
UMIT indicates that it can significantly enhance diagnostic accuracy and
workflow efficiency, thus providing effective solutions for medical imaging
applications.
|
2503.15898 | Wen Boran | Boran Wen, Dingbang Huang, Zichen Zhang, Jiahong Zhou, Jianbin Deng,
Jingyu Gong, Yulong Chen, Lizhuang Ma, Yong-Lu Li | Reconstructing In-the-Wild Open-Vocabulary Human-Object Interactions | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Reconstructing human-object interactions (HOI) from single images is
fundamental in computer vision. Existing methods are primarily trained and
tested on indoor scenes due to the lack of 3D data, particularly constrained by
the object variety, making it challenging to generalize to real-world scenes
with a wide range of objects. The limitations of previous 3D HOI datasets were
primarily due to the difficulty in acquiring 3D object assets. However, with
the development of 3D reconstruction from single images, recently it has become
possible to reconstruct various objects from 2D HOI images. We therefore
propose a pipeline for annotating fine-grained 3D humans, objects, and their
interactions from single images. We annotated 2.5k+ 3D HOI assets from existing
2D HOI datasets and built the first open-vocabulary in-the-wild 3D HOI dataset
Open3DHOI, to serve as a future test set. Moreover, we design a novel
Gaussian-HOI optimizer, which efficiently reconstructs the spatial interactions
between humans and objects while learning the contact regions. Besides the 3D
HOI reconstruction, we also propose several new tasks for 3D HOI understanding
to pave the way for future work. Data and code will be publicly available at
https://wenboran2002.github.io/3dhoi.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 06:50:18 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wen",
"Boran",
""
],
[
"Huang",
"Dingbang",
""
],
[
"Zhang",
"Zichen",
""
],
[
"Zhou",
"Jiahong",
""
],
[
"Deng",
"Jianbin",
""
],
[
"Gong",
"Jingyu",
""
],
[
"Chen",
"Yulong",
""
],
[
"Ma",
"Lizhuang",
""
],
[
"Li",
"Yong-Lu",
""
]
] | TITLE: Reconstructing In-the-Wild Open-Vocabulary Human-Object Interactions
ABSTRACT: Reconstructing human-object interactions (HOI) from single images is
fundamental in computer vision. Existing methods are primarily trained and
tested on indoor scenes due to the lack of 3D data, particularly constrained by
the object variety, making it challenging to generalize to real-world scenes
with a wide range of objects. The limitations of previous 3D HOI datasets were
primarily due to the difficulty in acquiring 3D object assets. However, with
the development of 3D reconstruction from single images, recently it has become
possible to reconstruct various objects from 2D HOI images. We therefore
propose a pipeline for annotating fine-grained 3D humans, objects, and their
interactions from single images. We annotated 2.5k+ 3D HOI assets from existing
2D HOI datasets and built the first open-vocabulary in-the-wild 3D HOI dataset
Open3DHOI, to serve as a future test set. Moreover, we design a novel
Gaussian-HOI optimizer, which efficiently reconstructs the spatial interactions
between humans and objects while learning the contact regions. Besides the 3D
HOI reconstruction, we also propose several new tasks for 3D HOI understanding
to pave the way for future work. Data and code will be publicly available at
https://wenboran2002.github.io/3dhoi.
|
2503.15902 | Jose Miguel Lara Rangel | Jose Lara-Rangel, Clare Heinbaugh | On the Limits of Applying Graph Transformers for Brain Connectome
Classification | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Brain connectomes offer detailed maps of neural connections within the brain.
Recent studies have proposed novel connectome graph datasets and attempted to
improve connectome classification by using graph deep learning. With recent
advances demonstrating transformers' ability to model intricate relationships
and outperform other architectures in various domains, this work explores their performance on the
novel NeuroGraph benchmark datasets and synthetic variants derived from
probabilistically removing edges to simulate noisy data. Our findings suggest
that graph transformers offer no major advantage over traditional GNNs on this
dataset. Furthermore, both traditional and transformer GNN models maintain
accuracy even with all edges removed, suggesting that the dataset's graph
structures may not significantly impact predictions. We propose further
assessing NeuroGraph as a brain connectome benchmark, emphasizing the need for
well-curated datasets and improved preprocessing strategies to obtain
meaningful edge connections.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 07:03:13 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lara-Rangel",
"Jose",
""
],
[
"Heinbaugh",
"Clare",
""
]
] | TITLE: On the Limits of Applying Graph Transformers for Brain Connectome
Classification
ABSTRACT: Brain connectomes offer detailed maps of neural connections within the brain.
Recent studies have proposed novel connectome graph datasets and attempted to
improve connectome classification by using graph deep learning. With recent
advances demonstrating transformers' ability to model intricate relationships
and outperform other architectures in various domains, this work explores their performance on the
novel NeuroGraph benchmark datasets and synthetic variants derived from
probabilistically removing edges to simulate noisy data. Our findings suggest
that graph transformers offer no major advantage over traditional GNNs on this
dataset. Furthermore, both traditional and transformer GNN models maintain
accuracy even with all edges removed, suggesting that the dataset's graph
structures may not significantly impact predictions. We propose further
assessing NeuroGraph as a brain connectome benchmark, emphasizing the need for
well-curated datasets and improved preprocessing strategies to obtain
meaningful edge connections.
|
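To make the edge-removal protocol described in the abstract above concrete, here is a minimal sketch of probabilistically dropping edges from a graph to create noisy synthetic variants. The function name and toy graph are illustrative assumptions, not code from the paper.

# Illustrative sketch (not the paper's code): probabilistically remove edges
# from a connectome-style graph to simulate noisy data.
import random

def drop_edges(edges, p, seed=0):
    """Return a copy of `edges` where each edge is independently kept with probability 1 - p."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

# Toy 4-node undirected graph stored as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
for p in (0.0, 0.5, 1.0):   # p = 1.0 reproduces the "all edges removed" condition
    print(p, drop_edges(edges, p, seed=42))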
2503.15905 | Wang Jiyuan | Jiyuan Wang, Chunyu Lin, Cheng Guan, Lang Nie, Jing He, Haodong Li,
Kang Liao, Yao Zhao | Jasmine: Harnessing Diffusion Prior for Self-supervised Depth Estimation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose Jasmine, the first Stable Diffusion (SD)-based
self-supervised framework for monocular depth estimation, which effectively
harnesses SD's visual priors to enhance the sharpness and generalization of
unsupervised prediction. Previous SD-based methods are all supervised since
adapting diffusion models for dense prediction requires high-precision
supervision. In contrast, self-supervised reprojection suffers from inherent
challenges (e.g., occlusions, texture-less regions, illumination variance), and
the predictions exhibit blurs and artifacts that severely compromise SD's
latent priors. To resolve this, we construct a novel surrogate task of hybrid
image reconstruction. Without any additional supervision, it preserves the
detail priors of SD models by reconstructing the images themselves while
preventing depth estimation from degradation. Furthermore, to address the
inherent misalignment between SD's scale- and shift-invariant estimation and
self-supervised scale-invariant depth estimation, we build the Scale-Shift GRU.
It not only bridges this distribution gap but also isolates the fine-grained
texture of SD output against the interference of reprojection loss. Extensive
experiments demonstrate that Jasmine achieves SoTA performance on the KITTI
benchmark and exhibits superior zero-shot generalization across multiple
datasets.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 07:15:49 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Wang",
"Jiyuan",
""
],
[
"Lin",
"Chunyu",
""
],
[
"Guan",
"Cheng",
""
],
[
"Nie",
"Lang",
""
],
[
"He",
"Jing",
""
],
[
"Li",
"Haodong",
""
],
[
"Liao",
"Kang",
""
],
[
"Zhao",
"Yao",
""
]
] | TITLE: Jasmine: Harnessing Diffusion Prior for Self-supervised Depth Estimation
ABSTRACT: In this paper, we propose Jasmine, the first Stable Diffusion (SD)-based
self-supervised framework for monocular depth estimation, which effectively
harnesses SD's visual priors to enhance the sharpness and generalization of
unsupervised prediction. Previous SD-based methods are all supervised since
adapting diffusion models for dense prediction requires high-precision
supervision. In contrast, self-supervised reprojection suffers from inherent
challenges (e.g., occlusions, texture-less regions, illumination variance), and
the predictions exhibit blurs and artifacts that severely compromise SD's
latent priors. To resolve this, we construct a novel surrogate task of hybrid
image reconstruction. Without any additional supervision, it preserves the
detail priors of SD models by reconstructing the images themselves while
preventing depth estimation from degradation. Furthermore, to address the
inherent misalignment between SD's scale- and shift-invariant estimation and
self-supervised scale-invariant depth estimation, we build the Scale-Shift GRU.
It not only bridges this distribution gap but also isolates the fine-grained
texture of SD output against the interference of reprojection loss. Extensive
experiments demonstrate that Jasmine achieves SoTA performance on the KITTI
benchmark and exhibits superior zero-shot generalization across multiple
datasets.
|
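As background for the scale/shift misalignment mentioned in the abstract above, the sketch below shows the classic least-squares scale-and-shift alignment between an affine-invariant depth prediction and a reference depth. It is only context for the problem, not the paper's Scale-Shift GRU; all names and toy values are assumptions.

# Least-squares scale-and-shift alignment of an affine-invariant depth map.
import torch

def align_scale_shift(pred, ref):
    """Find s, t minimizing ||s * pred + t - ref||^2 and return the aligned map."""
    A = torch.stack([pred.flatten(), torch.ones_like(pred.flatten())], dim=1)
    sol = torch.linalg.lstsq(A, ref.flatten().unsqueeze(1)).solution
    s, t = sol[0, 0], sol[1, 0]
    return s * pred + t, (s.item(), t.item())

pred = torch.rand(32, 32)
ref = 3.0 * pred + 0.5                  # reference differing by a scale and a shift
aligned, (s, t) = align_scale_shift(pred, ref)
print(round(s, 3), round(t, 3))         # recovers approximately 3.0 and 0.5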
2503.15908 | Jiatong Xia | Jiatong Xia, Libo Sun, Lingqiao Liu | Enhancing Close-up Novel View Synthesis via Pseudo-labeling | Accepted by AAAI 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent methods, such as Neural Radiance Fields (NeRF) and 3D Gaussian
Splatting (3DGS), have demonstrated remarkable capabilities in novel view
synthesis. However, despite their success in producing high-quality images for
viewpoints similar to those seen during training, they struggle when generating
detailed images from viewpoints that significantly deviate from the training
set, particularly in close-up views. The primary challenge stems from the lack
of specific training data for close-up views, leading to the inability of
current methods to render these views accurately. To address this issue, we
introduce a novel pseudo-label-based learning strategy. This approach leverages
pseudo-labels derived from existing training data to provide targeted
supervision across a wide range of close-up viewpoints. Recognizing the absence
of benchmarks for this specific challenge, we also present a new dataset
designed to assess the effectiveness of both current and future methods in this
area. Our extensive experiments demonstrate the efficacy of our approach.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 07:27:46 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Xia",
"Jiatong",
""
],
[
"Sun",
"Libo",
""
],
[
"Liu",
"Lingqiao",
""
]
] | TITLE: Enhancing Close-up Novel View Synthesis via Pseudo-labeling
ABSTRACT: Recent methods, such as Neural Radiance Fields (NeRF) and 3D Gaussian
Splatting (3DGS), have demonstrated remarkable capabilities in novel view
synthesis. However, despite their success in producing high-quality images for
viewpoints similar to those seen during training, they struggle when generating
detailed images from viewpoints that significantly deviate from the training
set, particularly in close-up views. The primary challenge stems from the lack
of specific training data for close-up views, leading to the inability of
current methods to render these views accurately. To address this issue, we
introduce a novel pseudo-label-based learning strategy. This approach leverages
pseudo-labels derived from existing training data to provide targeted
supervision across a wide range of close-up viewpoints. Recognizing the absence
of benchmarks for this specific challenge, we also present a new dataset
designed to assess the effectiveness of both current and future methods in this
area. Our extensive experiments demonstrate the efficacy of our approach.
|
2503.15917 | Beilei Cui | Beilei Cui, Long Bai, Mobarakol Islam, An Wang, Zhiqi Ma, Yiming
Huang, Feng Li, Zhen Chen, Zhongliang Jiang, Nassir Navab, Hongliang Ren | Learning to Efficiently Adapt Foundation Models for Self-Supervised
Endoscopic 3D Scene Reconstruction from Any Cameras | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate 3D scene reconstruction is essential for numerous medical tasks.
Given the challenges in obtaining ground truth data, there has been an
increasing focus on self-supervised learning (SSL) for endoscopic depth
estimation as a basis for scene reconstruction. While foundation models have
shown remarkable progress in visual tasks, their direct application to the
medical domain often leads to suboptimal results. However, the visual features
from these models can still enhance endoscopic tasks, emphasizing the need for
efficient adaptation strategies, which remain largely unexplored. In
this paper, we introduce Endo3DAC, a unified framework for endoscopic scene
reconstruction that efficiently adapts foundation models. We design an
integrated network capable of simultaneously estimating depth maps, relative
poses, and camera intrinsic parameters. By freezing the backbone foundation
model and training only the specially designed Gated Dynamic Vector-Based
Low-Rank Adaptation (GDV-LoRA) with separate decoder heads, Endo3DAC achieves
superior depth and pose estimation while maintaining training efficiency.
Additionally, we propose a 3D scene reconstruction pipeline that optimizes
depth maps' scales, shifts, and a few parameters based on our integrated
network. Extensive experiments across four endoscopic datasets demonstrate that
Endo3DAC significantly outperforms other state-of-the-art methods while
requiring fewer trainable parameters. To our knowledge, we are the first to
utilize a single network that only requires surgical videos to perform both SSL
depth estimation and scene reconstruction tasks. The code will be released upon
acceptance.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 07:49:04 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cui",
"Beilei",
""
],
[
"Bai",
"Long",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Wang",
"An",
""
],
[
"Ma",
"Zhiqi",
""
],
[
"Huang",
"Yiming",
""
],
[
"Li",
"Feng",
""
],
[
"Chen",
"Zhen",
""
],
[
"Jiang",
"Zhongliang",
""
],
[
"Navab",
"Nassir",
""
],
[
"Ren",
"Hongliang",
""
]
] | TITLE: Learning to Efficiently Adapt Foundation Models for Self-Supervised
Endoscopic 3D Scene Reconstruction from Any Cameras
ABSTRACT: Accurate 3D scene reconstruction is essential for numerous medical tasks.
Given the challenges in obtaining ground truth data, there has been an
increasing focus on self-supervised learning (SSL) for endoscopic depth
estimation as a basis for scene reconstruction. While foundation models have
shown remarkable progress in visual tasks, their direct application to the
medical domain often leads to suboptimal results. However, the visual features
from these models can still enhance endoscopic tasks, emphasizing the need for
efficient adaptation strategies, which remain largely unexplored. In
this paper, we introduce Endo3DAC, a unified framework for endoscopic scene
reconstruction that efficiently adapts foundation models. We design an
integrated network capable of simultaneously estimating depth maps, relative
poses, and camera intrinsic parameters. By freezing the backbone foundation
model and training only the specially designed Gated Dynamic Vector-Based
Low-Rank Adaptation (GDV-LoRA) with separate decoder heads, Endo3DAC achieves
superior depth and pose estimation while maintaining training efficiency.
Additionally, we propose a 3D scene reconstruction pipeline that optimizes
depth maps' scales, shifts, and a few parameters based on our integrated
network. Extensive experiments across four endoscopic datasets demonstrate that
Endo3DAC significantly outperforms other state-of-the-art methods while
requiring fewer trainable parameters. To our knowledge, we are the first to
utilize a single network that only requires surgical videos to perform both SSL
depth estimation and scene reconstruction tasks. The code will be released upon
acceptance.
|
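The abstract above describes freezing a foundation-model backbone and training only a small gated low-rank adapter. The sketch below shows a generic gated LoRA-style linear layer to illustrate that idea; it is an assumed simplification for illustration, not the paper's GDV-LoRA, and all names are placeholders.

# Generic gated low-rank adaptation (LoRA-style) layer with a frozen base weight.
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # backbone weights stay frozen
            p.requires_grad_(False)
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)            # zero-init: adapted layer starts equal to the base
        self.gate = nn.Parameter(torch.zeros(1))  # learnable scalar gate on the low-rank update

    def forward(self, x):
        return self.base(x) + torch.sigmoid(self.gate) * self.up(self.down(x))

layer = GatedLoRALinear(nn.Linear(64, 64), rank=4)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])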
2503.15926 | Paolo Burelli | Meisam J. Seikavandi, Maria J. Barrett and Paolo Burelli | Modeling Face Emotion Perception from Naturalistic Face Viewing:
Insights from Fixational Events and Gaze Strategies | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | Face Emotion Recognition (FER) is essential for social interactions and
understanding others' mental states. Utilizing eye tracking to investigate FER
has yielded insights into cognitive processes. In this study, we utilized an
instructionless paradigm to collect eye movement data from 21 participants,
examining two FER processes: free viewing and grounded FER. We analyzed
fixational, pupillary, and microsaccadic events from eye movements,
establishing their correlation with emotion perception and performance in the
grounded task. By identifying regions of interest on the face, we explored the
impact of eye-gaze strategies on face processing, their connection to emotions,
and performance in emotion perception. During free viewing, participants
displayed specific attention patterns for various emotions. In grounded tasks,
where emotions were interpreted based on words, we assessed performance and
contextual understanding. Notably, gaze patterns during free viewing predicted
success in grounded FER tasks, underscoring the significance of initial gaze
behavior. We also employed features from pre-trained deep-learning models for
face recognition to enhance the scalability and comparability of attention
analysis during free viewing across different datasets and populations. This
method facilitated the prediction and modeling of individual emotion perception
performance from minimal observations. Our findings advance the understanding
of the link between eye movements and emotion perception, with implications for
psychology, human-computer interaction, and affective computing, and pave the
way for developing precise emotion recognition systems.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 08:01:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Seikavandi",
"Meisam J.",
""
],
[
"Barrett",
"Maria J.",
""
],
[
"Burelli",
"Paolo",
""
]
] | TITLE: Modeling Face Emotion Perception from Naturalistic Face Viewing:
Insights from Fixational Events and Gaze Strategies
ABSTRACT: Face Emotion Recognition (FER) is essential for social interactions and
understanding others' mental states. Utilizing eye tracking to investigate FER
has yielded insights into cognitive processes. In this study, we utilized an
instructionless paradigm to collect eye movement data from 21 participants,
examining two FER processes: free viewing and grounded FER. We analyzed
fixational, pupillary, and microsaccadic events from eye movements,
establishing their correlation with emotion perception and performance in the
grounded task. By identifying regions of interest on the face, we explored the
impact of eye-gaze strategies on face processing, their connection to emotions,
and performance in emotion perception. During free viewing, participants
displayed specific attention patterns for various emotions. In grounded tasks,
where emotions were interpreted based on words, we assessed performance and
contextual understanding. Notably, gaze patterns during free viewing predicted
success in grounded FER tasks, underscoring the significance of initial gaze
behavior. We also employed features from pre-trained deep-learning models for
face recognition to enhance the scalability and comparability of attention
analysis during free viewing across different datasets and populations. This
method facilitated the prediction and modeling of individual emotion perception
performance from minimal observations. Our findings advance the understanding
of the link between eye movements and emotion perception, with implications for
psychology, human-computer interaction, and affective computing, and pave the
way for developing precise emotion recognition systems.
|
2503.15940 | Lichao Mou | Yaxiong Chen, Chuang Du, Chunlei Li, Jingliang Hu, Yilei Shi, Shengwu
Xiong, Xiao Xiang Zhu, Lichao Mou | UniCrossAdapter: Multimodal Adaptation of CLIP for Radiology Report
Generation | MICCAI 2024 Workshop | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated radiology report generation aims to expedite the tedious and
error-prone reporting process for radiologists. While recent works have made
progress, learning to align medical images and textual findings remains
challenging due to the relative scarcity of labeled medical data. For example,
datasets for this task are much smaller than those used for image captioning in
computer vision. In this work, we propose to transfer representations from
CLIP, a large-scale pre-trained vision-language model, to better capture
cross-modal semantics between images and texts. However, directly applying CLIP
is suboptimal due to the domain gap between natural images and radiology. To
enable efficient adaptation, we introduce UniCrossAdapter, lightweight adapter
modules that are incorporated into CLIP and fine-tuned on the target task while
keeping base parameters fixed. The adapters are distributed across modalities
and their interaction to enhance vision-language alignment. Experiments on two
public datasets demonstrate the effectiveness of our approach, advancing
state-of-the-art in radiology report generation. The proposed transfer learning
framework provides a means of harnessing semantic knowledge from large-scale
pre-trained models to tackle data-scarce medical vision-language tasks. Code is
available at https://github.com/chauncey-tow/MRG-CLIP.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 08:28:53 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chen",
"Yaxiong",
""
],
[
"Du",
"Chuang",
""
],
[
"Li",
"Chunlei",
""
],
[
"Hu",
"Jingliang",
""
],
[
"Shi",
"Yilei",
""
],
[
"Xiong",
"Shengwu",
""
],
[
"Zhu",
"Xiao Xiang",
""
],
[
"Mou",
"Lichao",
""
]
] | TITLE: UniCrossAdapter: Multimodal Adaptation of CLIP for Radiology Report
Generation
ABSTRACT: Automated radiology report generation aims to expedite the tedious and
error-prone reporting process for radiologists. While recent works have made
progress, learning to align medical images and textual findings remains
challenging due to the relative scarcity of labeled medical data. For example,
datasets for this task are much smaller than those used for image captioning in
computer vision. In this work, we propose to transfer representations from
CLIP, a large-scale pre-trained vision-language model, to better capture
cross-modal semantics between images and texts. However, directly applying CLIP
is suboptimal due to the domain gap between natural images and radiology. To
enable efficient adaptation, we introduce UniCrossAdapter, lightweight adapter
modules that are incorporated into CLIP and fine-tuned on the target task while
keeping base parameters fixed. The adapters are distributed across modalities
and their interaction to enhance vision-language alignment. Experiments on two
public datasets demonstrate the effectiveness of our approach, advancing
state-of-the-art in radiology report generation. The proposed transfer learning
framework provides a means of harnessing semantic knowledge from large-scale
pre-trained models to tackle data-scarce medical vision-language tasks. Code is
available at https://github.com/chauncey-tow/MRG-CLIP.
|
2503.15948 | Vasily Konovalov | Elisei Rykov, Kseniia Petrushina, Kseniia Titova, Alexander Panchenko,
Vasily Konovalov | Don't Fight Hallucinations, Use Them: Estimating Image Realism using NLI
over Atomic Facts | Proceedings of De-Factify 4: 4th Workshop on Multimodal Fact Checking
and Hate Speech Detection, co-located with AAAI-2025 | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantifying the realism of images remains a challenging problem in the field
of artificial intelligence. For example, an image of Albert Einstein holding a
smartphone violates common sense because modern smartphones were invented after
Einstein's death. We introduce a novel method for assessing image realism using
Large Vision-Language Models (LVLMs) and Natural Language Inference (NLI). Our
approach is based on the premise that LVLMs may generate hallucinations when
confronted with images that defy common sense. Using an LVLM to extract atomic
facts from these images, we obtain a mix of accurate facts and erroneous
hallucinations. We proceed by calculating pairwise entailment scores among
these facts, subsequently aggregating these values to yield a singular reality
score. This process serves to identify contradictions between genuine facts and
hallucinatory elements, signaling the presence of images that violate common
sense. Our approach has achieved a new state-of-the-art performance in
zero-shot mode on the WHOOPS! dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 08:44:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Rykov",
"Elisei",
""
],
[
"Petrushina",
"Kseniia",
""
],
[
"Titova",
"Kseniia",
""
],
[
"Panchenko",
"Alexander",
""
],
[
"Konovalov",
"Vasily",
""
]
] | TITLE: Don't Fight Hallucinations, Use Them: Estimating Image Realism using NLI
over Atomic Facts
ABSTRACT: Quantifying the realism of images remains a challenging problem in the field
of artificial intelligence. For example, an image of Albert Einstein holding a
smartphone violates common sense because modern smartphones were invented after
Einstein's death. We introduce a novel method for assessing image realism using
Large Vision-Language Models (LVLMs) and Natural Language Inference (NLI). Our
approach is based on the premise that LVLMs may generate hallucinations when
confronted with images that defy common sense. Using an LVLM to extract atomic
facts from these images, we obtain a mix of accurate facts and erroneous
hallucinations. We proceed by calculating pairwise entailment scores among
these facts, subsequently aggregating these values to yield a singular reality
score. This process serves to identify contradictions between genuine facts and
hallucinatory elements, signaling the presence of images that violate common
sense. Our approach has achieved a new state-of-the-art performance in
zero-shot mode on the WHOOPS! dataset.
|
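The abstract above describes scoring pairwise entailment among extracted atomic facts and aggregating the scores into a single reality score. A minimal sketch of that aggregation follows; `nli_entailment` is a hypothetical stand-in for any NLI scorer returning P(entailment), and mean aggregation is an assumption, since the paper's exact aggregation is not given here.

# Sketch of pairwise-entailment aggregation into a single "reality" score.
from itertools import permutations

def reality_score(atomic_facts, nli_entailment):
    pairs = list(permutations(atomic_facts, 2))
    if not pairs:
        return 1.0
    scores = [nli_entailment(premise, hypothesis) for premise, hypothesis in pairs]
    return sum(scores) / len(scores)   # low score => facts contradict => likely commonsense-violating image

# Toy usage with a dummy scorer that treats any two distinct facts as contradictory.
facts = ["Einstein is holding a smartphone.", "The photo is from 1921."]
print(reality_score(facts, lambda p, h: 0.1 if p != h else 1.0))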
2503.15969 | Benedikt Blumenstiel | Clive Tinashe Marimo, Benedikt Blumenstiel, Maximilian Nitsche,
Johannes Jakubik, Thomas Brunschwiler | Beyond the Visible: Multispectral Vision-Language Learning for Earth
Observation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Vision-language models for Earth observation (EO) typically rely on the
visual spectrum of data as the only model input, thus failing to leverage the
rich spectral information available in the multispectral channels recorded by
satellites. Therefore, in this paper, we introduce Llama3-MS-CLIP, the first
vision-language model pre-trained with contrastive learning on a large-scale
multispectral dataset and report on the performance gains due to the extended
spectral range. Furthermore, we present the largest-to-date image-caption
dataset for multispectral data, consisting of one million Sentinel-2 samples
and corresponding textual descriptions generated with Llama3-LLaVA-Next and
Overture Maps data. We develop a scalable captioning pipeline, which is
validated by domain experts. We evaluate Llama3-MS-CLIP on multispectral
zero-shot image classification and retrieval using three datasets of varying
complexity. Our results demonstrate that Llama3-MS-CLIP significantly
outperforms other RGB-based approaches, improving classification accuracy by
6.77% on average and retrieval performance by 4.63% mAP compared to the
second-best model. Our results emphasize the relevance of multispectral
vision-language learning. We release the image-caption dataset, code, and model
weights under an open-source license.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:13:31 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Marimo",
"Clive Tinashe",
""
],
[
"Blumenstiel",
"Benedikt",
""
],
[
"Nitsche",
"Maximilian",
""
],
[
"Jakubik",
"Johannes",
""
],
[
"Brunschwiler",
"Thomas",
""
]
] | TITLE: Beyond the Visible: Multispectral Vision-Language Learning for Earth
Observation
ABSTRACT: Vision-language models for Earth observation (EO) typically rely on the
visual spectrum of data as the only model input, thus failing to leverage the
rich spectral information available in the multispectral channels recorded by
satellites. Therefore, in this paper, we introduce Llama3-MS-CLIP, the first
vision-language model pre-trained with contrastive learning on a large-scale
multispectral dataset and report on the performance gains due to the extended
spectral range. Furthermore, we present the largest-to-date image-caption
dataset for multispectral data, consisting of one million Sentinel-2 samples
and corresponding textual descriptions generated with Llama3-LLaVA-Next and
Overture Maps data. We develop a scalable captioning pipeline, which is
validated by domain experts. We evaluate Llama3-MS-CLIP on multispectral
zero-shot image classification and retrieval using three datasets of varying
complexity. Our results demonstrate that Llama3-MS-CLIP significantly
outperforms other RGB-based approaches, improving classification accuracy by
6.77% on average and retrieval performance by 4.63% mAP compared to the
second-best model. Our results emphasize the relevance of multispectral
vision-language learning. We release the image-caption dataset, code, and model
weights under an open-source license.
|
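For reference, the contrastive pre-training objective mentioned in the abstract above is typically a symmetric CLIP-style InfoNCE loss over paired image and text embeddings. A generic sketch is shown below; the batch size, embedding dimension, and temperature are arbitrary placeholders, not values from the paper.

# Generic symmetric contrastive (CLIP-style) loss over image and text embeddings.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # scaled cosine similarities
    targets = torch.arange(img.size(0))         # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())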
2503.15970 | JunGyu Lee | JunGyu Lee, Kunyoung Lee, Haesol Park, Ig-Jae Kim, Gi Pyo Nam | V-NAW: Video-based Noise-aware Adaptive Weighting for Facial Expression
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial Expression Recognition (FER) plays a crucial role in human affective
analysis and has been widely applied in computer vision tasks such as
human-computer interaction and psychological assessment. The 8th Affective
Behavior Analysis in-the-Wild (ABAW) Challenge aims to assess human emotions
using the video-based Aff-Wild2 dataset. This challenge includes various tasks,
including the video-based EXPR recognition track, which is our primary focus.
In this paper, we demonstrate that addressing label ambiguity and class
imbalance, which are known to cause performance degradation, can lead to
meaningful performance improvements. Specifically, we propose Video-based
Noise-aware Adaptive Weighting (V-NAW), which adaptively assigns importance to
each frame in a clip to address label ambiguity and effectively capture
temporal variations in facial expressions. Furthermore, we introduce a simple
and effective augmentation strategy to reduce redundancy between consecutive
frames, which is a primary cause of overfitting. Through extensive experiments,
we validate the effectiveness of our approach, demonstrating significant
improvements in video-based FER performance.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:13:34 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Lee",
"JunGyu",
""
],
[
"Lee",
"Kunyoung",
""
],
[
"Park",
"Haesol",
""
],
[
"Kim",
"Ig-Jae",
""
],
[
"Nam",
"Gi Pyo",
""
]
] | TITLE: V-NAW: Video-based Noise-aware Adaptive Weighting for Facial Expression
Recognition
ABSTRACT: Facial Expression Recognition (FER) plays a crucial role in human affective
analysis and has been widely applied in computer vision tasks such as
human-computer interaction and psychological assessment. The 8th Affective
Behavior Analysis in-the-Wild (ABAW) Challenge aims to assess human emotions
using the video-based Aff-Wild2 dataset. This challenge includes various tasks,
including the video-based EXPR recognition track, which is our primary focus.
In this paper, we demonstrate that addressing label ambiguity and class
imbalance, which are known to cause performance degradation, can lead to
meaningful performance improvements. Specifically, we propose Video-based
Noise-aware Adaptive Weighting (V-NAW), which adaptively assigns importance to
each frame in a clip to address label ambiguity and effectively capture
temporal variations in facial expressions. Furthermore, we introduce a simple
and effective augmentation strategy to reduce redundancy between consecutive
frames, which is a primary cause of overfitting. Through extensive experiments,
we validate the effectiveness of our approach, demonstrating significant
improvements in video-based FER performance.
|
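The abstract above describes adaptively assigning importance to each frame in a clip. One generic way to do this is a softmax over per-frame confidence scores applied to per-frame losses, sketched below; the exact V-NAW formulation is not given here, so this is an assumed illustration with made-up values.

# Generic adaptive weighting of per-frame losses within a clip.
import torch

def weighted_clip_loss(per_frame_losses, frame_scores, temperature=1.0):
    """Down-weight frames flagged as ambiguous or noisy via softmax weights."""
    weights = torch.softmax(frame_scores / temperature, dim=0)
    return (weights * per_frame_losses).sum()

losses = torch.tensor([0.9, 0.4, 2.5, 0.5])     # per-frame loss values (toy)
scores = torch.tensor([1.0, 1.2, -2.0, 1.1])    # low score => likely noisy label (toy)
print(weighted_clip_loss(losses, scores).item())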
2503.15978 | Pengyu Liu | Pengyu Liu, Guohua Dong, Dan Guo, Kun Li, Fengling Li, Xun Yang, Meng
Wang, Xiaomin Ying | A Survey on fMRI-based Brain Decoding for Reconstructing Multimodal
Stimuli | 31 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In daily life, we encounter diverse external stimuli, such as images, sounds,
and videos. As research in multimodal stimuli and neuroscience advances,
fMRI-based brain decoding has become a key tool for understanding brain
perception and its complex cognitive processes. Decoding brain signals to
reconstruct stimuli not only reveals intricate neural mechanisms but also
drives progress in AI, disease treatment, and brain-computer interfaces. Recent
advancements in neuroimaging and image generation models have significantly
improved fMRI-based decoding. While fMRI offers high spatial resolution for
precise brain activity mapping, its low temporal resolution and signal noise
pose challenges. Meanwhile, techniques like GANs, VAEs, and Diffusion Models
have enhanced reconstructed image quality, and multimodal pre-trained models
have boosted cross-modal decoding tasks. This survey systematically reviews
recent progress in fMRI-based brain decoding, focusing on stimulus
reconstruction from passive brain signals. It summarizes datasets, relevant
brain regions, and categorizes existing methods by model structure.
Additionally, it evaluates model performance and discusses their effectiveness.
Finally, it identifies key challenges and proposes future research directions,
offering valuable insights for the field. For more information and resources
related to this survey, visit https://github.com/LpyNow/BrainDecodingImage.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:23:07 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Pengyu",
""
],
[
"Dong",
"Guohua",
""
],
[
"Guo",
"Dan",
""
],
[
"Li",
"Kun",
""
],
[
"Li",
"Fengling",
""
],
[
"Yang",
"Xun",
""
],
[
"Wang",
"Meng",
""
],
[
"Ying",
"Xiaomin",
""
]
] | TITLE: A Survey on fMRI-based Brain Decoding for Reconstructing Multimodal
Stimuli
ABSTRACT: In daily life, we encounter diverse external stimuli, such as images, sounds,
and videos. As research in multimodal stimuli and neuroscience advances,
fMRI-based brain decoding has become a key tool for understanding brain
perception and its complex cognitive processes. Decoding brain signals to
reconstruct stimuli not only reveals intricate neural mechanisms but also
drives progress in AI, disease treatment, and brain-computer interfaces. Recent
advancements in neuroimaging and image generation models have significantly
improved fMRI-based decoding. While fMRI offers high spatial resolution for
precise brain activity mapping, its low temporal resolution and signal noise
pose challenges. Meanwhile, techniques like GANs, VAEs, and Diffusion Models
have enhanced reconstructed image quality, and multimodal pre-trained models
have boosted cross-modal decoding tasks. This survey systematically reviews
recent progress in fMRI-based brain decoding, focusing on stimulus
reconstruction from passive brain signals. It summarizes datasets, relevant
brain regions, and categorizes existing methods by model structure.
Additionally, it evaluates model performance and discusses their effectiveness.
Finally, it identifies key challenges and proposes future research directions,
offering valuable insights for the field. For more information and resources
related to this survey, visit https://github.com/LpyNow/BrainDecodingImage.
|
2503.15985 | Han Yuan | Han Yuan, Li Zhang, Zheng Ma | Exploring the Reliability of Self-explanation and its Relationship with
Classification in Language Model-driven Financial Analysis | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Language models (LMs) have exhibited exceptional versatility in reasoning and
in-depth financial analysis through their proprietary information processing
capabilities. Previous research focused on evaluating classification
performance while often overlooking explainability, or presumed that a more
refined explanation corresponds to higher classification accuracy. Using a
public dataset in the finance domain, we quantitatively evaluated self-explanations
by LMs, focusing on their factuality and causality. We identified a
statistically significant relationship between the accuracy of classifications
and the factuality or causality of self-explanations. Our study built an
empirical foundation for approximating classification confidence through
self-explanations and for optimizing classification via proprietary reasoning.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:33:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Yuan",
"Han",
""
],
[
"Zhang",
"Li",
""
],
[
"Ma",
"Zheng",
""
]
] | TITLE: Exploring the Reliability of Self-explanation and its Relationship with
Classification in Language Model-driven Financial Analysis
ABSTRACT: Language models (LMs) have exhibited exceptional versatility in reasoning and
in-depth financial analysis through their proprietary information processing
capabilities. Previous research focused on evaluating classification
performance while often overlooking explainability, or presumed that a more
refined explanation corresponds to higher classification accuracy. Using a
public dataset in the finance domain, we quantitatively evaluated self-explanations
by LMs, focusing on their factuality and causality. We identified a
statistically significant relationship between the accuracy of classifications
and the factuality or causality of self-explanations. Our study built an
empirical foundation for approximating classification confidence through
self-explanations and for optimizing classification via proprietary reasoning.
|
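To illustrate the kind of analysis described in the abstract above, the sketch below tests for a relationship between per-sample classification correctness and a self-explanation factuality score using a Pearson (point-biserial) correlation. The data are invented, and the paper's exact statistic is not specified here.

# Toy correlation test between explanation factuality and classification correctness.
from scipy.stats import pearsonr

correct = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]                              # 1 = classified correctly
factuality = [0.9, 0.8, 0.3, 0.7, 0.4, 0.95, 0.85, 0.2, 0.75, 0.35]   # explanation factuality scores

r, p_value = pearsonr(correct, factuality)
print(f"r = {r:.3f}, p = {p_value:.4f}")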
2503.15986 | Zeqi Zheng | Zeqi Zheng, Yanchen Huang, Yingchao Yu, Zizheng Zhu, Junfeng Tang,
Zhaofei Yu, Yaochu Jin | SpiLiFormer: Enhancing Spiking Transformers with Lateral Inhibition | 16 pages, 7 figures | null | null | null | cs.NE cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spiking Neural Networks (SNNs) based on Transformers have garnered
significant attention due to their superior performance and high energy
efficiency. However, the spiking attention modules of most existing
Transformer-based SNNs are adapted from those of analog Transformers, failing
to fully address the issue of over-allocating attention to irrelevant contexts.
To fix this fundamental yet overlooked issue, we propose a Lateral
Inhibition-inspired Spiking Transformer (SpiLiFormer). It emulates the brain's
lateral inhibition mechanism, guiding the model to enhance attention to
relevant tokens while suppressing attention to irrelevant ones. Our model
achieves state-of-the-art (SOTA) performance across multiple datasets,
including CIFAR-10 (+0.45%), CIFAR-100 (+0.48%), CIFAR10-DVS (+2.70%),
N-Caltech101 (+1.94%), and ImageNet-1K (+1.6%). Notably, on the ImageNet-1K
dataset, SpiLiFormer (69.9M parameters, 4 time steps, 384 resolution)
outperforms E-SpikeFormer (173.0M parameters, 8 time steps, 384 resolution), a
SOTA spiking Transformer, by 0.46% using only 39% of the parameters and half
the time steps. Our code and training checkpoints will be released upon
acceptance.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:36:31 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zheng",
"Zeqi",
""
],
[
"Huang",
"Yanchen",
""
],
[
"Yu",
"Yingchao",
""
],
[
"Zhu",
"Zizheng",
""
],
[
"Tang",
"Junfeng",
""
],
[
"Yu",
"Zhaofei",
""
],
[
"Jin",
"Yaochu",
""
]
] | TITLE: SpiLiFormer: Enhancing Spiking Transformers with Lateral Inhibition
ABSTRACT: Spiking Neural Networks (SNNs) based on Transformers have garnered
significant attention due to their superior performance and high energy
efficiency. However, the spiking attention modules of most existing
Transformer-based SNNs are adapted from those of analog Transformers, failing
to fully address the issue of over-allocating attention to irrelevant contexts.
To fix this fundamental yet overlooked issue, we propose a Lateral
Inhibition-inspired Spiking Transformer (SpiLiFormer). It emulates the brain's
lateral inhibition mechanism, guiding the model to enhance attention to
relevant tokens while suppressing attention to irrelevant ones. Our model
achieves state-of-the-art (SOTA) performance across multiple datasets,
including CIFAR-10 (+0.45%), CIFAR-100 (+0.48%), CIFAR10-DVS (+2.70%),
N-Caltech101 (+1.94%), and ImageNet-1K (+1.6%). Notably, on the ImageNet-1K
dataset, SpiLiFormer (69.9M parameters, 4 time steps, 384 resolution)
outperforms E-SpikeFormer (173.0M parameters, 8 time steps, 384 resolution), a
SOTA spiking Transformer, by 0.46% using only 39% of the parameters and half
the time steps. Our code and training checkpoints will be released upon
acceptance.
|
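As a rough intuition for the lateral-inhibition idea in the abstract above, the toy sketch below suppresses below-average attention weights and renormalizes the rest, so strong tokens are enhanced and weak ones suppressed. This is an assumed caricature for illustration only, not the paper's SpiLiFormer mechanism.

# Toy lateral-inhibition-style competition applied to attention weights.
import torch

def inhibit_attention(weights, strength=0.5, eps=1e-8):
    """weights: (..., tokens) rows summing to 1, e.g. softmax attention."""
    inhibited = torch.clamp(weights - strength * weights.mean(dim=-1, keepdim=True), min=0.0)
    return inhibited / (inhibited.sum(dim=-1, keepdim=True) + eps)

w = torch.softmax(torch.tensor([[2.0, 0.5, 0.1, 0.1]]), dim=-1)
print(w)                      # original attention weights
print(inhibit_attention(w))   # weak tokens suppressed, dominant token boosted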
2503.15990 | Langming Liu | Langming Liu, Haibin Chen, Yuhao Wang, Yujin Yuan, Shilei Liu, Wenbo
Su, Xiangyu Zhao, Bo Zheng | ECKGBench: Benchmarking Large Language Models in E-commerce Leveraging
Knowledge Graph | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated their capabilities across
various NLP tasks. Their potential in e-commerce is also substantial, evidenced
by practical implementations such as platform search, personalized
recommendations, and customer service. One primary concern associated with LLMs
is their factuality (e.g., hallucination), which is urgent in e-commerce due to
its significant impact on user experience and revenue. Although some methods have
been proposed to evaluate LLMs' factuality, issues such as limited reliability, high
resource consumption, and a lack of domain expertise leave a gap in effective
assessment for e-commerce. To bridge this evaluation gap, we propose ECKGBench, a
dataset specifically designed to evaluate the capacities of LLMs in e-commerce
knowledge. Specifically, we adopt a standardized workflow to automatically
generate questions based on a large-scale knowledge graph, guaranteeing
sufficient reliability. We employ the simple question-answering paradigm,
substantially improving evaluation efficiency by minimizing input and output
tokens. Furthermore, we inject abundant e-commerce expertise in each evaluation
stage, including human annotation, prompt design, negative sampling, and
verification. In addition, we explore LLMs' knowledge boundaries in e-commerce
from a novel perspective. Through comprehensive evaluations of several advanced
LLMs on ECKGBench, we provide meticulous analysis and insights into leveraging
LLMs for e-commerce.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 09:49:15 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Langming",
""
],
[
"Chen",
"Haibin",
""
],
[
"Wang",
"Yuhao",
""
],
[
"Yuan",
"Yujin",
""
],
[
"Liu",
"Shilei",
""
],
[
"Su",
"Wenbo",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Zheng",
"Bo",
""
]
] | TITLE: ECKGBench: Benchmarking Large Language Models in E-commerce Leveraging
Knowledge Graph
ABSTRACT: Large language models (LLMs) have demonstrated their capabilities across
various NLP tasks. Their potential in e-commerce is also substantial, evidenced
by practical implementations such as platform search, personalized
recommendations, and customer service. One primary concern associated with LLMs
is their factuality (e.g., hallucination), which is urgent in e-commerce due to
its significant impact on user experience and revenue. Although some methods have
been proposed to evaluate LLMs' factuality, issues such as limited reliability, high
resource consumption, and a lack of domain expertise leave a gap in effective
assessment for e-commerce. To bridge this evaluation gap, we propose ECKGBench, a
dataset specifically designed to evaluate the capacities of LLMs in e-commerce
knowledge. Specifically, we adopt a standardized workflow to automatically
generate questions based on a large-scale knowledge graph, guaranteeing
sufficient reliability. We employ the simple question-answering paradigm,
substantially improving evaluation efficiency by minimizing input and output
tokens. Furthermore, we inject abundant e-commerce expertise in each evaluation
stage, including human annotation, prompt design, negative sampling, and
verification. In addition, we explore LLMs' knowledge boundaries in e-commerce
from a novel perspective. Through comprehensive evaluations of several advanced
LLMs on ECKGBench, we provide meticulous analysis and insights into leveraging
LLMs for e-commerce.
|
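The abstract above describes automatically generating questions from a knowledge graph with negative sampling. A toy sketch of that step follows; the template, triple, and candidate answers are invented purely for illustration and are not from the benchmark.

# Toy sketch: turn a knowledge-graph triple into a multiple-choice question with sampled negatives.
import random

def triple_to_question(triple, candidate_objects, k_negatives=3, seed=0):
    head, relation, tail = triple
    rng = random.Random(seed)
    negatives = rng.sample([o for o in candidate_objects if o != tail], k_negatives)
    options = negatives + [tail]
    rng.shuffle(options)
    return {"question": f"What is the {relation} of {head}?",
            "options": options,
            "answer": tail}

triple = ("UltraPhone 12", "brand", "Acme")
print(triple_to_question(triple, ["Acme", "Globex", "Initech", "Umbrella", "Hooli"]))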
2503.15997 | Paul Schulz | P. Schulz, T. Hempel, A. Al-Hamadi | Automating 3D Dataset Generation with Neural Radiance Fields | Accepted and presented at ROBOVIS 2025 (5th International Conference
on Robotics, Computer Vision and Intelligent Systems) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D detection is a critical task to understand spatial characteristics of the
environment and is used in a variety of applications including robotics,
augmented reality, and image retrieval. Training performant detection models
requires diverse, precisely annotated, and large-scale datasets that involve
complex and expensive creation processes. Hence, there are only a few public 3D
datasets that are additionally limited in their range of classes. In this work,
we propose a pipeline for automatic generation of 3D datasets for arbitrary
objects. By utilizing the universal 3D representation and rendering
capabilities of Radiance Fields, our pipeline generates high quality 3D models
for arbitrary objects. These 3D models serve as input for a synthetic dataset
generator. Our pipeline is fast, easy to use and has a high degree of
automation. Our experiments demonstrate that 3D pose estimation networks
trained with our generated datasets achieve strong performance in typical
application scenarios.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:01:32 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Schulz",
"P.",
""
],
[
"Hempel",
"T.",
""
],
[
"Al-Hamadi",
"A.",
""
]
] | TITLE: Automating 3D Dataset Generation with Neural Radiance Fields
ABSTRACT: 3D detection is a critical task to understand spatial characteristics of the
environment and is used in a variety of applications including robotics,
augmented reality, and image retrieval. Training performant detection models
require diverse, precisely annotated, and large scale datasets that involve
complex and expensive creation processes. Hence, there are only few public 3D
datasets that are additionally limited in their range of classes. In this work,
we propose a pipeline for automatic generation of 3D datasets for arbitrary
objects. By utilizing the universal 3D representation and rendering
capabilities of Radiance Fields, our pipeline generates high quality 3D models
for arbitrary objects. These 3D models serve as input for a synthetic dataset
generator. Our pipeline is fast, easy to use and has a high degree of
automation. Our experiments demonstrate that 3D pose estimation networks
trained with our generated datasets achieve strong performance in typical
application scenarios.
|
2503.16000 | Haohua Que | Haojia Gao, Haohua Que, Hoiian Au, Weihao Shan, Mingkai Liu, Yusen
Qin, Lei Mu, Rong Zhao, Xinghua Yang, Qi Wei and Fei Qiao | SenseExpo: Efficient Autonomous Exploration with Prediction Information
from Lightweight Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes SenseExpo, an efficient autonomous exploration framework
based on a lightweight prediction network, which addresses the limitations of
traditional methods in computational overhead and environmental generalization.
By integrating Generative Adversarial Networks (GANs), Transformer, and Fast
Fourier Convolution (FFC), we designed a lightweight prediction model with
merely 709k parameters. Our smallest model achieves better performance on the
KTH dataset than U-net (24.5M) and LaMa (51M), delivering PSNR 9.026 and SSIM
0.718, particularly representing a 38.7% PSNR improvement over the
51M-parameter LaMa model. Cross-domain testing demonstrates its strong
generalization capability, with an FID score of 161.55 on the HouseExpo
dataset, significantly outperforming comparable methods. Regarding exploration
efficiency, on the KTH dataset, SenseExpo demonstrates approximately a 67.9%
reduction in exploration time compared to MapEx. On the MRPB 1.0 dataset,
SenseExpo achieves roughly a 77.1% time reduction compared to MapEx. Deployed as
a plug-and-play ROS node, the framework seamlessly integrates with existing
navigation systems, providing an efficient solution for resource-constrained
devices.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:07:51 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Gao",
"Haojia",
""
],
[
"Que",
"Haohua",
""
],
[
"Au",
"Hoiian",
""
],
[
"Shan",
"Weihao",
""
],
[
"Liu",
"Mingkai",
""
],
[
"Qin",
"Yusen",
""
],
[
"Mu",
"Lei",
""
],
[
"Zhao",
"Rong",
""
],
[
"Yang",
"Xinghua",
""
],
[
"Wei",
"Qi",
""
],
[
"Qiao",
"Fei",
""
]
] | TITLE: SenseExpo: Efficient Autonomous Exploration with Prediction Information
from Lightweight Neural Networks
ABSTRACT: This paper proposes SenseExpo, an efficient autonomous exploration framework
based on a lightweight prediction network, which addresses the limitations of
traditional methods in computational overhead and environmental generalization.
By integrating Generative Adversarial Networks (GANs), Transformer, and Fast
Fourier Convolution (FFC), we designed a lightweight prediction model with
merely 709k parameters. Our smallest model achieves better performance on the
KTH dataset than U-net (24.5M) and LaMa (51M), delivering PSNR 9.026 and SSIM
0.718, particularly representing a 38.7% PSNR improvement over the
51M-parameter LaMa model. Cross-domain testing demonstrates its strong
generalization capability, with an FID score of 161.55 on the HouseExpo
dataset, significantly outperforming comparable methods. Regarding exploration
efficiency, on the KTH dataset, SenseExpo demonstrates approximately a 67.9%
reduction in exploration time compared to MapEx. On the MRPB 1.0 dataset,
SenseExpo achieves roughly a 77.1% time reduction compared to MapEx. Deployed as
a plug-and-play ROS node, the framework seamlessly integrates with existing
navigation systems, providing an efficient solution for resource-constrained
devices.
|
2503.16012 | Stijn Groenen | Stijn Groenen, Marzieh Hassanshahi Varposhti, Mahyar Shahsavari | GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural
Network | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces GazeSCRNN, a novel spiking convolutional recurrent
neural network designed for event-based near-eye gaze tracking. Leveraging the
high temporal resolution, energy efficiency, and compatibility of Dynamic
Vision Sensor (DVS) cameras with event-based systems, GazeSCRNN uses a spiking
neural network (SNN) to address the limitations of traditional gaze-tracking
systems in capturing dynamic movements. The proposed model processes event
streams from DVS cameras using Adaptive Leaky-Integrate-and-Fire (ALIF) neurons
and a hybrid architecture optimized for spatio-temporal data. Extensive
evaluations on the EV-Eye dataset demonstrate the model's accuracy in
predicting gaze vectors. In addition, we conducted ablation studies to reveal
the importance of the ALIF neurons, dynamic event framing, and training
techniques, such as Forward-Propagation-Through-Time, in enhancing overall
system performance. The most accurate model achieved a Mean Angle Error (MAE)
of 6.034{\deg} and a Mean Pupil Error (MPE) of 2.094 mm. Consequently, this
work is pioneering in demonstrating the feasibility of using SNNs for
event-based gaze tracking, while shedding light on critical challenges and
opportunities for further improvement.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:32:15 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Groenen",
"Stijn",
""
],
[
"Varposhti",
"Marzieh Hassanshahi",
""
],
[
"Shahsavari",
"Mahyar",
""
]
] | TITLE: GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural
Network
ABSTRACT: This work introduces GazeSCRNN, a novel spiking convolutional recurrent
neural network designed for event-based near-eye gaze tracking. Leveraging the
high temporal resolution, energy efficiency, and compatibility of Dynamic
Vision Sensor (DVS) cameras with event-based systems, GazeSCRNN uses a spiking
neural network (SNN) to address the limitations of traditional gaze-tracking
systems in capturing dynamic movements. The proposed model processes event
streams from DVS cameras using Adaptive Leaky-Integrate-and-Fire (ALIF) neurons
and a hybrid architecture optimized for spatio-temporal data. Extensive
evaluations on the EV-Eye dataset demonstrate the model's accuracy in
predicting gaze vectors. In addition, we conducted ablation studies to reveal
the importance of the ALIF neurons, dynamic event framing, and training
techniques, such as Forward-Propagation-Through-Time, in enhancing overall
system performance. The most accurate model achieved a Mean Angle Error (MAE)
of 6.034{\deg} and a Mean Pupil Error (MPE) of 2.094 mm. Consequently, this
work is pioneering in demonstrating the feasibility of using SNNs for
event-based gaze tracking, while shedding light on critical challenges and
opportunities for further improvement.
|
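
The GazeSCRNN record above builds on Adaptive Leaky-Integrate-and-Fire (ALIF) neurons. As a point of reference, here is a minimal discrete-time ALIF update in Python; the decay constants, base threshold, and soft reset are generic textbook choices and are not the paper's actual hyperparameters.

```python
import numpy as np

def alif_neuron(inputs, alpha=0.9, rho=0.95, b0=1.0, beta=1.8):
    """Generic Adaptive Leaky-Integrate-and-Fire neuron (illustrative only).

    inputs: 1D array of input currents, one value per time step.
    alpha:  membrane decay; rho: adaptation decay; b0: base threshold;
    beta:   weight of the spike-driven adaptive threshold component.
    """
    v, a = 0.0, 0.0                     # membrane potential and adaptation variable
    spikes = np.zeros_like(inputs, dtype=float)
    for t, i_t in enumerate(inputs):
        theta = b0 + beta * a           # adaptive firing threshold
        v = alpha * v + i_t             # leaky integration of the input current
        if v >= theta:                  # spike and soft-reset the membrane
            spikes[t] = 1.0
            v -= theta
        a = rho * a + spikes[t]         # adaptation grows with recent spiking
    return spikes

# A constant drive produces progressively sparser spikes as the threshold adapts.
print(alif_neuron(np.full(30, 0.5)))
```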
2503.16013 | Xiaomeng Chu | Xiaomeng Chu, Jiajun Deng, Guoliang You, Wei Liu, Xingchen Li, Jianmin
Ji, Yanyong Zhang | GraspCoT: Integrating Physical Property Reasoning for 6-DoF Grasping
under Flexible Language Instructions | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flexible instruction-guided 6-DoF grasping is a significant yet challenging
task for real-world robotic systems. Existing methods utilize the contextual
understanding capabilities of the large language models (LLMs) to establish
mappings between expressions and targets, allowing robots to comprehend users'
intentions in the instructions. However, the LLM's knowledge about objects'
physical properties remains underexplored despite its tight relevance to
grasping. In this work, we propose GraspCoT, a 6-DoF grasp detection framework
that integrates a Chain-of-Thought (CoT) reasoning mechanism oriented to
physical properties, guided by auxiliary question-answering (QA) tasks.
Particularly, we design a set of QA templates to enable hierarchical reasoning
that includes three stages: target parsing, physical property analysis, and
grasp action selection. Moreover, GraspCoT presents a unified multimodal LLM
architecture, which encodes multi-view observations of 3D scenes into 3D-aware
visual tokens, and then jointly embeds these visual tokens with CoT-derived
textual tokens within LLMs to generate grasp pose predictions. Furthermore, we
present IntentGrasp, a large-scale benchmark that fills the gap in public
datasets for multi-object grasp detection under diverse and indirect verbal
commands. Extensive experiments on IntentGrasp demonstrate the superiority of
our method, with additional validation in real-world robotic applications
confirming its practicality. Codes and data will be released.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:32:38 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Chu",
"Xiaomeng",
""
],
[
"Deng",
"Jiajun",
""
],
[
"You",
"Guoliang",
""
],
[
"Liu",
"Wei",
""
],
[
"Li",
"Xingchen",
""
],
[
"Ji",
"Jianmin",
""
],
[
"Zhang",
"Yanyong",
""
]
] | TITLE: GraspCoT: Integrating Physical Property Reasoning for 6-DoF Grasping
under Flexible Language Instructions
ABSTRACT: Flexible instruction-guided 6-DoF grasping is a significant yet challenging
task for real-world robotic systems. Existing methods utilize the contextual
understanding capabilities of the large language models (LLMs) to establish
mappings between expressions and targets, allowing robots to comprehend users'
intentions in the instructions. However, the LLM's knowledge about objects'
physical properties remains underexplored despite its tight relevance to
grasping. In this work, we propose GraspCoT, a 6-DoF grasp detection framework
that integrates a Chain-of-Thought (CoT) reasoning mechanism oriented to
physical properties, guided by auxiliary question-answering (QA) tasks.
Particularly, we design a set of QA templates to enable hierarchical reasoning
that includes three stages: target parsing, physical property analysis, and
grasp action selection. Moreover, GraspCoT presents a unified multimodal LLM
architecture, which encodes multi-view observations of 3D scenes into 3D-aware
visual tokens, and then jointly embeds these visual tokens with CoT-derived
textual tokens within LLMs to generate grasp pose predictions. Furthermore, we
present IntentGrasp, a large-scale benchmark that fills the gap in public
datasets for multi-object grasp detection under diverse and indirect verbal
commands. Extensive experiments on IntentGrasp demonstrate the superiority of
our method, with additional validation in real-world robotic applications
confirming its practicality. Codes and data will be released.
|
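
The GraspCoT abstract above describes hierarchical QA templates covering target parsing, physical property analysis, and grasp action selection. The snippet below is a hypothetical illustration of how such stage-wise templates could be chained into a CoT-style prompt; the wording, keys, and function names are assumptions, not the authors' released templates.

```python
# Hypothetical three-stage QA templates mirroring the hierarchy described above
# (target parsing -> physical property analysis -> grasp action selection).
QA_TEMPLATES = [
    ("target_parsing",
     "Which object in the scene does the instruction '{instruction}' refer to?"),
    ("physical_property_analysis",
     "What are the material, weight, and fragility of the {target}?"),
    ("grasp_action_selection",
     "Given those properties, where and how should the gripper grasp the {target}?"),
]

def build_cot_prompt(instruction, target="<target>"):
    """Chain the stage-wise questions into a single CoT-style prompt string."""
    return "\n".join(q.format(instruction=instruction, target=target)
                     for _, q in QA_TEMPLATES)

print(build_cot_prompt("hand me something to cut the rope"))
```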
2503.16031 | Sai Kartheek Reddy Kasu | Sai Kartheek Reddy Kasu, Shankar Biradar, Sunil Saumya | Deceptive Humor: A Synthetic Multilingual Benchmark Dataset for Bridging
Fabricated Claims with Humorous Content | 15 Pages, 4 figures, 8 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper presents the Deceptive Humor Dataset (DHD), a novel resource for
studying humor derived from fabricated claims and misinformation. In an era of
rampant misinformation, understanding how humor intertwines with deception is
essential. DHD consists of humor-infused comments generated from false
narratives, incorporating fabricated claims and manipulated information using
the ChatGPT-4o model. Each instance is labeled with a Satire Level, ranging
from 1 for subtle satire to 3 for high-level satire and classified into five
distinct Humor Categories: Dark Humor, Irony, Social Commentary, Wordplay, and
Absurdity. The dataset spans multiple languages including English, Telugu,
Hindi, Kannada, Tamil, and their code-mixed variants (Te-En, Hi-En, Ka-En,
Ta-En), making it a valuable multilingual benchmark. By introducing DHD, we
establish a structured foundation for analyzing humor in deceptive contexts,
paving the way for a new research direction that explores how humor not only
interacts with misinformation but also influences its perception and spread. We
establish strong baselines for the proposed dataset, providing a foundation for
future research to benchmark and advance deceptive humor detection models.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:58:02 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Kasu",
"Sai Kartheek Reddy",
""
],
[
"Biradar",
"Shankar",
""
],
[
"Saumya",
"Sunil",
""
]
] | TITLE: Deceptive Humor: A Synthetic Multilingual Benchmark Dataset for Bridging
Fabricated Claims with Humorous Content
ABSTRACT: This paper presents the Deceptive Humor Dataset (DHD), a novel resource for
studying humor derived from fabricated claims and misinformation. In an era of
rampant misinformation, understanding how humor intertwines with deception is
essential. DHD consists of humor-infused comments generated from false
narratives, incorporating fabricated claims and manipulated information using
the ChatGPT-4o model. Each instance is labeled with a Satire Level, ranging
from 1 for subtle satire to 3 for high-level satire and classified into five
distinct Humor Categories: Dark Humor, Irony, Social Commentary, Wordplay, and
Absurdity. The dataset spans multiple languages including English, Telugu,
Hindi, Kannada, Tamil, and their code-mixed variants (Te-En, Hi-En, Ka-En,
Ta-En), making it a valuable multilingual benchmark. By introducing DHD, we
establish a structured foundation for analyzing humor in deceptive contexts,
paving the way for a new research direction that explores how humor not only
interacts with misinformation but also influences its perception and spread. We
establish strong baselines for the proposed dataset, providing a foundation for
future research to benchmark and advance deceptive humor detection models.
|
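
For concreteness, a DHD-style record following the label scheme described above (Satire Level 1-3, five Humor Categories, multiple languages and code-mixed variants) might look like the dictionary below; the field names and example values are invented for illustration and do not come from the released dataset.

```python
# Invented example following the described annotation scheme; not a real DHD entry.
example_record = {
    "comment": "Sure, and the moon landing was filmed in my backyard.",
    "fabricated_claim": "A viral post claims the moon landing was staged.",
    "language": "English",      # English, Telugu, Hindi, Kannada, Tamil, or code-mixed (e.g., Hi-En)
    "satire_level": 2,          # 1 = subtle satire ... 3 = high-level satire
    "humor_category": "Irony",  # Dark Humor, Irony, Social Commentary, Wordplay, Absurdity
}
print(example_record["humor_category"])
```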
2503.16032 | Sunqi Fan | Sunqi Fan, Meng-Hao Guo, Shuojin Yang | Agentic Keyframe Search for Video Question Answering | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video question answering (VideoQA) enables machines to extract and comprehend
key information from videos through natural language interaction, which is a
critical step towards achieving intelligence. However, the demand for a
thorough understanding of videos and high computational costs still limit the
widespread applications of VideoQA. To address it, we propose Agentic Keyframe
Search (AKeyS), a simple yet powerful algorithm for identifying keyframes in
the VideoQA task. It can effectively distinguish key information from
redundant, irrelevant content by leveraging modern language agents to direct
classical search algorithms. Specifically, we first segment the video and
organize it as a tree structure. Then, AKeyS uses a language agent to estimate
heuristics and movement costs while dynamically expanding nodes. Finally, the
agent determines if sufficient keyframes have been collected based on
termination conditions and provides answers. Extensive experiments on the
EgoSchema and NExT-QA datasets show that AKeyS outperforms all previous methods
with the highest keyframe searching efficiency, which means it can accurately
identify key information and conduct effective visual reasoning with minimal
computational overhead. For example, on the EgoSchema subset, it achieves 1.8%
higher accuracy while processing only 43.5% of the frames compared to
VideoTree. We believe that AKeyS represents a significant step towards building
intelligent agents for video understanding. The code is publicly available at
https://github.com/fansunqi/AKeyS.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 10:58:12 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Fan",
"Sunqi",
""
],
[
"Guo",
"Meng-Hao",
""
],
[
"Yang",
"Shuojin",
""
]
] | TITLE: Agentic Keyframe Search for Video Question Answering
ABSTRACT: Video question answering (VideoQA) enables machines to extract and comprehend
key information from videos through natural language interaction, which is a
critical step towards achieving intelligence. However, the demand for a
thorough understanding of videos and high computational costs still limit the
widespread applications of VideoQA. To address it, we propose Agentic Keyframe
Search (AKeyS), a simple yet powerful algorithm for identifying keyframes in
the VideoQA task. It can effectively distinguish key information from
redundant, irrelevant content by leveraging modern language agents to direct
classical search algorithms. Specifically, we first segment the video and
organize it as a tree structure. Then, AKeyS uses a language agent to estimate
heuristics and movement costs while dynamically expanding nodes. Finally, the
agent determines if sufficient keyframes have been collected based on
termination conditions and provides answers. Extensive experiments on the
EgoSchema and NExT-QA datasets show that AKeyS outperforms all previous methods
with the highest keyframe searching efficiency, which means it can accurately
identify key information and conduct effective visual reasoning with minimal
computational overhead. For example, on the EgoSchema subset, it achieves 1.8%
higher accuracy while processing only 43.5% of the frames compared to
VideoTree. We believe that AKeyS represents a significant step towards building
intelligent agents for video understanding. The code is publicly available at
https://github.com/fansunqi/AKeyS.
|
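
The AKeyS abstract above pairs a language agent (for heuristic and movement-cost estimates) with classical search over a tree of video segments. The sketch below shows a generic best-first variant of that idea; the keyword-overlap heuristic stands in for the LLM call, and the tree format, costs, and budget are assumptions rather than the paper's algorithm.

```python
import heapq
from itertools import count

def agent_heuristic(segment, question):
    """Stand-in for the language agent's heuristic: lower means more promising.
    A real system would query an LLM with the segment's caption; this keyword
    overlap is purely illustrative."""
    overlap = len(set(question.lower().split()) & set(segment["caption"].lower().split()))
    return 1.0 / (1.0 + overlap)

def keyframe_search(root, question, budget=8, step_cost=0.1):
    """Best-first expansion of a video-segment tree guided by the agent heuristic."""
    tie = count()
    frontier = [(agent_heuristic(root, question), next(tie), 0.0, root)]
    keyframes = []
    while frontier and len(keyframes) < budget:
        _, _, g, node = heapq.heappop(frontier)
        keyframes.append(node["frame"])              # keep one representative frame
        for child in node.get("children", []):       # refine the segment further
            g_child = g + step_cost                  # accumulated movement cost
            f = g_child + agent_heuristic(child, question)
            heapq.heappush(frontier, (f, next(tie), g_child, child))
    return keyframes

toy = {"caption": "man cooking pasta in a kitchen", "frame": 0, "children": [
    {"caption": "man boiling water", "frame": 12, "children": []},
    {"caption": "man stirring sauce", "frame": 40, "children": []}]}
print(keyframe_search(toy, "What is the man stirring?"))
```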
2503.16036 | Zhihang Liu | Zhihang Liu and Chen-Wei Xie and Pandeng Li and Liming Zhao and
Longxiang Tang and Yun Zheng and Chuanbin Liu and Hongtao Xie | Hybrid-Level Instruction Injection for Video Token Compression in
Multi-modal Large Language Models | Accepted to CVPR2025 | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent Multi-modal Large Language Models (MLLMs) have been challenged by the
computational overhead resulting from massive video frames, often alleviated
through compression strategies. However, the visual content is not equally
contributed to user instructions, existing strategies (\eg, average pool)
inevitably lead to the loss of potentially useful information. To tackle this,
we propose the Hybrid-level Instruction Injection Strategy for Conditional
Token Compression in MLLMs (HICom), utilizing the instruction as a condition to
guide the compression from both local and global levels. This encourages the
compression to retain the maximum amount of user-focused information while
reducing visual tokens to minimize computational burden. Specifically, the
instruction condition is injected into the grouped visual tokens at the local
level and the learnable tokens at the global level, and we conduct the
attention mechanism to complete the conditional compression. From the
hybrid-level compression, the instruction-relevant visual parts are highlighted
while the temporal-spatial structure is also preserved for easier understanding
of LLMs. To further unleash the potential of HICom, we introduce a new
conditional pre-training stage with our proposed dataset HICom-248K.
Experiments show that our HICom can obtain distinguished video understanding
ability with fewer tokens, increasing the performance by 2.43\% average on
three multiple-choice QA benchmarks and saving 78.8\% tokens compared with the
SOTA method. The code is available at https://github.com/lntzm/HICom.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:09:18 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Liu",
"Zhihang",
""
],
[
"Xie",
"Chen-Wei",
""
],
[
"Li",
"Pandeng",
""
],
[
"Zhao",
"Liming",
""
],
[
"Tang",
"Longxiang",
""
],
[
"Zheng",
"Yun",
""
],
[
"Liu",
"Chuanbin",
""
],
[
"Xie",
"Hongtao",
""
]
] | TITLE: Hybrid-Level Instruction Injection for Video Token Compression in
Multi-modal Large Language Models
ABSTRACT: Recent Multi-modal Large Language Models (MLLMs) have been challenged by the
computational overhead resulting from massive video frames, often alleviated
through compression strategies. However, the visual content is not equally
contributed to user instructions, existing strategies (\eg, average pool)
inevitably lead to the loss of potentially useful information. To tackle this,
we propose the Hybrid-level Instruction Injection Strategy for Conditional
Token Compression in MLLMs (HICom), utilizing the instruction as a condition to
guide the compression from both local and global levels. This encourages the
compression to retain the maximum amount of user-focused information while
reducing visual tokens to minimize computational burden. Specifically, the
instruction condition is injected into the grouped visual tokens at the local
level and the learnable tokens at the global level, and we conduct the
attention mechanism to complete the conditional compression. From the
hybrid-level compression, the instruction-relevant visual parts are highlighted
while the temporal-spatial structure is also preserved for easier understanding
of LLMs. To further unleash the potential of HICom, we introduce a new
conditional pre-training stage with our proposed dataset HICom-248K.
Experiments show that our HICom can obtain distinguished video understanding
ability with fewer tokens, increasing the performance by 2.43\% average on
three multiple-choice QA benchmarks and saving 78.8\% tokens compared with the
SOTA method. The code is available at https://github.com/lntzm/HICom.
|
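
The HICom abstract above injects the instruction as a condition when compressing grouped visual tokens with attention. The module below is a minimal sketch of that general idea (one instruction-derived query cross-attending over a token group); the dimensions and single-query design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class InstructionConditionedCompressor(nn.Module):
    """Compress a group of visual tokens into one token, conditioned on the
    instruction embedding via cross-attention (illustrative sketch only)."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_tokens, instruction_emb):
        # visual_tokens: (B, N, D) tokens of one local group
        # instruction_emb: (B, D) pooled instruction embedding used as the query
        q = self.query_proj(instruction_emb).unsqueeze(1)        # (B, 1, D)
        compressed, _ = self.attn(q, visual_tokens, visual_tokens)
        return compressed                                        # (B, 1, D)

# Compress 16 visual tokens per group into a single instruction-aware token.
comp = InstructionConditionedCompressor(dim=256)
print(comp(torch.randn(2, 16, 256), torch.randn(2, 256)).shape)  # torch.Size([2, 1, 256])
```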
2503.16043 | Zhiyu Cao | Zhiyu Cao, Peifeng Li, Yaxin Fan, Qiaoming Zhu | Incomplete Utterance Rewriting with Editing Operation Guidance and
Utterance Augmentation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Although existing popular generation methods on Incomplete Utterance
Rewriting (IUR) can generate coherent utterances, they often result in the
inclusion of irrelevant and redundant tokens in rewritten utterances due to
their inability to focus on critical tokens in dialogue context. Furthermore,
the limited size of the training datasets also contributes to the insufficient
training of the IUR model. To address the first issue, we propose a multi-task
learning framework EO-IUR (Editing Operation-guided Incomplete Utterance
Rewriting) that introduces editing operation labels generated by a sequence
labeling module to guide the generation model to focus on critical tokens.
Furthermore, we introduce a token-level heterogeneous graph to represent
dialogues. To address the second issue, we propose a two-dimensional utterance
augmentation strategy, namely editing operation-based incomplete utterance
augmentation and LLM-based historical utterance augmentation. The experimental
results on three datasets demonstrate that our EO-IUR outperforms previous
state-of-the-art (SOTA) baselines in both open-domain and task-oriented
dialogue. The code will be available at https://github.com/Dewset/EO-IUR.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:26:46 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cao",
"Zhiyu",
""
],
[
"Li",
"Peifeng",
""
],
[
"Fan",
"Yaxin",
""
],
[
"Zhu",
"Qiaoming",
""
]
] | TITLE: Incomplete Utterance Rewriting with Editing Operation Guidance and
Utterance Augmentation
ABSTRACT: Although existing popular generation methods on Incomplete Utterance
Rewriting (IUR) can generate coherent utterances, they often result in the
inclusion of irrelevant and redundant tokens in rewritten utterances due to
their inability to focus on critical tokens in dialogue context. Furthermore,
the limited size of the training datasets also contributes to the insufficient
training of the IUR model. To address the first issue, we propose a multi-task
learning framework EO-IUR (Editing Operation-guided Incomplete Utterance
Rewriting) that introduces editing operation labels generated by a sequence
labeling module to guide the generation model to focus on critical tokens.
Furthermore, we introduce a token-level heterogeneous graph to represent
dialogues. To address the second issue, we propose a two-dimensional utterance
augmentation strategy, namely editing operation-based incomplete utterance
augmentation and LLM-based historical utterance augmentation. The experimental
results on three datasets demonstrate that our EO-IUR outperforms previous
state-of-the-art (SOTA) baselines in both open-domain and task-oriented
dialogue. The code will be available at https://github.com/Dewset/EO-IUR.
|
2503.16048 | Michael Goodale | Michael Goodale, Salvador Mascarenhas and Yair Lakretz | Meta-Learning Neural Mechanisms rather than Bayesian Priors | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Children acquire language despite being exposed to several orders of
magnitude less data than large language models require. Meta-learning has been
proposed as a way to integrate human-like learning biases into neural-network
architectures, combining both the structured generalizations of symbolic models
with the scalability of neural-network models. But what does meta-learning
exactly imbue the model with? We investigate the meta-learning of formal
languages and find that, contrary to previous claims, meta-trained models are
not learning simplicity-based priors when meta-trained on datasets organised
around simplicity. Rather, we find evidence that meta-training imprints neural
mechanisms (such as counters) into the model, which function like cognitive
primitives for the network on downstream tasks. Most surprisingly, we find that
meta-training on a single formal language can provide as much improvement to a
model as meta-training on 5000 different formal languages, provided that the
formal language incentivizes the learning of useful neural mechanisms. Taken
together, our findings provide practical implications for efficient
meta-learning paradigms and new theoretical insights into linking symbolic
theories and neural mechanisms.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:33:59 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Goodale",
"Michael",
""
],
[
"Mascarenhas",
"Salvador",
""
],
[
"Lakretz",
"Yair",
""
]
] | TITLE: Meta-Learning Neural Mechanisms rather than Bayesian Priors
ABSTRACT: Children acquire language despite being exposed to several orders of
magnitude less data than large language models require. Meta-learning has been
proposed as a way to integrate human-like learning biases into neural-network
architectures, combining both the structured generalizations of symbolic models
with the scalability of neural-network models. But what does meta-learning
exactly imbue the model with? We investigate the meta-learning of formal
languages and find that, contrary to previous claims, meta-trained models are
not learning simplicity-based priors when meta-trained on datasets organised
around simplicity. Rather, we find evidence that meta-training imprints neural
mechanisms (such as counters) into the model, which function like cognitive
primitives for the network on downstream tasks. Most surprisingly, we find that
meta-training on a single formal language can provide as much improvement to a
model as meta-training on 5000 different formal languages, provided that the
formal language incentivizes the learning of useful neural mechanisms. Taken
together, our findings provide practical implications for efficient
meta-learning paradigms and new theoretical insights into linking symbolic
theories and neural mechanisms.
|
2503.16051 | Andrei Jelea | Andrei Jelea, Ahmed Nabil Belbachir, Marius Leordeanu | Closer to Ground Truth: Realistic Shape and Appearance Labeled Data
Generation for Unsupervised Underwater Image Segmentation | Proceedings of ECCVW 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving fish segmentation in underwater videos, a real-world problem of great
practical value in marine and aquaculture industry, is a challenging task due
to the difficulty of the filming environment, poor visibility and limited
existing annotated underwater fish data. In order to overcome these obstacles,
we introduce a novel two-stage unsupervised segmentation approach that requires
no human annotations and combines artificially created and real images. Our
method generates challenging synthetic training data, by placing virtual fish
in real-world underwater habitats, after performing fish transformations such
as Thin Plate Spline shape warping and color Histogram Matching, which
realistically integrate synthetic fish into the backgrounds, making the
generated images increasingly closer to the real world data with every stage of
our approach. While we validate our unsupervised method on the popular DeepFish
dataset, obtaining a performance close to a fully-supervised SoTA model, we
further show its effectiveness on the specific case of salmon segmentation in
underwater videos, for which we introduce DeepSalmon, the largest dataset of
its kind in the literature (30 GB). Moreover, on both datasets we prove the
capability of our approach to boost the performance of the fully-supervised
SoTA model.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:34:45 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Jelea",
"Andrei",
""
],
[
"Belbachir",
"Ahmed Nabil",
""
],
[
"Leordeanu",
"Marius",
""
]
] | TITLE: Closer to Ground Truth: Realistic Shape and Appearance Labeled Data
Generation for Unsupervised Underwater Image Segmentation
ABSTRACT: Solving fish segmentation in underwater videos, a real-world problem of great
practical value in the marine and aquaculture industry, is a challenging task due
to the difficulty of the filming environment, poor visibility and limited
existing annotated underwater fish data. In order to overcome these obstacles,
we introduce a novel two-stage unsupervised segmentation approach that requires
no human annotations and combines artificially created and real images. Our
method generates challenging synthetic training data, by placing virtual fish
in real-world underwater habitats, after performing fish transformations such
as Thin Plate Spline shape warping and color Histogram Matching, which
realistically integrate synthetic fish into the backgrounds, making the
generated images increasingly closer to the real world data with every stage of
our approach. While we validate our unsupervised method on the popular DeepFish
dataset, obtaining a performance close to a fully-supervised SoTA model, we
further show its effectiveness on the specific case of salmon segmentation in
underwater videos, for which we introduce DeepSalmon, the largest dataset of
its kind in the literature (30 GB). Moreover, on both datasets we prove the
capability of our approach to boost the performance of the fully-supervised
SoTA model.
|
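
The abstract above names color Histogram Matching as one of the transformations used to blend synthetic fish into real underwater backgrounds. Below is a simplified illustration of that single step using scikit-image (assuming a recent version with the channel_axis keyword); the full pipeline (Thin Plate Spline warping, placement, staging) is not reproduced, and the alpha-composite is an assumption about how a matched crop would be inserted.

```python
import numpy as np
from skimage.exposure import match_histograms

def blend_synthetic_fish(background, fish_rgba):
    """Color-match a rendered RGBA fish image to an underwater background and
    alpha-blend it in (illustrative simplification of the described step)."""
    fish_rgb = fish_rgba[..., :3].astype(float)
    alpha = fish_rgba[..., 3:].astype(float) / 255.0
    # Transfer the background's per-channel color statistics onto the synthetic fish.
    matched = match_histograms(fish_rgb, background.astype(float), channel_axis=-1)
    # Composite the color-matched fish over the background using its alpha mask.
    return (alpha * matched + (1.0 - alpha) * background).astype(np.uint8)

# Random stand-ins for a real frame and a rendered RGBA fish of the same size.
bg = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
fish = np.random.randint(0, 255, (128, 128, 4), dtype=np.uint8)
print(blend_synthetic_fish(bg, fish).shape)  # (128, 128, 3)
```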
2503.16055 | Abdelrahman Elsayed | Abdelrahman Elsayed, Sarim Hashmi, Mohammed Elseiagy, Hu Wang,
Mohammad Yaqub, Ibrahim Almakky | SALT: Singular Value Adaptation with Low-Rank Transformation | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The complex nature of medical image segmentation calls for models that are
specifically designed to capture detailed, domain-specific features. Large
foundation models offer considerable flexibility, yet the cost of fine-tuning
these models remains a significant barrier. Parameter-Efficient Fine-Tuning
(PEFT) methods, such as Low-Rank Adaptation (LoRA), efficiently update model
weights with low-rank matrices but may suffer from underfitting when the chosen
rank is insufficient to capture domain-specific nuances. Conversely, full-rank
Singular Value Decomposition (SVD) based methods provide comprehensive updates
by modifying all singular values, yet they often lack flexibility and exhibit
variable performance across datasets. We propose SALT (Singular Value
Adaptation with Low-Rank Transformation), a method that selectively adapts the
most influential singular values using trainable scale and shift parameters
while complementing this with a low-rank update for the remaining subspace.
This hybrid approach harnesses the advantages of both LoRA and SVD, enabling
effective adaptation without relying on increasing model size or depth.
Evaluated on 5 challenging medical datasets, ranging from as few as 20 samples
to 1000, SALT outperforms state-of-the-art PEFT (LoRA and SVD) by 2% to 5% in
Dice with only 3.9% trainable parameters, demonstrating robust adaptation even
in low-resource settings. The code for SALT is available at:
https://github.com/BioMedIA-MBZUAI/SALT
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:42:41 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Elsayed",
"Abdelrahman",
""
],
[
"Hashmi",
"Sarim",
""
],
[
"Elseiagy",
"Mohammed",
""
],
[
"Wang",
"Hu",
""
],
[
"Yaqub",
"Mohammad",
""
],
[
"Almakky",
"Ibrahim",
""
]
] | TITLE: SALT: Singular Value Adaptation with Low-Rank Transformation
ABSTRACT: The complex nature of medical image segmentation calls for models that are
specifically designed to capture detailed, domain-specific features. Large
foundation models offer considerable flexibility, yet the cost of fine-tuning
these models remains a significant barrier. Parameter-Efficient Fine-Tuning
(PEFT) methods, such as Low-Rank Adaptation (LoRA), efficiently update model
weights with low-rank matrices but may suffer from underfitting when the chosen
rank is insufficient to capture domain-specific nuances. Conversely, full-rank
Singular Value Decomposition (SVD) based methods provide comprehensive updates
by modifying all singular values, yet they often lack flexibility and exhibit
variable performance across datasets. We propose SALT (Singular Value
Adaptation with Low-Rank Transformation), a method that selectively adapts the
most influential singular values using trainable scale and shift parameters
while complementing this with a low-rank update for the remaining subspace.
This hybrid approach harnesses the advantages of both LoRA and SVD, enabling
effective adaptation without relying on increasing model size or depth.
Evaluated on 5 challenging medical datasets, ranging from as few as 20 samples
to 1000, SALT outperforms state-of-the-art PEFT (LoRA and SVD) by 2% to 5% in
Dice with only 3.9% trainable parameters, demonstrating robust adaptation even
in low-resource settings. The code for SALT is available at:
https://github.com/BioMedIA-MBZUAI/SALT
|
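
The SALT abstract above describes scaling and shifting the most influential singular values of a frozen weight while covering the residual subspace with a low-rank update. The layer below is a rough sketch of that combination for a single linear weight; the rank, the number of adapted singular values, and the initialization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SALTStyleLinear(nn.Module):
    """Sketch: adapt the top-k singular values of a frozen weight with trainable
    scale/shift and add a low-rank update for the remaining subspace."""

    def __init__(self, weight, k=8, r=4):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)      # frozen SVD factors of the pretrained weight
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        self.k = k
        self.scale = nn.Parameter(torch.ones(k))   # trainable scale on top-k singular values
        self.shift = nn.Parameter(torch.zeros(k))  # trainable shift on top-k singular values
        out_dim, in_dim = weight.shape
        self.A = nn.Parameter(torch.zeros(out_dim, r))        # low-rank update for the
        self.B = nn.Parameter(torch.randn(r, in_dim) * 0.01)  # remaining subspace

    def forward(self, x):
        s = torch.cat([self.S[: self.k] * self.scale + self.shift, self.S[self.k:]])
        w = self.U @ torch.diag(s) @ self.Vh + self.A @ self.B
        return x @ w.T

layer = SALTStyleLinear(torch.randn(64, 32))
print(layer(torch.randn(4, 32)).shape)  # torch.Size([4, 64])
```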
2503.16056 | Yunzhe Zhang | Wanshu Fan, Yue Wang, Cong Wang, Yunzhe Zhang, Wei Wang and Dongsheng
Zhou | Semantic-Guided Global-Local Collaborative Networks for Lightweight
Image Super-Resolution | 14 pages,13 figures, 9 tables | Ieee Transactions on Instrument and Measurement 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Single-Image Super-Resolution (SISR) plays a pivotal role in enhancing the
accuracy and reliability of measurement systems, which are integral to various
vision-based instrumentation and measurement applications. These systems often
require clear and detailed images for precise object detection and recognition.
However, images captured by visual measurement tools frequently suffer from
degradation, including blurring and loss of detail, which can impede
measurement accuracy.As a potential remedy, we in this paper propose a
Semantic-Guided Global-Local Collaborative Network (SGGLC-Net) for lightweight
SISR. Our SGGLC-Net leverages semantic priors extracted from a pre-trained
model to guide the super-resolution process, enhancing image detail quality
effectively. Specifically, we propose a Semantic Guidance Module that seamlessly
integrates the semantic priors into the super-resolution network, enabling the
network to more adeptly capture and utilize semantic priors, thereby enhancing
image details. To further explore both local and non-local interactions for
improved detail rendition,we propose a Global-Local Collaborative Module, which
features three Global and Local Detail Enhancement Modules, as well as a Hybrid
Attention Mechanism to work together to efficiently learn more useful features.
Our extensive experiments show that SGGLC-Net achieves competitive PSNR and
SSIM values across multiple benchmark datasets, demonstrating higher
performance with the multi-adds reduction of 12.81G compared to
state-of-the-art lightweight super-resolution approaches. These improvements
underscore the potential of our approach to enhance the precision and
effectiveness of visual measurement systems. Codes are at
https://github.com/fanamber831/SGGLC-Net.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:43:55 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Fan",
"Wanshu",
""
],
[
"Wang",
"Yue",
""
],
[
"Wang",
"Cong",
""
],
[
"Zhang",
"Yunzhe",
""
],
[
"Wang",
"Wei",
""
],
[
"Zhou",
"Dongsheng",
""
]
] | TITLE: Semantic-Guided Global-Local Collaborative Networks for Lightweight
Image Super-Resolution
ABSTRACT: Single-Image Super-Resolution (SISR) plays a pivotal role in enhancing the
accuracy and reliability of measurement systems, which are integral to various
vision-based instrumentation and measurement applications. These systems often
require clear and detailed images for precise object detection and recognition.
However, images captured by visual measurement tools frequently suffer from
degradation, including blurring and loss of detail, which can impede
measurement accuracy. As a potential remedy, in this paper we propose a
Semantic-Guided Global-Local Collaborative Network (SGGLC-Net) for lightweight
SISR. Our SGGLC-Net leverages semantic priors extracted from a pre-trained
model to guide the super-resolution process, enhancing image detail quality
effectively. Specifically, we propose a Semantic Guidance Module that seamlessly
integrates the semantic priors into the super-resolution network, enabling the
network to more adeptly capture and utilize semantic priors, thereby enhancing
image details. To further explore both local and non-local interactions for
improved detail rendition, we propose a Global-Local Collaborative Module, which
features three Global and Local Detail Enhancement Modules, as well as a Hybrid
Attention Mechanism to work together to efficiently learn more useful features.
Our extensive experiments show that SGGLC-Net achieves competitive PSNR and
SSIM values across multiple benchmark datasets, demonstrating higher
performance with the multi-adds reduction of 12.81G compared to
state-of-the-art lightweight super-resolution approaches. These improvements
underscore the potential of our approach to enhance the precision and
effectiveness of visual measurement systems. Codes are at
https://github.com/fanamber831/SGGLC-Net.
|
2503.16058 | Xu He | Xu He, Zhen Huang, Qingsong Yao, Xiaoqian Zhou and S. Kevin Zhou | Landmarks Are Alike Yet Distinct: Harnessing Similarity and
Individuality for One-Shot Medical Landmark Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Landmark detection plays a crucial role in medical imaging applications such
as disease diagnosis, bone age estimation, and therapy planning. However,
training models for detecting multiple landmarks simultaneously often
encounters the "seesaw phenomenon", where improvements in detecting certain
landmarks lead to declines in detecting others. Yet, training a separate model
for each landmark increases memory usage and computational overhead. To address
these challenges, we propose a novel approach based on the belief that
"landmarks are distinct" by training models with pseudo-labels and template
data updated continuously during the training process, where each model is
dedicated to detecting a single landmark to achieve high accuracy. Furthermore,
grounded on the belief that "landmarks are also alike", we introduce an
adapter-based fusion model, combining shared weights with landmark-specific
weights, to efficiently share model parameters while allowing flexible
adaptation to individual landmarks. This approach not only significantly
reduces memory and computational resource requirements but also effectively
mitigates the seesaw phenomenon in multi-landmark training. Experimental
results on publicly available medical image datasets demonstrate that the
single-landmark models significantly outperform traditional multi-point joint
training models in detecting individual landmarks. Although our adapter-based
fusion model shows slightly lower performance compared to the combined results
of all single-landmark models, it still surpasses the current state-of-the-art
methods while achieving a notable improvement in resource efficiency.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:46:29 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"He",
"Xu",
""
],
[
"Huang",
"Zhen",
""
],
[
"Yao",
"Qingsong",
""
],
[
"Zhou",
"Xiaoqian",
""
],
[
"Zhou",
"S. Kevin",
""
]
] | TITLE: Landmarks Are Alike Yet Distinct: Harnessing Similarity and
Individuality for One-Shot Medical Landmark Detection
ABSTRACT: Landmark detection plays a crucial role in medical imaging applications such
as disease diagnosis, bone age estimation, and therapy planning. However,
training models for detecting multiple landmarks simultaneously often
encounters the "seesaw phenomenon", where improvements in detecting certain
landmarks lead to declines in detecting others. Yet, training a separate model
for each landmark increases memory usage and computational overhead. To address
these challenges, we propose a novel approach based on the belief that
"landmarks are distinct" by training models with pseudo-labels and template
data updated continuously during the training process, where each model is
dedicated to detecting a single landmark to achieve high accuracy. Furthermore,
grounded on the belief that "landmarks are also alike", we introduce an
adapter-based fusion model, combining shared weights with landmark-specific
weights, to efficiently share model parameters while allowing flexible
adaptation to individual landmarks. This approach not only significantly
reduces memory and computational resource requirements but also effectively
mitigates the seesaw phenomenon in multi-landmark training. Experimental
results on publicly available medical image datasets demonstrate that the
single-landmark models significantly outperform traditional multi-point joint
training models in detecting individual landmarks. Although our adapter-based
fusion model shows slightly lower performance compared to the combined results
of all single-landmark models, it still surpasses the current state-of-the-art
methods while achieving a notable improvement in resource efficiency.
|
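
The abstract above combines shared weights with landmark-specific weights in an adapter-based fusion model. The toy head below illustrates that shared-plus-adapter pattern; the bottleneck design, dimensions, and coordinate output are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class LandmarkAdapterHead(nn.Module):
    """Toy shared head with one lightweight adapter per landmark."""

    def __init__(self, dim, num_landmarks, bottleneck=32):
        super().__init__()
        self.shared = nn.Linear(dim, dim)              # weights shared by all landmarks
        self.adapters = nn.ModuleList([                # small landmark-specific adapters
            nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim))
            for _ in range(num_landmarks)
        ])
        self.heads = nn.ModuleList([nn.Linear(dim, 2) for _ in range(num_landmarks)])

    def forward(self, feats, landmark_id):
        h = self.shared(feats)
        h = h + self.adapters[landmark_id](h)          # residual landmark-specific adaptation
        return self.heads[landmark_id](h)              # predicted (x, y) coordinate

model = LandmarkAdapterHead(dim=128, num_landmarks=5)
print(model(torch.randn(4, 128), landmark_id=2).shape)  # torch.Size([4, 2])
```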
2503.16063 | Zhiyu Cao | Zhiyu Cao, Peifeng Li, Qiaoming Zhu, Yaxin Fan | Two-stage Incomplete Utterance Rewriting on Editing Operation | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Previous work on Incomplete Utterance Rewriting (IUR) has primarily focused
on generating rewritten utterances based solely on dialogue context, ignoring
the widespread phenomenon of coreference and ellipsis in dialogues. To address
this issue, we propose a novel framework called TEO (\emph{Two-stage approach
on Editing Operation}) for IUR, in which the first stage generates editing
operations and the second stage rewrites incomplete utterances utilizing the
generated editing operations and the dialogue context. Furthermore, an
adversarial perturbation strategy is proposed to mitigate cascading errors and
exposure bias caused by the inconsistency between training and inference in the
second stage. Experimental results on three IUR datasets show that our TEO
outperforms the SOTA models significantly.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:56:14 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Cao",
"Zhiyu",
""
],
[
"Li",
"Peifeng",
""
],
[
"Zhu",
"Qiaoming",
""
],
[
"Fan",
"Yaxin",
""
]
] | TITLE: Two-stage Incomplete Utterance Rewriting on Editing Operation
ABSTRACT: Previous work on Incomplete Utterance Rewriting (IUR) has primarily focused
on generating rewritten utterances based solely on dialogue context, ignoring
the widespread phenomenon of coreference and ellipsis in dialogues. To address
this issue, we propose a novel framework called TEO (\emph{Two-stage approach
on Editing Operation}) for IUR, in which the first stage generates editing
operations and the second stage rewrites incomplete utterances utilizing the
generated editing operations and the dialogue context. Furthermore, an
adversarial perturbation strategy is proposed to mitigate cascading errors and
exposure bias caused by the inconsistency between training and inference in the
second stage. Experimental results on three IUR datasets show that our TEO
outperforms the SOTA models significantly.
|
2503.16064 | Qiang Zou | Qiang Zou, Shuli Cheng, Jiayi Chen | PromptHash: Affinity-Prompted Collaborative Cross-Modal Learning for
Adaptive Hashing Retrieval | Accepted by CVPR2025 | null | null | null | cs.CV cs.AI cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-modal hashing is a promising approach for efficient data retrieval and
storage optimization. However, contemporary methods exhibit significant
limitations in semantic preservation, contextual integrity, and information
redundancy, which constrains retrieval efficacy. We present PromptHash, an
innovative framework leveraging affinity prompt-aware collaborative learning
for adaptive cross-modal hashing. We propose an end-to-end framework for
affinity-prompted collaborative hashing, with the following fundamental
technical contributions: (i) a text affinity prompt learning mechanism that
preserves contextual information while maintaining parameter efficiency, (ii)
an adaptive gated selection fusion architecture that synthesizes State Space
Model with Transformer network for precise cross-modal feature integration, and
(iii) a prompt affinity alignment strategy that bridges modal heterogeneity
through hierarchical contrastive learning. To the best of our knowledge, this
study presents the first investigation into affinity prompt awareness within
collaborative cross-modal adaptive hash learning, establishing a paradigm for
enhanced semantic consistency across modalities. Through comprehensive
evaluation on three benchmark multi-label datasets, PromptHash demonstrates
substantial performance improvements over existing approaches. Notably, on the
NUS-WIDE dataset, our method achieves significant gains of 18.22% and 18.65% in
image-to-text and text-to-image retrieval tasks, respectively. The code is
publicly available at https://github.com/ShiShuMo/PromptHash.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 11:56:27 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Zou",
"Qiang",
""
],
[
"Cheng",
"Shuli",
""
],
[
"Chen",
"Jiayi",
""
]
] | TITLE: PromptHash: Affinity-Prompted Collaborative Cross-Modal Learning for
Adaptive Hashing Retrieval
ABSTRACT: Cross-modal hashing is a promising approach for efficient data retrieval and
storage optimization. However, contemporary methods exhibit significant
limitations in semantic preservation, contextual integrity, and information
redundancy, which constrains retrieval efficacy. We present PromptHash, an
innovative framework leveraging affinity prompt-aware collaborative learning
for adaptive cross-modal hashing. We propose an end-to-end framework for
affinity-prompted collaborative hashing, with the following fundamental
technical contributions: (i) a text affinity prompt learning mechanism that
preserves contextual information while maintaining parameter efficiency, (ii)
an adaptive gated selection fusion architecture that synthesizes State Space
Model with Transformer network for precise cross-modal feature integration, and
(iii) a prompt affinity alignment strategy that bridges modal heterogeneity
through hierarchical contrastive learning. To the best of our knowledge, this
study presents the first investigation into affinity prompt awareness within
collaborative cross-modal adaptive hash learning, establishing a paradigm for
enhanced semantic consistency across modalities. Through comprehensive
evaluation on three benchmark multi-label datasets, PromptHash demonstrates
substantial performance improvements over existing approaches. Notably, on the
NUS-WIDE dataset, our method achieves significant gains of 18.22% and 18.65% in
image-to-text and text-to-image retrieval tasks, respectively. The code is
publicly available at https://github.com/ShiShuMo/PromptHash.
|
2503.16068 | Changjian Li | Longbin Ji, Lei Zhong, Pengfei Wei, Changjian Li | PoseTraj: Pose-Aware Trajectory Control in Video Diffusion | Code, data and project page: https://robingg1.github.io/Pose-Traj/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in trajectory-guided video generation have achieved
notable progress. However, existing models still face challenges in generating
object motions with potentially changing 6D poses under wide-range rotations,
due to limited 3D understanding. To address this problem, we introduce
PoseTraj, a pose-aware video dragging model for generating 3D-aligned motion
from 2D trajectories. Our method adopts a novel two-stage pose-aware
pretraining framework, improving 3D understanding across diverse trajectories.
Specifically, we propose a large-scale synthetic dataset PoseTraj-10K,
containing 10k videos of objects following rotational trajectories, and enhance
the model's perception of object pose changes by incorporating 3D bounding boxes
as intermediate supervision signals. Following this, we fine-tune the
trajectory-controlling module on real-world videos, applying an additional
camera-disentanglement module to further refine motion accuracy. Experiments on
various benchmark datasets demonstrate that our method not only excels in 3D
pose-aligned dragging for rotational trajectories but also outperforms existing
baselines in trajectory accuracy and video quality.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:01:43 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Ji",
"Longbin",
""
],
[
"Zhong",
"Lei",
""
],
[
"Wei",
"Pengfei",
""
],
[
"Li",
"Changjian",
""
]
] | TITLE: PoseTraj: Pose-Aware Trajectory Control in Video Diffusion
ABSTRACT: Recent advancements in trajectory-guided video generation have achieved
notable progress. However, existing models still face challenges in generating
object motions with potentially changing 6D poses under wide-range rotations,
due to limited 3D understanding. To address this problem, we introduce
PoseTraj, a pose-aware video dragging model for generating 3D-aligned motion
from 2D trajectories. Our method adopts a novel two-stage pose-aware
pretraining framework, improving 3D understanding across diverse trajectories.
Specifically, we propose a large-scale synthetic dataset PoseTraj-10K,
containing 10k videos of objects following rotational trajectories, and enhance
the model's perception of object pose changes by incorporating 3D bounding boxes
as intermediate supervision signals. Following this, we fine-tune the
trajectory-controlling module on real-world videos, applying an additional
camera-disentanglement module to further refine motion accuracy. Experiments on
various benchmark datasets demonstrate that our method not only excels in 3D
pose-aligned dragging for rotational trajectories but also outperforms existing
baselines in trajectory accuracy and video quality.
|
2503.16069 | Aniek Eijpe | Aniek Eijpe, Soufyan Lakbir, Melis Erdal Cesur, Sara P. Oliveira,
Sanne Abeln and Wilson Silva | Disentangled and Interpretable Multimodal Attention Fusion for Cancer
Survival Prediction | 11 pages, 1 figure, 3 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | To improve the prediction of cancer survival using whole-slide images and
transcriptomics data, it is crucial to capture both modality-shared and
modality-specific information. However, multimodal frameworks often entangle
these representations, limiting interpretability and potentially suppressing
discriminative features. To address this, we propose Disentangled and
Interpretable Multimodal Attention Fusion (DIMAF), a multimodal framework that
separates the intra- and inter-modal interactions within an attention-based
fusion mechanism to learn distinct modality-specific and modality-shared
representations. We introduce a loss based on Distance Correlation to promote
disentanglement between these representations and integrate Shapley additive
explanations to assess their relative contributions to survival prediction. We
evaluate DIMAF on four public cancer survival datasets, achieving a relative
average improvement of 1.85% in performance and 23.7% in disentanglement
compared to current state-of-the-art multimodal models. Beyond improved
performance, our interpretable framework enables a deeper exploration of the
underlying interactions between and within modalities in cancer biology.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2025 12:02:10 GMT"
}
] | 2025-03-21T00:00:00 | [
[
"Eijpe",
"Aniek",
""
],
[
"Lakbir",
"Soufyan",
""
],
[
"Cesur",
"Melis Erdal",
""
],
[
"Oliveira",
"Sara P.",
""
],
[
"Abeln",
"Sanne",
""
],
[
"Silva",
"Wilson",
""
]
] | TITLE: Disentangled and Interpretable Multimodal Attention Fusion for Cancer
Survival Prediction
ABSTRACT: To improve the prediction of cancer survival using whole-slide images and
transcriptomics data, it is crucial to capture both modality-shared and
modality-specific information. However, multimodal frameworks often entangle
these representations, limiting interpretability and potentially suppressing
discriminative features. To address this, we propose Disentangled and
Interpretable Multimodal Attention Fusion (DIMAF), a multimodal framework that
separates the intra- and inter-modal interactions within an attention-based
fusion mechanism to learn distinct modality-specific and modality-shared
representations. We introduce a loss based on Distance Correlation to promote
disentanglement between these representations and integrate Shapley additive
explanations to assess their relative contributions to survival prediction. We
evaluate DIMAF on four public cancer survival datasets, achieving a relative
average improvement of 1.85% in performance and 23.7% in disentanglement
compared to current state-of-the-art multimodal models. Beyond improved
performance, our interpretable framework enables a deeper exploration of the
underlying interactions between and within modalities in cancer biology.
|
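
The DIMAF abstract above mentions a disentanglement loss based on Distance Correlation. The function below is a generic empirical (squared) distance correlation between two batches of representations, which is one common way such a loss is implemented; it is not taken from the DIMAF codebase.

```python
import torch

def squared_distance_correlation(x, y, eps=1e-9):
    """Empirical squared distance correlation between two (n x d) batches.
    Driving this toward zero encourages statistical independence between the
    two representations (generic formulation, not DIMAF's exact loss)."""
    def centered(z):
        d = torch.cdist(z, z)   # pairwise Euclidean distances
        return d - d.mean(dim=0, keepdim=True) - d.mean(dim=1, keepdim=True) + d.mean()
    a, b = centered(x), centered(y)
    dcov2_xy = (a * b).mean()
    dcov2_xx = (a * a).mean()
    dcov2_yy = (b * b).mean()
    return dcov2_xy / (torch.sqrt(dcov2_xx * dcov2_yy) + eps)

# Independent random features give a value close to zero.
print(squared_distance_correlation(torch.randn(256, 32), torch.randn(256, 16)).item())
```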