id (stringlengths 9-16) | submitter (stringlengths 3-64, ⌀) | authors (stringlengths 5-6.63k) | title (stringlengths 7-245) | comments (stringlengths 1-482, ⌀) | journal-ref (stringlengths 4-382, ⌀) | doi (stringlengths 9-151, ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5-108) | license (stringclasses, 9 values) | abstract (stringlengths 83-3.41k) | versions (listlengths 1-20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (listlengths 1-427) | prompt (stringlengths 166-3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.10075 | Kun Yuan | Kun Yuan, Vinkle Srivastav, Nassir Navab, Nicolas Padoy | HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical
Phase Recognition | Accepted by MICCAI2024 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Natural language could play an important role in developing generalist
surgical models by providing a broad source of supervision from raw texts. This
flexible form of supervision can enable the model's transferability across
datasets and tasks as natural language can be used to reference learned visual
concepts or describe new ones. In this work, we present HecVL, a novel
hierarchical video-language pretraining approach for building a generalist
surgical model. Specifically, we construct a hierarchical video-text paired
dataset by pairing the surgical lecture video with three hierarchical levels of
texts: at clip-level, atomic actions using transcribed audio texts; at
phase-level, conceptual text summaries; and at video-level, overall abstract
text of the surgical procedure. Then, we propose a novel fine-to-coarse
contrastive learning framework that learns separate embedding spaces for the
three video-text hierarchies using a single model. By disentangling embedding
spaces of different hierarchical levels, the learned multi-modal
representations encode short-term and long-term surgical concepts in the same
model. Thanks to the injected textual semantics, we demonstrate that the HecVL
approach can enable zero-shot surgical phase recognition without any human
annotation. Furthermore, we show that the same HecVL model for surgical phase
recognition can be transferred across different surgical procedures and medical
centers. The code is available at https://github.com/CAMMA-public/SurgVLP
| [
{
"version": "v1",
"created": "Thu, 16 May 2024 13:14:43 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:27:41 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yuan",
"Kun",
""
],
[
"Srivastav",
"Vinkle",
""
],
[
"Navab",
"Nassir",
""
],
[
"Padoy",
"Nicolas",
""
]
]
| TITLE: HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical
Phase Recognition
ABSTRACT: Natural language could play an important role in developing generalist
surgical models by providing a broad source of supervision from raw texts. This
flexible form of supervision can enable the model's transferability across
datasets and tasks as natural language can be used to reference learned visual
concepts or describe new ones. In this work, we present HecVL, a novel
hierarchical video-language pretraining approach for building a generalist
surgical model. Specifically, we construct a hierarchical video-text paired
dataset by pairing the surgical lecture video with three hierarchical levels of
texts: at clip-level, atomic actions using transcribed audio texts; at
phase-level, conceptual text summaries; and at video-level, overall abstract
text of the surgical procedure. Then, we propose a novel fine-to-coarse
contrastive learning framework that learns separate embedding spaces for the
three video-text hierarchies using a single model. By disentangling embedding
spaces of different hierarchical levels, the learned multi-modal
representations encode short-term and long-term surgical concepts in the same
model. Thanks to the injected textual semantics, we demonstrate that the HecVL
approach can enable zero-shot surgical phase recognition without any human
annotation. Furthermore, we show that the same HecVL model for surgical phase
recognition can be transferred across different surgical procedures and medical
centers. The code is available at https://github.com/CAMMA-public/SurgVLP
| no_new_dataset | 0.947672 |
2406.08426 | Zijin Hong | Zijin Hong, Zheng Yuan, Qinggang Zhang, Hao Chen, Junnan Dong, Feiran
Huang, Xiao Huang | Next-Generation Database Interfaces: A Survey of LLM-based Text-to-SQL | null | null | null | null | cs.CL cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating accurate SQL from users' natural language questions (text-to-SQL)
remains a long-standing challenge due to the complexities involved in user
question understanding, database schema comprehension, and SQL generation.
Traditional text-to-SQL systems, which combine human engineering and deep
neural networks, have made significant progress. Subsequently, pre-trained
language models (PLMs) have been developed for text-to-SQL tasks, achieving
promising results. However, as modern databases and user questions grow more
complex, PLMs with a limited parameter size often produce incorrect SQL. This
necessitates more sophisticated and tailored optimization methods, which
restricts the application of PLM-based systems. Recently, large language models
(LLMs) have shown significant capabilities in natural language understanding as
model scale increases. Thus, integrating LLM-based solutions can bring unique
opportunities, improvements, and solutions to text-to-SQL research. In this
survey, we provide a comprehensive review of existing LLM-based text-to-SQL
studies. Specifically, we offer a brief overview of the technical challenges
and evolutionary process of text-to-SQL. Next, we introduce the datasets and
metrics designed to evaluate text-to-SQL systems. Subsequently, we present a
systematic analysis of recent advances in LLM-based text-to-SQL. Finally, we
provide a summary, discuss the remaining challenges in this field, and
suggest expectations for future research directions.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2024 17:13:17 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Jun 2024 13:51:30 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Jul 2024 08:06:57 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Feb 2025 22:22:20 GMT"
},
{
"version": "v5",
"created": "Thu, 13 Mar 2025 08:45:35 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Hong",
"Zijin",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Zhang",
"Qinggang",
""
],
[
"Chen",
"Hao",
""
],
[
"Dong",
"Junnan",
""
],
[
"Huang",
"Feiran",
""
],
[
"Huang",
"Xiao",
""
]
]
| TITLE: Next-Generation Database Interfaces: A Survey of LLM-based Text-to-SQL
ABSTRACT: Generating accurate SQL from users' natural language questions (text-to-SQL)
remains a long-standing challenge due to the complexities involved in user
question understanding, database schema comprehension, and SQL generation.
Traditional text-to-SQL systems, which combine human engineering and deep
neural networks, have made significant progress. Subsequently, pre-trained
language models (PLMs) have been developed for text-to-SQL tasks, achieving
promising results. However, as modern databases and user questions grow more
complex, PLMs with a limited parameter size often produce incorrect SQL. This
necessitates more sophisticated and tailored optimization methods, which
restricts the application of PLM-based systems. Recently, large language models
(LLMs) have shown significant capabilities in natural language understanding as
model scale increases. Thus, integrating LLM-based solutions can bring unique
opportunities, improvements, and solutions to text-to-SQL research. In this
survey, we provide a comprehensive review of existing LLM-based text-to-SQL
studies. Specifically, we offer a brief overview of the technical challenges
and evolutionary process of text-to-SQL. Next, we introduce the datasets and
metrics designed to evaluate text-to-SQL systems. Subsequently, we present a
systematic analysis of recent advances in LLM-based text-to-SQL. Finally, we
provide a summary, discuss the remaining challenges in this field, and
suggest expectations for future research directions.
| no_new_dataset | 0.944177 |
2406.10714 | Neehar Peri | Arun Balajee Vasudevan, Neehar Peri, Jeff Schneider, Deva Ramanan | Planning with Adaptive World Models for Autonomous Driving | This project has been accepted to the International Conference on
Robotics and Automation (ICRA) 2025. Project Page:
https://arunbalajeev.github.io/world_models_planning/world_model_paper.html | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | Motion planning is crucial for safe navigation in complex urban environments.
Historically, motion planners (MPs) have been evaluated with
procedurally-generated simulators like CARLA. However, such synthetic
benchmarks do not capture real-world multi-agent interactions. nuPlan, a
recently released MP benchmark, addresses this limitation by augmenting
real-world driving logs with closed-loop simulation logic, effectively turning
the fixed dataset into a reactive simulator. We analyze the characteristics of
nuPlan's recorded logs and find that each city has its own unique driving
behaviors, suggesting that robust planners must adapt to different
environments. We learn to model such unique behaviors with BehaviorNet, a graph
convolutional neural network (GCNN) that predicts reactive agent behaviors
using features derived from recently-observed agent histories; intuitively,
some aggressive agents may tailgate lead vehicles, while others may not. To
model such phenomena, BehaviorNet predicts the parameters of an agent's motion
controller rather than directly predicting its spacetime trajectory (as most
forecasters do). Finally, we present AdaptiveDriver, a model-predictive control
(MPC) based planner that unrolls different world models conditioned on
BehaviorNet's predictions. Our extensive experiments demonstrate that
AdaptiveDriver achieves state-of-the-art results on the nuPlan closed-loop
planning benchmark, improving over prior work by 2% on Test-14 Hard R-CLS, and
generalizes even when evaluated on never-before-seen cities.
| [
{
"version": "v1",
"created": "Sat, 15 Jun 2024 18:53:45 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Sep 2024 20:07:57 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 22:55:20 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Vasudevan",
"Arun Balajee",
""
],
[
"Peri",
"Neehar",
""
],
[
"Schneider",
"Jeff",
""
],
[
"Ramanan",
"Deva",
""
]
]
| TITLE: Planning with Adaptive World Models for Autonomous Driving
ABSTRACT: Motion planning is crucial for safe navigation in complex urban environments.
Historically, motion planners (MPs) have been evaluated with
procedurally-generated simulators like CARLA. However, such synthetic
benchmarks do not capture real-world multi-agent interactions. nuPlan, a
recently released MP benchmark, addresses this limitation by augmenting
real-world driving logs with closed-loop simulation logic, effectively turning
the fixed dataset into a reactive simulator. We analyze the characteristics of
nuPlan's recorded logs and find that each city has its own unique driving
behaviors, suggesting that robust planners must adapt to different
environments. We learn to model such unique behaviors with BehaviorNet, a graph
convolutional neural network (GCNN) that predicts reactive agent behaviors
using features derived from recently-observed agent histories; intuitively,
some aggressive agents may tailgate lead vehicles, while others may not. To
model such phenomena, BehaviorNet predicts the parameters of an agent's motion
controller rather than directly predicting its spacetime trajectory (as most
forecasters do). Finally, we present AdaptiveDriver, a model-predictive control
(MPC) based planner that unrolls different world models conditioned on
BehaviorNet's predictions. Our extensive experiments demonstrate that
AdaptiveDriver achieves state-of-the-art results on the nuPlan closed-loop
planning benchmark, improving over prior work by 2% on Test-14 Hard R-CLS, and
generalizes even when evaluated on never-before-seen cities.
| no_new_dataset | 0.939803 |
2406.12480 | Stefan Sylvius Wagner | Stefan Sylvius Wagner and Maike Behrendt and Marc Ziegele and Stefan
Harmeling | The Power of LLM-Generated Synthetic Data for Stance Detection in Online
Political Discussions | ICLR 2025 Spotlight | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Stance detection holds great potential to improve online political
discussions through its deployment in discussion platforms for purposes such as
content moderation, topic summarization or to facilitate more balanced
discussions. Typically, transformer-based models are employed directly for
stance detection, requiring vast amounts of data. However, the wide variety of
debate topics in online political discussions makes data collection
particularly challenging. LLMs have revived stance detection, but their online
deployment in online political discussions faces challenges like inconsistent
outputs, biases, and vulnerability to adversarial attacks. We show how
LLM-generated synthetic data can improve stance detection for online political
discussions by using reliable traditional stance detection models for online
deployment, while leveraging the text generation capabilities of LLMs for
synthetic data generation in a secure offline environment. To achieve this, (i)
we generate synthetic data for specific debate questions by prompting a
Mistral-7B model and show that fine-tuning with the generated synthetic data
can substantially improve the performance of stance detection, while remaining
interpretable and aligned with real world data. (ii) Using the synthetic data
as a reference, we can improve performance even further by identifying the most
informative samples in an unlabelled dataset, i.e., those samples which the
stance detection model is most uncertain about and can benefit from the most.
By fine-tuning with both synthetic data and the most informative samples, we
surpass the performance of the baseline model that is fine-tuned on all true
labels, while labelling considerably less data.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2024 10:36:21 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 22:04:34 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wagner",
"Stefan Sylvius",
""
],
[
"Behrendt",
"Maike",
""
],
[
"Ziegele",
"Marc",
""
],
[
"Harmeling",
"Stefan",
""
]
]
| TITLE: The Power of LLM-Generated Synthetic Data for Stance Detection in Online
Political Discussions
ABSTRACT: Stance detection holds great potential to improve online political
discussions through its deployment in discussion platforms for purposes such as
content moderation, topic summarization or to facilitate more balanced
discussions. Typically, transformer-based models are employed directly for
stance detection, requiring vast amounts of data. However, the wide variety of
debate topics in online political discussions makes data collection
particularly challenging. LLMs have revived stance detection, but their online
deployment in online political discussions faces challenges like inconsistent
outputs, biases, and vulnerability to adversarial attacks. We show how
LLM-generated synthetic data can improve stance detection for online political
discussions by using reliable traditional stance detection models for online
deployment, while leveraging the text generation capabilities of LLMs for
synthetic data generation in a secure offline environment. To achieve this, (i)
we generate synthetic data for specific debate questions by prompting a
Mistral-7B model and show that fine-tuning with the generated synthetic data
can substantially improve the performance of stance detection, while remaining
interpretable and aligned with real world data. (ii) Using the synthetic data
as a reference, we can improve performance even further by identifying the most
informative samples in an unlabelled dataset, i.e., those samples which the
stance detection model is most uncertain about and can benefit from the most.
By fine-tuning with both synthetic data and the most informative samples, we
surpass the performance of the baseline model that is fine-tuned on all true
labels, while labelling considerably less data.
| no_new_dataset | 0.931275 |
2406.14678 | Pamela Riviere | Pamela D. Rivière (1), Anne L. Beatty-Martínez (1) and Sean Trott
(1 and 2) ((1) Department of Cognitive Science UC San Diego, (2)
Computational Social Science UC San Diego) | Evaluating Contextualized Representations of (Spanish) Ambiguous Words:
A New Lexical Resource and Empirical Analysis | 17 pages, 12 figures, accepted at NAACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Lexical ambiguity -- where a single wordform takes on distinct,
context-dependent meanings -- serves as a useful tool to compare across
different language models' (LMs') ability to form distinct, contextualized
representations of the same stimulus. Few studies have systematically compared
LMs' contextualized word embeddings for languages beyond English. Here, we
evaluate semantic representations of Spanish ambiguous nouns in context in a
suite of Spanish-language monolingual and multilingual BERT-based models. We
develop a novel dataset of minimal-pair sentences evoking the same or different
sense for a target ambiguous noun. In a pre-registered study, we collect
contextualized human relatedness judgments for each sentence pair. We find that
various BERT-based LMs' contextualized semantic representations capture some
variance in human judgments but fall short of the human benchmark. In
exploratory work, we find that performance scales with model size. We also
identify stereotyped trajectories of target noun disambiguation as a proportion
of traversal through a given LM family's architecture, which we partially
replicate in English. We contribute (1) a dataset of controlled, Spanish
sentence stimuli with human relatedness norms, and (2) to our evolving
understanding of the impact that LM specification (architectures, training
protocols) exerts on contextualized embeddings.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 18:58:11 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Oct 2024 19:06:26 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 19:31:41 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Rivière",
"Pamela D.",
"",
"1 and 2"
],
[
"Beatty-Martínez",
"Anne L.",
"",
"1 and 2"
],
[
"Trott",
"Sean",
"",
"1 and 2"
]
]
| TITLE: Evaluating Contextualized Representations of (Spanish) Ambiguous Words:
A New Lexical Resource and Empirical Analysis
ABSTRACT: Lexical ambiguity -- where a single wordform takes on distinct,
context-dependent meanings -- serves as a useful tool to compare across
different language models' (LMs') ability to form distinct, contextualized
representations of the same stimulus. Few studies have systematically compared
LMs' contextualized word embeddings for languages beyond English. Here, we
evaluate semantic representations of Spanish ambiguous nouns in context in a
suite of Spanish-language monolingual and multilingual BERT-based models. We
develop a novel dataset of minimal-pair sentences evoking the same or different
sense for a target ambiguous noun. In a pre-registered study, we collect
contextualized human relatedness judgments for each sentence pair. We find that
various BERT-based LMs' contextualized semantic representations capture some
variance in human judgments but fall short of the human benchmark. In
exploratory work, we find that performance scales with model size. We also
identify stereotyped trajectories of target noun disambiguation as a proportion
of traversal through a given LM family's architecture, which we partially
replicate in English. We contribute (1) a dataset of controlled, Spanish
sentence stimuli with human relatedness norms, and (2) to our evolving
understanding of the impact that LM specification (architectures, training
protocols) exerts on contextualized embeddings.
| new_dataset | 0.956594 |
2407.08883 | Chen Yuqian | Yuqian Chen, Fan Zhang, Meng Wang, Leo R. Zekelman, Suheyla
Cetin-Karayumak, Tengfei Xue, Chaoyi Zhang, Yang Song, Nikos Makris, Yogesh
Rathi, Weidong Cai, Lauren J. O'Donnell | TractGraphFormer: Anatomically Informed Hybrid Graph CNN-Transformer
Network for Classification from Diffusion MRI Tractography | 23 pages, 4 figures | Medical Image Analysis (2025): 103476 | 10.1016/j.media.2025.103476 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The relationship between brain connections and non-imaging phenotypes is
increasingly studied using deep neural networks. However, the local and global
properties of the brain's white matter networks are often overlooked in
convolutional network design. We introduce TractGraphFormer, a hybrid Graph
CNN-Transformer deep learning framework tailored for diffusion MRI
tractography. This model leverages local anatomical characteristics and global
feature dependencies of white matter structures. The Graph CNN module captures
white matter geometry and grey matter connectivity to aggregate local features
from anatomically similar white matter connections, while the Transformer
module uses self-attention to enhance global information learning.
Additionally, TractGraphFormer includes an attention module for interpreting
predictive white matter connections. In sex prediction tests, TractGraphFormer
shows strong performance in large datasets of children (n=9345) and young
adults (n=1065). Overall, our approach suggests that widespread connections in
the WM are predictive of the sex of an individual, and consistent predictive
anatomical tracts are identified across the two datasets. The proposed approach
highlights the potential of integrating local anatomical information and global
feature dependencies to improve prediction performance in machine learning with
diffusion MRI tractography.
| [
{
"version": "v1",
"created": "Thu, 11 Jul 2024 22:14:57 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Chen",
"Yuqian",
""
],
[
"Zhang",
"Fan",
""
],
[
"Wang",
"Meng",
""
],
[
"Zekelman",
"Leo R.",
""
],
[
"Cetin-Karayumak",
"Suheyla",
""
],
[
"Xue",
"Tengfei",
""
],
[
"Zhang",
"Chaoyi",
""
],
[
"Song",
"Yang",
""
],
[
"Makris",
"Nikos",
""
],
[
"Rathi",
"Yogesh",
""
],
[
"Cai",
"Weidong",
""
],
[
"O'Donnell",
"Lauren J.",
""
]
]
| TITLE: TractGraphFormer: Anatomically Informed Hybrid Graph CNN-Transformer
Network for Classification from Diffusion MRI Tractography
ABSTRACT: The relationship between brain connections and non-imaging phenotypes is
increasingly studied using deep neural networks. However, the local and global
properties of the brain's white matter networks are often overlooked in
convolutional network design. We introduce TractGraphFormer, a hybrid Graph
CNN-Transformer deep learning framework tailored for diffusion MRI
tractography. This model leverages local anatomical characteristics and global
feature dependencies of white matter structures. The Graph CNN module captures
white matter geometry and grey matter connectivity to aggregate local features
from anatomically similar white matter connections, while the Transformer
module uses self-attention to enhance global information learning.
Additionally, TractGraphFormer includes an attention module for interpreting
predictive white matter connections. In sex prediction tests, TractGraphFormer
shows strong performance in large datasets of children (n=9345) and young
adults (n=1065). Overall, our approach suggests that widespread connections in
the WM are predictive of the sex of an individual, and consistent predictive
anatomical tracts are identified across the two datasets. The proposed approach
highlights the potential of integrating local anatomical information and global
feature dependencies to improve prediction performance in machine learning with
diffusion MRI tractography.
| no_new_dataset | 0.951142 |
2407.11496 | Xinyi Wang | Xinyi Wang, Angeliki Katsenou, and David Bull | ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing
Video Quality Assessment | 10 pages, 3 figures | null | null | null | eess.IV cs.CV cs.MM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the rapid growth of User-Generated Content (UGC) exchanged between users
and sharing platforms, the need for video quality assessment in the wild is
increasingly evident. UGC is typically acquired using consumer devices and
undergoes multiple rounds of compression (transcoding) before reaching the end
user. Therefore, traditional quality metrics that employ the original content
as a reference are not suitable. In this paper, we propose ReLaX-VQA, a novel
No-Reference Video Quality Assessment (NR-VQA) model that aims to address the
challenges of evaluating the quality of diverse video content without reference
to the original uncompressed videos. ReLaX-VQA uses frame differences to select
spatio-temporal fragments intelligently together with different expressions of
spatial features associated with the sampled frames. These are then used to
better capture spatial and temporal variabilities in the quality of
neighbouring frames. Furthermore, the model enhances abstraction by employing
layer-stacking techniques in deep neural network features from Residual
Networks and Vision Transformers. Extensive testing across four UGC datasets
demonstrates that ReLaX-VQA consistently outperforms existing NR-VQA methods,
achieving an average SRCC of 0.8658 and PLCC of 0.8873. Open-source code and
trained models that will facilitate further research and applications of NR-VQA
can be found at https://github.com/xinyiW915/ReLaX-VQA.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2024 08:33:55 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Mar 2025 17:37:47 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 18:07:16 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wang",
"Xinyi",
""
],
[
"Katsenou",
"Angeliki",
""
],
[
"Bull",
"David",
""
]
]
| TITLE: ReLaX-VQA: Residual Fragment and Layer Stack Extraction for Enhancing
Video Quality Assessment
ABSTRACT: With the rapid growth of User-Generated Content (UGC) exchanged between users
and sharing platforms, the need for video quality assessment in the wild is
increasingly evident. UGC is typically acquired using consumer devices and
undergoes multiple rounds of compression (transcoding) before reaching the end
user. Therefore, traditional quality metrics that employ the original content
as a reference are not suitable. In this paper, we propose ReLaX-VQA, a novel
No-Reference Video Quality Assessment (NR-VQA) model that aims to address the
challenges of evaluating the quality of diverse video content without reference
to the original uncompressed videos. ReLaX-VQA uses frame differences to select
spatio-temporal fragments intelligently together with different expressions of
spatial features associated with the sampled frames. These are then used to
better capture spatial and temporal variabilities in the quality of
neighbouring frames. Furthermore, the model enhances abstraction by employing
layer-stacking techniques in deep neural network features from Residual
Networks and Vision Transformers. Extensive testing across four UGC datasets
demonstrates that ReLaX-VQA consistently outperforms existing NR-VQA methods,
achieving an average SRCC of 0.8658 and PLCC of 0.8873. Open-source code and
trained models that will facilitate further research and applications of NR-VQA
can be found at https://github.com/xinyiW915/ReLaX-VQA.
| no_new_dataset | 0.943556 |
2408.10330 | Shyam K Sateesh | Athul Raimon, Shubha Masti, Shyam K Sateesh, Siyani Vengatagiri,
Bhaskarjyoti Das | Meta-Learning in Audio and Speech Processing: An End to End
Comprehensive Review | Survey Paper (15 pages, 1 figure) | null | 10.1007/978-981-96-0695-5_12 | null | cs.SD cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This survey overviews various meta-learning approaches used in audio and
speech processing scenarios. Meta-learning is used where model performance
needs to be maximized with minimum annotated samples, making it suitable for
low-sample audio processing. Although the field has made some significant
contributions, audio meta-learning still lacks the presence of comprehensive
survey papers. We present a systematic review of meta-learning methodologies in
audio processing. This includes audio-specific discussions on data
augmentation, feature extraction, preprocessing techniques, meta-learners, task
selection strategies and also presents important datasets in audio, together
with crucial real-world use cases. Through this extensive review, we aim to
provide valuable insights and identify future research directions in the
intersection of meta-learning and audio processing.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2024 18:11:59 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Raimon",
"Athul",
""
],
[
"Masti",
"Shubha",
""
],
[
"Sateesh",
"Shyam K",
""
],
[
"Vengatagiri",
"Siyani",
""
],
[
"Das",
"Bhaskarjyoti",
""
]
]
| TITLE: Meta-Learning in Audio and Speech Processing: An End to End
Comprehensive Review
ABSTRACT: This survey overviews various meta-learning approaches used in audio and
speech processing scenarios. Meta-learning is used where model performance
needs to be maximized with minimum annotated samples, making it suitable for
low-sample audio processing. Although the field has made some significant
contributions, audio meta-learning still lacks the presence of comprehensive
survey papers. We present a systematic review of meta-learning methodologies in
audio processing. This includes audio-specific discussions on data
augmentation, feature extraction, preprocessing techniques, meta-learners, task
selection strategies and also presents important datasets in audio, together
with crucial real-world use cases. Through this extensive review, we aim to
provide valuable insights and identify future research directions in the
intersection of meta-learning and audio processing.
| no_new_dataset | 0.946001 |
2408.13809 | Raz Lapid | Tal Alter, Raz Lapid and Moshe Sipper | On the Robustness of Kolmogorov-Arnold Networks: An Adversarial
Perspective | Accepted at TMLR 2025 | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach
to function approximation, demonstrating remarkable potential in various
domains. Despite their theoretical promise, the robustness of KANs under
adversarial conditions has yet to be thoroughly examined. In this paper we
explore the adversarial robustness of KANs, with a particular focus on image
classification tasks. We assess the performance of KANs against standard
white-box and black-box adversarial attacks, comparing their resilience to that of
established neural network architectures. Our experimental evaluation
encompasses a variety of standard image classification benchmark datasets and
investigates both fully connected and convolutional neural network
architectures, of three sizes: small, medium, and large. We conclude that
small- and medium-sized KANs (either fully connected or convolutional) are not
consistently more robust than their standard counterparts, but that large-sized
KANs are, by and large, more robust. This comprehensive evaluation of KANs in
adversarial scenarios offers the first in-depth analysis of KAN security,
laying the groundwork for future research in this emerging field.
| [
{
"version": "v1",
"created": "Sun, 25 Aug 2024 11:10:15 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 15:40:49 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 20:45:25 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Alter",
"Tal",
""
],
[
"Lapid",
"Raz",
""
],
[
"Sipper",
"Moshe",
""
]
]
| TITLE: On the Robustness of Kolmogorov-Arnold Networks: An Adversarial
Perspective
ABSTRACT: Kolmogorov-Arnold Networks (KANs) have recently emerged as a novel approach
to function approximation, demonstrating remarkable potential in various
domains. Despite their theoretical promise, the robustness of KANs under
adversarial conditions has yet to be thoroughly examined. In this paper we
explore the adversarial robustness of KANs, with a particular focus on image
classification tasks. We assess the performance of KANs against standard
white-box and black-box adversarial attacks, comparing their resilience to that of
established neural network architectures. Our experimental evaluation
encompasses a variety of standard image classification benchmark datasets and
investigates both fully connected and convolutional neural network
architectures, of three sizes: small, medium, and large. We conclude that
small- and medium-sized KANs (either fully connected or convolutional) are not
consistently more robust than their standard counterparts, but that large-sized
KANs are, by and large, more robust. This comprehensive evaluation of KANs in
adversarial scenarios offers the first in-depth analysis of KAN security,
laying the groundwork for future research in this emerging field.
| no_new_dataset | 0.949856 |
2409.04851 | Anjun Chen | Anjun Chen, Xiangyu Wang, Zhi Xu, Kun Shi, Yan Qin, Yuchi Huo, Jiming
Chen, Qi Ye | AdaptiveFusion: Adaptive Multi-Modal Multi-View Fusion for 3D Human Body
Reconstruction | TMM 2025, Project Page:
https://chen3110.github.io/adaptivefusion/index.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in sensor technology and deep learning have led to
significant progress in 3D human body reconstruction. However, most existing
approaches rely on data from a specific sensor, which can be unreliable due to
the inherent limitations of individual sensing modalities. Additionally,
existing multi-modal fusion methods generally require customized designs based
on the specific sensor combinations or setups, which limits the flexibility and
generality of these methods. Furthermore, conventional point-image
projection-based and Transformer-based fusion networks are susceptible to the
influence of noisy modalities and sensor poses. To address these limitations
and achieve robust 3D human body reconstruction in various conditions, we
propose AdaptiveFusion, a generic adaptive multi-modal multi-view fusion
framework that can effectively incorporate arbitrary combinations of
uncalibrated sensor inputs. By treating different modalities from various
viewpoints as equal tokens and leveraging the inherent flexibility of
Transformer models through our handcrafted modality sampling module,
AdaptiveFusion is
able to cope with arbitrary numbers of inputs and accommodate noisy modalities
with only a single training network. Extensive experiments on large-scale human
datasets demonstrate the effectiveness of AdaptiveFusion in achieving
high-quality 3D human body reconstruction in various environments. In addition,
our method achieves superior accuracy compared to state-of-the-art fusion
methods.
| [
{
"version": "v1",
"created": "Sat, 7 Sep 2024 15:06:30 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2024 03:40:35 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 06:24:50 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Chen",
"Anjun",
""
],
[
"Wang",
"Xiangyu",
""
],
[
"Xu",
"Zhi",
""
],
[
"Shi",
"Kun",
""
],
[
"Qin",
"Yan",
""
],
[
"Huo",
"Yuchi",
""
],
[
"Chen",
"Jiming",
""
],
[
"Ye",
"Qi",
""
]
]
| TITLE: AdaptiveFusion: Adaptive Multi-Modal Multi-View Fusion for 3D Human Body
Reconstruction
ABSTRACT: Recent advancements in sensor technology and deep learning have led to
significant progress in 3D human body reconstruction. However, most existing
approaches rely on data from a specific sensor, which can be unreliable due to
the inherent limitations of individual sensing modalities. Additionally,
existing multi-modal fusion methods generally require customized designs based
on the specific sensor combinations or setups, which limits the flexibility and
generality of these methods. Furthermore, conventional point-image
projection-based and Transformer-based fusion networks are susceptible to the
influence of noisy modalities and sensor poses. To address these limitations
and achieve robust 3D human body reconstruction in various conditions, we
propose AdaptiveFusion, a generic adaptive multi-modal multi-view fusion
framework that can effectively incorporate arbitrary combinations of
uncalibrated sensor inputs. By treating different modalities from various
viewpoints as equal tokens and leveraging the inherent flexibility of
Transformer models through our handcrafted modality sampling module,
AdaptiveFusion is
able to cope with arbitrary numbers of inputs and accommodate noisy modalities
with only a single training network. Extensive experiments on large-scale human
datasets demonstrate the effectiveness of AdaptiveFusion in achieving
high-quality 3D human body reconstruction in various environments. In addition,
our method achieves superior accuracy compared to state-of-the-art fusion
methods.
| no_new_dataset | 0.943764 |
2409.06214 | Kim Jaewoo | Jaewoo Kim, Uehwan Kim | Towards Generalizable Scene Change Detection | Camera-ready version. Accepted to CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While current state-of-the-art Scene Change Detection (SCD) approaches
achieve impressive results in well-trained research data, they become
unreliable under unseen environments and different temporal conditions;
in-domain performance drops from 77.6% to 8.0% in a previously unseen
environment and to 4.6% under a different temporal condition -- calling for
generalizable SCD and benchmark. In this work, we propose the Generalizable
Scene Change Detection Framework (GeSCF), which addresses unseen domain
performance and temporal consistency -- to meet the growing demand for anything
SCD. Our method leverages the pre-trained Segment Anything Model (SAM) in a
zero-shot manner. For this, we design Initial Pseudo-mask Generation and
Geometric-Semantic Mask Matching -- seamlessly turning user-guided prompt and
single-image based segmentation into scene change detection for a pair of
inputs without guidance. Furthermore, we define the Generalizable Scene Change
Detection (GeSCD) benchmark along with novel metrics and an evaluation protocol
to facilitate SCD research in generalizability. In the process, we introduce
the ChangeVPR dataset, a collection of challenging image pairs with diverse
environmental scenarios -- including urban, suburban, and rural settings.
Extensive experiments across various datasets demonstrate that GeSCF achieves
an average performance gain of 19.2% on existing SCD datasets and 30.0% on the
ChangeVPR dataset, nearly doubling the prior art performance. We believe our
work can lay a solid foundation for robust and generalizable SCD research.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2024 04:45:25 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 05:28:05 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 01:46:42 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 13:55:30 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Kim",
"Jaewoo",
""
],
[
"Kim",
"Uehwan",
""
]
]
| TITLE: Towards Generalizable Scene Change Detection
ABSTRACT: While current state-of-the-art Scene Change Detection (SCD) approaches
achieve impressive results in well-trained research data, they become
unreliable under unseen environments and different temporal conditions;
in-domain performance drops from 77.6% to 8.0% in a previously unseen
environment and to 4.6% under a different temporal condition -- calling for
generalizable SCD and benchmark. In this work, we propose the Generalizable
Scene Change Detection Framework (GeSCF), which addresses unseen domain
performance and temporal consistency -- to meet the growing demand for anything
SCD. Our method leverages the pre-trained Segment Anything Model (SAM) in a
zero-shot manner. For this, we design Initial Pseudo-mask Generation and
Geometric-Semantic Mask Matching -- seamlessly turning user-guided prompt and
single-image based segmentation into scene change detection for a pair of
inputs without guidance. Furthermore, we define the Generalizable Scene Change
Detection (GeSCD) benchmark along with novel metrics and an evaluation protocol
to facilitate SCD research in generalizability. In the process, we introduce
the ChangeVPR dataset, a collection of challenging image pairs with diverse
environmental scenarios -- including urban, suburban, and rural settings.
Extensive experiments across various datasets demonstrate that GeSCF achieves
an average performance gain of 19.2% on existing SCD datasets and 30.0% on the
ChangeVPR dataset, nearly doubling the prior art performance. We believe our
work can lay a solid foundation for robust and generalizable SCD research.
| new_dataset | 0.962321 |
2409.13191 | Lai Wei | Lai Wei, Zhen Ying, Muyang He, Yutong Chen, Qian Yang, Yanzhe Hong,
Jiaping Lu, Kaipeng Zheng, Shaoting Zhang, Xiaoying Li, Weiran Huang, Ying
Chen | Diabetica: Adapting Large Language Model to Enhance Multiple Medical
Tasks in Diabetes Care and Management | Accepted by ICLR 2025 SCI-FM workshop | null | null | null | cs.CL cs.AI cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Diabetes is a chronic disease with a significant global health burden,
requiring multi-stakeholder collaboration for optimal management. Large
language models (LLMs) have shown promise in various healthcare scenarios, but
their effectiveness across diverse diabetes tasks remains unproven. Our study
introduced a framework to train and validate diabetes-specific LLMs. We first
developed a comprehensive data processing pipeline that includes data
collection, filtering, augmentation and refinement. This created a
high-quality, diabetes-specific dataset and evaluation benchmarks from scratch.
Fine-tuned on the collected training dataset, our diabetes-specific LLM family
demonstrated state-of-the-art proficiency in processing various diabetes tasks
compared to other LLMs. Furthermore, clinical studies revealed the potential
applications of our models in diabetes care, including providing personalized
healthcare, assisting medical education, and streamlining clinical tasks.
Generally, our introduced framework helps develop diabetes-specific LLMs and
highlights their potential to enhance clinical practice and provide
personalized, data-driven support for diabetes management across different end
users. Our codes, benchmarks and models are available at
https://github.com/waltonfuture/Diabetica.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 03:47:54 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 13:20:17 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wei",
"Lai",
""
],
[
"Ying",
"Zhen",
""
],
[
"He",
"Muyang",
""
],
[
"Chen",
"Yutong",
""
],
[
"Yang",
"Qian",
""
],
[
"Hong",
"Yanzhe",
""
],
[
"Lu",
"Jiaping",
""
],
[
"Zheng",
"Kaipeng",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Li",
"Xiaoying",
""
],
[
"Huang",
"Weiran",
""
],
[
"Chen",
"Ying",
""
]
]
| TITLE: Diabetica: Adapting Large Language Model to Enhance Multiple Medical
Tasks in Diabetes Care and Management
ABSTRACT: Diabetes is a chronic disease with a significant global health burden,
requiring multi-stakeholder collaboration for optimal management. Large
language models (LLMs) have shown promise in various healthcare scenarios, but
their effectiveness across diverse diabetes tasks remains unproven. Our study
introduced a framework to train and validate diabetes-specific LLMs. We first
developed a comprehensive data processing pipeline that includes data
collection, filtering, augmentation and refinement. This created a
high-quality, diabetes-specific dataset and evaluation benchmarks from scratch.
Fine-tuned on the collected training dataset, our diabetes-specific LLM family
demonstrated state-of-the-art proficiency in processing various diabetes tasks
compared to other LLMs. Furthermore, clinical studies revealed the potential
applications of our models in diabetes care, including providing personalized
healthcare, assisting medical education, and streamlining clinical tasks.
Generally, our introduced framework helps develop diabetes-specific LLMs and
highlights their potential to enhance clinical practice and provide
personalized, data-driven support for diabetes management across different end
users. Our codes, benchmarks and models are available at
https://github.com/waltonfuture/Diabetica.
| new_dataset | 0.966379 |
2409.15250 | Sombit Dey | Sombit Dey, Jan-Nico Zaech, Nikolay Nikolov, Luc Van Gool, Danda Pani
Paudel | ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models | Accepted at ICRA-2025, Atlanta | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent progress in large language models and access to large-scale robotic
datasets has sparked a paradigm shift in robotics models transforming them into
generalists able to adapt to various tasks, scenes, and robot modalities. A
large step for the community is open Vision Language Action models which
showcase strong performance in a wide variety of tasks. In this work, we study
the visual generalization capabilities of three existing robotic foundation
models, and propose a corresponding evaluation framework.
Our study shows that the existing models do not exhibit robustness to visual
out-of-domain scenarios. This is potentially caused by limited variations in
the training data and/or catastrophic forgetting, leading to domain limitations
in the vision foundation models. We further explore OpenVLA, which uses two
pre-trained vision foundation models and is, therefore, expected to generalize
to out-of-domain experiments. However, we showcase catastrophic forgetting by
DINO-v2 in OpenVLA through its failure to fulfill the task of depth regression.
To overcome the aforementioned issue of visual catastrophic forgetting, we
propose a gradual backbone reversal approach founded on model merging. This
enables OpenVLA -- which requires the adaptation of the visual backbones during
initial training -- to regain its visual generalization ability. Regaining this
capability enables our ReVLA model to improve over OpenVLA by a factor of 77%
and 66% for grasping and lifting in visual OOD tasks.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 17:47:59 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 12:18:17 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Dey",
"Sombit",
""
],
[
"Zaech",
"Jan-Nico",
""
],
[
"Nikolov",
"Nikolay",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Paudel",
"Danda Pani",
""
]
]
| TITLE: ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models
ABSTRACT: Recent progress in large language models and access to large-scale robotic
datasets has sparked a paradigm shift in robotics models transforming them into
generalists able to adapt to various tasks, scenes, and robot modalities. A
large step for the community is open Vision Language Action models which
showcase strong performance in a wide variety of tasks. In this work, we study
the visual generalization capabilities of three existing robotic foundation
models, and propose a corresponding evaluation framework.
Our study shows that the existing models do not exhibit robustness to visual
out-of-domain scenarios. This is potentially caused by limited variations in
the training data and/or catastrophic forgetting, leading to domain limitations
in the vision foundation models. We further explore OpenVLA, which uses two
pre-trained vision foundation models and is, therefore, expected to generalize
to out-of-domain experiments. However, we showcase catastrophic forgetting by
DINO-v2 in OpenVLA through its failure to fulfill the task of depth regression.
To overcome the aforementioned issue of visual catastrophic forgetting, we
propose a gradual backbone reversal approach founded on model merging. This
enables OpenVLA which requires the adaptation of the visual backbones during
initial training -- to regain its visual generalization ability. Regaining this
capability enables our ReVLA model to improve over OpenVLA by a factor of 77%
and 66% for grasping and lifting in visual OOD tasks.
| no_new_dataset | 0.947381 |
2409.15658 | Siyuan Liu | Siyuan Liu, Jiawei Du, Sicheng Xiang, Zibo Wang and Dingsheng Luo | Long-horizon Embodied Planning with Implicit Logical Inference and
Hallucination Mitigation | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-horizon embodied planning underpins embodied AI. To accomplish
long-horizon tasks, one of the most feasible ways is to decompose abstract
instructions into a sequence of actionable steps. Foundation models still face
logical errors and hallucinations in long-horizon planning, unless provided
with highly relevant examples to the tasks. However, providing highly relevant
examples for any random task is impractical. Therefore, we present ReLEP, a
novel framework for Real-time Long-horizon Embodied Planning. ReLEP can
complete a wide range of long-horizon tasks without in-context examples by
learning implicit logical inference through fine-tuning. The fine-tuned large
vision-language model formulates plans as sequences of skill functions. These
functions are selected from a carefully designed skill library. ReLEP is also
equipped with a Memory module for plan and status recall, and a Robot
Configuration module for versatility across robot types. In addition, we
propose a data generation pipeline to tackle dataset scarcity. When
constructing the dataset, we considered the implicit logical relationships,
enabling the model to learn implicit logical relationships and dispel
hallucinations. Through comprehensive evaluations across various long-horizon
tasks, ReLEP demonstrates high success rates and compliance to execution even
on unseen tasks and outperforms state-of-the-art baseline methods.
| [
{
"version": "v1",
"created": "Tue, 24 Sep 2024 01:47:23 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 10:15:59 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Liu",
"Siyuan",
""
],
[
"Du",
"Jiawei",
""
],
[
"Xiang",
"Sicheng",
""
],
[
"Wang",
"Zibo",
""
],
[
"Luo",
"Dingsheng",
""
]
]
| TITLE: Long-horizon Embodied Planning with Implicit Logical Inference and
Hallucination Mitigation
ABSTRACT: Long-horizon embodied planning underpins embodied AI. To accomplish
long-horizon tasks, one of the most feasible ways is to decompose abstract
instructions into a sequence of actionable steps. Foundation models still face
logical errors and hallucinations in long-horizon planning, unless provided
with highly relevant examples to the tasks. However, providing highly relevant
examples for any random task is impractical. Therefore, we present ReLEP, a
novel framework for Real-time Long-horizon Embodied Planning. ReLEP can
complete a wide range of long-horizon tasks without in-context examples by
learning implicit logical inference through fine-tuning. The fine-tuned large
vision-language model formulates plans as sequences of skill functions. These
functions are selected from a carefully designed skill library. ReLEP is also
equipped with a Memory module for plan and status recall, and a Robot
Configuration module for versatility across robot types. In addition, we
propose a data generation pipeline to tackle dataset scarcity. When
constructing the dataset, we considered the implicit logical relationships,
enabling the model to learn implicit logical relationships and dispel
hallucinations. Through comprehensive evaluations across various long-horizon
tasks, ReLEP demonstrates high success rates and compliance to execution even
on unseen tasks and outperforms state-of-the-art baseline methods.
| no_new_dataset | 0.938067 |
2409.20560 | Jiachen Li | Xiaopan Zhang and Hao Qin and Fuquan Wang and Yue Dong and Jiachen Li | LaMMA-P: Generalizable Multi-Agent Long-Horizon Task Allocation and
Planning with LM-Driven PDDL Planner | IEEE Conference on Robotics and Automation (ICRA 2025); Project
website: https://lamma-p.github.io/ | null | null | null | cs.RO cs.AI cs.CV cs.LG cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language models (LMs) possess a strong capability to comprehend natural
language, making them effective in translating human instructions into detailed
plans for simple robot tasks. Nevertheless, it remains a significant challenge
to handle long-horizon tasks, especially in subtask identification and
allocation for cooperative heterogeneous robot teams. To address this issue, we
propose a Language Model-Driven Multi-Agent PDDL Planner (LaMMA-P), a novel
multi-agent task planning framework that achieves state-of-the-art performance
on long-horizon tasks. LaMMA-P integrates the strengths of the LMs' reasoning
capability and the traditional heuristic search planner to achieve a high
success rate and efficiency while demonstrating strong generalization across
tasks. Additionally, we create MAT-THOR, a comprehensive benchmark that
features household tasks with two different levels of complexity based on the
AI2-THOR environment. The experimental results demonstrate that LaMMA-P
achieves a 105% higher success rate and 36% higher efficiency than existing
LM-based multiagent planners. The experimental videos, code, datasets, and
detailed prompts used in each module can be found on the project website:
https://lamma-p.github.io.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 17:58:18 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:17:58 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zhang",
"Xiaopan",
""
],
[
"Qin",
"Hao",
""
],
[
"Wang",
"Fuquan",
""
],
[
"Dong",
"Yue",
""
],
[
"Li",
"Jiachen",
""
]
]
| TITLE: LaMMA-P: Generalizable Multi-Agent Long-Horizon Task Allocation and
Planning with LM-Driven PDDL Planner
ABSTRACT: Language models (LMs) possess a strong capability to comprehend natural
language, making them effective in translating human instructions into detailed
plans for simple robot tasks. Nevertheless, it remains a significant challenge
to handle long-horizon tasks, especially in subtask identification and
allocation for cooperative heterogeneous robot teams. To address this issue, we
propose a Language Model-Driven Multi-Agent PDDL Planner (LaMMA-P), a novel
multi-agent task planning framework that achieves state-of-the-art performance
on long-horizon tasks. LaMMA-P integrates the strengths of the LMs' reasoning
capability and the traditional heuristic search planner to achieve a high
success rate and efficiency while demonstrating strong generalization across
tasks. Additionally, we create MAT-THOR, a comprehensive benchmark that
features household tasks with two different levels of complexity based on the
AI2-THOR environment. The experimental results demonstrate that LaMMA-P
achieves a 105% higher success rate and 36% higher efficiency than existing
LM-based multiagent planners. The experimental videos, code, datasets, and
detailed prompts used in each module can be found on the project website:
https://lamma-p.github.io.
| no_new_dataset | 0.698304 |
2410.00263 | Kun Yuan | Kun Yuan, Vinkle Srivastav, Nassir Navab, Nicolas Padoy | Procedure-Aware Surgical Video-language Pretraining with Hierarchical
Knowledge Augmentation | Accepted at the 38th Conference on Neural Information Processing
Systems (NeurIPS 2024 Spotlight) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Surgical video-language pretraining (VLP) faces unique challenges due to the
knowledge domain gap and the scarcity of multi-modal data. This study aims to
bridge the gap by addressing issues regarding textual information loss in
surgical lecture videos and the spatial-temporal challenges of surgical VLP. We
propose a hierarchical knowledge augmentation approach and a novel
Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining
(PeskaVLP) framework to tackle these issues. The knowledge augmentation uses
large language models (LLM) for refining and enriching surgical concepts, thus
providing comprehensive language supervision and reducing the risk of
overfitting. PeskaVLP combines language supervision with visual
self-supervision, constructing hard negative samples and employing a Dynamic
Time Warping (DTW) based loss function to effectively comprehend the
cross-modal procedural alignment. Extensive experiments on multiple public
surgical scene understanding and cross-modal retrieval datasets show that our
proposed method significantly improves zero-shot transferring performance and
offers a generalist visual representation for further advancements in surgical
scene understanding. The code is available at
https://github.com/CAMMA-public/SurgVLP
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2024 22:21:05 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:21:36 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yuan",
"Kun",
""
],
[
"Srivastav",
"Vinkle",
""
],
[
"Navab",
"Nassir",
""
],
[
"Padoy",
"Nicolas",
""
]
]
| TITLE: Procedure-Aware Surgical Video-language Pretraining with Hierarchical
Knowledge Augmentation
ABSTRACT: Surgical video-language pretraining (VLP) faces unique challenges due to the
knowledge domain gap and the scarcity of multi-modal data. This study aims to
bridge the gap by addressing issues regarding textual information loss in
surgical lecture videos and the spatial-temporal challenges of surgical VLP. We
propose a hierarchical knowledge augmentation approach and a novel
Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining
(PeskaVLP) framework to tackle these issues. The knowledge augmentation uses
large language models (LLM) for refining and enriching surgical concepts, thus
providing comprehensive language supervision and reducing the risk of
overfitting. PeskaVLP combines language supervision with visual
self-supervision, constructing hard negative samples and employing a Dynamic
Time Warping (DTW) based loss function to effectively comprehend the
cross-modal procedural alignment. Extensive experiments on multiple public
surgical scene understanding and cross-modal retrieval datasets show that our
proposed method significantly improves zero-shot transferring performance and
offers a generalist visual representation for further advancements in surgical
scene understanding. The code is available at
https://github.com/CAMMA-public/SurgVLP
| no_new_dataset | 0.944177 |
2410.01727 | Yilmazcan Ozyurt | Yilmazcan Ozyurt, Stefan Feuerriegel, Mrinmaya Sachan | Automated Knowledge Concept Annotation and Question Representation
Learning for Knowledge Tracing | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge tracing (KT) is a popular approach for modeling students' learning
progress over time, which can enable more personalized and adaptive learning.
However, existing KT approaches face two major limitations: (1) they rely
heavily on expert-defined knowledge concepts (KCs) in questions, which is
time-consuming and prone to errors; and (2) KT methods tend to overlook the
semantics of both questions and the given KCs. In this work, we address these
challenges and present KCQRL, a framework for automated knowledge concept
annotation and question representation learning that can improve the
effectiveness of any existing KT model. First, we propose an automated KC
annotation process using large language models (LLMs), which generates question
solutions and then annotates KCs in each solution step of the questions.
Second, we introduce a contrastive learning approach to generate semantically
rich embeddings for questions and solution steps, aligning them with their
associated KCs via a tailored false negative elimination approach. These
embeddings can be readily integrated into existing KT models, replacing their
randomly initialized embeddings. We demonstrate the effectiveness of KCQRL
across 15 KT algorithms on two large real-world Math learning datasets, where
we achieve consistent performance improvements.
| [
{
"version": "v1",
"created": "Wed, 2 Oct 2024 16:37:19 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 13:09:14 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Ozyurt",
"Yilmazcan",
""
],
[
"Feuerriegel",
"Stefan",
""
],
[
"Sachan",
"Mrinmaya",
""
]
]
| TITLE: Automated Knowledge Concept Annotation and Question Representation
Learning for Knowledge Tracing
ABSTRACT: Knowledge tracing (KT) is a popular approach for modeling students' learning
progress over time, which can enable more personalized and adaptive learning.
However, existing KT approaches face two major limitations: (1) they rely
heavily on expert-defined knowledge concepts (KCs) in questions, which is
time-consuming and prone to errors; and (2) KT methods tend to overlook the
semantics of both questions and the given KCs. In this work, we address these
challenges and present KCQRL, a framework for automated knowledge concept
annotation and question representation learning that can improve the
effectiveness of any existing KT model. First, we propose an automated KC
annotation process using large language models (LLMs), which generates question
solutions and then annotates KCs in each solution step of the questions.
Second, we introduce a contrastive learning approach to generate semantically
rich embeddings for questions and solution steps, aligning them with their
associated KCs via a tailored false negative elimination approach. These
embeddings can be readily integrated into existing KT models, replacing their
randomly initialized embeddings. We demonstrate the effectiveness of KCQRL
across 15 KT algorithms on two large real-world Math learning datasets, where
we achieve consistent performance improvements.
| no_new_dataset | 0.947039 |
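The contrastive step described in the KCQRL abstract above lends itself to a short illustration. The sketch below is a minimal, hypothetical version of an InfoNCE loss with false-negative elimination: in-batch negatives that share a knowledge concept (KC) with the anchor are masked out of the denominator. Embedding shapes, the temperature, and the masking rule are assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_fn_elimination(q_emb, kc_emb, kc_ids, tau=0.07):
    """InfoNCE over question/KC pairs, masking in-batch false negatives."""
    # q_emb, kc_emb: (B, d) L2-normalized embeddings; kc_ids: (B,) integer KC labels
    sim = q_emb @ kc_emb.T / tau                              # (B, B) similarities
    B = sim.shape[0]
    same_kc = kc_ids.unsqueeze(0) == kc_ids.unsqueeze(1)      # pairs sharing a KC
    mask = same_kc & ~torch.eye(B, dtype=torch.bool)          # keep the diagonal positive
    sim = sim.masked_fill(mask, float("-inf"))                # drop false negatives
    return F.cross_entropy(sim, torch.arange(B))

if __name__ == "__main__":
    torch.manual_seed(0)
    q = F.normalize(torch.randn(8, 32), dim=-1)
    c = F.normalize(torch.randn(8, 32), dim=-1)
    ids = torch.tensor([0, 0, 1, 2, 2, 2, 3, 4])
    print(float(contrastive_loss_fn_elimination(q, c, ids)))
```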
2410.05116 | Shang-Fu Chen | Ayano Hiranaka, Shang-Fu Chen, Chieh-Hsin Lai, Dongjun Kim, Naoki
Murata, Takashi Shibuya, Wei-Hsiang Liao, Shao-Hua Sun, Yuki Mitsufuji | HERO: Human-Feedback Efficient Reinforcement Learning for Online
Diffusion Model Finetuning | Published in International Conference on Learning Representations
(ICLR) 2025 | null | null | null | cs.LG cs.AI cs.CV cs.HC | http://creativecommons.org/licenses/by/4.0/ | Controllable generation through Stable Diffusion (SD) fine-tuning aims to
improve fidelity, safety, and alignment with human guidance. Existing
reinforcement learning from human feedback methods usually rely on predefined
heuristic reward functions or pretrained reward models built on large-scale
datasets, limiting their applicability to scenarios where collecting such data
is costly or difficult. To effectively and efficiently utilize human feedback,
we develop a framework, HERO, which leverages online human feedback collected
on the fly during model learning. Specifically, HERO features two key
mechanisms: (1) Feedback-Aligned Representation Learning, an online training
method that captures human feedback and provides informative learning signals
for fine-tuning, and (2) Feedback-Guided Image Generation, which involves
generating images from SD's refined initialization samples, enabling faster
convergence towards the evaluator's intent. We demonstrate that HERO is 4x more
efficient in online feedback for body part anomaly correction compared to the
best existing method. Additionally, experiments show that HERO can effectively
handle tasks like reasoning, counting, personalization, and reducing NSFW
content with only 0.5K online feedback. The code and project page are available
at https://hero-dm.github.io/.
| [
{
"version": "v1",
"created": "Mon, 7 Oct 2024 15:12:01 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 17:11:55 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 08:12:07 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Hiranaka",
"Ayano",
""
],
[
"Chen",
"Shang-Fu",
""
],
[
"Lai",
"Chieh-Hsin",
""
],
[
"Kim",
"Dongjun",
""
],
[
"Murata",
"Naoki",
""
],
[
"Shibuya",
"Takashi",
""
],
[
"Liao",
"Wei-Hsiang",
""
],
[
"Sun",
"Shao-Hua",
""
],
[
"Mitsufuji",
"Yuki",
""
]
]
| TITLE: HERO: Human-Feedback Efficient Reinforcement Learning for Online
Diffusion Model Finetuning
ABSTRACT: Controllable generation through Stable Diffusion (SD) fine-tuning aims to
improve fidelity, safety, and alignment with human guidance. Existing
reinforcement learning from human feedback methods usually rely on predefined
heuristic reward functions or pretrained reward models built on large-scale
datasets, limiting their applicability to scenarios where collecting such data
is costly or difficult. To effectively and efficiently utilize human feedback,
we develop a framework, HERO, which leverages online human feedback collected
on the fly during model learning. Specifically, HERO features two key
mechanisms: (1) Feedback-Aligned Representation Learning, an online training
method that captures human feedback and provides informative learning signals
for fine-tuning, and (2) Feedback-Guided Image Generation, which involves
generating images from SD's refined initialization samples, enabling faster
convergence towards the evaluator's intent. We demonstrate that HERO is 4x more
efficient in online feedback for body part anomaly correction compared to the
best existing method. Additionally, experiments show that HERO can effectively
handle tasks like reasoning, counting, personalization, and reducing NSFW
content with only 0.5K online feedback. The code and project page are available
at https://hero-dm.github.io/.
| no_new_dataset | 0.946001 |
2410.05609 | Zhenyu Liao | Xiaoyi Mai and Zhenyu Liao | The Breakdown of Gaussian Universality in Classification of
High-dimensional Linear Factor Mixtures | 34 pages, 10 figures, accepted by ICLR 2025
(https://openreview.net/forum?id=UrKbn51HjA) | null | null | null | stat.ML cs.LG math.ST stat.TH | http://creativecommons.org/licenses/by/4.0/ | The assumption of Gaussian or Gaussian mixture data has been extensively
exploited in a long series of precise performance analyses of machine learning
(ML) methods, on large datasets having comparably numerous samples and
features. To relax this restrictive assumption, subsequent efforts have been
devoted to establishing "Gaussian equivalent principles" by studying scenarios of
Gaussian universality where the asymptotic performance of ML methods on
non-Gaussian data remains unchanged when replaced with Gaussian data having the
same mean and covariance. Beyond the realm of Gaussian universality, there are
few exact results on how the data distribution affects the learning
performance.
In this article, we provide a precise high-dimensional characterization of
empirical risk minimization, for classification under a general mixture data
setting of linear factor models that extends Gaussian mixtures. The Gaussian
universality is shown to break down under this setting, in the sense that the
asymptotic learning performance depends on the data distribution beyond the
class means and covariances. To clarify the limitations of Gaussian
universality in the classification of mixture data and to understand the impact
of its breakdown, we specify conditions for Gaussian universality and discuss
their implications for the choice of loss function.
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 01:45:37 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 01:00:33 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 08:01:35 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Mai",
"Xiaoyi",
""
],
[
"Liao",
"Zhenyu",
""
]
]
| TITLE: The Breakdown of Gaussian Universality in Classification of
High-dimensional Linear Factor Mixtures
ABSTRACT: The assumption of Gaussian or Gaussian mixture data has been extensively
exploited in a long series of precise performance analyses of machine learning
(ML) methods, on large datasets having comparably numerous samples and
features. To relax this restrictive assumption, subsequent efforts have been
devoted to establishing "Gaussian equivalent principles" by studying scenarios of
Gaussian universality where the asymptotic performance of ML methods on
non-Gaussian data remains unchanged when replaced with Gaussian data having the
same mean and covariance. Beyond the realm of Gaussian universality, there are
few exact results on how the data distribution affects the learning
performance.
In this article, we provide a precise high-dimensional characterization of
empirical risk minimization, for classification under a general mixture data
setting of linear factor models that extends Gaussian mixtures. The Gaussian
universality is shown to break down under this setting, in the sense that the
asymptotic learning performance depends on the data distribution beyond the
class means and covariances. To clarify the limitations of Gaussian
universality in the classification of mixture data and to understand the impact
of its breakdown, we specify conditions for Gaussian universality and discuss
their implications for the choice of loss function.
| no_new_dataset | 0.951639 |
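As a hands-on companion to the abstract above, the following rough sketch trains the same classifier on (i) a heavy-tailed linear-factor mixture and (ii) Gaussian data with matched class means and covariances, then compares test errors; agreement or disagreement of the two numbers is the universality question in miniature. The dimensions, factor distribution, and choice of logistic regression are arbitrary assumptions, not the paper's setting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sample_factor_mixture(n, d, mu, rng):
    z = rng.standard_t(df=3, size=(n, d)) / np.sqrt(3.0)  # heavy-tailed factors, unit variance
    y = rng.integers(0, 2, size=n)
    return z + np.where(y[:, None] == 1, mu, -mu), y

def sample_matched_gaussian(n, d, mu, rng):
    z = rng.standard_normal((n, d))                        # same first two moments, Gaussian
    y = rng.integers(0, 2, size=n)
    return z + np.where(y[:, None] == 1, mu, -mu), y

rng = np.random.default_rng(0)
d, n = 200, 400
mu = np.full(d, 1.5 / np.sqrt(d))
for name, sampler in [("factor mixture", sample_factor_mixture),
                      ("matched Gaussian", sample_matched_gaussian)]:
    Xtr, ytr = sampler(n, d, mu, rng)
    Xte, yte = sampler(5000, d, mu, rng)
    err = 1.0 - LogisticRegression(max_iter=2000).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: test error = {err:.3f}")
```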
2410.07388 | Qiheng Lu | Qiheng Lu, Nicholas D. Sidiropoulos, Aritra Konar | On Densest $k$-Subgraph Mining and Diagonal Loading | null | null | null | null | cs.SI cs.DS | http://creativecommons.org/licenses/by/4.0/ | The Densest $k$-Subgraph (D$k$S) problem aims to find a subgraph comprising
$k$ vertices with the maximum number of edges between them. A continuous
reformulation of the binary quadratic D$k$S problem is considered, which
incorporates a diagonal loading term. It is shown that this non-convex,
continuous relaxation is tight for a range of diagonal loading parameters, and
the impact of the diagonal loading parameter on the optimization landscape is
studied. On the algorithmic side, two projection-free algorithms are proposed
to tackle the relaxed problem, based on Frank-Wolfe and explicit constraint
parametrization, respectively. Experiments suggest that both algorithms have
merits relative to the state of the art, while the Frank-Wolfe-based algorithm
stands out in terms of subgraph density, computational complexity, and ability
to scale up to very large datasets.
| [
{
"version": "v1",
"created": "Wed, 9 Oct 2024 19:14:46 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 21:06:51 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Lu",
"Qiheng",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
],
[
"Konar",
"Aritra",
""
]
]
| TITLE: On Densest $k$-Subgraph Mining and Diagonal Loading
ABSTRACT: The Densest $k$-Subgraph (D$k$S) problem aims to find a subgraph comprising
$k$ vertices with the maximum number of edges between them. A continuous
reformulation of the binary quadratic D$k$S problem is considered, which
incorporates a diagonal loading term. It is shown that this non-convex,
continuous relaxation is tight for a range of diagonal loading parameters, and
the impact of the diagonal loading parameter on the optimization landscape is
studied. On the algorithmic side, two projection-free algorithms are proposed
to tackle the relaxed problem, based on Frank-Wolfe and explicit constraint
parametrization, respectively. Experiments suggest that both algorithms have
merits relative to the state of the art, while the Frank-Wolfe-based algorithm
stands out in terms of subgraph density, computational complexity, and ability
to scale up to very large datasets.
| no_new_dataset | 0.943867 |
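For a concrete picture of the relaxation in the abstract above, here is a small conditional-gradient (Frank-Wolfe-style) sketch for maximizing x^T(A + rho*I)x over {x in [0,1]^n : sum(x) = k}. The loading value, starting point, stopping rule, and rounding are assumptions; this is not the paper's algorithm.

```python
import numpy as np

def frank_wolfe_dks(A, k, rho=1.0, max_iters=100):
    """Conditional-gradient ascent for the diagonally loaded DkS relaxation."""
    n = A.shape[0]
    M = A + rho * np.eye(n)                        # diagonally loaded matrix
    x = np.full(n, k / n)                          # feasible interior start
    for _ in range(max_iters):
        grad = 2.0 * M @ x
        s = np.zeros(n)
        s[np.argpartition(-grad, k)[:k]] = 1.0     # LMO: best vertex with exactly k ones
        # The objective is convex, so on the segment [x, s] its maximum sits at an
        # endpoint; hop to s only if that strictly improves, otherwise stop.
        if s @ M @ s > x @ M @ x + 1e-12:
            x = s
        else:
            break
    return np.argsort(-x)[:k]                      # indices of the candidate subgraph

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = (rng.random((30, 30)) < 0.2).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # random undirected graph
    print("candidate dense subgraph:", sorted(frank_wolfe_dks(A, k=5).tolist()))
```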
2410.12854 | Weibin Liao | Weibin Liao, Xu Chu, Yasha Wang | TPO: Aligning Large Language Models with Multi-branch & Multi-step
Preference Trees | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the domain of complex reasoning tasks, such as mathematical reasoning,
recent advancements have proposed the use of Direct Preference Optimization
(DPO) to suppress output of dispreferred responses, thereby enhancing the
long-chain reasoning capabilities of large language models (LLMs). To this end,
these studies employed LLMs to generate preference trees via Tree-of-thoughts
(ToT) and sample the paired preference responses required by the DPO algorithm.
However, the DPO algorithm based on binary preference optimization is unable to
learn multiple responses with varying degrees of preference/dispreference that
provided by the preference trees, resulting in incomplete preference learning.
In this work, we introduce Tree Preference Optimization (TPO), that does not
sample paired preference responses from the preference tree; instead, it
directly learns from the entire preference tree during the fine-tuning.
Specifically, TPO formulates the language model alignment as a Preference List
Ranking problem, where the policy can potentially learn more effectively from a
ranked preference list of responses given the prompt. In addition, to further
assist LLMs in identifying discriminative steps within long-chain reasoning and
increase the relative reward margin in the preference list, TPO utilizes
Adaptive Step Reward to adjust the reward values of each step in trajectory for
performing fine-grained preference optimization. We carry out extensive
experiments on mathematical reasoning tasks to evaluate TPO. The experimental
results indicate that TPO consistently outperforms DPO across five public large
language models on four datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2024 22:22:05 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:40:44 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Liao",
"Weibin",
""
],
[
"Chu",
"Xu",
""
],
[
"Wang",
"Yasha",
""
]
]
| TITLE: TPO: Aligning Large Language Models with Multi-branch & Multi-step
Preference Trees
ABSTRACT: In the domain of complex reasoning tasks, such as mathematical reasoning,
recent advancements have proposed the use of Direct Preference Optimization
(DPO) to suppress output of dispreferred responses, thereby enhancing the
long-chain reasoning capabilities of large language models (LLMs). To this end,
these studies employed LLMs to generate preference trees via Tree-of-thoughts
(ToT) and sample the paired preference responses required by the DPO algorithm.
However, the DPO algorithm based on binary preference optimization is unable to
learn multiple responses with varying degrees of preference/dispreference that are
provided by the preference trees, resulting in incomplete preference learning.
In this work, we introduce Tree Preference Optimization (TPO), which does not
sample paired preference responses from the preference tree; instead, it
directly learns from the entire preference tree during the fine-tuning.
Specifically, TPO formulates the language model alignment as a Preference List
Ranking problem, where the policy can potentially learn more effectively from a
ranked preference list of responses given the prompt. In addition, to further
assist LLMs in identifying discriminative steps within long-chain reasoning and
increase the relative reward margin in the preference list, TPO utilizes
Adaptive Step Reward to adjust the reward values of each step in trajectory for
performing fine-grained preference optimization. We carry out extensive
experiments on mathematical reasoning tasks to evaluate TPO. The experimental
results indicate that TPO consistently outperforms DPO across five public large
language models on four datasets.
| no_new_dataset | 0.950365 |
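To make the "Preference List Ranking" view in the abstract above concrete, the snippet below scores a ranked list of responses with a Plackett-Luce (ListMLE-style) negative log-likelihood, one standard listwise ranking loss. The per-response scalar scores and the best-first ordering are assumptions, and this is not the TPO implementation (in particular, it omits the Adaptive Step Reward).

```python
import torch

def listwise_preference_loss(scores: torch.Tensor) -> torch.Tensor:
    """Plackett-Luce NLL of a list of response scores ordered best-first."""
    # scores: shape (m,), e.g. policy log-probabilities of m responses
    loss = torch.zeros((), dtype=scores.dtype)
    for i in range(scores.shape[0]):
        # -log P(response i is picked first among responses i..m-1)
        loss = loss - (scores[i] - torch.logsumexp(scores[i:], dim=0))
    return loss

if __name__ == "__main__":
    torch.manual_seed(0)
    s = torch.randn(4, requires_grad=True)   # 4 responses, already ranked best-first
    l = listwise_preference_loss(s)
    l.backward()                             # gradients push better responses up
    print(float(l), s.grad)
```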
2410.14211 | Xingyu Tan | Xingyu Tan, Xiaoyang Wang, Qing Liu, Xiwei Xu, Xin Yuan, Wenjie Zhang | Paths-over-Graph: Knowledge Graph Empowered Large Language Model
Reasoning | Accepted by The Web Conference 2025 (WWW, 2025) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have achieved impressive results in various
tasks but struggle with hallucination problems and lack of relevant knowledge,
especially in deep complex reasoning and knowledge-intensive tasks. Knowledge
Graphs (KGs), which capture vast amounts of facts in a structured format, offer
a reliable source of knowledge for reasoning. However, existing KG-based LLM
reasoning methods face challenges like handling multi-hop reasoning,
multi-entity questions, and effectively utilizing graph structures. To address
these issues, we propose Paths-over-Graph (PoG), a novel method that enhances
LLM reasoning by integrating knowledge reasoning paths from KGs, improving the
interpretability and faithfulness of LLM outputs. PoG tackles multi-hop and
multi-entity questions through a three-phase dynamic multi-hop path
exploration, which combines the inherent knowledge of LLMs with factual
knowledge from KGs. In order to improve the efficiency, PoG prunes irrelevant
information from the graph exploration first and introduces efficient
three-step pruning techniques that incorporate graph structures, LLM prompting,
and a pre-trained language model (e.g., SBERT) to effectively narrow down the
explored candidate paths. This ensures all reasoning paths contain highly
relevant information captured from KGs, making the reasoning faithful and
interpretable in problem-solving. PoG innovatively utilizes graph structure to
prune the irrelevant noise and represents the first method to implement
multi-entity deep path detection on KGs for LLM reasoning tasks. Comprehensive
experiments on five benchmark KGQA datasets demonstrate PoG outperforms the
state-of-the-art method ToG across GPT-3.5-Turbo and GPT-4, achieving an
average accuracy improvement of 18.9%. Notably, PoG with GPT-3.5-Turbo
surpasses ToG with GPT-4 by up to 23.9%.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 06:57:19 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Oct 2024 01:22:16 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2025 04:31:11 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Mar 2025 23:45:13 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Tan",
"Xingyu",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Liu",
"Qing",
""
],
[
"Xu",
"Xiwei",
""
],
[
"Yuan",
"Xin",
""
],
[
"Zhang",
"Wenjie",
""
]
]
| TITLE: Paths-over-Graph: Knowledge Graph Empowered Large Language Model
Reasoning
ABSTRACT: Large Language Models (LLMs) have achieved impressive results in various
tasks but struggle with hallucination problems and lack of relevant knowledge,
especially in deep complex reasoning and knowledge-intensive tasks. Knowledge
Graphs (KGs), which capture vast amounts of facts in a structured format, offer
a reliable source of knowledge for reasoning. However, existing KG-based LLM
reasoning methods face challenges like handling multi-hop reasoning,
multi-entity questions, and effectively utilizing graph structures. To address
these issues, we propose Paths-over-Graph (PoG), a novel method that enhances
LLM reasoning by integrating knowledge reasoning paths from KGs, improving the
interpretability and faithfulness of LLM outputs. PoG tackles multi-hop and
multi-entity questions through a three-phase dynamic multi-hop path
exploration, which combines the inherent knowledge of LLMs with factual
knowledge from KGs. In order to improve the efficiency, PoG prunes irrelevant
information from the graph exploration first and introduces efficient
three-step pruning techniques that incorporate graph structures, LLM prompting,
and a pre-trained language model (e.g., SBERT) to effectively narrow down the
explored candidate paths. This ensures all reasoning paths contain highly
relevant information captured from KGs, making the reasoning faithful and
interpretable in problem-solving. PoG innovatively utilizes graph structure to
prune the irrelevant noise and represents the first method to implement
multi-entity deep path detection on KGs for LLM reasoning tasks. Comprehensive
experiments on five benchmark KGQA datasets demonstrate PoG outperforms the
state-of-the-art method ToG across GPT-3.5-Turbo and GPT-4, achieving an
average accuracy improvement of 18.9%. Notably, PoG with GPT-3.5-Turbo
surpasses ToG with GPT-4 by up to 23.9%.
| no_new_dataset | 0.949949 |
2410.14993 | Hao Wu | Hao Wu, Donglin Bai, Shiqi Jiang, Qianxi Zhang, Yifan Yang, Xin Ding,
Ting Cao, Yunxin Liu, Fengyuan Xu | Making Every Frame Matter: Continuous Activity Recognition in Streaming
Video via Adaptive Video Context Modeling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video activity recognition has become increasingly important in robots and
embodied AI. Recognizing continuous video activities poses considerable
challenges due to the fast expansion of streaming video, which contains
multi-scale and untrimmed activities. We introduce a novel system, CARS, to
overcome these issues through adaptive video context modeling. Adaptive video
context modeling refers to selectively maintaining activity-related features in
temporal and spatial dimensions. CARS has two key designs. The first is an
activity spatial feature extraction by eliminating irrelevant visual features
while maintaining recognition accuracy. The second is an activity-aware state
update introducing dynamic adaptability to better preserve the video context
for multi-scale activity recognition. Our CARS runs at speeds $>$30 FPS on
typical edge devices and outperforms all baselines by 1.2\% to 79.7\% in
accuracy. Moreover, we explore applying CARS to a large video model as a video
encoder. Experimental results show that our CARS can result in a 0.46-point
enhancement (on a 5-point scale) on the in-distribution video activity dataset,
and an improvement ranging from 1.19\% to 4\% on zero-shot video activity
datasets.
| [
{
"version": "v1",
"created": "Sat, 19 Oct 2024 05:50:00 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:19:21 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wu",
"Hao",
""
],
[
"Bai",
"Donglin",
""
],
[
"Jiang",
"Shiqi",
""
],
[
"Zhang",
"Qianxi",
""
],
[
"Yang",
"Yifan",
""
],
[
"Ding",
"Xin",
""
],
[
"Cao",
"Ting",
""
],
[
"Liu",
"Yunxin",
""
],
[
"Xu",
"Fengyuan",
""
]
]
| TITLE: Making Every Frame Matter: Continuous Activity Recognition in Streaming
Video via Adaptive Video Context Modeling
ABSTRACT: Video activity recognition has become increasingly important in robots and
embodied AI. Recognizing continuous video activities poses considerable
challenges due to the fast expansion of streaming video, which contains
multi-scale and untrimmed activities. We introduce a novel system, CARS, to
overcome these issues through adaptive video context modeling. Adaptive video
context modeling refers to selectively maintaining activity-related features in
temporal and spatial dimensions. CARS has two key designs. The first is an
activity spatial feature extraction by eliminating irrelevant visual features
while maintaining recognition accuracy. The second is an activity-aware state
update introducing dynamic adaptability to better preserve the video context
for multi-scale activity recognition. Our CARS runs at speeds $>$30 FPS on
typical edge devices and outperforms all baselines by 1.2\% to 79.7\% in
accuracy. Moreover, we explore applying CARS to a large video model as a video
encoder. Experimental results show that our CARS can result in a 0.46-point
enhancement (on a 5-point scale) on the in-distribution video activity dataset,
and an improvement ranging from 1.19\% to 4\% on zero-shot video activity
datasets.
| no_new_dataset | 0.948537 |
2411.00113 | Brendan Ross | Brendan Leigh Ross, Hamidreza Kamkari, Tongzi Wu, Rasa Hosseinzadeh,
Zhaoyan Liu, George Stein, Jesse C. Cresswell, Gabriel Loaiza-Ganem | A Geometric Framework for Understanding Memorization in Generative
Models | Accepted to ICLR 2025 (Spotlight) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As deep generative models have progressed, recent work has shown them to be
capable of memorizing and reproducing training datapoints when deployed. These
findings call into question the usability of generative models, especially in
light of the legal and privacy risks brought about by memorization. To better
understand this phenomenon, we propose the manifold memorization hypothesis
(MMH), a geometric framework which leverages the manifold hypothesis into a
clear language in which to reason about memorization. We propose to analyze
memorization in terms of the relationship between the dimensionalities of (i)
the ground truth data manifold and (ii) the manifold learned by the model. This
framework provides a formal standard for "how memorized" a datapoint is and
systematically categorizes memorized data into two types: memorization driven
by overfitting and memorization driven by the underlying data distribution. By
analyzing prior work in the context of the MMH, we explain and unify assorted
observations in the literature. We empirically validate the MMH using synthetic
data and image datasets up to the scale of Stable Diffusion, developing new
tools for detecting and preventing generation of memorized samples in the
process.
| [
{
"version": "v1",
"created": "Thu, 31 Oct 2024 18:09:01 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 18:00:00 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Ross",
"Brendan Leigh",
""
],
[
"Kamkari",
"Hamidreza",
""
],
[
"Wu",
"Tongzi",
""
],
[
"Hosseinzadeh",
"Rasa",
""
],
[
"Liu",
"Zhaoyan",
""
],
[
"Stein",
"George",
""
],
[
"Cresswell",
"Jesse C.",
""
],
[
"Loaiza-Ganem",
"Gabriel",
""
]
]
| TITLE: A Geometric Framework for Understanding Memorization in Generative
Models
ABSTRACT: As deep generative models have progressed, recent work has shown them to be
capable of memorizing and reproducing training datapoints when deployed. These
findings call into question the usability of generative models, especially in
light of the legal and privacy risks brought about by memorization. To better
understand this phenomenon, we propose the manifold memorization hypothesis
(MMH), a geometric framework which leverages the manifold hypothesis into a
clear language in which to reason about memorization. We propose to analyze
memorization in terms of the relationship between the dimensionalities of (i)
the ground truth data manifold and (ii) the manifold learned by the model. This
framework provides a formal standard for "how memorized" a datapoint is and
systematically categorizes memorized data into two types: memorization driven
by overfitting and memorization driven by the underlying data distribution. By
analyzing prior work in the context of the MMH, we explain and unify assorted
observations in the literature. We empirically validate the MMH using synthetic
data and image datasets up to the scale of Stable Diffusion, developing new
tools for detecting and preventing generation of memorized samples in the
process.
| no_new_dataset | 0.946151 |
2411.04954 | Jingwei Xu | Jingwei Xu, Zibo Zhao, Chenyu Wang, Wen Liu, Yi Ma, Shenghua Gao | CAD-MLLM: Unifying Multimodality-Conditioned CAD Generation With MLLM | Project page: https://cad-mllm.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper aims to design a unified Computer-Aided Design (CAD) generation
system that can easily generate CAD models based on the user's inputs in the
form of textual description, images, point clouds, or even a combination of
them. Towards this goal, we introduce the CAD-MLLM, the first system capable of
generating parametric CAD models conditioned on the multimodal input.
Specifically, within the CAD-MLLM framework, we leverage the command sequences
of CAD models and then employ advanced large language models (LLMs) to align
the feature space across these diverse multimodal data and CAD models'
vectorized representations. To facilitate the model training, we design a
comprehensive data construction and annotation pipeline that equips each CAD
model with corresponding multimodal data. Our resulting dataset, named
Omni-CAD, is the first multimodal CAD dataset that contains textual
description, multi-view images, points, and command sequence for each CAD
model. It contains approximately 450K instances and their CAD construction
sequences. To thoroughly evaluate the quality of our generated CAD models, we
go beyond current evaluation metrics that focus on reconstruction quality by
introducing additional metrics that assess topology quality and surface
enclosure extent. Extensive experimental results demonstrate that CAD-MLLM
significantly outperforms existing conditional generative methods and remains
highly robust to noises and missing points. The project page and more
visualizations can be found at: https://cad-mllm.github.io/
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2024 18:31:08 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:11:16 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Xu",
"Jingwei",
""
],
[
"Zhao",
"Zibo",
""
],
[
"Wang",
"Chenyu",
""
],
[
"Liu",
"Wen",
""
],
[
"Ma",
"Yi",
""
],
[
"Gao",
"Shenghua",
""
]
]
| TITLE: CAD-MLLM: Unifying Multimodality-Conditioned CAD Generation With MLLM
ABSTRACT: This paper aims to design a unified Computer-Aided Design (CAD) generation
system that can easily generate CAD models based on the user's inputs in the
form of textual description, images, point clouds, or even a combination of
them. Towards this goal, we introduce the CAD-MLLM, the first system capable of
generating parametric CAD models conditioned on the multimodal input.
Specifically, within the CAD-MLLM framework, we leverage the command sequences
of CAD models and then employ advanced large language models (LLMs) to align
the feature space across these diverse multimodal data and CAD models'
vectorized representations. To facilitate the model training, we design a
comprehensive data construction and annotation pipeline that equips each CAD
model with corresponding multimodal data. Our resulting dataset, named
Omni-CAD, is the first multimodal CAD dataset that contains textual
description, multi-view images, points, and command sequence for each CAD
model. It contains approximately 450K instances and their CAD construction
sequences. To thoroughly evaluate the quality of our generated CAD models, we
go beyond current evaluation metrics that focus on reconstruction quality by
introducing additional metrics that assess topology quality and surface
enclosure extent. Extensive experimental results demonstrate that CAD-MLLM
significantly outperforms existing conditional generative methods and remains
highly robust to noises and missing points. The project page and more
visualizations can be found at: https://cad-mllm.github.io/
| new_dataset | 0.964822 |
2411.05039 | Subhankar Maity | Aniket Deroy, Subhankar Maity | YouTube Comments Decoded: Leveraging LLMs for Low Resource Language
Classification | Updated and Final Version | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sarcasm detection is a significant challenge in sentiment analysis,
particularly due to its nature of conveying opinions where the intended meaning
deviates from the literal expression. This challenge is heightened in social
media contexts where code-mixing, especially in Dravidian languages, is
prevalent. Code-mixing involves the blending of multiple languages within a
single utterance, often with non-native scripts, complicating the task for
systems trained on monolingual data. This shared task introduces a novel gold
standard corpus designed for sarcasm and sentiment detection within code-mixed
texts, specifically in Tamil-English and Malayalam-English languages. The
primary objective of this task is to identify sarcasm and sentiment polarity
within a code-mixed dataset of Tamil-English and Malayalam-English comments and
posts collected from social media platforms. Each comment or post is annotated
at the message level for sentiment polarity, with particular attention to the
challenges posed by class imbalance, reflecting real-world scenarios. In this
work, we experiment with state-of-the-art large language models like GPT-3.5
Turbo via prompting to classify comments into sarcastic or non-sarcastic
categories. We obtained a macro-F1 score of 0.61 for the Tamil language. We
obtained a macro-F1 score of 0.50 for the Malayalam language.
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 17:58:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 16:17:21 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Deroy",
"Aniket",
""
],
[
"Maity",
"Subhankar",
""
]
]
| TITLE: YouTube Comments Decoded: Leveraging LLMs for Low Resource Language
Classification
ABSTRACT: Sarcasm detection is a significant challenge in sentiment analysis,
particularly due to its nature of conveying opinions where the intended meaning
deviates from the literal expression. This challenge is heightened in social
media contexts where code-mixing, especially in Dravidian languages, is
prevalent. Code-mixing involves the blending of multiple languages within a
single utterance, often with non-native scripts, complicating the task for
systems trained on monolingual data. This shared task introduces a novel gold
standard corpus designed for sarcasm and sentiment detection within code-mixed
texts, specifically in Tamil-English and Malayalam-English languages. The
primary objective of this task is to identify sarcasm and sentiment polarity
within a code-mixed dataset of Tamil-English and Malayalam-English comments and
posts collected from social media platforms. Each comment or post is annotated
at the message level for sentiment polarity, with particular attention to the
challenges posed by class imbalance, reflecting real-world scenarios. In this
work, we experiment with state-of-the-art large language models like GPT-3.5
Turbo via prompting to classify comments into sarcastic or non-sarcastic
categories. We obtained a macro-F1 score of 0.61 for the Tamil language. We
obtained a macro-F1 score of 0.50 for the Malayalam language.
| new_dataset | 0.969324 |
2411.11282 | Yucong Meng | Yucong Meng, Zhiwei Yang, Minghong Duan, Yonghong Shi, Zhijian Song | Continuous K-space Recovery Network with Image Guidance for Fast MRI
Reconstruction | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic resonance imaging (MRI) is a crucial tool for clinical diagnosis
while facing the challenge of long scanning time. To reduce the acquisition
time, fast MRI reconstruction aims to restore high-quality images from the
undersampled k-space. Existing methods typically train deep learning models to
map the undersampled data to artifact-free MRI images. However, these studies
often overlook the unique properties of k-space and directly apply general
networks designed for image processing to k-space recovery, leaving the precise
learning of k-space largely underexplored. In this work, we propose a
continuous k-space recovery network from a new perspective of implicit neural
representation with image domain guidance, which boosts the performance of MRI
reconstruction. Specifically, (1) an implicit neural representation based
encoder-decoder structure is customized to continuously query unsampled
k-values. (2) an image guidance module is designed to mine the semantic
information from the low-quality MRI images to further guide the k-space
recovery. (3) a multi-stage training strategy is proposed to recover dense
k-space progressively. Extensive experiments conducted on CC359, fastMRI, and
IXI datasets demonstrate the effectiveness of our method and its superiority
over other competitors.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2024 04:54:04 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 12:40:10 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Meng",
"Yucong",
""
],
[
"Yang",
"Zhiwei",
""
],
[
"Duan",
"Minghong",
""
],
[
"Shi",
"Yonghong",
""
],
[
"Song",
"Zhijian",
""
]
]
| TITLE: Continuous K-space Recovery Network with Image Guidance for Fast MRI
Reconstruction
ABSTRACT: Magnetic resonance imaging (MRI) is a crucial tool for clinical diagnosis
while facing the challenge of long scanning time. To reduce the acquisition
time, fast MRI reconstruction aims to restore high-quality images from the
undersampled k-space. Existing methods typically train deep learning models to
map the undersampled data to artifact-free MRI images. However, these studies
often overlook the unique properties of k-space and directly apply general
networks designed for image processing to k-space recovery, leaving the precise
learning of k-space largely underexplored. In this work, we propose a
continuous k-space recovery network from a new perspective of implicit neural
representation with image domain guidance, which boosts the performance of MRI
reconstruction. Specifically, (1) an implicit neural representation based
encoder-decoder structure is customized to continuously query unsampled
k-values. (2) an image guidance module is designed to mine the semantic
information from the low-quality MRI images to further guide the k-space
recovery. (3) a multi-stage training strategy is proposed to recover dense
k-space progressively. Extensive experiments conducted on CC359, fastMRI, and
IXI datasets demonstrate the effectiveness of our method and its superiority
over other competitors.
| no_new_dataset | 0.947039 |
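A toy version of the central idea in the abstract above, an implicit neural representation queried at continuous k-space coordinates, is sketched below: an MLP with random Fourier features maps (kx, ky) to a complex k-space value, so unsampled locations can be queried after fitting. The architecture, feature scale, and training loop are assumptions and do not include the paper's image guidance or multi-stage training.

```python
import torch
import torch.nn as nn

class KSpaceINR(nn.Module):
    def __init__(self, n_freqs=32, hidden=128):
        super().__init__()
        self.register_buffer("freqs", torch.randn(2, n_freqs) * 10.0)  # random Fourier features
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                  # real and imaginary parts
        )

    def forward(self, coords):                     # coords: (N, 2) in [-1, 1]
        proj = coords @ self.freqs
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        out = self.mlp(feats)
        return torch.complex(out[..., 0], out[..., 1])

if __name__ == "__main__":
    torch.manual_seed(0)
    model = KSpaceINR()
    coords = torch.rand(1024, 2) * 2 - 1           # sampled k-space locations
    target = torch.randn(1024, dtype=torch.cfloat) # their measured complex values
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(50):                            # fit the sampled locations
        loss = (model(coords) - target).abs().pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    dense = model(torch.rand(4096, 2) * 2 - 1)     # query unsampled coordinates
    print(float(loss), dense.shape)
```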
2411.13317 | Sivan Doveh | Sivan Doveh, Nimrod Shabtay, Wei Lin, Eli Schwartz, Hilde Kuehne, Raja
Giryes, Rogerio Feris, Leonid Karlinsky, James Glass, Assaf Arbelle, Shimon
Ullman, M. Jehanzeb Mirza | Teaching VLMs to Localize Specific Objects from In-context Examples | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) have shown remarkable capabilities across
diverse visual tasks, including image recognition, video understanding, and
Visual Question Answering (VQA) when explicitly trained for these tasks.
Despite these advances, we find that present-day VLMs (including the
proprietary GPT-4o) lack a fundamental cognitive ability: learning to localize
specific objects in a scene by taking into account the context. In this work,
we focus on the task of few-shot personalized localization, where a model is
given a small set of annotated images (in-context examples) -- each with a
category label and bounding box -- and is tasked with localizing the same
object type in a query image. Personalized localization can be particularly
important in cases of ambiguity of several related objects that can respond to
a text or an object that is hard to describe with words. To provoke
personalized localization abilities in models, we present a data-centric
solution that fine-tunes them using carefully curated data from video object
tracking datasets. By leveraging sequences of frames tracking the same object
across multiple shots, we simulate instruction-tuning dialogues that promote
context awareness. To reinforce this, we introduce a novel regularization
technique that replaces object labels with pseudo-names, ensuring the model
relies on visual context rather than prior knowledge. Our method significantly
enhances the few-shot localization performance of recent VLMs ranging from 7B
to 72B in size, without sacrificing generalization, as demonstrated on several
benchmarks tailored towards evaluating personalized localization abilities.
This work is the first to explore and benchmark personalized few-shot
localization for VLMs -- exposing critical weaknesses in present-day VLMs, and
laying a foundation for future research in context-driven vision-language
applications.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2024 13:34:22 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 19:43:14 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Doveh",
"Sivan",
""
],
[
"Shabtay",
"Nimrod",
""
],
[
"Lin",
"Wei",
""
],
[
"Schwartz",
"Eli",
""
],
[
"Kuehne",
"Hilde",
""
],
[
"Giryes",
"Raja",
""
],
[
"Feris",
"Rogerio",
""
],
[
"Karlinsky",
"Leonid",
""
],
[
"Glass",
"James",
""
],
[
"Arbelle",
"Assaf",
""
],
[
"Ullman",
"Shimon",
""
],
[
"Mirza",
"M. Jehanzeb",
""
]
]
| TITLE: Teaching VLMs to Localize Specific Objects from In-context Examples
ABSTRACT: Vision-Language Models (VLMs) have shown remarkable capabilities across
diverse visual tasks, including image recognition, video understanding, and
Visual Question Answering (VQA) when explicitly trained for these tasks.
Despite these advances, we find that present-day VLMs (including the
proprietary GPT-4o) lack a fundamental cognitive ability: learning to localize
specific objects in a scene by taking into account the context. In this work,
we focus on the task of few-shot personalized localization, where a model is
given a small set of annotated images (in-context examples) -- each with a
category label and bounding box -- and is tasked with localizing the same
object type in a query image. Personalized localization can be particularly
important in cases of ambiguity of several related objects that can respond to
a text or an object that is hard to describe with words. To provoke
personalized localization abilities in models, we present a data-centric
solution that fine-tunes them using carefully curated data from video object
tracking datasets. By leveraging sequences of frames tracking the same object
across multiple shots, we simulate instruction-tuning dialogues that promote
context awareness. To reinforce this, we introduce a novel regularization
technique that replaces object labels with pseudo-names, ensuring the model
relies on visual context rather than prior knowledge. Our method significantly
enhances the few-shot localization performance of recent VLMs ranging from 7B
to 72B in size, without sacrificing generalization, as demonstrated on several
benchmarks tailored towards evaluating personalized localization abilities.
This work is the first to explore and benchmark personalized few-shot
localization for VLMs -- exposing critical weaknesses in present-day VLMs, and
laying a foundation for future research in context-driven vision-language
applications.
| no_new_dataset | 0.947672 |
2411.14961 | Jieming Bian | Jieming Bian, Lei Wang, Letian Zhang, Jie Xu | LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and
Initialization Refinement | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Foundation models (FMs) achieve strong performance across diverse tasks with
task-specific fine-tuning, yet full parameter fine-tuning is often
computationally prohibitive for large models. Parameter-efficient fine-tuning
(PEFT) methods like Low-Rank Adaptation (LoRA) reduce this cost by introducing
low-rank matrices for tuning fewer parameters. While LoRA allows for efficient
fine-tuning, it requires significant data for adaptation, making Federated
Learning (FL) an appealing solution due to its privacy-preserving collaborative
framework. However, combining LoRA with FL introduces two key challenges: the
\textbf{Server-Side Aggregation Bias}, where server-side averaging of LoRA
matrices diverges from the ideal global update, and the \textbf{Client-Side
Initialization Lag}, emphasizing the need for consistent initialization across
rounds. Existing approaches address these challenges individually, limiting
their effectiveness. We propose LoRA-FAIR, a novel method that tackles both
issues by introducing a correction term on the server, enhancing aggregation
efficiency and accuracy. LoRA-FAIR maintains computational and communication
efficiency, yielding superior performance over state-of-the-art methods.
Experimental results on ViT and MLP-Mixer models across large-scale datasets
demonstrate that LoRA-FAIR consistently achieves performance improvements in FL
settings.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 14:19:01 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 19:43:25 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Bian",
"Jieming",
""
],
[
"Wang",
"Lei",
""
],
[
"Zhang",
"Letian",
""
],
[
"Xu",
"Jie",
""
]
]
| TITLE: LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and
Initialization Refinement
ABSTRACT: Foundation models (FMs) achieve strong performance across diverse tasks with
task-specific fine-tuning, yet full parameter fine-tuning is often
computationally prohibitive for large models. Parameter-efficient fine-tuning
(PEFT) methods like Low-Rank Adaptation (LoRA) reduce this cost by introducing
low-rank matrices for tuning fewer parameters. While LoRA allows for efficient
fine-tuning, it requires significant data for adaptation, making Federated
Learning (FL) an appealing solution due to its privacy-preserving collaborative
framework. However, combining LoRA with FL introduces two key challenges: the
\textbf{Server-Side Aggregation Bias}, where server-side averaging of LoRA
matrices diverges from the ideal global update, and the \textbf{Client-Side
Initialization Lag}, emphasizing the need for consistent initialization across
rounds. Existing approaches address these challenges individually, limiting
their effectiveness. We propose LoRA-FAIR, a novel method that tackles both
issues by introducing a correction term on the server, enhancing aggregation
efficiency and accuracy. LoRA-FAIR maintains computational and communication
efficiency, yielding superior performance over state-of-the-art methods.
Experimental results on ViT and MLP-Mixer models across large-scale datasets
demonstrate that LoRA-FAIR consistently achieves performance improvements in FL
settings.
| no_new_dataset | 0.948965 |
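The "Server-Side Aggregation Bias" named in the abstract above is easy to see numerically: averaging LoRA factors A_i and B_i separately is generally not the same as averaging the effective updates B_i A_i. The toy check below only illustrates that gap; the shapes and client count are arbitrary, and it implements neither FedAvg nor LoRA-FAIR's correction term.

```python
import numpy as np

rng = np.random.default_rng(0)
# Five clients, each with LoRA factors B_i (d_out x r) and A_i (r x d_in)
clients = [(rng.standard_normal((64, 4)), rng.standard_normal((4, 128)))
           for _ in range(5)]

avg_B = np.mean([B for B, _ in clients], axis=0)
avg_A = np.mean([A for _, A in clients], axis=0)
naive_update = avg_B @ avg_A                                  # what per-factor averaging applies
ideal_update = np.mean([B @ A for B, A in clients], axis=0)   # average of effective updates

gap = np.linalg.norm(naive_update - ideal_update) / np.linalg.norm(ideal_update)
print(f"relative aggregation bias: {gap:.3f}")                # clearly nonzero
```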
2411.16816 | Georg Hess | Georg Hess, Carl Lindstr\"om, Maryam Fatemi, Christoffer Petersson,
Lennart Svensson | SplatAD: Real-Time Lidar and Camera Rendering with 3D Gaussian Splatting
for Autonomous Driving | null | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by-sa/4.0/ | Ensuring the safety of autonomous robots, such as self-driving vehicles,
requires extensive testing across diverse driving scenarios. Simulation is a
key ingredient for conducting such testing in a cost-effective and scalable
way. Neural rendering methods have gained popularity, as they can build
simulation environments from collected logs in a data-driven manner. However,
existing neural radiance field (NeRF) methods for sensor-realistic rendering of
camera and lidar data suffer from low rendering speeds, limiting their
applicability for large-scale testing. While 3D Gaussian Splatting (3DGS)
enables real-time rendering, current methods are limited to camera data and are
unable to render lidar data essential for autonomous driving. To address these
limitations, we propose SplatAD, the first 3DGS-based method for realistic,
real-time rendering of dynamic scenes for both camera and lidar data. SplatAD
accurately models key sensor-specific phenomena such as rolling shutter
effects, lidar intensity, and lidar ray dropouts, using purpose-built
algorithms to optimize rendering efficiency. Evaluation across three autonomous
driving datasets demonstrates that SplatAD achieves state-of-the-art rendering
quality with up to +2 PSNR for NVS and +3 PSNR for reconstruction while
increasing rendering speed over NeRF-based methods by an order of magnitude.
See https://research.zenseact.com/publications/splatad/ for our project page.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 16:18:22 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 08:51:12 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 14:41:47 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Hess",
"Georg",
""
],
[
"Lindström",
"Carl",
""
],
[
"Fatemi",
"Maryam",
""
],
[
"Petersson",
"Christoffer",
""
],
[
"Svensson",
"Lennart",
""
]
]
| TITLE: SplatAD: Real-Time Lidar and Camera Rendering with 3D Gaussian Splatting
for Autonomous Driving
ABSTRACT: Ensuring the safety of autonomous robots, such as self-driving vehicles,
requires extensive testing across diverse driving scenarios. Simulation is a
key ingredient for conducting such testing in a cost-effective and scalable
way. Neural rendering methods have gained popularity, as they can build
simulation environments from collected logs in a data-driven manner. However,
existing neural radiance field (NeRF) methods for sensor-realistic rendering of
camera and lidar data suffer from low rendering speeds, limiting their
applicability for large-scale testing. While 3D Gaussian Splatting (3DGS)
enables real-time rendering, current methods are limited to camera data and are
unable to render lidar data essential for autonomous driving. To address these
limitations, we propose SplatAD, the first 3DGS-based method for realistic,
real-time rendering of dynamic scenes for both camera and lidar data. SplatAD
accurately models key sensor-specific phenomena such as rolling shutter
effects, lidar intensity, and lidar ray dropouts, using purpose-built
algorithms to optimize rendering efficiency. Evaluation across three autonomous
driving datasets demonstrates that SplatAD achieves state-of-the-art rendering
quality with up to +2 PSNR for NVS and +3 PSNR for reconstruction while
increasing rendering speed over NeRF-based methods by an order of magnitude.
See https://research.zenseact.com/publications/splatad/ for our project page.
| no_new_dataset | 0.950503 |
2411.17274 | Yikun Li | Yikun Li, Ting Zhang, Ratnadira Widyasari, Yan Naing Tun, Huu Hung
Nguyen, Tan Bui, Ivana Clairine Irsan, Yiran Cheng, Xiang Lan, Han Wei Ang,
Frank Liauw, Martin Weyssow, Hong Jin Kang, Eng Lieh Ouh, Lwin Khin Shar,
David Lo | CleanVul: Automatic Function-Level Vulnerability Detection in Code
Commits Using LLM Heuristics | null | null | null | null | cs.SE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate identification of software vulnerabilities is crucial for system
integrity. Vulnerability datasets, often derived from the National
Vulnerability Database (NVD) or directly from GitHub, are essential for
training machine learning models to detect these security flaws. However, these
datasets frequently suffer from significant noise, typically 40% to 75%, due
primarily to the automatic and indiscriminate labeling of all changes in
vulnerability-fixing commits (VFCs) as vulnerability-related. This
misclassification occurs because not all changes in a commit aimed at fixing
vulnerabilities pertain to security threats; many are routine updates like bug
fixes or test improvements.
This paper introduces the first methodology that uses the Large Language
Model (LLM) with a heuristic enhancement to automatically identify
vulnerability-fixing changes from VFCs, achieving an F1-score of 0.82.
VulSifter was applied to a large-scale study, where we conducted a crawl of
127,063 repositories on GitHub, resulting in the acquisition of 5,352,105
commits. VulSifter involves utilizing an LLM to comprehend code semantics and
contextual information, while applying heuristics to filter out unrelated
changes. We then developed CleanVul, a high-quality dataset comprising 8,203
functions using our LLM heuristic enhancement approach, demonstrating
Correctness (90.6%) comparable to established datasets such as SVEN and
PrimeVul.
To evaluate the CleanVul dataset, we conducted experiments focusing on
fine-tuning various LLMs on CleanVul and other high-quality datasets.
Evaluation results reveal that LLMs fine-tuned on CleanVul not only exhibit
enhanced accuracy but also superior generalization capabilities compared to
those trained on uncleaned datasets. Specifically, models trained on CleanVul
and tested on PrimeVul achieve accuracy higher than those trained and tested
exclusively on PrimeVul.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 09:51:55 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Nov 2024 03:52:23 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jan 2025 04:08:15 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 10:41:04 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Yikun",
""
],
[
"Zhang",
"Ting",
""
],
[
"Widyasari",
"Ratnadira",
""
],
[
"Tun",
"Yan Naing",
""
],
[
"Nguyen",
"Huu Hung",
""
],
[
"Bui",
"Tan",
""
],
[
"Irsan",
"Ivana Clairine",
""
],
[
"Cheng",
"Yiran",
""
],
[
"Lan",
"Xiang",
""
],
[
"Ang",
"Han Wei",
""
],
[
"Liauw",
"Frank",
""
],
[
"Weyssow",
"Martin",
""
],
[
"Kang",
"Hong Jin",
""
],
[
"Ouh",
"Eng Lieh",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Lo",
"David",
""
]
]
| TITLE: CleanVul: Automatic Function-Level Vulnerability Detection in Code
Commits Using LLM Heuristics
ABSTRACT: Accurate identification of software vulnerabilities is crucial for system
integrity. Vulnerability datasets, often derived from the National
Vulnerability Database (NVD) or directly from GitHub, are essential for
training machine learning models to detect these security flaws. However, these
datasets frequently suffer from significant noise, typically 40% to 75%, due
primarily to the automatic and indiscriminate labeling of all changes in
vulnerability-fixing commits (VFCs) as vulnerability-related. This
misclassification occurs because not all changes in a commit aimed at fixing
vulnerabilities pertain to security threats; many are routine updates like bug
fixes or test improvements.
This paper introduces the first methodology that uses the Large Language
Model (LLM) with a heuristic enhancement to automatically identify
vulnerability-fixing changes from VFCs, achieving an F1-score of 0.82.
VulSifter was applied to a large-scale study, where we conducted a crawl of
127,063 repositories on GitHub, resulting in the acquisition of 5,352,105
commits. VulSifter involves utilizing an LLM to comprehend code semantics and
contextual information, while applying heuristics to filter out unrelated
changes. We then developed CleanVul, a high-quality dataset comprising 8,203
functions using our LLM heuristic enhancement approach, demonstrating
Correctness (90.6%) comparable to established datasets such as SVEN and
PrimeVul.
To evaluate the CleanVul dataset, we conducted experiments focusing on
fine-tuning various LLMs on CleanVul and other high-quality datasets.
Evaluation results reveal that LLMs fine-tuned on CleanVul not only exhibit
enhanced accuracy but also superior generalization capabilities compared to
those trained on uncleaned datasets. Specifically, models trained on CleanVul
and tested on PrimeVul achieve higher accuracy than those trained and tested
exclusively on PrimeVul.
| no_new_dataset | 0.891952 |
2412.00733 | Yun Zhan | Jiahao Cui, Hui Li, Yun Zhan, Hanlin Shang, Kaihui Cheng, Yuqi Ma,
Shan Mu, Hang Zhou, Jingdong Wang, Siyu Zhu | Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Video
Diffusion Transformer | null | null | null | null | cs.CV cs.GR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing methodologies for animating portrait images face significant
challenges, particularly in handling non-frontal perspectives, rendering
dynamic objects around the portrait, and generating immersive, realistic
backgrounds. In this paper, we introduce the first application of a pretrained
transformer-based video generative model that demonstrates strong
generalization capabilities and generates highly dynamic, realistic videos for
portrait animation, effectively addressing these challenges. The adoption of a
new video backbone model makes previous U-Net-based methods for identity
maintenance, audio conditioning, and video extrapolation inapplicable. To
address this limitation, we design an identity reference network consisting of
a causal 3D VAE combined with a stacked series of transformer layers, ensuring
consistent facial identity across video sequences. Additionally, we investigate
various speech audio conditioning and motion frame mechanisms to enable the
generation of continuous video driven by speech audio. Our method is validated
through experiments on benchmark and newly proposed wild datasets,
demonstrating substantial improvements over prior methods in generating
realistic portraits characterized by diverse orientations within dynamic and
immersive scenes. Further visualizations and the source code are available at:
https://fudan-generative-vision.github.io/hallo3/.
| [
{
"version": "v1",
"created": "Sun, 1 Dec 2024 08:54:30 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Dec 2024 02:55:56 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Jan 2025 06:49:09 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 08:23:27 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Cui",
"Jiahao",
""
],
[
"Li",
"Hui",
""
],
[
"Zhan",
"Yun",
""
],
[
"Shang",
"Hanlin",
""
],
[
"Cheng",
"Kaihui",
""
],
[
"Ma",
"Yuqi",
""
],
[
"Mu",
"Shan",
""
],
[
"Zhou",
"Hang",
""
],
[
"Wang",
"Jingdong",
""
],
[
"Zhu",
"Siyu",
""
]
]
| TITLE: Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Video
Diffusion Transformer
ABSTRACT: Existing methodologies for animating portrait images face significant
challenges, particularly in handling non-frontal perspectives, rendering
dynamic objects around the portrait, and generating immersive, realistic
backgrounds. In this paper, we introduce the first application of a pretrained
transformer-based video generative model that demonstrates strong
generalization capabilities and generates highly dynamic, realistic videos for
portrait animation, effectively addressing these challenges. The adoption of a
new video backbone model makes previous U-Net-based methods for identity
maintenance, audio conditioning, and video extrapolation inapplicable. To
address this limitation, we design an identity reference network consisting of
a causal 3D VAE combined with a stacked series of transformer layers, ensuring
consistent facial identity across video sequences. Additionally, we investigate
various speech audio conditioning and motion frame mechanisms to enable the
generation of continuous video driven by speech audio. Our method is validated
through experiments on benchmark and newly proposed wild datasets,
demonstrating substantial improvements over prior methods in generating
realistic portraits characterized by diverse orientations within dynamic and
immersive scenes. Further visualizations and the source code are available at:
https://fudan-generative-vision.github.io/hallo3/.
| new_dataset | 0.504326 |
2412.05256 | Xiangyu Han | Xiangyu Han, Zhen Jia, Boyi Li, Yan Wang, Boris Ivanovic, Yurong You,
Lingjie Liu, Yue Wang, Marco Pavone, Chen Feng, Yiming Li | Extrapolated Urban View Synthesis Benchmark | Project page: https://ai4ce.github.io/EUVS-Benchmark/ | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photorealistic simulators are essential for the training and evaluation of
vision-centric autonomous vehicles (AVs). At their core is Novel View Synthesis
(NVS), a crucial capability that generates diverse unseen viewpoints to
accommodate the broad and continuous pose distribution of AVs. Recent advances
in radiance fields, such as 3D Gaussian Splatting, achieve photorealistic
rendering at real-time speeds and have been widely used in modeling large-scale
driving scenes. However, their performance is commonly evaluated using an
interpolated setup with highly correlated training and test views. In contrast,
extrapolation, where test views largely deviate from training views, remains
underexplored, limiting progress in generalizable simulation technology. To
address this gap, we leverage publicly available AV datasets with multiple
traversals, multiple vehicles, and multiple cameras to build the first
Extrapolated Urban View Synthesis (EUVS) benchmark. Meanwhile, we conduct both
quantitative and qualitative evaluations of state-of-the-art NVS methods across
different evaluation settings. Our results show that current NVS methods are
prone to overfitting to training views. Besides, incorporating diffusion priors
and improving geometry cannot fundamentally improve NVS under large view
changes, highlighting the need for more robust approaches and large-scale
training. We will release the data to help advance self-driving and urban
robotics simulation technology.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2024 18:41:39 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Dec 2024 02:54:36 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 20:57:59 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Han",
"Xiangyu",
""
],
[
"Jia",
"Zhen",
""
],
[
"Li",
"Boyi",
""
],
[
"Wang",
"Yan",
""
],
[
"Ivanovic",
"Boris",
""
],
[
"You",
"Yurong",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Wang",
"Yue",
""
],
[
"Pavone",
"Marco",
""
],
[
"Feng",
"Chen",
""
],
[
"Li",
"Yiming",
""
]
]
| TITLE: Extrapolated Urban View Synthesis Benchmark
ABSTRACT: Photorealistic simulators are essential for the training and evaluation of
vision-centric autonomous vehicles (AVs). At their core is Novel View Synthesis
(NVS), a crucial capability that generates diverse unseen viewpoints to
accommodate the broad and continuous pose distribution of AVs. Recent advances
in radiance fields, such as 3D Gaussian Splatting, achieve photorealistic
rendering at real-time speeds and have been widely used in modeling large-scale
driving scenes. However, their performance is commonly evaluated using an
interpolated setup with highly correlated training and test views. In contrast,
extrapolation, where test views largely deviate from training views, remains
underexplored, limiting progress in generalizable simulation technology. To
address this gap, we leverage publicly available AV datasets with multiple
traversals, multiple vehicles, and multiple cameras to build the first
Extrapolated Urban View Synthesis (EUVS) benchmark. Meanwhile, we conduct both
quantitative and qualitative evaluations of state-of-the-art NVS methods across
different evaluation settings. Our results show that current NVS methods are
prone to overfitting to training views. Besides, incorporating diffusion priors
and improving geometry cannot fundamentally improve NVS under large view
changes, highlighting the need for more robust approaches and large-scale
training. We will release the data to help advance self-driving and urban
robotics simulation technology.
| no_new_dataset | 0.870927 |
2412.07377 | Fuyi Yang | Jiazuo Mu, Fuyi Yang, Yanshun Zhang, Mingqian Zhang, Junxiong Zhang,
Yongjian Luo, Lan Xu, Yujiao Shi and Yingliang Zhang | CADSpotting: Robust Panoptic Symbol Spotting on Large-Scale CAD Drawings | 18 pages, 14 figures, Project web-page:
https://dgeneai.github.io/cadspotting-pages/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce CADSpotting, an effective method for panoptic symbol spotting in
large-scale architectural CAD drawings. Existing approaches struggle with
symbol diversity, scale variations, and overlapping elements in CAD designs.
CADSpotting overcomes these challenges by representing primitives through
densely sampled points with attributes like coordinates and colors, using a
unified 3D point cloud model for robust feature learning. To enable accurate
segmentation in large, complex drawings, we further propose a novel Sliding
Window Aggregation (SWA) technique, combining weighted voting and Non-Maximum
Suppression (NMS). Moreover, we introduce LS-CAD, a new large-scale CAD dataset
to support our experiments, with each floorplan covering around 1,000 square
meters, significantly larger than previous benchmarks. Experiments on
FloorPlanCAD and LS-CAD datasets show that CADSpotting significantly
outperforms existing methods. We also demonstrate its practical value through
automating parametric 3D reconstruction, enabling interior modeling directly
from raw CAD inputs.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 10:22:17 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2024 03:27:12 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 07:41:50 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Mu",
"Jiazuo",
""
],
[
"Yang",
"Fuyi",
""
],
[
"Zhang",
"Yanshun",
""
],
[
"Zhang",
"Mingqian",
""
],
[
"Zhang",
"Junxiong",
""
],
[
"Luo",
"Yongjian",
""
],
[
"Xu",
"Lan",
""
],
[
"Shi",
"Yujiao",
""
],
[
"Zhang",
"Yingliang",
""
]
]
| TITLE: CADSpotting: Robust Panoptic Symbol Spotting on Large-Scale CAD Drawings
ABSTRACT: We introduce CADSpotting, an effective method for panoptic symbol spotting in
large-scale architectural CAD drawings. Existing approaches struggle with
symbol diversity, scale variations, and overlapping elements in CAD designs.
CADSpotting overcomes these challenges by representing primitives through
densely sampled points with attributes like coordinates and colors, using a
unified 3D point cloud model for robust feature learning. To enable accurate
segmentation in large, complex drawings, we further propose a novel Sliding
Window Aggregation (SWA) technique, combining weighted voting and Non-Maximum
Suppression (NMS). Moreover, we introduce LS-CAD, a new large-scale CAD dataset
to support our experiments, with each floorplan covering around 1,000 square
meters, significantly larger than previous benchmarks. Experiments on
FloorPlanCAD and LS-CAD datasets show that CADSpotting significantly
outperforms existing methods. We also demonstrate its practical value through
automating parametric 3D reconstruction, enabling interior modeling directly
from raw CAD inputs.
| new_dataset | 0.954137 |
2412.07589 | Jianzong Wu | Jianzong Wu, Chao Tang, Jingbo Wang, Yanhong Zeng, Xiangtai Li, Yunhai
Tong | DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for
Customized Manga Generation | [CVPR 2025] The project page is
https://jianzongwu.github.io/projects/diffsensei/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Story visualization, the task of creating visual narratives from textual
descriptions, has seen progress with text-to-image generation models. However,
these models often lack effective control over character appearances and
interactions, particularly in multi-character scenes. To address these
limitations, we propose a new task: \textbf{customized manga generation} and
introduce \textbf{DiffSensei}, an innovative framework specifically designed
for generating manga with dynamic multi-character control. DiffSensei
integrates a diffusion-based image generator with a multimodal large language
model (MLLM) that acts as a text-compatible identity adapter. Our approach
employs masked cross-attention to seamlessly incorporate character features,
enabling precise layout control without direct pixel transfer. Additionally,
the MLLM-based adapter adjusts character features to align with panel-specific
text cues, allowing flexible adjustments in character expressions, poses, and
actions. We also introduce \textbf{MangaZero}, a large-scale dataset tailored
to this task, containing 43,264 manga pages and 427,147 annotated panels,
supporting the visualization of varied character interactions and movements
across sequential frames. Extensive experiments demonstrate that DiffSensei
outperforms existing models, marking a significant advancement in manga
generation by enabling text-adaptable character customization. The project page
is https://jianzongwu.github.io/projects/diffsensei/.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 15:24:12 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:23:03 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wu",
"Jianzong",
""
],
[
"Tang",
"Chao",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Zeng",
"Yanhong",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Tong",
"Yunhai",
""
]
]
| TITLE: DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for
Customized Manga Generation
ABSTRACT: Story visualization, the task of creating visual narratives from textual
descriptions, has seen progress with text-to-image generation models. However,
these models often lack effective control over character appearances and
interactions, particularly in multi-character scenes. To address these
limitations, we propose a new task: \textbf{customized manga generation} and
introduce \textbf{DiffSensei}, an innovative framework specifically designed
for generating manga with dynamic multi-character control. DiffSensei
integrates a diffusion-based image generator with a multimodal large language
model (MLLM) that acts as a text-compatible identity adapter. Our approach
employs masked cross-attention to seamlessly incorporate character features,
enabling precise layout control without direct pixel transfer. Additionally,
the MLLM-based adapter adjusts character features to align with panel-specific
text cues, allowing flexible adjustments in character expressions, poses, and
actions. We also introduce \textbf{MangaZero}, a large-scale dataset tailored
to this task, containing 43,264 manga pages and 427,147 annotated panels,
supporting the visualization of varied character interactions and movements
across sequential frames. Extensive experiments demonstrate that DiffSensei
outperforms existing models, marking a significant advancement in manga
generation by enabling text-adaptable character customization. The project page
is https://jianzongwu.github.io/projects/diffsensei/.
| new_dataset | 0.95452 |
2412.08099 | Sicong Jiang | Fuqiang Liu, Sicong Jiang, Luis Miranda-Moreno, Seongjin Choi, Lijun
Sun | Adversarial Vulnerabilities in Large Language Models for Time Series
Forecasting | AISTATS 2025 | null | null | null | cs.LG cs.AI cs.CL cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have recently demonstrated significant potential
in time series forecasting, offering impressive capabilities in handling
complex temporal data. However, their robustness and reliability in real-world
applications remain under-explored, particularly concerning their
susceptibility to adversarial attacks. In this paper, we introduce a targeted
adversarial attack framework for LLM-based time series forecasting. By
employing both gradient-free and black-box optimization methods, we generate
minimal yet highly effective perturbations that significantly degrade the
forecasting accuracy across multiple datasets and LLM architectures. Our
experiments, which include models like LLMTime with GPT-3.5, GPT-4, LLaMa, and
Mistral, TimeGPT, and TimeLLM, show that adversarial attacks lead to much more
severe performance degradation than random noise, and demonstrate the broad
effectiveness of our attacks across different LLMs. The results underscore the
critical vulnerabilities of LLMs in time series forecasting, highlighting the
need for robust defense mechanisms to ensure their reliable deployment in
practical applications. The code repository can be found at
https://github.com/JohnsonJiang1996/AdvAttack_LLM4TS.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 04:53:15 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jan 2025 20:32:48 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Jan 2025 17:33:40 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Mar 2025 21:35:52 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Liu",
"Fuqiang",
""
],
[
"Jiang",
"Sicong",
""
],
[
"Miranda-Moreno",
"Luis",
""
],
[
"Choi",
"Seongjin",
""
],
[
"Sun",
"Lijun",
""
]
]
| TITLE: Adversarial Vulnerabilities in Large Language Models for Time Series
Forecasting
ABSTRACT: Large Language Models (LLMs) have recently demonstrated significant potential
in time series forecasting, offering impressive capabilities in handling
complex temporal data. However, their robustness and reliability in real-world
applications remain under-explored, particularly concerning their
susceptibility to adversarial attacks. In this paper, we introduce a targeted
adversarial attack framework for LLM-based time series forecasting. By
employing both gradient-free and black-box optimization methods, we generate
minimal yet highly effective perturbations that significantly degrade the
forecasting accuracy across multiple datasets and LLM architectures. Our
experiments, which include models like LLMTime with GPT-3.5, GPT-4, LLaMa, and
Mistral, TimeGPT, and TimeLLM, show that adversarial attacks lead to much more
severe performance degradation than random noise, and demonstrate the broad
effectiveness of our attacks across different LLMs. The results underscore the
critical vulnerabilities of LLMs in time series forecasting, highlighting the
need for robust defense mechanisms to ensure their reliable deployment in
practical applications. The code repository can be found at
https://github.com/JohnsonJiang1996/AdvAttack_LLM4TS.
| no_new_dataset | 0.946151 |
2412.09262 | Chunyu Li | Chunyu Li, Chao Zhang, Weikai Xu, Jingyu Lin, Jinghui Xie, Weiguo
Feng, Bingyue Peng, Cunjian Chen, Weiwei Xing | LatentSync: Taming Audio-Conditioned Latent Diffusion Models for Lip
Sync with SyncNet Supervision | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | End-to-end audio-conditioned latent diffusion models (LDMs) have been widely
adopted for audio-driven portrait animation, demonstrating their effectiveness
in generating lifelike and high-resolution talking videos. However, direct
application of audio-conditioned LDMs to lip-synchronization (lip-sync) tasks
results in suboptimal lip-sync accuracy. Through an in-depth analysis, we
identified the underlying cause as the "shortcut learning problem", wherein the
model predominantly learns visual-visual shortcuts while neglecting the
critical audio-visual correlations. To address this issue, we explored
different approaches for integrating SyncNet supervision into audio-conditioned
LDMs to explicitly enforce the learning of audio-visual correlations. Since the
performance of SyncNet directly influences the lip-sync accuracy of the
supervised model, the training of a well-converged SyncNet becomes crucial. We
conducted the first comprehensive empirical studies to identify key factors
affecting SyncNet convergence. Based on our analysis, we introduce
StableSyncNet, with an architecture designed for stable convergence. Our
StableSyncNet achieved a significant improvement in accuracy, increasing from
91% to 94% on the HDTF test set. Additionally, we introduce a novel Temporal
Representation Alignment (TREPA) mechanism to enhance temporal consistency in
the generated videos. Experimental results show that our method surpasses
state-of-the-art lip-sync approaches across various evaluation metrics on the
HDTF and VoxCeleb2 datasets.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 13:20:52 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 09:17:52 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Chunyu",
""
],
[
"Zhang",
"Chao",
""
],
[
"Xu",
"Weikai",
""
],
[
"Lin",
"Jingyu",
""
],
[
"Xie",
"Jinghui",
""
],
[
"Feng",
"Weiguo",
""
],
[
"Peng",
"Bingyue",
""
],
[
"Chen",
"Cunjian",
""
],
[
"Xing",
"Weiwei",
""
]
]
| TITLE: LatentSync: Taming Audio-Conditioned Latent Diffusion Models for Lip
Sync with SyncNet Supervision
ABSTRACT: End-to-end audio-conditioned latent diffusion models (LDMs) have been widely
adopted for audio-driven portrait animation, demonstrating their effectiveness
in generating lifelike and high-resolution talking videos. However, direct
application of audio-conditioned LDMs to lip-synchronization (lip-sync) tasks
results in suboptimal lip-sync accuracy. Through an in-depth analysis, we
identified the underlying cause as the "shortcut learning problem", wherein the
model predominantly learns visual-visual shortcuts while neglecting the
critical audio-visual correlations. To address this issue, we explored
different approaches for integrating SyncNet supervision into audio-conditioned
LDMs to explicitly enforce the learning of audio-visual correlations. Since the
performance of SyncNet directly influences the lip-sync accuracy of the
supervised model, the training of a well-converged SyncNet becomes crucial. We
conducted the first comprehensive empirical studies to identify key factors
affecting SyncNet convergence. Based on our analysis, we introduce
StableSyncNet, with an architecture designed for stable convergence. Our
StableSyncNet achieved a significant improvement in accuracy, increasing from
91% to 94% on the HDTF test set. Additionally, we introduce a novel Temporal
Representation Alignment (TREPA) mechanism to enhance temporal consistency in
the generated videos. Experimental results show that our method surpasses
state-of-the-art lip-sync approaches across various evaluation metrics on the
HDTF and VoxCeleb2 datasets.
| no_new_dataset | 0.950273 |
2412.11154 | Chuang Yu | Chuang Yu, Jinmiao Zhao, Yunpeng Liu, Sicheng Zhao, Yimian Dai,
Xiangyu Yue | From Easy to Hard: Progressive Active Learning Framework for Infrared
Small Target Detection with Single Point Supervision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, single-frame infrared small target (SIRST) detection with single
point supervision has drawn widespread attention. However, the latest label
evolution with single point supervision (LESPS) framework suffers from
instability, excessive label evolution, and difficulty in exerting embedded
network performance. Inspired by organisms gradually adapting to their
environment and continuously accumulating knowledge, we construct an innovative
Progressive Active Learning (PAL) framework for single point supervision, which
drives the existing SIRST detection networks progressively and actively
recognizes and learns more hard samples to achieve significant performance
improvements. Specifically, to avoid the early low-performance model leading to
the wrong selection of hard samples, we propose a model pre-start concept,
which focuses on automatically selecting a portion of easy samples and helping
the model have basic task-specific learning capabilities. Meanwhile, we propose
a refined dual-update strategy, which can promote reasonable learning of harder
samples and continuous refinement of pseudo-labels. In addition, to alleviate
the risk of excessive label evolution, a decay factor is reasonably introduced,
which helps to achieve a dynamic balance between the expansion and contraction
of target annotations. Extensive experiments show that existing SIRST detection
networks equipped with our PAL framework have achieved state-of-the-art (SOTA)
results on multiple public datasets. Furthermore, our PAL framework can build
an efficient and stable bridge between full supervision and single point
supervision tasks. Our code is available at
https://github.com/YuChuang1205/PAL.
| [
{
"version": "v1",
"created": "Sun, 15 Dec 2024 11:08:49 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 08:04:37 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yu",
"Chuang",
""
],
[
"Zhao",
"Jinmiao",
""
],
[
"Liu",
"Yunpeng",
""
],
[
"Zhao",
"Sicheng",
""
],
[
"Dai",
"Yimian",
""
],
[
"Yue",
"Xiangyu",
""
]
]
| TITLE: From Easy to Hard: Progressive Active Learning Framework for Infrared
Small Target Detection with Single Point Supervision
ABSTRACT: Recently, single-frame infrared small target (SIRST) detection with single
point supervision has drawn widespread attention. However, the latest label
evolution with single point supervision (LESPS) framework suffers from
instability, excessive label evolution, and difficulty in exerting embedded
network performance. Inspired by organisms gradually adapting to their
environment and continuously accumulating knowledge, we construct an innovative
Progressive Active Learning (PAL) framework for single point supervision, which
drives existing SIRST detection networks to progressively and actively
recognize and learn harder samples, achieving significant performance
improvements. Specifically, to avoid the early low-performance model leading to
the wrong selection of hard samples, we propose a model pre-start concept,
which focuses on automatically selecting a portion of easy samples and helping
the model have basic task-specific learning capabilities. Meanwhile, we propose
a refined dual-update strategy, which can promote reasonable learning of harder
samples and continuous refinement of pseudo-labels. In addition, to alleviate
the risk of excessive label evolution, a decay factor is reasonably introduced,
which helps to achieve a dynamic balance between the expansion and contraction
of target annotations. Extensive experiments show that existing SIRST detection
networks equipped with our PAL framework have achieved state-of-the-art (SOTA)
results on multiple public datasets. Furthermore, our PAL framework can build
an efficient and stable bridge between full supervision and single point
supervision tasks. Our code is available at
https://github.com/YuChuang1205/PAL.
| no_new_dataset | 0.945951 |
2412.17741 | Rui Qian | Rui Qian, Xin Yin, Dejing Dou | Reasoning to Attend: Try to Understand How <SEG> Token Works | This work has been accepted to CVPR 2025, please refer to
https://github.com/rui-qian/READ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current Large Multimodal Models (LMMs) empowered visual grounding typically
rely on $\texttt{<SEG>}$ tokens as a text prompt to jointly optimize the
vision-language model (e.g., LLaVA) and the downstream task-specific model
(e.g., SAM). However, we observe that little research has looked into how it
works. In this work, we first visualize the similarity maps, which are obtained
by computing the semantic similarity between the $\texttt{<SEG>}$ token and the
image token embeddings derived from the last hidden layer in both the LLaVA
encoder and SAM decoder. Intriguingly, we have found that a striking
consistency holds in terms of activation responses in the similarity map, which
reveals that what the $\texttt{<SEG>}$ token contributes to is semantic
similarity within image-text pairs. Specifically, the $\texttt{<SEG>}$ token, a
placeholder expanded in text vocabulary, extensively queries among individual
tokenized image patches to match the semantics of an object from text to the
paired image, while the Large Language Models (LLMs) are being fine-tuned. Upon
the above findings, we present READ, which facilitates LMMs' resilient
$\textbf{REA}$soning capability of where to atten$\textbf{D}$ under the
guidance of highly activated points borrowed from similarity maps. Remarkably,
READ features an intuitive design, Similarity as Points module (SasP), which
can be seamlessly applied to $\texttt{<SEG>}$-like paradigms in a plug-and-play
fashion. Also, extensive experiments have been conducted on ReasonSeg and
RefCOCO(+/g) datasets. To validate whether READ suffers from catastrophic
forgetting of previous skills after fine-tuning, we further assess its
generation ability on an augmented FP-RefCOCO(+/g) dataset. All code and
models are publicly available at https://github.com/rui-qian/READ.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 17:44:05 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Dec 2024 10:19:44 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jan 2025 07:57:50 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 15:55:51 GMT"
},
{
"version": "v5",
"created": "Thu, 6 Mar 2025 04:11:30 GMT"
},
{
"version": "v6",
"created": "Thu, 13 Mar 2025 14:04:12 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Qian",
"Rui",
""
],
[
"Yin",
"Xin",
""
],
[
"Dou",
"Dejing",
""
]
]
| TITLE: Reasoning to Attend: Try to Understand How <SEG> Token Works
ABSTRACT: Current Large Multimodal Models (LMMs) empowered visual grounding typically
rely on $\texttt{<SEG>}$ tokens as a text prompt to jointly optimize the
vision-language model (e.g., LLaVA) and the downstream task-specific model
(e.g., SAM). However, we observe that little research has looked into how it
works. In this work, we first visualize the similarity maps, which are obtained
by computing the semantic similarity between the $\texttt{<SEG>}$ token and the
image token embeddings derived from the last hidden layer in both the LLaVA
encoder and SAM decoder. Intriguingly, we have found that a striking
consistency holds in terms of activation responses in the similarity map, which
reveals that what the $\texttt{<SEG>}$ token contributes to is semantic
similarity within image-text pairs. Specifically, the $\texttt{<SEG>}$ token, a
placeholder expanded in text vocabulary, extensively queries among individual
tokenized image patches to match the semantics of an object from text to the
paired image, while the Large Language Models (LLMs) are being fine-tuned. Upon
the above findings, we present READ, which facilitates LMMs' resilient
$\textbf{REA}$soning capability of where to atten$\textbf{D}$ under the
guidance of highly activated points borrowed from similarity maps. Remarkably,
READ features an intuitive design, Similarity as Points module (SasP), which
can be seamlessly applied to $\texttt{<SEG>}$-like paradigms in a plug-and-play
fashion. Also, extensive experiments have been conducted on ReasonSeg and
RefCOCO(+/g) datasets. To validate whether READ suffers from catastrophic
forgetting of previous skills after fine-tuning, we further assess its
generation ability on an augmented FP-RefCOCO(+/g) dataset. All codes and
models are publicly available at https://github.com/rui-qian/READ.
| no_new_dataset | 0.9434 |
2501.02509 | Hui Li | Hui Li, Xiaoyu Ren, Hongjiu Yu, Huiyu Duan, Kai Li, Ying Chen, Libo
Wang, Xiongkuo Min, Guangtao Zhai, Xu Liu | Facial Attractiveness Prediction in Live Streaming: A New Benchmark and
Multi-modal Method | Section 3 in Images Collection has description errors about data
cleaning. The compared methods data of Table 3 lacks other metrics | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial attractiveness prediction (FAP) has long been an important computer
vision task, which could be widely applied in live streaming for facial
retouching, content recommendation, etc. However, previous FAP datasets are
either small, closed-source, or lack diversity. Moreover, the corresponding FAP
models exhibit limited generalization and adaptation ability. To overcome these
limitations, in this paper we present LiveBeauty, the first large-scale
live-specific FAP dataset, in a more challenging application scenario, i.e.,
live streaming. 10,000 face images are collected from a live streaming platform
directly, with 200,000 corresponding attractiveness annotations obtained from a
well-devised subjective experiment, making LiveBeauty the largest open-access
FAP dataset in the challenging live scenario. Furthermore, a multi-modal FAP
method is proposed to measure the facial attractiveness in live streaming.
Specifically, we first extract holistic facial prior knowledge and multi-modal
aesthetic semantic features via a Personalized Attractiveness Prior Module
(PAPM) and a Multi-modal Attractiveness Encoder Module (MAEM), respectively,
then integrate the extracted features through a Cross-Modal Fusion Module
(CMFM). Extensive experiments conducted on both LiveBeauty and other
open-source FAP datasets demonstrate that our proposed method achieves
state-of-the-art performance. The dataset will be available soon.
| [
{
"version": "v1",
"created": "Sun, 5 Jan 2025 11:43:35 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 02:34:18 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Hui",
""
],
[
"Ren",
"Xiaoyu",
""
],
[
"Yu",
"Hongjiu",
""
],
[
"Duan",
"Huiyu",
""
],
[
"Li",
"Kai",
""
],
[
"Chen",
"Ying",
""
],
[
"Wang",
"Libo",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Liu",
"Xu",
""
]
]
| TITLE: Facial Attractiveness Prediction in Live Streaming: A New Benchmark and
Multi-modal Method
ABSTRACT: Facial attractiveness prediction (FAP) has long been an important computer
vision task, which could be widely applied in live streaming for facial
retouching, content recommendation, etc. However, previous FAP datasets are
either small, closed-source, or lack diversity. Moreover, the corresponding FAP
models exhibit limited generalization and adaptation ability. To overcome these
limitations, in this paper we present LiveBeauty, the first large-scale
live-specific FAP dataset, in a more challenging application scenario, i.e.,
live streaming. 10,000 face images are collected from a live streaming platform
directly, with 200,000 corresponding attractiveness annotations obtained from a
well-devised subjective experiment, making LiveBeauty the largest open-access
FAP dataset in the challenging live scenario. Furthermore, a multi-modal FAP
method is proposed to measure the facial attractiveness in live streaming.
Specifically, we first extract holistic facial prior knowledge and multi-modal
aesthetic semantic features via a Personalized Attractiveness Prior Module
(PAPM) and a Multi-modal Attractiveness Encoder Module (MAEM), respectively,
then integrate the extracted features through a Cross-Modal Fusion Module
(CMFM). Extensive experiments conducted on both LiveBeauty and other
open-source FAP datasets demonstrate that our proposed method achieves
state-of-the-art performance. The dataset will be available soon.
| new_dataset | 0.959307 |
2501.04467 | Marc Aubreville | Christof A. Bertram, Viktoria Weiss, Taryn A. Donovan, Sweta Banerjee,
Thomas Conrad, Jonas Ammeling, Robert Klopfleisch, Christopher Kaltenecker,
Marc Aubreville | Histologic Dataset of Normal and Atypical Mitotic Figures on Human
Breast Cancer (AMi-Br) | null | In: Palm, C., et al. Bildverarbeitung f\"ur die Medizin 2025. BVM
2025. Informatik aktuell. Springer Vieweg, Wiesbaden | 10.1007/978-3-658-47422-5_25 | null | cs.CV cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Assessment of the density of mitotic figures (MFs) in histologic tumor
sections is an important prognostic marker for many tumor types, including
breast cancer. Recently, it has been reported in multiple works that the
quantity of MFs with an atypical morphology (atypical MFs, AMFs) might be an
independent prognostic criterion for breast cancer. AMFs are an indicator of
mutations in the genes regulating the cell cycle and can lead to aberrant
chromosome constitution (aneuploidy) of the tumor cells. To facilitate further
research on this topic using pattern recognition, we present the first ever
publicly available dataset of atypical and normal MFs (AMi-Br). For this, we
utilized two of the most popular MF datasets (MIDOG 2021 and TUPAC) and
subclassified all MFs using a three-expert majority vote. Our final dataset
consists of 3,720 MFs, split into 832 AMFs (22.4%) and 2,888 normal MFs (77.6%)
across all 223 tumor cases in the combined set. We provide baseline
classification experiments to investigate the consistency of the dataset, using
a Monte Carlo cross-validation and different strategies to combat class
imbalance. We found an averaged balanced accuracy of up to 0.806 when using a
patch-level data set split, and up to 0.713 when using a patient-level split.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 12:41:42 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 07:10:26 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Bertram",
"Christof A.",
""
],
[
"Weiss",
"Viktoria",
""
],
[
"Donovan",
"Taryn A.",
""
],
[
"Banerjee",
"Sweta",
""
],
[
"Conrad",
"Thomas",
""
],
[
"Ammeling",
"Jonas",
""
],
[
"Klopfleisch",
"Robert",
""
],
[
"Kaltenecker",
"Christopher",
""
],
[
"Aubreville",
"Marc",
""
]
]
| TITLE: Histologic Dataset of Normal and Atypical Mitotic Figures on Human
Breast Cancer (AMi-Br)
ABSTRACT: Assessment of the density of mitotic figures (MFs) in histologic tumor
sections is an important prognostic marker for many tumor types, including
breast cancer. Recently, it has been reported in multiple works that the
quantity of MFs with an atypical morphology (atypical MFs, AMFs) might be an
independent prognostic criterion for breast cancer. AMFs are an indicator of
mutations in the genes regulating the cell cycle and can lead to aberrant
chromosome constitution (aneuploidy) of the tumor cells. To facilitate further
research on this topic using pattern recognition, we present the first ever
publicly available dataset of atypical and normal MFs (AMi-Br). For this, we
utilized two of the most popular MF datasets (MIDOG 2021 and TUPAC) and
subclassified all MFs using a three-expert majority vote. Our final dataset
consists of 3,720 MFs, split into 832 AMFs (22.4%) and 2,888 normal MFs (77.6%)
across all 223 tumor cases in the combined set. We provide baseline
classification experiments to investigate the consistency of the dataset, using
a Monte Carlo cross-validation and different strategies to combat class
imbalance. We found an averaged balanced accuracy of up to 0.806 when using a
patch-level data set split, and up to 0.713 when using a patient-level split.
| new_dataset | 0.96707 |
2501.05031 | Ronghao Dang | Ronghao Dang, Yuqian Yuan, Wenqi Zhang, Yifei Xin, Boqiang Zhang, Long
Li, Liuyi Wang, Qinyang Zeng, Xin Li, Lidong Bing | ECBench: Can Multi-modal Foundation Models Understand the Egocentric
World? A Holistic Embodied Cognition Benchmark | null | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The enhancement of generalization in robots by large vision-language models
(LVLMs) is increasingly evident. Therefore, the embodied cognitive abilities of
LVLMs based on egocentric videos are of great interest. However, current
datasets for embodied video question answering lack comprehensive and
systematic evaluation frameworks. Critical embodied cognitive issues, such as
robotic self-cognition, dynamic scene perception, and hallucination, are rarely
addressed. To tackle these challenges, we propose ECBench, a high-quality
benchmark designed to systematically evaluate the embodied cognitive abilities
of LVLMs. ECBench features a diverse range of scene video sources, open and
varied question formats, and 30 dimensions of embodied cognition. To ensure
quality, balance, and high visual dependence, ECBench uses class-independent
meticulous human annotation and multi-round question screening strategies.
Additionally, we introduce ECEval, a comprehensive evaluation system that
ensures the fairness and rationality of the indicators. Utilizing ECBench, we
conduct extensive evaluations of proprietary, open-source, and task-specific
LVLMs. ECBench is pivotal in advancing the embodied cognitive capabilities of
LVLMs, laying a solid foundation for developing reliable core models for
embodied agents. All data and code are available at
https://github.com/Rh-Dang/ECBench.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 07:43:49 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 07:45:55 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Dang",
"Ronghao",
""
],
[
"Yuan",
"Yuqian",
""
],
[
"Zhang",
"Wenqi",
""
],
[
"Xin",
"Yifei",
""
],
[
"Zhang",
"Boqiang",
""
],
[
"Li",
"Long",
""
],
[
"Wang",
"Liuyi",
""
],
[
"Zeng",
"Qinyang",
""
],
[
"Li",
"Xin",
""
],
[
"Bing",
"Lidong",
""
]
]
| TITLE: ECBench: Can Multi-modal Foundation Models Understand the Egocentric
World? A Holistic Embodied Cognition Benchmark
ABSTRACT: The enhancement of generalization in robots by large vision-language models
(LVLMs) is increasingly evident. Therefore, the embodied cognitive abilities of
LVLMs based on egocentric videos are of great interest. However, current
datasets for embodied video question answering lack comprehensive and
systematic evaluation frameworks. Critical embodied cognitive issues, such as
robotic self-cognition, dynamic scene perception, and hallucination, are rarely
addressed. To tackle these challenges, we propose ECBench, a high-quality
benchmark designed to systematically evaluate the embodied cognitive abilities
of LVLMs. ECBench features a diverse range of scene video sources, open and
varied question formats, and 30 dimensions of embodied cognition. To ensure
quality, balance, and high visual dependence, ECBench uses class-independent
meticulous human annotation and multi-round question screening strategies.
Additionally, we introduce ECEval, a comprehensive evaluation system that
ensures the fairness and rationality of the indicators. Utilizing ECBench, we
conduct extensive evaluations of proprietary, open-source, and task-specific
LVLMs. ECBench is pivotal in advancing the embodied cognitive capabilities of
LVLMs, laying a solid foundation for developing reliable core models for
embodied agents. All data and code are available at
https://github.com/Rh-Dang/ECBench.
| no_new_dataset | 0.569673 |
2501.06828 | Ruizhe Ou | Ruizhe Ou, Yuan Hu, Fan Zhang, Jiaxin Chen, Yu Liu | GeoPix: Multi-Modal Large Language Model for Pixel-level Image
Understanding in Remote Sensing | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-modal large language models (MLLMs) have achieved remarkable success in
image- and region-level remote sensing (RS) image understanding tasks, such as
image captioning, visual question answering, and visual grounding. However,
existing RS MLLMs lack the pixel-level dialogue capability, which involves
responding to user instructions with segmentation masks for specific instances.
In this paper, we propose GeoPix, a RS MLLM that extends image understanding
capabilities to the pixel level. This is achieved by equipping the MLLM with a
mask predictor, which transforms visual features from the vision encoder into
masks conditioned on the LLM's segmentation token embeddings. To facilitate the
segmentation of multi-scale objects in RS imagery, a class-wise learnable
memory module is integrated into the mask predictor to capture and store
class-wise geo-context at the instance level across the entire dataset. In
addition, to address the absence of large-scale datasets for training
pixel-level RS MLLMs, we construct the GeoPixInstruct dataset, comprising
65,463 images and 140,412 instances, with each instance annotated with text
descriptions, bounding boxes, and masks. Furthermore, we develop a two-stage
training strategy to balance the distinct requirements of text generation and
masks prediction in multi-modal multi-task optimization. Extensive experiments
verify the effectiveness and superiority of GeoPix in pixel-level segmentation
tasks, while also maintaining competitive performance in image- and
region-level benchmarks.
| [
{
"version": "v1",
"created": "Sun, 12 Jan 2025 14:45:27 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 08:16:01 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Ou",
"Ruizhe",
""
],
[
"Hu",
"Yuan",
""
],
[
"Zhang",
"Fan",
""
],
[
"Chen",
"Jiaxin",
""
],
[
"Liu",
"Yu",
""
]
]
| TITLE: GeoPix: Multi-Modal Large Language Model for Pixel-level Image
Understanding in Remote Sensing
ABSTRACT: Multi-modal large language models (MLLMs) have achieved remarkable success in
image- and region-level remote sensing (RS) image understanding tasks, such as
image captioning, visual question answering, and visual grounding. However,
existing RS MLLMs lack the pixel-level dialogue capability, which involves
responding to user instructions with segmentation masks for specific instances.
In this paper, we propose GeoPix, a RS MLLM that extends image understanding
capabilities to the pixel level. This is achieved by equipping the MLLM with a
mask predictor, which transforms visual features from the vision encoder into
masks conditioned on the LLM's segmentation token embeddings. To facilitate the
segmentation of multi-scale objects in RS imagery, a class-wise learnable
memory module is integrated into the mask predictor to capture and store
class-wise geo-context at the instance level across the entire dataset. In
addition, to address the absence of large-scale datasets for training
pixel-level RS MLLMs, we construct the GeoPixInstruct dataset, comprising
65,463 images and 140,412 instances, with each instance annotated with text
descriptions, bounding boxes, and masks. Furthermore, we develop a two-stage
training strategy to balance the distinct requirements of text generation and
mask prediction in multi-modal multi-task optimization. Extensive experiments
verify the effectiveness and superiority of GeoPix in pixel-level segmentation
tasks, while also maintaining competitive performance in image- and
region-level benchmarks.
| new_dataset | 0.963575 |
2501.08137 | Marcella Astrid | Marcella Astrid, Enjie Ghorbel, Djamila Aouada | Audio-Visual Deepfake Detection With Local Temporal Inconsistencies | Accepted in ICASSP 2025 | null | null | null | cs.CV cs.CR cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | This paper proposes an audio-visual deepfake detection approach that aims to
capture fine-grained temporal inconsistencies between audio and visual
modalities. To achieve this, both architectural and data synthesis strategies
are introduced. From an architectural perspective, a temporal distance map,
coupled with an attention mechanism, is designed to capture these
inconsistencies while minimizing the impact of irrelevant temporal
subsequences. Moreover, we explore novel pseudo-fake generation techniques to
synthesize local inconsistencies. Our approach is evaluated against
state-of-the-art methods using the DFDC and FakeAVCeleb datasets, demonstrating
its effectiveness in detecting audio-visual deepfakes.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2025 14:15:10 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jan 2025 09:14:14 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Mar 2025 10:22:54 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 11:02:33 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Astrid",
"Marcella",
""
],
[
"Ghorbel",
"Enjie",
""
],
[
"Aouada",
"Djamila",
""
]
]
| TITLE: Audio-Visual Deepfake Detection With Local Temporal Inconsistencies
ABSTRACT: This paper proposes an audio-visual deepfake detection approach that aims to
capture fine-grained temporal inconsistencies between audio and visual
modalities. To achieve this, both architectural and data synthesis strategies
are introduced. From an architectural perspective, a temporal distance map,
coupled with an attention mechanism, is designed to capture these
inconsistencies while minimizing the impact of irrelevant temporal
subsequences. Moreover, we explore novel pseudo-fake generation techniques to
synthesize local inconsistencies. Our approach is evaluated against
state-of-the-art methods using the DFDC and FakeAVCeleb datasets, demonstrating
its effectiveness in detecting audio-visual deepfakes.
| no_new_dataset | 0.948202 |
2501.10637 | Pengyang Song | Pengyang Song, Han Feng, Shreyashi Shukla, Jue Wang, and Tao Hong | HOPS: High-order Polynomials with Self-supervised Dimension Reduction
for Load Forecasting | 20 pages, 5 figures | null | null | null | cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Load forecasting is a fundamental task in the smart grid. Many techniques have
been applied to developing load forecasting models. Due to the challenges such
as the Curse of Dimensionality, overfitting, and limited computing resources,
multivariate higher-order polynomial models have received limited attention in
load forecasting, despite their desirable mathematical foundations and
optimization properties. In this paper, we propose low rank approximation and
self-supervised dimension reduction to address the aforementioned issues. To
further improve computational efficiency, we also utilize a fast Conjugate
Gradient based algorithm for the proposed polynomial models. Based on the load
datasets from the ISO New England, the proposed method high-order polynomials
with self-supervised dimension reduction (HOPS) demonstrates higher forecasting
accuracy over several competitive models. Additionally, experimental results
indicate that our approach alleviates redundant variable construction,
achieving better forecasts with fewer input variables.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 02:44:34 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 01:18:10 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Song",
"Pengyang",
""
],
[
"Feng",
"Han",
""
],
[
"Shukla",
"Shreyashi",
""
],
[
"Wang",
"Jue",
""
],
[
"Hong",
"Tao",
""
]
]
| TITLE: HOPS: High-order Polynomials with Self-supervised Dimension Reduction
for Load Forecasting
ABSTRACT: Load forecasting is a fundamental task in the smart grid. Many techniques have
been applied to developing load forecasting models. Due to the challenges such
as the Curse of Dimensionality, overfitting, and limited computing resources,
multivariate higher-order polynomial models have received limited attention in
load forecasting, despite their desirable mathematical foundations and
optimization properties. In this paper, we propose low rank approximation and
self-supervised dimension reduction to address the aforementioned issues. To
further improve computational efficiency, we also utilize a fast Conjugate
Gradient based algorithm for the proposed polynomial models. Based on the load
datasets from the ISO New England, the proposed method high-order polynomials
with self-supervised dimension reduction (HOPS) demonstrates higher forecasting
accuracy over several competitive models. Additionally, experimental results
indicate that our approach alleviates redundant variable construction,
achieving better forecasts with fewer input variables.
| no_new_dataset | 0.950088 |
2501.10736 | Shanwen Wang | Shanwen Wang, Xin Sun, Changrui Chen, Danfeng Hong, Jungong Han | Semi-supervised Semantic Segmentation for Remote Sensing Images via
Multi-scale Uncertainty Consistency and Cross-Teacher-Student Attention | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised learning offers an appealing solution for remote sensing (RS)
image segmentation to relieve the burden of labor-intensive pixel-level
labeling. However, RS images pose unique challenges, including rich multi-scale
features and high inter-class similarity. To address these problems, this paper
proposes a novel semi-supervised Multi-Scale Uncertainty and
Cross-Teacher-Student Attention (MUCA) model for RS image semantic segmentation
tasks. Specifically, MUCA constrains the consistency among feature maps at
different layers of the network by introducing a multi-scale uncertainty
consistency regularization. It improves the multi-scale learning capability of
semi-supervised algorithms on unlabeled data. Additionally, MUCA utilizes a
Cross-Teacher-Student attention mechanism that guides the student network to
construct more discriminative feature representations
through complementary features from the teacher network. This design
effectively integrates weak and strong augmentations (WA and SA) to further
boost segmentation performance. To verify the effectiveness of our model, we
conduct extensive experiments on ISPRS-Potsdam and LoveDA datasets. The
experimental results show the superiority of our method over state-of-the-art
semi-supervised methods. Notably, our model excels in distinguishing highly
similar objects, showcasing its potential for advancing semi-supervised RS
image segmentation tasks.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 11:57:20 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 14:18:36 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wang",
"Shanwen",
""
],
[
"Sun",
"Xin",
""
],
[
"Chen",
"Changrui",
""
],
[
"Hong",
"Danfeng",
""
],
[
"Han",
"Jungong",
""
]
]
| TITLE: Semi-supervised Semantic Segmentation for Remote Sensing Images via
Multi-scale Uncertainty Consistency and Cross-Teacher-Student Attention
ABSTRACT: Semi-supervised learning offers an appealing solution for remote sensing (RS)
image segmentation to relieve the burden of labor-intensive pixel-level
labeling. However, RS images pose unique challenges, including rich multi-scale
features and high inter-class similarity. To address these problems, this paper
proposes a novel semi-supervised Multi-Scale Uncertainty and
Cross-Teacher-Student Attention (MUCA) model for RS image semantic segmentation
tasks. Specifically, MUCA constrains the consistency among feature maps at
different layers of the network by introducing a multi-scale uncertainty
consistency regularization. It improves the multi-scale learning capability of
semi-supervised algorithms on unlabeled data. Additionally, MUCA utilizes a
Cross-Teacher-Student attention mechanism that guides the student network to
construct more discriminative feature representations
through complementary features from the teacher network. This design
effectively integrates weak and strong augmentations (WA and SA) to further
boost segmentation performance. To verify the effectiveness of our model, we
conduct extensive experiments on ISPRS-Potsdam and LoveDA datasets. The
experimental results show the superiority of our method over state-of-the-art
semi-supervised methods. Notably, our model excels in distinguishing highly
similar objects, showcasing its potential for advancing semi-supervised RS
image segmentation tasks.
| no_new_dataset | 0.955026 |
2501.11069 | Shibang Liu | Shibang Liu, Xuemei Xie, and Guangming Shi | Refinement Module based on Parse Graph of Feature Map for Human Pose
Estimation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parse graphs of the human body can be obtained in the human brain to help
humans perform human Pose Estimation (HPE) better. A parse graph contains a
hierarchical, tree-like structure and context relations among
nodes. To equip models with such capabilities, many researchers predefine the
parse graph of body structure to design HPE frameworks. However, these
frameworks struggle to adapt to instances that deviate from the predefined
parse graph and they are often parameter-heavy. Unlike them, we view the
feature map holistically, much like the human body. It can be optimized using
parse graphs, where nodes' implicit feature representation boosts adaptability,
avoiding rigid structural limitations. In this paper, we design the Refinement
Module based on the Parse Graph of feature map (RMPG), which includes two
stages: top-down decomposition and bottom-up combination. In the first stage,
the feature map is constructed into a tree structure through recursive
decomposition, with each node representing a sub-feature map, thereby achieving
hierarchical modeling of features. In the second stage, context information is
calculated and sub-feature maps with context are recursively connected to
gradually build a refined feature map. Additionally, we design a hierarchical
network with fewer parameters using multiple RMPG modules to model the context
relations and hierarchies in the parse graph of body structure for HPE, some of
which are supervised to obtain context relations among body parts. Our network
achieves excellent results on multiple mainstream human pose datasets and the
effectiveness of RMPG is demonstrated across different methods. The code of
RMPG will be released.
| [
{
"version": "v1",
"created": "Sun, 19 Jan 2025 15:05:15 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 13:07:16 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 03:01:19 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 02:41:37 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Liu",
"Shibang",
""
],
[
"Xie",
"Xuemei",
""
],
[
"Shi",
"Guangming",
""
]
]
| TITLE: Refinement Module based on Parse Graph of Feature Map for Human Pose
Estimation
ABSTRACT: Parse graphs of the human body can be obtained in the human brain to help
humans perform human pose estimation (HPE) better. A parse graph contains a
hierarchical, tree-like structure and context relations among nodes. To equip
models with such capabilities, many researchers predefine the
parse graph of body structure to design HPE frameworks. However, these
frameworks struggle to adapt to instances that deviate from the predefined
parse graph and they are often parameter-heavy. Unlike them, we view the
feature map holistically, much like the human body. It can be optimized using
parse graphs, where nodes' implicit feature representation boosts adaptability,
avoiding rigid structural limitations. In this paper, we design the Refinement
Module based on the Parse Graph of feature map (RMPG), which includes two
stages: top-down decomposition and bottom-up combination. In the first stage,
the feature map is constructed into a tree structure through recursive
decomposition, with each node representing a sub-feature map, thereby achieving
hierarchical modeling of features. In the second stage, context information is
calculated and sub-feature maps with context are recursively connected to
gradually build a refined feature map. Additionally, we design a hierarchical
network with fewer parameters using multiple RMPG modules to model the context
relations and hierarchies in the parse graph of body structure for HPE, some of
which are supervised to obtain context relations among body parts. Our network
achieves excellent results on multiple mainstream human pose datasets and the
effectiveness of RMPG is verified across different methods. The code of RMPG
will be open-sourced.
| no_new_dataset | 0.947721 |
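A minimal sketch of the two-stage idea in the abstract above: a feature map is recursively split into sub-feature maps (top-down decomposition) and then recombined while injecting a simple context term (bottom-up combination). The quadrant split rule, recursion depth, and global-average context operator are assumptions standing in for RMPG's learned components.

```python
import torch

def decompose(feat, depth):
    """Top-down: recursively split a feature map [B, C, H, W] into spatial
    quadrants, building a tree of sub-feature maps (assumed split rule)."""
    if depth == 0 or min(feat.shape[-2:]) < 2:
        return {"feat": feat, "children": []}
    H, W = feat.shape[-2:]
    h, w = H // 2, W // 2
    quads = [feat[..., :h, :w], feat[..., :h, w:],
             feat[..., h:, :w], feat[..., h:, w:]]
    return {"feat": feat, "children": [decompose(q, depth - 1) for q in quads]}

def combine(node):
    """Bottom-up: recombine children and add a global-average context term
    as a stand-in for the learned context computation."""
    if not node["children"]:
        refined = node["feat"]
    else:
        kids = [combine(c) for c in node["children"]]
        top = torch.cat(kids[:2], dim=-1)
        bottom = torch.cat(kids[2:], dim=-1)
        refined = torch.cat([top, bottom], dim=-2)
    context = refined.mean(dim=(-2, -1), keepdim=True)
    return refined + context  # inject context into the sub-feature map

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    tree = decompose(x, depth=2)
    print(combine(tree).shape)  # torch.Size([1, 32, 64, 64])
```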
2501.13354 | Weijie Li | Yongxiang Liu and Weijie Li and Li Liu and Jie Zhou and Bowen Peng and
Yafei Song and Xuying Xiong and Wei Yang and Tianpeng Liu and Zhen Liu and
Xiang Li | ATRNet-STAR: A Large Dataset and Benchmark Towards Remote Sensing Object
Recognition in the Wild | 17 pages, 14 figures; ATRNet-STAR:
https://github.com/waterdisappear/ATRNet-STAR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The absence of publicly available, large-scale, high-quality datasets for
Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) has
significantly hindered the application of rapidly advancing deep learning
techniques, which hold huge potential to unlock new capabilities in this field.
This is primarily because collecting large volumes of diverse target samples
from SAR images is prohibitively expensive, largely due to privacy concerns,
the characteristics of microwave radar imagery perception, and the need for
specialized expertise in data annotation. Throughout the history of SAR ATR
research, there have been only a number of small datasets, mainly including
targets like ships, airplanes, buildings, etc. There is only one vehicle
dataset MSTAR collected in the 1990s, which has been a valuable source for SAR
ATR. To fill this gap, this paper introduces a large-scale, new dataset named
ATRNet-STAR with 40 different vehicle categories collected under various
realistic imaging conditions and scenes. It marks a substantial advancement in
dataset scale and diversity, comprising over 190,000 well-annotated samples, 10
times larger than its predecessor, the famous MSTAR. Building such a large
dataset is a challenging task, and the data collection scheme will be detailed.
Secondly, we illustrate the value of ATRNet-STAR via extensively evaluating the
performance of 15 representative methods with 7 different experimental settings
on challenging classification and detection benchmarks derived from the
dataset. Finally, based on our extensive experiments, we identify valuable
insights for SAR ATR and discuss potential future research directions in this
field. We hope that the scale, diversity, and benchmark of ATRNet-STAR can
significantly facilitate the advancement of SAR ATR.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2025 03:42:22 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jan 2025 23:57:36 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 14:28:51 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Mar 2025 10:51:12 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Liu",
"Yongxiang",
""
],
[
"Li",
"Weijie",
""
],
[
"Liu",
"Li",
""
],
[
"Zhou",
"Jie",
""
],
[
"Peng",
"Bowen",
""
],
[
"Song",
"Yafei",
""
],
[
"Xiong",
"Xuying",
""
],
[
"Yang",
"Wei",
""
],
[
"Liu",
"Tianpeng",
""
],
[
"Liu",
"Zhen",
""
],
[
"Li",
"Xiang",
""
]
]
| TITLE: ATRNet-STAR: A Large Dataset and Benchmark Towards Remote Sensing Object
Recognition in the Wild
ABSTRACT: The absence of publicly available, large-scale, high-quality datasets for
Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) has
significantly hindered the application of rapidly advancing deep learning
techniques, which hold huge potential to unlock new capabilities in this field.
This is primarily because collecting large volumes of diverse target samples
from SAR images is prohibitively expensive, largely due to privacy concerns,
the characteristics of microwave radar imagery perception, and the need for
specialized expertise in data annotation. Throughout the history of SAR ATR
research, there have been only a handful of small datasets, mainly covering
targets such as ships, airplanes, and buildings. There is only one vehicle
dataset, MSTAR, collected in the 1990s, which has been a valuable source for SAR
ATR. To fill this gap, this paper introduces a large-scale, new dataset named
ATRNet-STAR with 40 different vehicle categories collected under various
realistic imaging conditions and scenes. It marks a substantial advancement in
dataset scale and diversity, comprising over 190,000 well-annotated samples, 10
times larger than its predecessor, the famous MSTAR. Building such a large
dataset is a challenging task, and the data collection scheme will be detailed.
Secondly, we illustrate the value of ATRNet-STAR via extensively evaluating the
performance of 15 representative methods with 7 different experimental settings
on challenging classification and detection benchmarks derived from the
dataset. Finally, based on our extensive experiments, we identify valuable
insights for SAR ATR and discuss potential future research directions in this
field. We hope that the scale, diversity, and benchmark of ATRNet-STAR can
significantly facilitate the advancement of SAR ATR.
| new_dataset | 0.969928 |
2501.15187 | Zecheng Li | Zecheng Li, Wengang Zhou, Weichao Zhao, Kepeng Wu, Hezhen Hu, Houqiang
Li | Uni-Sign: Toward Unified Sign Language Understanding at Scale | Accepted by ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sign language pre-training has gained increasing attention for its ability to
enhance performance across various sign language understanding (SLU) tasks.
However, existing methods often suffer from a gap between pre-training and
fine-tuning, leading to suboptimal results. To address this, we propose
Uni-Sign, a unified pre-training framework that eliminates the gap between
pre-training and downstream SLU tasks through a large-scale generative
pre-training strategy and a novel fine-tuning paradigm. First, we introduce
CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985
hours of video paired with textual annotations, which enables effective
large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating
downstream tasks as a single sign language translation (SLT) task during
fine-tuning, ensuring seamless knowledge transfer between pre-training and
fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and
a score-aware sampling strategy to efficiently fuse pose and RGB information,
addressing keypoint inaccuracies and improving computational efficiency.
Extensive experiments across multiple SLU benchmarks demonstrate that Uni-Sign
achieves state-of-the-art performance across multiple downstream SLU tasks.
Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 11:51:23 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jan 2025 09:44:28 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 12:51:29 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Zecheng",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Zhao",
"Weichao",
""
],
[
"Wu",
"Kepeng",
""
],
[
"Hu",
"Hezhen",
""
],
[
"Li",
"Houqiang",
""
]
]
| TITLE: Uni-Sign: Toward Unified Sign Language Understanding at Scale
ABSTRACT: Sign language pre-training has gained increasing attention for its ability to
enhance performance across various sign language understanding (SLU) tasks.
However, existing methods often suffer from a gap between pre-training and
fine-tuning, leading to suboptimal results. To address this, we propose
Uni-Sign, a unified pre-training framework that eliminates the gap between
pre-training and downstream SLU tasks through a large-scale generative
pre-training strategy and a novel fine-tuning paradigm. First, we introduce
CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985
hours of video paired with textual annotations, which enables effective
large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating
downstream tasks as a single sign language translation (SLT) task during
fine-tuning, ensuring seamless knowledge transfer between pre-training and
fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and
a score-aware sampling strategy to efficiently fuse pose and RGB information,
addressing keypoint inaccuracies and improving computational efficiency.
Extensive experiments across multiple SLU benchmarks demonstrate that Uni-Sign
achieves state-of-the-art performance across multiple downstream SLU tasks.
Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
| new_dataset | 0.957833 |
2501.15374 | Melkamu Mersha | Melkamu Abay Mersha, Mesay Gemeda Yigezu, Jugal Kalita | Evaluating the Effectiveness of XAI Techniques for Encoder-Based
Language Models | null | 310(2025)113042 | 10.1016/j.knosys.2025.113042 | null | cs.CL cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The black-box nature of large language models (LLMs) necessitates the
development of eXplainable AI (XAI) techniques for transparency and
trustworthiness. However, evaluating these techniques remains a challenge. This
study presents a general evaluation framework using four key metrics:
Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. We
assess the effectiveness of six explainability techniques from five different
XAI categories: model simplification (LIME), perturbation-based methods (SHAP),
gradient-based approaches (InputXGradient, Grad-CAM), Layer-wise Relevance
Propagation (LRP), and attention mechanisms-based explainability methods
(Attention Mechanism Visualization, AMV) across five encoder-based language
models: TinyBERT, BERTbase, BERTlarge, XLM-R large, and DeBERTa-xlarge, using
the IMDB Movie Reviews and Tweet Sentiment Extraction (TSE) datasets. Our
findings show that the model simplification-based XAI method (LIME)
consistently outperforms the other techniques across multiple metrics and
models, excelling in HA (with a score of 0.9685 on DeBERTa-xlarge), robustness,
and consistency as the complexity of large language models increases. AMV
demonstrates the best Robustness, with scores as low as 0.0020. It also excels
in Consistency, achieving near-perfect scores of 0.9999 across all models.
Regarding Contrastivity, LRP performs the best, particularly on more complex
models, with scores up to 0.9371.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 03:08:34 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Mersha",
"Melkamu Abay",
""
],
[
"Yigezu",
"Mesay Gemeda",
""
],
[
"Kalita",
"Jugal",
""
]
]
| TITLE: Evaluating the Effectiveness of XAI Techniques for Encoder-Based
Language Models
ABSTRACT: The black-box nature of large language models (LLMs) necessitates the
development of eXplainable AI (XAI) techniques for transparency and
trustworthiness. However, evaluating these techniques remains a challenge. This
study presents a general evaluation framework using four key metrics:
Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. We
assess the effectiveness of six explainability techniques from five different
XAI categories: model simplification (LIME), perturbation-based methods (SHAP),
gradient-based approaches (InputXGradient, Grad-CAM), Layer-wise Relevance
Propagation (LRP), and attention mechanisms-based explainability methods
(Attention Mechanism Visualization, AMV) across five encoder-based language
models: TinyBERT, BERTbase, BERTlarge, XLM-R large, and DeBERTa-xlarge, using
the IMDB Movie Reviews and Tweet Sentiment Extraction (TSE) datasets. Our
findings show that the model simplification-based XAI method (LIME)
consistently outperforms the other techniques across multiple metrics and
models, excelling in HA (with a score of 0.9685 on DeBERTa-xlarge), robustness,
and consistency as the complexity of large language models increases. AMV
demonstrates the best Robustness, with scores as low as 0.0020. It also excels
in Consistency, achieving near-perfect scores of 0.9999 across all models.
Regarding Contrastivity, LRP performs the best, particularly on more complex
models, with scores up to 0.9371.
| no_new_dataset | 0.945901 |
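As an illustration of the kind of metric evaluated in the abstract above, the sketch below approximates Human-reasoning Agreement (HA) as the overlap between the top-k tokens ranked by an attribution method and a human-annotated rationale. The exact formulation used in the study may differ; treat this as an assumed simplification.

```python
def human_reasoning_agreement(attributions, human_rationale, k=5):
    """Assumed simplification of Human-reasoning Agreement (HA): the fraction
    of the top-k attributed tokens that appear in the human rationale.

    attributions: dict mapping token -> importance score (e.g. from LIME/SHAP)
    human_rationale: set of tokens a human marked as decisive
    """
    top_k = sorted(attributions, key=attributions.get, reverse=True)[:k]
    if not top_k:
        return 0.0
    return sum(tok in human_rationale for tok in top_k) / len(top_k)

if __name__ == "__main__":
    attr = {"terrible": 0.9, "acting": 0.7, "movie": 0.1, "was": 0.05, "the": 0.01}
    rationale = {"terrible", "acting"}
    print(human_reasoning_agreement(attr, rationale, k=3))  # ~0.667
```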
2501.17568 | Ehsan Aminian | Ehsan Aminian, Rita P. Ribeiro, Joao Gama | Histogram Approaches for Imbalanced Data Streams Regression | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Imbalanced domains pose a significant challenge in real-world predictive
analytics, particularly in the context of regression. While existing research
has primarily focused on batch learning from static datasets, limited attention
has been given to imbalanced regression in online learning scenarios. Intending
to address this gap, in prior work, we proposed sampling strategies based on
Chebyshev's inequality as the first methodologies designed explicitly for data
streams. However, these approaches operated under the restrictive assumption
that rare instances exclusively reside at distribution extremes. This study
introduces histogram-based sampling strategies to overcome this constraint,
proposing flexible solutions for imbalanced regression in evolving data
streams. The proposed techniques -- Histogram-based Undersampling (HistUS) and
Histogram-based Oversampling (HistOS) -- employ incremental online histograms
to dynamically detect and prioritize rare instances across arbitrary regions of
the target distribution to improve predictions in the rare cases. Comprehensive
experiments on synthetic and real-world benchmarks demonstrate that HistUS and
HistOS substantially improve rare-case prediction accuracy, outperforming
baseline models while maintaining competitiveness with Chebyshev-based
approaches.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2025 11:03:02 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 11:38:47 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Aminian",
"Ehsan",
""
],
[
"Ribeiro",
"Rita P.",
""
],
[
"Gama",
"Joao",
""
]
]
| TITLE: Histogram Approaches for Imbalanced Data Streams Regression
ABSTRACT: Imbalanced domains pose a significant challenge in real-world predictive
analytics, particularly in the context of regression. While existing research
has primarily focused on batch learning from static datasets, limited attention
has been given to imbalanced regression in online learning scenarios. Intending
to address this gap, in prior work, we proposed sampling strategies based on
Chebyshev's inequality as the first methodologies designed explicitly for data
streams. However, these approaches operated under the restrictive assumption
that rare instances exclusively reside at distribution extremes. This study
introduces histogram-based sampling strategies to overcome this constraint,
proposing flexible solutions for imbalanced regression in evolving data
streams. The proposed techniques -- Histogram-based Undersampling (HistUS) and
Histogram-based Oversampling (HistOS) -- employ incremental online histograms
to dynamically detect and prioritize rare instances across arbitrary regions of
the target distribution to improve predictions in the rare cases. Comprehensive
experiments on synthetic and real-world benchmarks demonstrate that HistUS and
HistOS substantially improve rare-case prediction accuracy, outperforming
baseline models while maintaining competitiveness with Chebyshev-based
approaches.
| no_new_dataset | 0.947381 |
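A minimal sketch of the histogram-based idea above: an online histogram over the streaming regression target flags examples that fall into low-frequency bins as rare, and an undersampler keeps all rare examples while dropping most common ones. The equal-width binning over a fixed range, the rarity threshold, and the keep probabilities are illustrative assumptions rather than the exact HistUS procedure.

```python
import random
from collections import Counter

class IncrementalHistogram:
    """Equal-width online histogram over a streaming regression target.
    A fixed target range is assumed here for simplicity."""

    def __init__(self, lo, hi, n_bins=20):
        self.lo, self.hi, self.n_bins = lo, hi, n_bins
        self.counts = Counter()
        self.total = 0

    def _bin(self, y):
        idx = int((y - self.lo) / (self.hi - self.lo) * self.n_bins)
        return min(max(idx, 0), self.n_bins - 1)

    def update(self, y):
        self.counts[self._bin(y)] += 1
        self.total += 1

    def rarity(self, y):
        """1.0 for never-seen bins, approaching 0 for crowded bins."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.counts[self._bin(y)] / self.total

def hist_undersample(stream, hist, keep_rare=1.0, keep_common=0.2, thresh=0.9):
    """Keep every rare example; keep common ones with low probability."""
    for x, y in stream:
        hist.update(y)
        p = keep_rare if hist.rarity(y) >= thresh else keep_common
        if random.random() < p:
            yield x, y

if __name__ == "__main__":
    random.seed(0)
    data = [((i,), random.gauss(50, 10)) for i in range(1000)]
    hist = IncrementalHistogram(lo=0, hi=100)
    kept = list(hist_undersample(iter(data), hist))
    print(len(kept), "of", len(data), "examples kept")
```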
2502.02307 | Jiawei Qin | Jiawei Qin, Xucong Zhang, Yusuke Sugano | UniGaze: Towards Universal Gaze Estimation via Large-scale Pre-Training | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite decades of research on data collection and model architectures,
current gaze estimation models encounter significant challenges in generalizing
across diverse data domains. Recent advances in self-supervised pre-training
have shown remarkable performances in generalization across various vision
tasks. However, their effectiveness in gaze estimation remains unexplored. We
propose UniGaze, for the first time, leveraging large-scale in-the-wild facial
datasets for gaze estimation through self-supervised pre-training. Through
systematic investigation, we clarify critical factors that are essential for
effective pretraining in gaze estimation. Our experiments reveal that
self-supervised approaches designed for semantic tasks fail when applied to
gaze estimation, while our carefully designed pre-training pipeline
consistently improves cross-domain performance. Through comprehensive
experiments of challenging cross-dataset evaluation and novel protocols
including leave-one-dataset-out and joint-dataset settings, we demonstrate that
UniGaze significantly improves generalization across multiple data domains
while minimizing reliance on costly labeled data. Source code and model are
available at https://github.com/ut-vision/UniGaze.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 13:24:23 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 15:59:03 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Qin",
"Jiawei",
""
],
[
"Zhang",
"Xucong",
""
],
[
"Sugano",
"Yusuke",
""
]
]
| TITLE: UniGaze: Towards Universal Gaze Estimation via Large-scale Pre-Training
ABSTRACT: Despite decades of research on data collection and model architectures,
current gaze estimation models encounter significant challenges in generalizing
across diverse data domains. Recent advances in self-supervised pre-training
have shown remarkable performances in generalization across various vision
tasks. However, their effectiveness in gaze estimation remains unexplored. We
propose UniGaze, for the first time, leveraging large-scale in-the-wild facial
datasets for gaze estimation through self-supervised pre-training. Through
systematic investigation, we clarify critical factors that are essential for
effective pretraining in gaze estimation. Our experiments reveal that
self-supervised approaches designed for semantic tasks fail when applied to
gaze estimation, while our carefully designed pre-training pipeline
consistently improves cross-domain performance. Through comprehensive
experiments of challenging cross-dataset evaluation and novel protocols
including leave-one-dataset-out and joint-dataset settings, we demonstrate that
UniGaze significantly improves generalization across multiple data domains
while minimizing reliance on costly labeled data. Source code and model are
available at https://github.com/ut-vision/UniGaze.
| no_new_dataset | 0.944944 |
2502.06432 | Huaqiu Li | Huaqiu Li, Wang Zhang, Xiaowan Hu, Tao Jiang, Zikang Chen, Haoqian
Wang | Prompt-SID: Learning Structural Representation Prompt via Latent
Diffusion for Single-Image Denoising | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Many studies have concentrated on constructing supervised models utilizing
paired datasets for image denoising, which proves to be expensive and
time-consuming. Current self-supervised and unsupervised approaches typically
rely on blind-spot networks or sub-image pairs sampling, resulting in pixel
information loss and destruction of detailed structural information, thereby
significantly constraining the efficacy of such methods. In this paper, we
introduce Prompt-SID, a prompt-learning-based single image denoising framework
that emphasizes preserving of structural details. This approach is trained in a
self-supervised manner using downsampled image pairs. It captures
original-scale image information through structural encoding and integrates
this prompt into the denoiser. To achieve this, we propose a structural
representation generation model based on the latent diffusion process and
design a structural attention module within the transformer-based denoiser
architecture to decode the prompt. Additionally, we introduce a scale replay
training mechanism, which effectively mitigates the scale gap from images of
different resolutions. We conduct comprehensive experiments on synthetic,
real-world, and fluorescence imaging datasets, showcasing the remarkable
effectiveness of Prompt-SID. Our code will be released at
https://github.com/huaqlili/Prompt-SID.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 13:09:47 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 12:49:20 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Huaqiu",
""
],
[
"Zhang",
"Wang",
""
],
[
"Hu",
"Xiaowan",
""
],
[
"Jiang",
"Tao",
""
],
[
"Chen",
"Zikang",
""
],
[
"Wang",
"Haoqian",
""
]
]
| TITLE: Prompt-SID: Learning Structural Representation Prompt via Latent
Diffusion for Single-Image Denoising
ABSTRACT: Many studies have concentrated on constructing supervised models utilizing
paired datasets for image denoising, which proves to be expensive and
time-consuming. Current self-supervised and unsupervised approaches typically
rely on blind-spot networks or sub-image pairs sampling, resulting in pixel
information loss and destruction of detailed structural information, thereby
significantly constraining the efficacy of such methods. In this paper, we
introduce Prompt-SID, a prompt-learning-based single image denoising framework
that emphasizes preserving of structural details. This approach is trained in a
self-supervised manner using downsampled image pairs. It captures
original-scale image information through structural encoding and integrates
this prompt into the denoiser. To achieve this, we propose a structural
representation generation model based on the latent diffusion process and
design a structural attention module within the transformer-based denoiser
architecture to decode the prompt. Additionally, we introduce a scale replay
training mechanism, which effectively mitigates the scale gap from images of
different resolutions. We conduct comprehensive experiments on synthetic,
real-world, and fluorescence imaging datasets, showcasing the remarkable
effectiveness of Prompt-SID. Our code will be released at
https://github.com/huaqlili/Prompt-SID.
| no_new_dataset | 0.946448 |
2502.08658 | Hao Lyu | Hao Lyu, Yanyong Guo, Pan Liu, Shuo Feng, Weilin Ren and Quansheng Yue | Knowledge-data fusion dominated vehicle platoon dynamics modeling and
analysis: A physics-encoded deep learning approach | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, artificial intelligence (AI)-enabled nonlinear vehicle platoon
dynamics modeling has been playing a crucial role in predicting and optimizing the
interactions between vehicles. Existing efforts lack the extraction and capture
of vehicle behavior interaction features at the platoon scale. More
importantly, maintaining high modeling accuracy without losing physical
analyzability remains to be solved. To this end, this paper proposes a novel
physics-encoded deep learning network, named PeMTFLN, to model the nonlinear
vehicle platoon dynamics. Specifically, an analyzable parameters encoded
computational graph (APeCG) is designed to guide the platoon to respond to the
driving behavior of the lead vehicle while ensuring local stability. Besides, a
multi-scale trajectory feature learning network (MTFLN) is constructed to
capture platoon following patterns and infer the physical parameters required
for APeCG from trajectory data. The human-driven vehicle trajectory datasets
(HIGHSIM) were used to train the proposed PeMTFLN. The trajectory prediction
experiments show that PeMTFLN is superior to the baseline models in terms of
predictive accuracy for speed and gap. The stability analysis shows that the
physical parameters in APeCG are able to reproduce platoon stability under
real-world conditions. In simulation experiments, PeMTFLN achieves low
inference error in platoon trajectory generation. Moreover, PeMTFLN also
accurately reproduces ground-truth safety statistics. The code of the proposed
PeMTFLN is open source.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2025 05:10:46 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 13:42:00 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Lyu",
"Hao",
""
],
[
"Guo",
"Yanyong",
""
],
[
"Liu",
"Pan",
""
],
[
"Feng",
"Shuo",
""
],
[
"Ren",
"Weilin",
""
],
[
"Yue",
"Quansheng",
""
]
]
| TITLE: Knowledge-data fusion dominated vehicle platoon dynamics modeling and
analysis: A physics-encoded deep learning approach
ABSTRACT: Recently, artificial intelligence (AI)-enabled nonlinear vehicle platoon
dynamics modeling has been playing a crucial role in predicting and optimizing the
interactions between vehicles. Existing efforts lack the extraction and capture
of vehicle behavior interaction features at the platoon scale. More
importantly, maintaining high modeling accuracy without losing physical
analyzability remains to be solved. To this end, this paper proposes a novel
physics-encoded deep learning network, named PeMTFLN, to model the nonlinear
vehicle platoon dynamics. Specifically, an analyzable parameters encoded
computational graph (APeCG) is designed to guide the platoon to respond to the
driving behavior of the lead vehicle while ensuring local stability. Besides, a
multi-scale trajectory feature learning network (MTFLN) is constructed to
capture platoon following patterns and infer the physical parameters required
for APeCG from trajectory data. The human-driven vehicle trajectory datasets
(HIGHSIM) were used to train the proposed PeMTFLN. The trajectory prediction
experiments show that PeMTFLN is superior to the baseline models in terms of
predictive accuracy for speed and gap. The stability analysis shows that the
physical parameters in APeCG are able to reproduce platoon stability under
real-world conditions. In simulation experiments, PeMTFLN achieves low
inference error in platoon trajectory generation. Moreover, PeMTFLN also
accurately reproduces ground-truth safety statistics. The code of the proposed
PeMTFLN is open source.
| no_new_dataset | 0.948202 |
2502.10236 | Luca Scimeca | Thomas Jiralerspong, Berton Earnshaw, Jason Hartford, Yoshua Bengio,
Luca Scimeca | Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise
Control | Published as workshop paper at DeLTa and FPI workshops, ICLR 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diffusion Probabilistic Models (DPMs) are powerful generative models that
have achieved unparalleled success in a number of generative tasks. In this
work, we aim to build inductive biases into the training and sampling of
diffusion models to better accommodate the target distribution of the data to
model. For topologically structured data, we devise a frequency-based noising
operator to purposefully manipulate, and set, these inductive biases. We first
show that appropriate manipulations of the noising forward process can lead
DPMs to focus on particular aspects of the distribution to learn. We show that
different datasets necessitate different inductive biases, and that appropriate
frequency-based noise control induces increased generative performance compared
to standard diffusion. Finally, we demonstrate the possibility of ignoring
information at particular frequencies while learning. We show this in an image
corruption and recovery task, where we train a DPM to recover the original
target distribution after severe noise corruption.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 15:46:37 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 18:40:15 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Jiralerspong",
"Thomas",
""
],
[
"Earnshaw",
"Berton",
""
],
[
"Hartford",
"Jason",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Scimeca",
"Luca",
""
]
]
| TITLE: Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise
Control
ABSTRACT: Diffusion Probabilistic Models (DPMs) are powerful generative models that
have achieved unparalleled success in a number of generative tasks. In this
work, we aim to build inductive biases into the training and sampling of
diffusion models to better accommodate the target distribution of the data to
model. For topologically structured data, we devise a frequency-based noising
operator to purposefully manipulate, and set, these inductive biases. We first
show that appropriate manipulations of the noising forward process can lead
DPMs to focus on particular aspects of the distribution to learn. We show that
different datasets necessitate different inductive biases, and that appropriate
frequency-based noise control induces increased generative performance compared
to standard diffusion. Finally, we demonstrate the possibility of ignoring
information at particular frequencies while learning. We show this in an image
corruption and recovery task, where we train a DPM to recover the original
target distribution after severe noise corruption.
| no_new_dataset | 0.949856 |
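A minimal sketch of a frequency-based noising operator in the spirit of the abstract above: white Gaussian noise is filtered in the Fourier domain with a radial mask before being used in a DDPM forward step, so corruption is concentrated in chosen frequency bands. The radial mask shape, cutoff parameterisation, and re-normalisation are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def frequency_filtered_noise(shape, cutoff=0.25, low_pass=True, rng=None):
    """Gaussian noise restricted to low (or high) spatial frequencies.

    shape: (H, W); cutoff is a fraction of the normalised radial frequency
    (an assumed parameterisation of the mask)."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(shape)
    spectrum = np.fft.fftshift(np.fft.fft2(noise))

    H, W = shape
    yy, xx = np.mgrid[-H // 2:H - H // 2, -W // 2:W - W // 2]
    radius = np.sqrt((yy / (H / 2)) ** 2 + (xx / (W / 2)) ** 2)
    mask = radius <= cutoff if low_pass else radius > cutoff

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    return filtered / (filtered.std() + 1e-8)  # re-normalise to unit variance

def noised_sample(x0, t, alphas_cumprod, **kwargs):
    """Standard DDPM forward step, but with frequency-shaped noise."""
    eps = frequency_filtered_noise(x0.shape, **kwargs)
    return np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1 - alphas_cumprod[t]) * eps

if __name__ == "__main__":
    betas = np.linspace(1e-4, 0.02, 1000)
    alphas_cumprod = np.cumprod(1.0 - betas)
    xt = noised_sample(np.zeros((64, 64)), t=500,
                       alphas_cumprod=alphas_cumprod, cutoff=0.2)
    print(xt.shape, round(float(xt.std()), 3))
```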
2502.12029 | Qi Zhao | Qi Zhao, Hongyu Yang, Qi Song, Xinwei Yao, Xiangyang Li | KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths
over Knowledge Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated remarkable capabilities in
various complex tasks, yet they still suffer from hallucinations. Introducing
external knowledge, such as knowledge graphs, can enhance the LLMs' ability to
provide factual answers. LLMs have the ability to interactively explore
knowledge graphs. However, most approaches have been affected by insufficient
internal knowledge excavation in LLMs, limited generation of trustworthy
knowledge reasoning paths, and a vague integration between internal and
external knowledge. Therefore, we propose KnowPath, a knowledge-enhanced large
model framework driven by the collaboration of internal and external knowledge.
It relies on the internal knowledge of the LLM to guide the exploration of
interpretable directed subgraphs in external knowledge graphs, better
integrating the two knowledge sources for more accurate reasoning. Extensive
experiments on multiple real-world datasets confirm the superiority of
KnowPath.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 17:02:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 13:22:46 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zhao",
"Qi",
""
],
[
"Yang",
"Hongyu",
""
],
[
"Song",
"Qi",
""
],
[
"Yao",
"Xinwei",
""
],
[
"Li",
"Xiangyang",
""
]
]
| TITLE: KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths
over Knowledge Graphs
ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities in
various complex tasks, yet they still suffer from hallucinations. Introducing
external knowledge, such as knowledge graphs, can enhance the LLMs' ability to
provide factual answers. LLMs have the ability to interactively explore
knowledge graphs. However, most approaches have been affected by insufficient
internal knowledge excavation in LLMs, limited generation of trustworthy
knowledge reasoning paths, and a vague integration between internal and
external knowledge. Therefore, we propose KnowPath, a knowledge-enhanced large
model framework driven by the collaboration of internal and external knowledge.
It relies on the internal knowledge of the LLM to guide the exploration of
interpretable directed subgraphs in external knowledge graphs, better
integrating the two knowledge sources for more accurate reasoning. Extensive
experiments on multiple real-world datasets confirm the superiority of
KnowPath.
| no_new_dataset | 0.945601 |
2502.14936 | Amirmohammad Chegeni | Samira Rezaei, Amirmohammad Chegeni, Bharath Chowdhary Nagam, J. P.
McKean, Mitra Baratchi, Koen Kuijken, L\'eon V. E. Koopmans | Reducing false positives in strong lens detection through effective
augmentation and ensemble learning | 15 pages, 14 figures, 7 tables, Accepted for publication in MNRAS | Monthly Notices of the Royal Astronomical Society, Volume 538,
Issue 2, April 2025, Pages 1081-1095 | 10.1093/mnras/staf327 | null | astro-ph.IM astro-ph.CO astro-ph.GA cs.CV | http://creativecommons.org/licenses/by/4.0/ | This research studies the impact of high-quality training datasets on the
performance of Convolutional Neural Networks (CNNs) in detecting strong
gravitational lenses. We stress the importance of data diversity and
representativeness, demonstrating how variations in sample populations
influence CNN performance. In addition to the quality of training data, our
results highlight the effectiveness of various techniques, such as data
augmentation and ensemble learning, in reducing false positives while
maintaining model completeness at an acceptable level. This enhances the
robustness of gravitational lens detection models and advances capabilities in
this field. Our experiments, employing variations of DenseNet and EfficientNet,
achieved a best false positive rate (FP rate) of $10^{-4}$, while successfully
identifying over 88 per cent of genuine gravitational lenses in the test
dataset. This represents an 11-fold reduction in the FP rate compared to the
original training dataset. Notably, this substantial enhancement in the FP rate
is accompanied by only a 2.3 per cent decrease in the number of true positive
samples. Validated on the KiDS dataset, our findings offer insights applicable
to ongoing missions, like Euclid.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 11:50:56 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Rezaei",
"Samira",
""
],
[
"Chegeni",
"Amirmohammad",
""
],
[
"Nagam",
"Bharath Chowdhary",
""
],
[
"McKean",
"J. P.",
""
],
[
"Baratchi",
"Mitra",
""
],
[
"Kuijken",
"Koen",
""
],
[
"Koopmans",
"Léon V. E.",
""
]
]
| TITLE: Reducing false positives in strong lens detection through effective
augmentation and ensemble learning
ABSTRACT: This research studies the impact of high-quality training datasets on the
performance of Convolutional Neural Networks (CNNs) in detecting strong
gravitational lenses. We stress the importance of data diversity and
representativeness, demonstrating how variations in sample populations
influence CNN performance. In addition to the quality of training data, our
results highlight the effectiveness of various techniques, such as data
augmentation and ensemble learning, in reducing false positives while
maintaining model completeness at an acceptable level. This enhances the
robustness of gravitational lens detection models and advances capabilities in
this field. Our experiments, employing variations of DenseNet and EfficientNet,
achieved a best false positive rate (FP rate) of $10^{-4}$, while successfully
identifying over 88 per cent of genuine gravitational lenses in the test
dataset. This represents an 11-fold reduction in the FP rate compared to the
original training dataset. Notably, this substantial enhancement in the FP rate
is accompanied by only a 2.3 per cent decrease in the number of true positive
samples. Validated on the KiDS dataset, our findings offer insights applicable
to ongoing missions, like Euclid.
| no_new_dataset | 0.949201 |
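The ensemble step described above can be sketched as averaging per-candidate probabilities from several CNNs and applying a strict decision threshold to suppress false positives; the mean combiner and the threshold value below are illustrative assumptions.

```python
import numpy as np

def ensemble_lens_scores(member_probs):
    """member_probs: array [n_models, n_candidates] of per-model lens
    probabilities. A simple mean combiner is assumed; weighted or
    rank-based schemes are also possible."""
    return np.asarray(member_probs).mean(axis=0)

def select_candidates(member_probs, threshold=0.99):
    """Raising the threshold on the ensembled score trades a small loss in
    completeness for a large reduction in the false-positive rate."""
    scores = ensemble_lens_scores(member_probs)
    return np.where(scores >= threshold)[0], scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    probs = rng.uniform(size=(5, 10))          # 5 models, 10 candidate cutouts
    keep, scores = select_candidates(probs, threshold=0.8)
    print(keep, np.round(scores, 2))
```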
2502.19339 | Tohida Rehman Ms. | Tohida Rehman, Soumabha Ghosh, Kuntal Das, Souvik Bhattacharjee,
Debarshi Kumar Sanyal, Samiran Chattopadhyay | Evaluating LLMs and Pre-trained Models for Text Summarization Across
Diverse Datasets | 5 pages, 2 figures, 6 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Text summarization plays a crucial role in natural language processing by
condensing large volumes of text into concise and coherent summaries. As
digital content continues to grow rapidly and the demand for effective
information retrieval increases, text summarization has become a focal point of
research in recent years. This study offers a thorough evaluation of four
leading pre-trained and open-source large language models: BART, FLAN-T5,
LLaMA-3-8B, and Gemma-7B, across five diverse datasets: CNN/DM, Gigaword, News
Summary, XSum, and BBC News. The evaluation employs widely recognized automatic
metrics, including ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, and METEOR, to assess
the models' capabilities in generating coherent and informative summaries. The
results reveal the comparative strengths and limitations of these models in
processing various text types.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 17:32:07 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 09:40:42 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Rehman",
"Tohida",
""
],
[
"Ghosh",
"Soumabha",
""
],
[
"Das",
"Kuntal",
""
],
[
"Bhattacharjee",
"Souvik",
""
],
[
"Sanyal",
"Debarshi Kumar",
""
],
[
"Chattopadhyay",
"Samiran",
""
]
]
| TITLE: Evaluating LLMs and Pre-trained Models for Text Summarization Across
Diverse Datasets
ABSTRACT: Text summarization plays a crucial role in natural language processing by
condensing large volumes of text into concise and coherent summaries. As
digital content continues to grow rapidly and the demand for effective
information retrieval increases, text summarization has become a focal point of
research in recent years. This study offers a thorough evaluation of four
leading pre-trained and open-source large language models: BART, FLAN-T5,
LLaMA-3-8B, and Gemma-7B, across five diverse datasets: CNN/DM, Gigaword, News
Summary, XSum, and BBC News. The evaluation employs widely recognized automatic
metrics, including ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, and METEOR, to assess
the models' capabilities in generating coherent and informative summaries. The
results reveal the comparative strengths and limitations of these models in
processing various text types.
| no_new_dataset | 0.945801 |
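The automatic evaluation described above can be reproduced with off-the-shelf libraries. The sketch below assumes the `rouge-score` package is installed and reports average ROUGE-1/2/L F1; BERTScore and METEOR would follow the same pattern with their respective packages.

```python
# Assumes: pip install rouge-score
from rouge_score import rouge_scorer

def rouge_report(references, predictions):
    """Average ROUGE-1/2/L F1 over aligned lists of references and predictions."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=True)
    totals = {k: 0.0 for k in ("rouge1", "rouge2", "rougeL")}
    for ref, pred in zip(references, predictions):
        scores = scorer.score(ref, pred)
        for k in totals:
            totals[k] += scores[k].fmeasure
    n = max(len(predictions), 1)
    return {k: v / n for k, v in totals.items()}

if __name__ == "__main__":
    refs = ["the cat sat on the mat"]
    preds = ["a cat was sitting on the mat"]
    print(rouge_report(refs, preds))
```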
2502.19638 | Harsh Gupta | Harsh Gupta, Yuchen Mo, Shengmiao Jin, Wenzhen Yuan | Sensor-Invariant Tactile Representation | Accepted to ICLR'25. Project webpage: https://hgupt3.github.io/sitr/ | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-resolution tactile sensors have become critical for embodied perception
and robotic manipulation. However, a key challenge in the field is the lack of
transferability between sensors due to design and manufacturing variations,
which result in significant differences in tactile signals. This limitation
hinders the ability to transfer models or knowledge learned from one sensor to
another. To address this, we introduce a novel method for extracting
Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer
across optical tactile sensors. Our approach utilizes a transformer-based
architecture trained on a diverse dataset of simulated sensor designs, allowing
it to generalize to new sensors in the real world with minimal calibration.
Experimental results demonstrate the method's effectiveness across various
tactile sensing applications, facilitating data and model transferability for
future advancements in the field.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 00:12:50 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 01:45:38 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Gupta",
"Harsh",
""
],
[
"Mo",
"Yuchen",
""
],
[
"Jin",
"Shengmiao",
""
],
[
"Yuan",
"Wenzhen",
""
]
]
| TITLE: Sensor-Invariant Tactile Representation
ABSTRACT: High-resolution tactile sensors have become critical for embodied perception
and robotic manipulation. However, a key challenge in the field is the lack of
transferability between sensors due to design and manufacturing variations,
which result in significant differences in tactile signals. This limitation
hinders the ability to transfer models or knowledge learned from one sensor to
another. To address this, we introduce a novel method for extracting
Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer
across optical tactile sensors. Our approach utilizes a transformer-based
architecture trained on a diverse dataset of simulated sensor designs, allowing
it to generalize to new sensors in the real world with minimal calibration.
Experimental results demonstrate the method's effectiveness across various
tactile sensing applications, facilitating data and model transferability for
future advancements in the field.
| no_new_dataset | 0.954478 |
2503.00746 | Liao Shen | Liao Shen, Tianqi Liu, Huiqiang Sun, Jiaqi Li, Zhiguo Cao, Wei Li,
Chen Change Loy | DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in 3D Gaussian Splatting (3D-GS) have shown remarkable
success in representing 3D scenes and generating high-quality, novel views in
real-time. However, 3D-GS and its variants assume that input images are
captured based on pinhole imaging and are fully in focus. This assumption
limits their applicability, as real-world images often feature shallow
depth-of-field (DoF). In this paper, we introduce DoF-Gaussian, a controllable
depth-of-field method for 3D-GS. We develop a lens-based imaging model based on
geometric optics principles to control DoF effects. To ensure accurate scene
geometry, we incorporate depth priors adjusted per scene, and we apply
defocus-to-focus adaptation to minimize the gap in the circle of confusion. We
also introduce a synthetic dataset to assess refocusing capabilities and the
model's ability to learn precise lens parameters. Our framework is customizable
and supports various interactive applications. Extensive experiments confirm
the effectiveness of our method. Our project is available at
https://dof-gaussian.github.io.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 05:57:57 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 12:26:41 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Mar 2025 07:26:01 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Shen",
"Liao",
""
],
[
"Liu",
"Tianqi",
""
],
[
"Sun",
"Huiqiang",
""
],
[
"Li",
"Jiaqi",
""
],
[
"Cao",
"Zhiguo",
""
],
[
"Li",
"Wei",
""
],
[
"Loy",
"Chen Change",
""
]
]
| TITLE: DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting
ABSTRACT: Recent advances in 3D Gaussian Splatting (3D-GS) have shown remarkable
success in representing 3D scenes and generating high-quality, novel views in
real-time. However, 3D-GS and its variants assume that input images are
captured based on pinhole imaging and are fully in focus. This assumption
limits their applicability, as real-world images often feature shallow
depth-of-field (DoF). In this paper, we introduce DoF-Gaussian, a controllable
depth-of-field method for 3D-GS. We develop a lens-based imaging model based on
geometric optics principles to control DoF effects. To ensure accurate scene
geometry, we incorporate depth priors adjusted per scene, and we apply
defocus-to-focus adaptation to minimize the gap in the circle of confusion. We
also introduce a synthetic dataset to assess refocusing capabilities and the
model's ability to learn precise lens parameters. Our framework is customizable
and supports various interactive applications. Extensive experiments confirm
the effectiveness of our method. Our project is available at
https://dof-gaussian.github.io.
| new_dataset | 0.959535 |
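The lens-based imaging model above rests on the thin-lens circle-of-confusion relation from geometric optics. The sketch below computes the CoC diameter for a given focal length, f-number, and focus distance, which is the per-depth quantity a defocus-aware renderer needs; it is textbook optics, not the paper's exact parameterisation.

```python
import numpy as np

def circle_of_confusion(focal_length, f_number, focus_dist, subject_dist):
    """Thin-lens circle-of-confusion diameter (all lengths in the same unit).

    focal_length : lens focal length f
    f_number     : aperture number N (aperture diameter = f / N)
    focus_dist   : distance S1 the lens is focused at
    subject_dist : distance S2 of the point being imaged
    """
    f, N, S1 = focal_length, f_number, focus_dist
    S2 = np.asarray(subject_dist, dtype=float)
    aperture = f / N
    return aperture * np.abs(S2 - S1) / S2 * f / (S1 - f)

if __name__ == "__main__":
    # 50 mm lens at f/1.8 focused at 2 m; blur diameter (mm) for depths 0.5-10 m.
    depths_mm = np.linspace(0.5, 10.0, 5) * 1000.0
    print(np.round(circle_of_confusion(50.0, 1.8, 2000.0, depths_mm), 3))
```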
2503.02702 | Chenyu Li | Chenyu Li, Zhengjia Zhu, Jiyan He, Xiu Zhang | RedChronos: A Large Language Model-Based Log Analysis System for Insider
Threat Detection in Enterprises | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Internal threat detection (IDT) aims to address security threats within
organizations or enterprises by identifying potential or already occurring
malicious threats within vast amounts of logs. Although organizations or
enterprises have dedicated personnel responsible for reviewing these logs, it
is impossible to manually examine all logs in their entirety. In response to the vast
number of logs, we propose a system called RedChronos, which is a Large
Language Model-Based Log Analysis System. This system incorporates innovative
improvements over previous research by employing Query-Aware Weighted Voting
and a Semantic Expansion-based Genetic Algorithm with LLM-driven Mutations. On
the public datasets CERT 4.2 and 5.2, RedChronos outperforms or matches
existing approaches in terms of accuracy, precision, and detection rate.
Moreover, RedChronos reduces the need for manual intervention in security log
reviews by approximately 90% in the Xiaohongshu Security Operation Center.
Therefore, our RedChronos system demonstrates exceptional performance in
handling IDT tasks, providing innovative solutions for these challenges. We
believe that future research can continue to enhance the system's performance
in IDT tasks while also reducing the response time to internal risk events.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:18:40 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 11:47:44 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Chenyu",
""
],
[
"Zhu",
"Zhengjia",
""
],
[
"He",
"Jiyan",
""
],
[
"Zhang",
"Xiu",
""
]
]
| TITLE: RedChronos: A Large Language Model-Based Log Analysis System for Insider
Threat Detection in Enterprises
ABSTRACT: Internal threat detection (IDT) aims to address security threats within
organizations or enterprises by identifying potential or already occurring
malicious threats within vast amounts of logs. Although organizations or
enterprises have dedicated personnel responsible for reviewing these logs, it
is impossible to manually examine all logs in their entirety. In response to the vast
number of logs, we propose a system called RedChronos, which is a Large
Language Model-Based Log Analysis System. This system incorporates innovative
improvements over previous research by employing Query-Aware Weighted Voting
and a Semantic Expansion-based Genetic Algorithm with LLM-driven Mutations. On
the public datasets CERT 4.2 and 5.2, RedChronos outperforms or matches
existing approaches in terms of accuracy, precision, and detection rate.
Moreover, RedChronos reduces the need for manual intervention in security log
reviews by approximately 90% in the Xiaohongshu Security Operation Center.
Therefore, our RedChronos system demonstrates exceptional performance in
handling IDT tasks, providing innovative solutions for these challenges. We
believe that future research can continue to enhance the system's performance
in IDT tasks while also reducing the response time to internal risk events.
| no_new_dataset | 0.945045 |
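Query-Aware Weighted Voting is described above only at a high level; one plausible reading is that several LLM judgments are combined with weights derived from each judgment's relevance to the query. The sketch below implements that reading with cosine similarity over placeholder embeddings; the embedding source, weighting rule, and decision threshold are assumptions, not RedChronos internals.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def query_aware_weighted_vote(query_vec, judgments, threshold=0.5):
    """judgments: list of (embedding, label) pairs, where label is 1 for
    'malicious' and 0 for 'benign', and embedding represents the evidence the
    judgment was based on. Weights are assumed to be the (clipped) cosine
    similarity between that evidence and the query."""
    weights = np.array([max(cosine(query_vec, emb), 0.0) for emb, _ in judgments])
    labels = np.array([lab for _, lab in judgments], dtype=float)
    if weights.sum() == 0:
        return bool(labels.mean() >= threshold)
    return bool(np.dot(weights, labels) / weights.sum() >= threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.standard_normal(8)
    votes = [(q + 0.1 * rng.standard_normal(8), 1),  # highly relevant, malicious
             (rng.standard_normal(8), 0),            # weakly relevant, benign
             (q + 0.2 * rng.standard_normal(8), 1)]
    print(query_aware_weighted_vote(q, votes))
```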
2503.04305 | Dilek K\"u\c{c}\"uk | Dilek K\"u\c{c}\"uk and Fazli Can | Computational Law: Datasets, Benchmarks, and Ontologies | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in computer science and artificial intelligence have also
contributed to the legal domain, as revealed by the number and range of related
publications and applications. Machine and deep learning models require
considerable amount of domain-specific data for training and comparison
purposes, in order to attain high-performance in the legal domain.
Additionally, semantic resources such as ontologies are valuable for building
large-scale computational legal systems, in addition to ensuring
interoperability of such systems. Considering these aspects, we present an
up-to-date review of the literature on datasets, benchmarks, and ontologies
proposed for computational law. We believe that this comprehensive and recent
review will help researchers and practitioners when developing and testing
approaches and systems for computational law.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 10:46:15 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 08:04:09 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Küçük",
"Dilek",
""
],
[
"Can",
"Fazli",
""
]
]
| TITLE: Computational Law: Datasets, Benchmarks, and Ontologies
ABSTRACT: Recent developments in computer science and artificial intelligence have also
contributed to the legal domain, as revealed by the number and range of related
publications and applications. Machine and deep learning models require a
considerable amount of domain-specific data for training and comparison
purposes in order to attain high performance in the legal domain.
Additionally, semantic resources such as ontologies are valuable for building
large-scale computational legal systems, in addition to ensuring
interoperability of such systems. Considering these aspects, we present an
up-to-date review of the literature on datasets, benchmarks, and ontologies
proposed for computational law. We believe that this comprehensive and recent
review will help researchers and practitioners when developing and testing
approaches and systems for computational law.
| no_new_dataset | 0.945701 |
2503.04385 | Yihao Huang | Yihao Huang, Xin Luo, Qing Guo, Felix Juefei-Xu, Xiaojun Jia, Weikai
Miao, Geguang Pu, Yang Liu | Scale-Invariant Adversarial Attack against Arbitrary-scale
Super-resolution | 17 pages, accepted by TIFS 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of local continuous image function (LIIF) has garnered significant
attention for arbitrary-scale super-resolution (SR) techniques. However, while
the vulnerabilities of fixed-scale SR have been assessed, the robustness of
continuous representation-based arbitrary-scale SR against adversarial attacks
remains an area warranting further exploration. The elaborately designed
adversarial attacks for fixed-scale SR are scale-dependent, which leads to
excessive time and memory consumption when applied to arbitrary-scale
SR. To address this concern, we propose a simple yet effective
``scale-invariant'' SR adversarial attack method with good transferability,
termed SIAGT. Specifically, we propose to construct resource-saving attacks by
exploiting finite discrete points of continuous representation. In addition, we
formulate a coordinate-dependent loss to enhance the cross-model
transferability of the attack. The attack can significantly deteriorate the SR
images while introducing imperceptible distortion to the targeted
low-resolution (LR) images. Experiments carried out on three popular LIIF-based
SR approaches and four classical SR datasets show remarkable attack performance
and transferability of SIAGT.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 12:36:35 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 17:42:24 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Huang",
"Yihao",
""
],
[
"Luo",
"Xin",
""
],
[
"Guo",
"Qing",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Jia",
"Xiaojun",
""
],
[
"Miao",
"Weikai",
""
],
[
"Pu",
"Geguang",
""
],
[
"Liu",
"Yang",
""
]
]
| TITLE: Scale-Invariant Adversarial Attack against Arbitrary-scale
Super-resolution
ABSTRACT: The advent of local continuous image function (LIIF) has garnered significant
attention for arbitrary-scale super-resolution (SR) techniques. However, while
the vulnerabilities of fixed-scale SR have been assessed, the robustness of
continuous representation-based arbitrary-scale SR against adversarial attacks
remains an area warranting further exploration. The elaborately designed
adversarial attacks for fixed-scale SR are scale-dependent, which leads to
excessive time and memory consumption when applied to arbitrary-scale
SR. To address this concern, we propose a simple yet effective
``scale-invariant'' SR adversarial attack method with good transferability,
termed SIAGT. Specifically, we propose to construct resource-saving attacks by
exploiting finite discrete points of continuous representation. In addition, we
formulate a coordinate-dependent loss to enhance the cross-model
transferability of the attack. The attack can significantly deteriorate the SR
images while introducing imperceptible distortion to the targeted
low-resolution (LR) images. Experiments carried out on three popular LIIF-based
SR approaches and four classical SR datasets show remarkable attack performance
and transferability of SIAGT.
| no_new_dataset | 0.944842 |
2503.04823 | Yuheng Kuang | Yuheng Kuang, Zhengning Wang, Jianping Zhang, Zhenyu Shi, Yuding Zhang | DA-STGCN: 4D Trajectory Prediction Based on Spatiotemporal Feature
Extraction | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The importance of four-dimensional (4D) trajectory prediction within air
traffic management systems is on the rise. Key operations such as conflict
detection and resolution, aircraft anomaly monitoring, and the management of
congested flight paths are increasingly reliant on this foundational
technology, underscoring the urgent demand for intelligent solutions. The
dynamics in airport terminal zones and crowded airspaces are intricate and
ever-changing; however, current methodologies do not sufficiently account for
the interactions among aircraft. To tackle these challenges, we propose
DA-STGCN, an innovative spatiotemporal graph convolutional network that
integrates a dual attention mechanism. Our model reconstructs the adjacency
matrix through a self-attention approach, enhancing the capture of node
correlations, and employs graph attention to distill spatiotemporal
characteristics, thereby generating a probabilistic distribution of predicted
trajectories. This novel adjacency matrix, reconstructed with the
self-attention mechanism, is dynamically optimized throughout the network's
training process, offering a more nuanced reflection of the inter-node
relationships compared to traditional algorithms. The performance of the model
is validated on two ADS-B datasets, one near the airport terminal area and the
other in dense airspace. Experimental results demonstrate a notable improvement
over current 4D trajectory prediction methods, achieving a 20% and 30%
reduction in the Average Displacement Error (ADE) and Final Displacement Error
(FDE), respectively. The incorporation of a Dual-Attention module has been
shown to significantly enhance the extraction of node correlations, as verified
by ablation experiments.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 03:42:49 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 03:39:44 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Kuang",
"Yuheng",
""
],
[
"Wang",
"Zhengning",
""
],
[
"Zhang",
"Jianping",
""
],
[
"Shi",
"Zhenyu",
""
],
[
"Zhang",
"Yuding",
""
]
]
| TITLE: DA-STGCN: 4D Trajectory Prediction Based on Spatiotemporal Feature
Extraction
ABSTRACT: The importance of four-dimensional (4D) trajectory prediction within air
traffic management systems is on the rise. Key operations such as conflict
detection and resolution, aircraft anomaly monitoring, and the management of
congested flight paths are increasingly reliant on this foundational
technology, underscoring the urgent demand for intelligent solutions. The
dynamics in airport terminal zones and crowded airspaces are intricate and
ever-changing; however, current methodologies do not sufficiently account for
the interactions among aircraft. To tackle these challenges, we propose
DA-STGCN, an innovative spatiotemporal graph convolutional network that
integrates a dual attention mechanism. Our model reconstructs the adjacency
matrix through a self-attention approach, enhancing the capture of node
correlations, and employs graph attention to distill spatiotemporal
characteristics, thereby generating a probabilistic distribution of predicted
trajectories. This novel adjacency matrix, reconstructed with the
self-attention mechanism, is dynamically optimized throughout the network's
training process, offering a more nuanced reflection of the inter-node
relationships compared to traditional algorithms. The performance of the model
is validated on two ADS-B datasets, one near the airport terminal area and the
other in dense airspace. Experimental results demonstrate a notable improvement
over current 4D trajectory prediction methods, achieving a 20% and 30%
reduction in the Average Displacement Error (ADE) and Final Displacement Error
(FDE), respectively. The incorporation of a Dual-Attention module has been
shown to significantly enhance the extraction of node correlations, as verified
by ablation experiments.
| no_new_dataset | 0.945951 |
2503.06426 | Zihao Peng | Zihao Peng, Xijun Wang, Shengbo Chen, Hong Rao, Cong Shen | Federated Learning for Diffusion Models | null | null | 10.1109/TCCN.2025.3550359 | null | cs.LG cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion models are powerful generative models that can produce highly
realistic samples for various tasks. Typically, these models are constructed
using centralized, independently and identically distributed (IID) training
data. However, in practical scenarios, data is often distributed across
multiple clients and frequently manifests non-IID characteristics. Federated
Learning (FL) can leverage this distributed data to train diffusion models, but
the performance of existing FL methods is unsatisfactory in non-IID scenarios.
To address this, we propose FedDDPM-Federated Learning with Denoising Diffusion
Probabilistic Models, which leverages the data generative capability of
diffusion models to facilitate model training. In particular, the server uses
well-trained local diffusion models uploaded by each client before FL training
to generate auxiliary data that can approximately represent the global data
distribution. Following each round of model aggregation, the server further
optimizes the global model using the auxiliary dataset to alleviate the impact
of heterogeneous data on model performance. We provide a rigorous convergence
analysis of FedDDPM and propose an enhanced algorithm, FedDDPM+, to reduce
training overheads. FedDDPM+ detects instances of slow model learning and
performs a one-shot correction using the auxiliary dataset. Experimental
results validate that our proposed algorithms outperform the state-of-the-art
FL algorithms on the MNIST, CIFAR10 and CIFAR100 datasets.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 03:41:10 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Peng",
"Zihao",
""
],
[
"Wang",
"Xijun",
""
],
[
"Chen",
"Shengbo",
""
],
[
"Rao",
"Hong",
""
],
[
"Shen",
"Cong",
""
]
]
| TITLE: Federated Learning for Diffusion Models
ABSTRACT: Diffusion models are powerful generative models that can produce highly
realistic samples for various tasks. Typically, these models are constructed
using centralized, independently and identically distributed (IID) training
data. However, in practical scenarios, data is often distributed across
multiple clients and frequently manifests non-IID characteristics. Federated
Learning (FL) can leverage this distributed data to train diffusion models, but
the performance of existing FL methods is unsatisfactory in non-IID scenarios.
To address this, we propose FedDDPM-Federated Learning with Denoising Diffusion
Probabilistic Models, which leverages the data generative capability of
diffusion models to facilitate model training. In particular, the server uses
well-trained local diffusion models uploaded by each client before FL training
to generate auxiliary data that can approximately represent the global data
distribution. Following each round of model aggregation, the server further
optimizes the global model using the auxiliary dataset to alleviate the impact
of heterogeneous data on model performance. We provide a rigorous convergence
analysis of FedDDPM and propose an enhanced algorithm, FedDDPM+, to reduce
training overheads. FedDDPM+ detects instances of slow model learning and
performs a one-shot correction using the auxiliary dataset. Experimental
results validate that our proposed algorithms outperform the state-of-the-art
FL algorithms on the MNIST, CIFAR10 and CIFAR100 datasets.
| no_new_dataset | 0.944434 |
2503.06571 | Xuan-May Le | Xuan-May Le, Ling Luo, Uwe Aickelin, Minh-Tuan Tran, David Berlowitz,
Mark Howard | SHIP: A Shapelet-based Approach for Interpretable Patient-Ventilator
Asynchrony Detection | Accepted at PAKDD 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Patient-ventilator asynchrony (PVA) is a common and critical issue during
mechanical ventilation, affecting up to 85% of patients. PVA can result in
clinical complications such as discomfort, sleep disruption, and potentially
more severe conditions like ventilator-induced lung injury and diaphragm
dysfunction. Traditional PVA management, which relies on manual adjustments by
healthcare providers, is often inadequate due to delays and errors. While
various computational methods, including rule-based, statistical, and deep
learning approaches, have been developed to detect PVA events, they face
challenges related to dataset imbalances and lack of interpretability. In this
work, we propose a shapelet-based approach SHIP for PVA detection, utilizing
shapelets - discriminative subsequences in time-series data - to enhance
detection accuracy and interpretability. Our method addresses dataset
imbalances through shapelet-based data augmentation and constructs a shapelet
pool to transform the dataset for more effective classification. The combined
shapelet and statistical features are then used in a classifier to identify PVA
events. Experimental results on medical datasets show that SHIP significantly
improves PVA detection while providing interpretable insights into model
decisions.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 11:58:03 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 02:01:30 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Le",
"Xuan-May",
""
],
[
"Luo",
"Ling",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Tran",
"Minh-Tuan",
""
],
[
"Berlowitz",
"David",
""
],
[
"Howard",
"Mark",
""
]
]
| TITLE: SHIP: A Shapelet-based Approach for Interpretable Patient-Ventilator
Asynchrony Detection
ABSTRACT: Patient-ventilator asynchrony (PVA) is a common and critical issue during
mechanical ventilation, affecting up to 85% of patients. PVA can result in
clinical complications such as discomfort, sleep disruption, and potentially
more severe conditions like ventilator-induced lung injury and diaphragm
dysfunction. Traditional PVA management, which relies on manual adjustments by
healthcare providers, is often inadequate due to delays and errors. While
various computational methods, including rule-based, statistical, and deep
learning approaches, have been developed to detect PVA events, they face
challenges related to dataset imbalances and lack of interpretability. In this
work, we propose a shapelet-based approach SHIP for PVA detection, utilizing
shapelets - discriminative subsequences in time-series data - to enhance
detection accuracy and interpretability. Our method addresses dataset
imbalances through shapelet-based data augmentation and constructs a shapelet
pool to transform the dataset for more effective classification. The combined
shapelet and statistical features are then used in a classifier to identify PVA
events. Experimental results on medical datasets show that SHIP significantly
improves PVA detection while providing interpretable insights into model
decisions.
| no_new_dataset | 0.949059 |
2503.06669 | Qingwen Bu | AgiBot-World-Contributors, Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui,
Yan Ding, Siyuan Feng, Shenyuan Gao, Xindong He, Xu Huang, Shu Jiang, Yuxin
Jiang, Cheng Jing, Hongyang Li, Jialu Li, Chiming Liu, Yi Liu, Yuxiang Lu,
Jianlan Luo, Ping Luo, Yao Mu, Yuehan Niu, Yixuan Pan, Jiangmiao Pang, Yu
Qiao, Guanghui Ren, Cheng Ruan, Jiaqi Shan, Yongjian Shen, Chengshi Shi,
Mingkang Shi, Modi Shi, Chonghao Sima, Jianheng Song, Huijie Wang, Wenhao
Wang, Dafeng Wei, Chengen Xie, Guo Xu, Junchi Yan, Cunbiao Yang, Lei Yang,
Shukai Yang, Maoqing Yao, Jia Zeng, Chi Zhang, Qinglin Zhang, Bin Zhao,
Chengyue Zhao, Jiaqi Zhao, Jianchao Zhu | AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable
and Intelligent Embodied Systems | Project website: https://agibot-world.com/. Github repo:
https://github.com/OpenDriveLab/AgiBot-World. The author list is ordered
alphabetically by surname, with detailed contributions provided in the
appendix | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We explore how scalable robot data can address real-world challenges for
generalized robotic manipulation. Introducing AgiBot World, a large-scale
platform comprising over 1 million trajectories across 217 tasks in five
deployment scenarios, we achieve an order-of-magnitude increase in data scale
compared to existing datasets. Accelerated by a standardized collection
pipeline with human-in-the-loop verification, AgiBot World guarantees
high-quality and diverse data distribution. It is extensible from grippers to
dexterous hands and visuo-tactile sensors for fine-grained skill acquisition.
Building on top of data, we introduce Genie Operator-1 (GO-1), a novel
generalist policy that leverages latent action representations to maximize data
utilization, demonstrating predictable performance scaling with increased data
volume. Policies pre-trained on our dataset achieve an average performance
improvement of 30% over those trained on Open X-Embodiment, both in in-domain
and out-of-distribution scenarios. GO-1 exhibits exceptional capability in
real-world dexterous and long-horizon tasks, achieving over 60% success rate on
complex tasks and outperforming prior RDT approach by 32%. By open-sourcing the
dataset, tools, and models, we aim to democratize access to large-scale,
high-quality robot data, advancing the pursuit of scalable and general-purpose
intelligence.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 15:40:29 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:59:16 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"AgiBot-World-Contributors",
"",
""
],
[
"Bu",
"Qingwen",
""
],
[
"Cai",
"Jisong",
""
],
[
"Chen",
"Li",
""
],
[
"Cui",
"Xiuqi",
""
],
[
"Ding",
"Yan",
""
],
[
"Feng",
"Siyuan",
""
],
[
"Gao",
"Shenyuan",
""
],
[
"He",
"Xindong",
""
],
[
"Huang",
"Xu",
""
],
[
"Jiang",
"Shu",
""
],
[
"Jiang",
"Yuxin",
""
],
[
"Jing",
"Cheng",
""
],
[
"Li",
"Hongyang",
""
],
[
"Li",
"Jialu",
""
],
[
"Liu",
"Chiming",
""
],
[
"Liu",
"Yi",
""
],
[
"Lu",
"Yuxiang",
""
],
[
"Luo",
"Jianlan",
""
],
[
"Luo",
"Ping",
""
],
[
"Mu",
"Yao",
""
],
[
"Niu",
"Yuehan",
""
],
[
"Pan",
"Yixuan",
""
],
[
"Pang",
"Jiangmiao",
""
],
[
"Qiao",
"Yu",
""
],
[
"Ren",
"Guanghui",
""
],
[
"Ruan",
"Cheng",
""
],
[
"Shan",
"Jiaqi",
""
],
[
"Shen",
"Yongjian",
""
],
[
"Shi",
"Chengshi",
""
],
[
"Shi",
"Mingkang",
""
],
[
"Shi",
"Modi",
""
],
[
"Sima",
"Chonghao",
""
],
[
"Song",
"Jianheng",
""
],
[
"Wang",
"Huijie",
""
],
[
"Wang",
"Wenhao",
""
],
[
"Wei",
"Dafeng",
""
],
[
"Xie",
"Chengen",
""
],
[
"Xu",
"Guo",
""
],
[
"Yan",
"Junchi",
""
],
[
"Yang",
"Cunbiao",
""
],
[
"Yang",
"Lei",
""
],
[
"Yang",
"Shukai",
""
],
[
"Yao",
"Maoqing",
""
],
[
"Zeng",
"Jia",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhang",
"Qinglin",
""
],
[
"Zhao",
"Bin",
""
],
[
"Zhao",
"Chengyue",
""
],
[
"Zhao",
"Jiaqi",
""
],
[
"Zhu",
"Jianchao",
""
]
]
| TITLE: AgiBot World Colosseo: A Large-scale Manipulation Platform for Scalable
and Intelligent Embodied Systems
ABSTRACT: We explore how scalable robot data can address real-world challenges for
generalized robotic manipulation. Introducing AgiBot World, a large-scale
platform comprising over 1 million trajectories across 217 tasks in five
deployment scenarios, we achieve an order-of-magnitude increase in data scale
compared to existing datasets. Accelerated by a standardized collection
pipeline with human-in-the-loop verification, AgiBot World guarantees
high-quality and diverse data distribution. It is extensible from grippers to
dexterous hands and visuo-tactile sensors for fine-grained skill acquisition.
Building on top of data, we introduce Genie Operator-1 (GO-1), a novel
generalist policy that leverages latent action representations to maximize data
utilization, demonstrating predictable performance scaling with increased data
volume. Policies pre-trained on our dataset achieve an average performance
improvement of 30% over those trained on Open X-Embodiment, both in in-domain
and out-of-distribution scenarios. GO-1 exhibits exceptional capability in
real-world dexterous and long-horizon tasks, achieving over 60% success rate on
complex tasks and outperforming prior RDT approach by 32%. By open-sourcing the
dataset, tools, and models, we aim to democratize access to large-scale,
high-quality robot data, advancing the pursuit of scalable and general-purpose
intelligence.
| no_new_dataset | 0.950595 |
2503.06692 | Yuchen Yan | Yuchen Yan, Yongliang Shen, Yang Liu, Jin Jiang, Mengdi Zhang, Jian
Shao, Yueting Zhuang | InftyThink: Breaking the Length Limits of Long-Context Reasoning in
Large Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Advanced reasoning in large language models has achieved remarkable
performance on challenging tasks, but the prevailing long-context reasoning
paradigm faces critical limitations: quadratic computational scaling with
sequence length, reasoning constrained by maximum context boundaries, and
performance degradation beyond pre-training context windows. Existing
approaches primarily compress reasoning chains without addressing the
fundamental scaling problem. To overcome these challenges, we introduce
InftyThink, a paradigm that transforms monolithic reasoning into an iterative
process with intermediate summarization. By interleaving short reasoning
segments with concise progress summaries, our approach enables unbounded
reasoning depth while maintaining bounded computational costs. This creates a
characteristic sawtooth memory pattern that significantly reduces computational
complexity compared to traditional approaches. Furthermore, we develop a
methodology for reconstructing long-context reasoning datasets into our
iterative format, transforming OpenR1-Math into 333K training instances.
Experiments across multiple model architectures demonstrate that our approach
reduces computational costs while improving performance, with Qwen2.5-Math-7B
showing 3-13% improvements across MATH500, AIME24, and GPQA_diamond benchmarks.
Our work challenges the assumed trade-off between reasoning depth and
computational efficiency, providing a more scalable approach to complex
reasoning without architectural modifications.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 16:59:14 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 16:00:47 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yan",
"Yuchen",
""
],
[
"Shen",
"Yongliang",
""
],
[
"Liu",
"Yang",
""
],
[
"Jiang",
"Jin",
""
],
[
"Zhang",
"Mengdi",
""
],
[
"Shao",
"Jian",
""
],
[
"Zhuang",
"Yueting",
""
]
]
| TITLE: InftyThink: Breaking the Length Limits of Long-Context Reasoning in
Large Language Models
ABSTRACT: Advanced reasoning in large language models has achieved remarkable
performance on challenging tasks, but the prevailing long-context reasoning
paradigm faces critical limitations: quadratic computational scaling with
sequence length, reasoning constrained by maximum context boundaries, and
performance degradation beyond pre-training context windows. Existing
approaches primarily compress reasoning chains without addressing the
fundamental scaling problem. To overcome these challenges, we introduce
InftyThink, a paradigm that transforms monolithic reasoning into an iterative
process with intermediate summarization. By interleaving short reasoning
segments with concise progress summaries, our approach enables unbounded
reasoning depth while maintaining bounded computational costs. This creates a
characteristic sawtooth memory pattern that significantly reduces computational
complexity compared to traditional approaches. Furthermore, we develop a
methodology for reconstructing long-context reasoning datasets into our
iterative format, transforming OpenR1-Math into 333K training instances.
Experiments across multiple model architectures demonstrate that our approach
reduces computational costs while improving performance, with Qwen2.5-Math-7B
showing 3-13% improvements across MATH500, AIME24, and GPQA_diamond benchmarks.
Our work challenges the assumed trade-off between reasoning depth and
computational efficiency, providing a more scalable approach to complex
reasoning without architectural modifications.
| no_new_dataset | 0.940572 |
2503.06743 | Cheng Huang | Cheng Huang and Weizheng Xie and Tsengdar J. Lee and Jui-Kai Wang and
Karanjit Kooner and Jia Zhang | X-GAN: A Generative AI-Powered Unsupervised Model for High-Precision
Segmentation of Retinal Main Vessels toward Early Detection of Glaucoma | 11 pages, 8 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Structural changes in main retinal blood vessels serve as critical biomarkers
for the onset and progression of glaucoma. Identifying these vessels is vital
for vascular modeling yet highly challenging. This paper proposes X-GAN, a
generative AI-powered unsupervised segmentation model designed for extracting
main blood vessels from Optical Coherence Tomography Angiography (OCTA) images.
The process begins with the Space Colonization Algorithm (SCA) to rapidly
generate a skeleton of vessels, featuring their radii. By synergistically
integrating generative adversarial networks (GANs) with biostatistical modeling
of vessel radii, X-GAN enables a fast reconstruction of both 2D and 3D
representations of the vessels. Based on this reconstruction, X-GAN achieves
nearly 100\% segmentation accuracy without relying on labeled data or
high-performance computing resources. Also, to address the issue of data scarcity,
we introduce GSS-RetVein, a high-definition mixed 2D and 3D glaucoma retinal
dataset. GSS-RetVein provides a rigorous benchmark due to its exceptionally
clear capillary structures, introducing controlled noise for testing model
robustness. Its 2D images feature sharp capillary boundaries, while its 3D
component enhances vascular reconstruction and blood flow prediction,
supporting glaucoma progression simulations. Experimental results confirm
GSS-RetVein's superiority in evaluating main vessel segmentation compared to
existing datasets. Code and dataset are here:
https://github.com/VikiXie/SatMar8.
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2025 19:56:36 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 20:23:00 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Huang",
"Cheng",
""
],
[
"Xie",
"Weizheng",
""
],
[
"Lee",
"Tsengdar J.",
""
],
[
"Wang",
"Jui-Kai",
""
],
[
"Kooner",
"Karanjit",
""
],
[
"Zhang",
"Jia",
""
]
]
| TITLE: X-GAN: A Generative AI-Powered Unsupervised Model for High-Precision
Segmentation of Retinal Main Vessels toward Early Detection of Glaucoma
ABSTRACT: Structural changes in main retinal blood vessels serve as critical biomarkers
for the onset and progression of glaucoma. Identifying these vessels is vital
for vascular modeling yet highly challenging. This paper proposes X-GAN, a
generative AI-powered unsupervised segmentation model designed for extracting
main blood vessels from Optical Coherence Tomography Angiography (OCTA) images.
The process begins with the Space Colonization Algorithm (SCA) to rapidly
generate a skeleton of vessels, featuring their radii. By synergistically
integrating generative adversarial networks (GANs) with biostatistical modeling
of vessel radii, X-GAN enables a fast reconstruction of both 2D and 3D
representations of the vessels. Based on this reconstruction, X-GAN achieves
nearly 100\% segmentation accuracy without relying on labeled data or
high-performance computing resources. Also, to address the issue of data scarcity,
we introduce GSS-RetVein, a high-definition mixed 2D and 3D glaucoma retinal
dataset. GSS-RetVein provides a rigorous benchmark due to its exceptionally
clear capillary structures, introducing controlled noise for testing model
robustness. Its 2D images feature sharp capillary boundaries, while its 3D
component enhances vascular reconstruction and blood flow prediction,
supporting glaucoma progression simulations. Experimental results confirm
GSS-RetVein's superiority in evaluating main vessel segmentation compared to
existing datasets. Code and dataset are here:
https://github.com/VikiXie/SatMar8.
| no_new_dataset | 0.934035 |
2503.07384 | Gonzalo Mancera | Gonzalo Mancera, Daniel DeAlcala, Julian Fierrez, Ruben Tolosana,
Aythami Morales | Is My Text in Your AI Model? Gradient-based Membership Inference Test
applied to LLMs | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work adapts and studies the gradient-based Membership Inference Test
(gMINT) to the classification of text based on LLMs. MINT is a general approach
intended to determine if given data was used for training machine learning
models, and this work focuses on its application to the domain of Natural
Language Processing. Using gradient-based analysis, the MINT model identifies
whether particular data samples were included during the language model
training phase, addressing growing concerns about data privacy in machine
learning. The method was evaluated in seven Transformer-based models and six
datasets comprising over 2.5 million sentences, focusing on text classification
tasks. Experimental results demonstrate MINT's robustness, achieving AUC scores
between 85% and 99%, depending on data size and model architecture. These
findings highlight MINT's potential as a scalable and reliable tool for auditing
machine learning models, ensuring transparency, safeguarding sensitive data,
and fostering ethical compliance in the deployment of AI/NLP technologies.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:32:56 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 12:37:37 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Mancera",
"Gonzalo",
""
],
[
"DeAlcala",
"Daniel",
""
],
[
"Fierrez",
"Julian",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Morales",
"Aythami",
""
]
]
| TITLE: Is My Text in Your AI Model? Gradient-based Membership Inference Test
applied to LLMs
ABSTRACT: This work adapts and studies the gradient-based Membership Inference Test
(gMINT) to the classification of text based on LLMs. MINT is a general approach
intended to determine if given data was used for training machine learning
models, and this work focuses on its application to the domain of Natural
Language Processing. Using gradient-based analysis, the MINT model identifies
whether particular data samples were included during the language model
training phase, addressing growing concerns about data privacy in machine
learning. The method was evaluated in seven Transformer-based models and six
datasets comprising over 2.5 million sentences, focusing on text classification
tasks. Experimental results demonstrate MINT's robustness, achieving AUC scores
between 85% and 99%, depending on data size and model architecture. These
findings highlight MINT's potential as a scalable and reliable tool for auditing
machine learning models, ensuring transparency, safeguarding sensitive data,
and fostering ethical compliance in the deployment of AI/NLP technologies.
| no_new_dataset | 0.947866 |
2503.07933 | Yirui Wang | Qinji Yu, Yirui Wang, Ke Yan, Dandan Zheng, Dashan Ai, Dazhou Guo,
Zhanghexuan Ji, Yanzhou Su, Yun Bian, Na Shen, Xiaowei Ding, Le Lu, Xianghua
Ye, Dakai Jin | From Slices to Sequences: Autoregressive Tracking Transformer for
Cohesive and Consistent 3D Lymph Node Detection in CT Scans | Technical report (11 pages plus supplementary) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lymph node (LN) assessment is an essential task in the routine radiology
workflow, providing valuable insights for cancer staging, treatment planning
and beyond. Identifying scatteredly-distributed and low-contrast LNs in 3D CT
scans is highly challenging, even for experienced clinicians. Previous lesion
and LN detection methods demonstrate effectiveness of 2.5D approaches (i.e,
using 2D network with multi-slice inputs), leveraging pretrained 2D model
weights and showing improved accuracy as compared to separate 2D or 3D
detectors. However, slice-based 2.5D detectors do not explicitly model
inter-slice consistency for LN as a 3D object, requiring heuristic post-merging
steps to generate final 3D LN instances, which can involve tuning a set of
parameters for each dataset. In this work, we formulate 3D LN detection as a
tracking task and propose LN-Tracker, a novel LN tracking transformer, for
joint end-to-end detection and 3D instance association. Built upon DETR-based
detector, LN-Tracker decouples transformer decoder's query into the track and
detection groups, where the track query autoregressively follows previously
tracked LN instances along the z-axis of a CT scan. We design a new transformer
decoder with masked attention module to align track query's content to the
context of current slice, meanwhile preserving detection query's high accuracy
in current slice. An inter-slice similarity loss is introduced to encourage
cohesive LN association between slices. Extensive evaluation on four lymph node
datasets shows LN-Tracker's superior performance, with at least 2.7% gain in
average sensitivity when compared to other top 3D/2.5D detectors. Further
validation on public lung nodule and prostate tumor detection tasks confirms
the generalizability of LN-Tracker as it achieves top performance on both
tasks.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 00:22:05 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 00:01:12 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yu",
"Qinji",
""
],
[
"Wang",
"Yirui",
""
],
[
"Yan",
"Ke",
""
],
[
"Zheng",
"Dandan",
""
],
[
"Ai",
"Dashan",
""
],
[
"Guo",
"Dazhou",
""
],
[
"Ji",
"Zhanghexuan",
""
],
[
"Su",
"Yanzhou",
""
],
[
"Bian",
"Yun",
""
],
[
"Shen",
"Na",
""
],
[
"Ding",
"Xiaowei",
""
],
[
"Lu",
"Le",
""
],
[
"Ye",
"Xianghua",
""
],
[
"Jin",
"Dakai",
""
]
]
| TITLE: From Slices to Sequences: Autoregressive Tracking Transformer for
Cohesive and Consistent 3D Lymph Node Detection in CT Scans
ABSTRACT: Lymph node (LN) assessment is an essential task in the routine radiology
workflow, providing valuable insights for cancer staging, treatment planning
and beyond. Identifying scatteredly-distributed and low-contrast LNs in 3D CT
scans is highly challenging, even for experienced clinicians. Previous lesion
and LN detection methods demonstrate effectiveness of 2.5D approaches (i.e,
using 2D network with multi-slice inputs), leveraging pretrained 2D model
weights and showing improved accuracy as compared to separate 2D or 3D
detectors. However, slice-based 2.5D detectors do not explicitly model
inter-slice consistency for LN as a 3D object, requiring heuristic post-merging
steps to generate final 3D LN instances, which can involve tuning a set of
parameters for each dataset. In this work, we formulate 3D LN detection as a
tracking task and propose LN-Tracker, a novel LN tracking transformer, for
joint end-to-end detection and 3D instance association. Built upon DETR-based
detector, LN-Tracker decouples transformer decoder's query into the track and
detection groups, where the track query autoregressively follows previously
tracked LN instances along the z-axis of a CT scan. We design a new transformer
decoder with masked attention module to align track query's content to the
context of current slice, meanwhile preserving detection query's high accuracy
in current slice. An inter-slice similarity loss is introduced to encourage
cohesive LN association between slices. Extensive evaluation on four lymph node
datasets shows LN-Tracker's superior performance, with at least 2.7% gain in
average sensitivity when compared to other top 3D/2.5D detectors. Further
validation on public lung nodule and prostate tumor detection tasks confirms
the generalizability of LN-Tracker as it achieves top performance on both
tasks.
| no_new_dataset | 0.946695 |
2503.08048 | Sanghyuk Chun | Sanghyuk Chun and Sangdoo Yun | LongProLIP: A Probabilistic Vision-Language Model with Long Context Text | Accepted as a tiny paper at the 1st workshop of "Quantify Uncertainty
and Hallucination in Foundation Models: The Next Frontier in Reliable AI" at
ICLR 2025; code: https://github.com/naver-ai/prolip; models:
https://huggingface.co/collections/SanghyukChun/prolip-6712595dfc87fd8597350291 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Probabilistic Language-Image Pre-Training (ProLIP) has been
proposed to tackle the multiplicity issue of vision-language (VL) tasks.
Despite their success in probabilistic representation learning at a scale, the
ProLIP models cannot handle long context texts longer than 64 context length,
which limits their ability to capture rich contextual information from longer
text sequences. To address this issue, this paper proposes a fine-tuning
strategy for ProLIP to accept longer texts, e.g., 256 text tokens. Experimental
results on Urban-1k and the DataComp evaluation suite show that the proposed
LongProLIP recipe can improve understanding of long contexts while minimizing
the negative effect of fine-tuning. We also observe a trade-off between the long
context understanding (measured by Urban-1k) and general zero-shot capability
(measured by evaluation datasets by DataComp). Code is available at
https://github.com/naver-ai/prolip
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:04:43 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:05:04 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Chun",
"Sanghyuk",
""
],
[
"Yun",
"Sangdoo",
""
]
]
| TITLE: LongProLIP: A Probabilistic Vision-Language Model with Long Context Text
ABSTRACT: Recently, Probabilistic Language-Image Pre-Training (ProLIP) has been
proposed to tackle the multiplicity issue of vision-language (VL) tasks.
Despite their success in probabilistic representation learning at a scale, the
ProLIP models cannot handle long context texts longer than 64 context length,
which limits their ability to capture rich contextual information from longer
text sequences. To address this issue, this paper proposes a fine-tuning
strategy for ProLIP to accept longer texts, e.g., 256 text tokens. Experimental
results on Urban-1k and the DataComp evaluation suite show that the proposed
LongProLIP recipe can improve understanding of long contexts while minimizing
the negative effect of fine-tuning. We also observe a trade-off between the long
context understanding (measured by Urban-1k) and general zero-shot capability
(measured by evaluation datasets by DataComp). Code is available at
https://github.com/naver-ai/prolip
| no_new_dataset | 0.948917 |
2503.08061 | DongHeun Han | DongHeun Han, Byungmin Kim, RoUn Lee, KyeongMin Kim, Hyoseok Hwang,
HyeongYeop Kang | ForceGrip: Data-Free Curriculum Learning for Realistic Grip Force
Control in VR Hand Manipulation | 19 pages, 10 figs (with appendix). Demo Video:
https://youtu.be/lR-YAfninJw | null | null | null | cs.RO cs.GR cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Realistic hand manipulation is a key component of immersive virtual reality
(VR), yet existing methods often rely on a kinematic approach or motion-capture
datasets that omit crucial physical attributes such as contact forces and
finger torques. Consequently, these approaches prioritize tight,
one-size-fits-all grips rather than reflecting users' intended force levels. We
present ForceGrip, a deep learning agent that synthesizes realistic hand
manipulation motions, faithfully reflecting the user's grip force intention.
Instead of mimicking predefined motion datasets, ForceGrip uses generated
training scenarios-randomizing object shapes, wrist movements, and trigger
input flows-to challenge the agent with a broad spectrum of physical
interactions. To effectively learn from these complex tasks, we employ a
three-phase curriculum learning framework comprising Finger Positioning,
Intention Adaptation, and Dynamic Stabilization. This progressive strategy
ensures stable hand-object contact, adaptive force control based on user
inputs, and robust handling under dynamic conditions. Additionally, a proximity
reward function enhances natural finger motions and accelerates training
convergence. Quantitative and qualitative evaluations reveal ForceGrip's
superior force controllability and plausibility compared to state-of-the-art
methods. The video presentation of our paper is accessible at
https://youtu.be/lR-YAfninJw.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 05:39:07 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:35:25 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Han",
"DongHeun",
""
],
[
"Kim",
"Byungmin",
""
],
[
"Lee",
"RoUn",
""
],
[
"Kim",
"KyeongMin",
""
],
[
"Hwang",
"Hyoseok",
""
],
[
"Kang",
"HyeongYeop",
""
]
]
| TITLE: ForceGrip: Data-Free Curriculum Learning for Realistic Grip Force
Control in VR Hand Manipulation
ABSTRACT: Realistic hand manipulation is a key component of immersive virtual reality
(VR), yet existing methods often rely on a kinematic approach or motion-capture
datasets that omit crucial physical attributes such as contact forces and
finger torques. Consequently, these approaches prioritize tight,
one-size-fits-all grips rather than reflecting users' intended force levels. We
present ForceGrip, a deep learning agent that synthesizes realistic hand
manipulation motions, faithfully reflecting the user's grip force intention.
Instead of mimicking predefined motion datasets, ForceGrip uses generated
training scenarios-randomizing object shapes, wrist movements, and trigger
input flows-to challenge the agent with a broad spectrum of physical
interactions. To effectively learn from these complex tasks, we employ a
three-phase curriculum learning framework comprising Finger Positioning,
Intention Adaptation, and Dynamic Stabilization. This progressive strategy
ensures stable hand-object contact, adaptive force control based on user
inputs, and robust handling under dynamic conditions. Additionally, a proximity
reward function enhances natural finger motions and accelerates training
convergence. Quantitative and qualitative evaluations reveal ForceGrip's
superior force controllability and plausibility compared to state-of-the-art
methods. The video presentation of our paper is accessible at
https://youtu.be/lR-YAfninJw.
| no_new_dataset | 0.952175 |
2503.08421 | Qiming Xia | Qiming Xia, Wenkai Lin, Haoen Xiang, Xun Huang, Siheng Chen, Zhen
Dong, Cheng Wang, Chenglu Wen | Learning to Detect Objects from Multi-Agent LiDAR Scans without Manual
Labels | 11 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised 3D object detection serves as an important solution for offline
3D object annotation. However, due to the data sparsity and limited views, the
clustering-based label fitting in unsupervised object detection often generates
low-quality pseudo-labels. Multi-agent collaborative dataset, which involves
the sharing of complementary observations among agents, holds the potential to
break through this bottleneck. In this paper, we introduce a novel unsupervised
method that learns to Detect Objects from Multi-Agent LiDAR scans, termed DOtA,
without using labels from external sources. DOtA first uses the internally shared
ego-pose and ego-shape of collaborative agents to initialize the detector,
leveraging the generalization performance of neural networks to infer
preliminary labels. Subsequently,DOtA uses the complementary observations
between agents to perform multi-scale encoding on preliminary labels, then
decodes high-quality and low-quality labels. These labels are further used as
prompts to guide a correct feature learning process, thereby enhancing the
performance of the unsupervised object detection task. Extensive experiments on
the V2V4Real and OPV2V datasets show that our DOtA outperforms state-of-the-art
unsupervised 3D object detection methods. Additionally, we also validate the
effectiveness of the DOtA labels under various collaborative perception
frameworks. The code is available at https://github.com/xmuqimingxia/DOtA.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:34:35 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 01:41:04 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Xia",
"Qiming",
""
],
[
"Lin",
"Wenkai",
""
],
[
"Xiang",
"Haoen",
""
],
[
"Huang",
"Xun",
""
],
[
"Chen",
"Siheng",
""
],
[
"Dong",
"Zhen",
""
],
[
"Wang",
"Cheng",
""
],
[
"Wen",
"Chenglu",
""
]
]
| TITLE: Learning to Detect Objects from Multi-Agent LiDAR Scans without Manual
Labels
ABSTRACT: Unsupervised 3D object detection serves as an important solution for offline
3D object annotation. However, due to the data sparsity and limited views, the
clustering-based label fitting in unsupervised object detection often generates
low-quality pseudo-labels. Multi-agent collaborative dataset, which involves
the sharing of complementary observations among agents, holds the potential to
break through this bottleneck. In this paper, we introduce a novel unsupervised
method that learns to Detect Objects from Multi-Agent LiDAR scans, termed DOtA,
without using labels from external sources. DOtA first uses the internally shared
ego-pose and ego-shape of collaborative agents to initialize the detector,
leveraging the generalization performance of neural networks to infer
preliminary labels. Subsequently,DOtA uses the complementary observations
between agents to perform multi-scale encoding on preliminary labels, then
decodes high-quality and low-quality labels. These labels are further used as
prompts to guide a correct feature learning process, thereby enhancing the
performance of the unsupervised object detection task. Extensive experiments on
the V2V4Real and OPV2V datasets show that our DOtA outperforms state-of-the-art
unsupervised 3D object detection methods. Additionally, we also validate the
effectiveness of the DOtA labels under various collaborative perception
frameworks. The code is available at https://github.com/xmuqimingxia/DOtA.
| no_new_dataset | 0.943712 |
2503.08422 | Runjian Chen | Runjian Chen, Wenqi Shao, Bo Zhang, Shaoshuai Shi, Li Jiang, Ping Luo | JiSAM: Alleviate Labeling Burden and Corner Case Problems in Autonomous
Driving via Minimal Real-World Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep-learning-based autonomous driving (AD) perception introduces a promising
picture for safe and environment-friendly transportation. However, the
over-reliance on real labeled data in LiDAR perception limits the scale of
on-road attempts. 3D real world data is notoriously time-and-energy-consuming
to annotate and lacks corner cases like rare traffic participants. On the
contrary, in simulators like CARLA, generating labeled LiDAR point clouds with
corner cases is a piece of cake. However, introducing synthetic point clouds to
improve real perception is non-trivial. This stems from two challenges: 1)
sample efficiency of simulation datasets 2) simulation-to-real gaps. To
overcome both challenges, we propose a plug-and-play method called JiSAM ,
shorthand for Jittering augmentation, domain-aware backbone and memory-based
Sectorized AlignMent. In extensive experiments conducted on the famous AD
dataset NuScenes, we demonstrate that, with SOTA 3D object detector, JiSAM is
able to utilize the simulation data and only labels on 2.5% available real data
to achieve comparable performance to models trained on all real data.
Additionally, JiSAM achieves more than 15 mAPs on the objects not labeled in
the real training set. We will release models and codes.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 13:35:39 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:54:11 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Chen",
"Runjian",
""
],
[
"Shao",
"Wenqi",
""
],
[
"Zhang",
"Bo",
""
],
[
"Shi",
"Shaoshuai",
""
],
[
"Jiang",
"Li",
""
],
[
"Luo",
"Ping",
""
]
]
| TITLE: JiSAM: Alleviate Labeling Burden and Corner Case Problems in Autonomous
Driving via Minimal Real-World Data
ABSTRACT: Deep-learning-based autonomous driving (AD) perception introduces a promising
picture for safe and environment-friendly transportation. However, the
over-reliance on real labeled data in LiDAR perception limits the scale of
on-road attempts. 3D real world data is notoriously time-and-energy-consuming
to annotate and lacks corner cases like rare traffic participants. On the
contrary, in simulators like CARLA, generating labeled LiDAR point clouds with
corner cases is a piece of cake. However, introducing synthetic point clouds to
improve real perception is non-trivial. This stems from two challenges: 1)
sample efficiency of simulation datasets and 2) simulation-to-real gaps. To
overcome both challenges, we propose a plug-and-play method called JiSAM ,
shorthand for Jittering augmentation, domain-aware backbone and memory-based
Sectorized AlignMent. In extensive experiments conducted on the famous AD
dataset NuScenes, we demonstrate that, with a SOTA 3D object detector, JiSAM is
able to utilize the simulation data and only labels on 2.5% of the available real data
to achieve comparable performance to models trained on all real data.
Additionally, JiSAM achieves more than 15 mAPs on the objects not labeled in
the real training set. We will release models and codes.
| no_new_dataset | 0.949153 |
2503.08481 | Weijie Zhou | Weijie Zhou, Manli Tao, Chaoyang Zhao, Haiyun Guo, Honghui Dong, Ming
Tang, Jinqiao Wang | PhysVLM: Enabling Visual Language Models to Understand Robotic Physical
Reachability | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding the environment and a robot's physical reachability is crucial
for task execution. While state-of-the-art vision-language models (VLMs) excel
in environmental perception, they often generate inaccurate or impractical
responses in embodied visual reasoning tasks due to a lack of understanding of
robotic physical reachability. To address this issue, we propose a unified
representation of physical reachability across diverse robots, i.e.,
Space-Physical Reachability Map (S-P Map), and PhysVLM, a vision-language model
that integrates this reachability information into visual reasoning.
Specifically, the S-P Map abstracts a robot's physical reachability into a
generalized spatial representation, independent of specific robot
configurations, allowing the model to focus on reachability features rather
than robot-specific parameters. Subsequently, PhysVLM extends traditional VLM
architectures by incorporating an additional feature encoder to process the S-P
Map, enabling the model to reason about physical reachability without
compromising its general vision-language capabilities. To train and evaluate
PhysVLM, we constructed a large-scale multi-robot dataset, Phys100K, and a
challenging benchmark, EQA-phys, which includes tasks for six different robots
in both simulated and real-world environments. Experimental results demonstrate
that PhysVLM outperforms existing models, achieving a 14\% improvement over
GPT-4o on EQA-phys and surpassing advanced embodied VLMs such as RoboMamba and
SpatialVLM on the RoboVQA-val and OpenEQA benchmarks. Additionally, the S-P Map
shows strong compatibility with various VLMs, and its integration into
GPT-4o-mini yields a 7.1\% performance improvement.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 14:34:41 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 11:19:12 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zhou",
"Weijie",
""
],
[
"Tao",
"Manli",
""
],
[
"Zhao",
"Chaoyang",
""
],
[
"Guo",
"Haiyun",
""
],
[
"Dong",
"Honghui",
""
],
[
"Tang",
"Ming",
""
],
[
"Wang",
"Jinqiao",
""
]
]
| TITLE: PhysVLM: Enabling Visual Language Models to Understand Robotic Physical
Reachability
ABSTRACT: Understanding the environment and a robot's physical reachability is crucial
for task execution. While state-of-the-art vision-language models (VLMs) excel
in environmental perception, they often generate inaccurate or impractical
responses in embodied visual reasoning tasks due to a lack of understanding of
robotic physical reachability. To address this issue, we propose a unified
representation of physical reachability across diverse robots, i.e.,
Space-Physical Reachability Map (S-P Map), and PhysVLM, a vision-language model
that integrates this reachability information into visual reasoning.
Specifically, the S-P Map abstracts a robot's physical reachability into a
generalized spatial representation, independent of specific robot
configurations, allowing the model to focus on reachability features rather
than robot-specific parameters. Subsequently, PhysVLM extends traditional VLM
architectures by incorporating an additional feature encoder to process the S-P
Map, enabling the model to reason about physical reachability without
compromising its general vision-language capabilities. To train and evaluate
PhysVLM, we constructed a large-scale multi-robot dataset, Phys100K, and a
challenging benchmark, EQA-phys, which includes tasks for six different robots
in both simulated and real-world environments. Experimental results demonstrate
that PhysVLM outperforms existing models, achieving a 14\% improvement over
GPT-4o on EQA-phys and surpassing advanced embodied VLMs such as RoboMamba and
SpatialVLM on the RoboVQA-val and OpenEQA benchmarks. Additionally, the S-P Map
shows strong compatibility with various VLMs, and its integration into
GPT-4o-mini yields a 7.1\% performance improvement.
| new_dataset | 0.959231 |
2503.08708 | Jingyi Zheng | Jingyi Zheng, Junfeng Wang, Zhen Sun, Wenhan Dong, Yule Liu, Xinlei He | TH-Bench: Evaluating Evading Attacks via Humanizing AI Text on
Machine-Generated Text Detectors | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As Large Language Models (LLMs) advance, Machine-Generated Texts (MGTs) have
become increasingly fluent, high-quality, and informative. Existing wide-range
MGT detectors are designed to identify MGTs to prevent the spread of plagiarism
and misinformation. However, adversaries attempt to humanize MGTs to evade
detection (named evading attacks), which requires only minor modifications to
bypass MGT detectors. Unfortunately, existing attacks generally lack a unified
and comprehensive evaluation framework, as they are assessed using different
experimental settings, model architectures, and datasets. To fill this gap, we
introduce the Text-Humanization Benchmark (TH-Bench), the first comprehensive
benchmark to evaluate evading attacks against MGT detectors. TH-Bench evaluates
attacks across three key dimensions: evading effectiveness, text quality, and
computational overhead. Our extensive experiments evaluate 6 state-of-the-art
attacks against 13 MGT detectors across 6 datasets, spanning 19 domains and
generated by 11 widely used LLMs. Our findings reveal that no single evading
attack excels across all three dimensions. Through in-depth analysis, we
highlight the strengths and limitations of different attacks. More importantly,
we identify a trade-off among three dimensions and propose two optimization
insights. Through preliminary experiments, we validate their correctness and
effectiveness, offering potential directions for future research.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 02:55:05 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 10:37:18 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zheng",
"Jingyi",
""
],
[
"Wang",
"Junfeng",
""
],
[
"Sun",
"Zhen",
""
],
[
"Dong",
"Wenhan",
""
],
[
"Liu",
"Yule",
""
],
[
"He",
"Xinlei",
""
]
]
| TITLE: TH-Bench: Evaluating Evading Attacks via Humanizing AI Text on
Machine-Generated Text Detectors
ABSTRACT: As Large Language Models (LLMs) advance, Machine-Generated Texts (MGTs) have
become increasingly fluent, high-quality, and informative. Existing wide-range
MGT detectors are designed to identify MGTs to prevent the spread of plagiarism
and misinformation. However, adversaries attempt to humanize MGTs to evade
detection (named evading attacks), which requires only minor modifications to
bypass MGT detectors. Unfortunately, existing attacks generally lack a unified
and comprehensive evaluation framework, as they are assessed using different
experimental settings, model architectures, and datasets. To fill this gap, we
introduce the Text-Humanization Benchmark (TH-Bench), the first comprehensive
benchmark to evaluate evading attacks against MGT detectors. TH-Bench evaluates
attacks across three key dimensions: evading effectiveness, text quality, and
computational overhead. Our extensive experiments evaluate 6 state-of-the-art
attacks against 13 MGT detectors across 6 datasets, spanning 19 domains and
generated by 11 widely used LLMs. Our findings reveal that no single evading
attack excels across all three dimensions. Through in-depth analysis, we
highlight the strengths and limitations of different attacks. More importantly,
we identify a trade-off among three dimensions and propose two optimization
insights. Through preliminary experiments, we validate their correctness and
effectiveness, offering potential directions for future research.
| no_new_dataset | 0.836588 |
2503.08967 | Adrien Gregorj | Adrien Gregorj, Zeynep Y\"ucel, Francesco Zanlugo, Takayuki Kanda | Spontaneous gait synchronisation in the wild: exploring the effect of
distance and level of interaction | null | null | null | null | physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Gait synchronization in pedestrians is influenced by biomechanical,
environmental, and cognitive factors. Studying gait in ecological settings
provides insights often missed in controlled experiments. This study tackles
the challenges of assessing gait coordination in real-world interactions using
a dataset of uninstructed pedestrian trajectories recorded in an underground
pedestrian street network. The data are annotated for group relations,
interaction levels, and physical contact. The main goals of our study is to
devise a method to identify gait synchronisation from trajectory data and to
provide an in-depth analysis of social factors affecting gait synchronisation
in pedestrian groups. To that end, we first propose a method to extract gait
residuals from pedestrian trajectories, which capture motion of the body caused
by gait-induced oscillations. We thereafter apply a suite of analytical
techniques spanning both frequency and nonlinear domains. Frequency-based
methods, including the Gait Synchronisation Index and Cross Wavelet Coherence,
quantify the alignment of oscillatory patterns in gait. Complementary nonlinear
measures, such as Lyapunov exponents, determinism, and recurrence
quantification metrics, offer deeper insights into the dynamical stability and
predictability of coupled gaits. Results show that higher social interaction
and closer distances enhance gait synchronization, reducing stride frequency
variation and increasing stability. Additionally, triad formation and relative
positioning are shown to influence synchronisation. Overall, our findings
suggest that social interactions shape pedestrian gait coordination, with
interaction level and distance being key factors.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:25:19 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 09:17:23 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Gregorj",
"Adrien",
""
],
[
"Yücel",
"Zeynep",
""
],
[
"Zanlugo",
"Francesco",
""
],
[
"Kanda",
"Takayuki",
""
]
]
| TITLE: Spontaneous gait synchronisation in the wild: exploring the effect of
distance and level of interaction
ABSTRACT: Gait synchronization in pedestrians is influenced by biomechanical,
environmental, and cognitive factors. Studying gait in ecological settings
provides insights often missed in controlled experiments. This study tackles
the challenges of assessing gait coordination in real-world interactions using
a dataset of uninstructed pedestrian trajectories recorded in an underground
pedestrian street network. The data are annotated for group relations,
interaction levels, and physical contact. The main goals of our study are to
devise a method to identify gait synchronisation from trajectory data and to
provide an in-depth analysis of social factors affecting gait synchronisation
in pedestrian groups. To that end, we first propose a method to extract gait
residuals from pedestrian trajectories, which capture motion of the body caused
by gait-induced oscillations. We thereafter apply a suite of analytical
techniques spanning both frequency and nonlinear domains. Frequency-based
methods, including the Gait Synchronisation Index and Cross Wavelet Coherence,
quantify the alignment of oscillatory patterns in gait. Complementary nonlinear
measures, such as Lyapunov exponents, determinism, and recurrence
quantification metrics, offer deeper insights into the dynamical stability and
predictability of coupled gaits. Results show that higher social interaction
and closer distances enhance gait synchronization, reducing stride frequency
variation and increasing stability. Additionally, triad formation and relative
positioning are shown to influence synchronisation. Overall, our findings
suggest that social interactions shape pedestrian gait coordination, with
interaction level and distance being key factors.
| no_new_dataset | 0.940353 |
2503.09022 | Wenjie Qu | Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan,
Yiming Li, Jiaheng Zhang | Prompt Inversion Attack against Collaborative Inference of Large
Language Models | To appear at IEEE Symposium on Security and Privacy 2025 | null | null | null | cs.CR | http://creativecommons.org/publicdomain/zero/1.0/ | Large language models (LLMs) have been widely applied for their remarkable
capability of content generation. However, the practical use of open-source
LLMs is hindered by high resource requirements, making deployment expensive and
limiting widespread development. The collaborative inference is a promising
solution for this problem, in which users collaborate by each hosting a subset
of layers and transmitting intermediate activation. Many companies are building
collaborative inference platforms to reduce LLM serving costs, leveraging
users' underutilized GPUs. Despite widespread interest in collaborative
inference within academia and industry, the privacy risks associated with LLM
collaborative inference have not been well studied. This is largely because of
the challenge posed by inverting LLM activation due to its strong
non-linearity.
In this paper, to validate the severity of privacy threats in LLM
collaborative inference, we introduce the concept of prompt inversion attack
(PIA), where a malicious participant intends to recover the input prompt
through the activation transmitted by its previous participant. Extensive
experiments show that our PIA method substantially outperforms existing
baselines. For example, our method achieves an 88.4\% token accuracy on the
Skytrax dataset with the Llama-65B model when inverting the maximum number of
transformer layers, while the best baseline method only achieves 22.8\%
accuracy. The results verify the effectiveness of our PIA attack and highlight
its practical threat to LLM collaborative inference systems.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 03:20:03 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 05:55:55 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Qu",
"Wenjie",
""
],
[
"Zhou",
"Yuguang",
""
],
[
"Wu",
"Yongji",
""
],
[
"Xiao",
"Tingsong",
""
],
[
"Yuan",
"Binhang",
""
],
[
"Li",
"Yiming",
""
],
[
"Zhang",
"Jiaheng",
""
]
]
| TITLE: Prompt Inversion Attack against Collaborative Inference of Large
Language Models
ABSTRACT: Large language models (LLMs) have been widely applied for their remarkable
capability of content generation. However, the practical use of open-source
LLMs is hindered by high resource requirements, making deployment expensive and
limiting widespread development. The collaborative inference is a promising
solution for this problem, in which users collaborate by each hosting a subset
of layers and transmitting intermediate activation. Many companies are building
collaborative inference platforms to reduce LLM serving costs, leveraging
users' underutilized GPUs. Despite widespread interest in collaborative
inference within academia and industry, the privacy risks associated with LLM
collaborative inference have not been well studied. This is largely because of
the challenge posed by inverting LLM activation due to its strong
non-linearity.
In this paper, to validate the severity of privacy threats in LLM
collaborative inference, we introduce the concept of prompt inversion attack
(PIA), where a malicious participant intends to recover the input prompt
through the activation transmitted by its previous participant. Extensive
experiments show that our PIA method substantially outperforms existing
baselines. For example, our method achieves an 88.4\% token accuracy on the
Skytrax dataset with the Llama-65B model when inverting the maximum number of
transformer layers, while the best baseline method only achieves 22.8\%
accuracy. The results verify the effectiveness of our PIA attack and highlight
its practical threat to LLM collaborative inference systems.
| no_new_dataset | 0.945147 |
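For context on the collaborative-inference setting described in the abstract above, the sketch below shows two participants each hosting a subset of transformer layers and exchanging an intermediate activation, which is exactly the quantity a prompt-inversion attacker would observe. The toy dimensions, the even layer split, and all variable names are illustrative assumptions rather than the paper's implementation.

import torch
import torch.nn as nn

d_model, nhead, num_layers = 64, 4, 6
layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
    for _ in range(num_layers)
)

# Hypothetical split: participant A hosts the first half, participant B the rest.
split = num_layers // 2
participant_a, participant_b = layers[:split], layers[split:]

def run_participant(blocks, hidden):
    # Each participant only applies its own layers to the received activation.
    for block in blocks:
        hidden = block(hidden)
    return hidden

tokens = torch.randn(1, 8, d_model)                   # stand-in for embedded prompt tokens
activation = run_participant(participant_a, tokens)   # transmitted to participant B
output = run_participant(participant_b, activation)   # B sees only `activation`, never the prompt text
print(activation.shape, output.shape)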
2503.09158 | Fufangchen Zhao | Fufangchen Zhao, Ming Li, Linrui Xu, Wenhao Jiang, Jian Gao, Danfeng
Yan | FaVChat: Unlocking Fine-Grained Facial Video Understanding with
Multimodal Large Language Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video-based multimodal large language models (VMLLMs) have demonstrated
remarkable potential in cross-modal video understanding. However, their
abilities in fine-grained face comprehension remain largely underexplored.
Given its pivotal role in human-centric intelligence, developing VMLLMs for
facial understanding poses a fundamental problem. To address this gap, we
propose FaVChat, the first VMLLM specifically designed for fine-grained facial
video understanding. To facilitate its training, we construct a large-scale
facial video dataset comprising over 60k videos, with the majority annotated
with 83 fine-grained facial attributes. These attributes are incorporated to
enrich GPT-4o-generated captions, yielding 60k high-quality video-summary pairs
and an additional 170k fine-grained question-answering (QA) pairs. To
effectively capture rich facial clues, we propose a hybrid model architecture
composed of a general visual encoder, a dedicated facial encoder, and a
mixture-of-experts-enhanced adapter for adaptive fusion of multi-source visual
features. To mitigate information loss during feature transformation, we
extract multi-granularity representations from the facial encoder and integrate
them into the subsequent LLM. This design enhances the model's ability to
comprehend and respond to questions involving diverse levels of visual details.
We employ a progressive training paradigm, transitioning from video
summarization to a high-quality subset of video QA, gradually increasing task
complexity to enhance the model's fine-grained visual perception. We conduct
extensive zero-shot evaluation on a couple of public benchmarks, demonstrating
that FaVChat consistently surpasses existing VMLLMs across multiple tasks.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 08:33:46 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 10:45:03 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zhao",
"Fufangchen",
""
],
[
"Li",
"Ming",
""
],
[
"Xu",
"Linrui",
""
],
[
"Jiang",
"Wenhao",
""
],
[
"Gao",
"Jian",
""
],
[
"Yan",
"Danfeng",
""
]
]
| TITLE: FaVChat: Unlocking Fine-Grained Facial Video Understanding with
Multimodal Large Language Models
ABSTRACT: Video-based multimodal large language models (VMLLMs) have demonstrated
remarkable potential in cross-modal video understanding. However, their
abilities in fine-grained face comprehension remain largely underexplored.
Given its pivotal role in human-centric intelligence, developing VMLLMs for
facial understanding holds a fundamental problem. To address this gap, we
propose FaVChat, the first VMLLM specifically designed for fine-grained facial
video understanding. To facilitate its training, we construct a large-scale
facial video dataset comprising over 60k videos, with the majority annotated
with 83 fine-grained facial attributes. These attributes are incorporated to
enrich GPT-4o-generated captions, yielding 60k high-quality video-summary pairs
and an additional 170k fine-grained question-answering (QA) pairs. To
effectively capture rich facial clues, we propose a hybrid model architecture
composed of a general visual encoder, a dedicated facial encoder, and a
mixture-of-experts-enhanced adapter for adaptive fusion of multi-source visual
features. To mitigate information loss during feature transformation, we
extract multi-granularity representations from the facial encoder and integrate
them into the subsequent LLM. This design enhances the model's ability to
comprehend and respond to questions involving diverse levels of visual details.
We employ a progressive training paradigm, transitioning from video
summarization to a high-quality subset of video QA, gradually increasing task
complexity to enhance the model's fine-grained visual perception. We conduct
extensive zero-shot evaluation on a couple of public benchmarks, demonstrating
that FaVChat consistently surpasses existing VMLLMs across multiple tasks.
| new_dataset | 0.959154 |
2503.09320 | Snehal Jauhri | Marvin Heidinger, Snehal Jauhri, Vignesh Prasad, Georgia Chalvatzaki | 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from
Human Videos | Project site: https://sites.google.com/view/2handedafforder | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When interacting with objects, humans effectively reason about which regions
of objects are viable for an intended action, i.e., the affordance regions of
the object. They can also account for subtle differences in object regions
based on the task to be performed and whether one or two hands need to be used.
However, current vision-based affordance prediction methods often reduce the
problem to naive object part segmentation. In this work, we propose a framework
for extracting affordance data from human activity video datasets. Our
extracted 2HANDS dataset contains precise object affordance region
segmentations and affordance class-labels as narrations of the activity
performed. The data also accounts for bimanual actions, i.e., two hands
co-ordinating and interacting with one or more objects. We present a VLM-based
affordance prediction model, 2HandedAfforder, trained on the dataset and
demonstrate superior performance over baselines in affordance region
segmentation for various activities. Finally, we show that our predicted
affordance regions are actionable, i.e., can be used by an agent performing a
task, through demonstration in robotic manipulation scenarios.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 12:12:07 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 06:35:58 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Heidinger",
"Marvin",
""
],
[
"Jauhri",
"Snehal",
""
],
[
"Prasad",
"Vignesh",
""
],
[
"Chalvatzaki",
"Georgia",
""
]
]
| TITLE: 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from
Human Videos
ABSTRACT: When interacting with objects, humans effectively reason about which regions
of objects are viable for an intended action, i.e., the affordance regions of
the object. They can also account for subtle differences in object regions
based on the task to be performed and whether one or two hands need to be used.
However, current vision-based affordance prediction methods often reduce the
problem to naive object part segmentation. In this work, we propose a framework
for extracting affordance data from human activity video datasets. Our
extracted 2HANDS dataset contains precise object affordance region
segmentations and affordance class-labels as narrations of the activity
performed. The data also accounts for bimanual actions, i.e., two hands
co-ordinating and interacting with one or more objects. We present a VLM-based
affordance prediction model, 2HandedAfforder, trained on the dataset and
demonstrate superior performance over baselines in affordance region
segmentation for various activities. Finally, we show that our predicted
affordance regions are actionable, i.e., can be used by an agent performing a
task, through demonstration in robotic manipulation scenarios.
| new_dataset | 0.865224 |
2503.09494 | Qi Xu | Qi Xu and Annie Qu | Representation Retrieval Learning for Heterogeneous Data Integration | null | null | null | null | cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of big data, large-scale, multi-modal datasets are increasingly
ubiquitous, offering unprecedented opportunities for predictive modeling and
scientific discovery. However, these datasets often exhibit complex
heterogeneity, such as covariate shift, posterior drift, and missing
modalities, which can hinder the accuracy of existing prediction algorithms. To
address these challenges, we propose a novel Representation Retrieval ($R^2$)
framework, which integrates a representation learning module (the representer)
with a sparsity-induced machine learning model (the learner). Moreover, we
introduce the notion of "integrativeness" for representers, characterized by
the effective data sources used in learning representers, and propose a
Selective Integration Penalty (SIP) to explicitly improve the property.
Theoretically, we demonstrate that the $R^2$ framework relaxes the conventional
full-sharing assumption in multi-task learning, allowing for partially shared
structures, and that SIP can improve the convergence rate of the excess risk
bound. Extensive simulation studies validate the empirical performance of our
framework, and applications to two real-world datasets further confirm its
superiority over existing approaches.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 15:54:37 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 16:39:15 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Xu",
"Qi",
""
],
[
"Qu",
"Annie",
""
]
]
| TITLE: Representation Retrieval Learning for Heterogeneous Data Integration
ABSTRACT: In the era of big data, large-scale, multi-modal datasets are increasingly
ubiquitous, offering unprecedented opportunities for predictive modeling and
scientific discovery. However, these datasets often exhibit complex
heterogeneity, such as covariate shift, posterior drift, and missing
modalities, which can hinder the accuracy of existing prediction algorithms. To
address these challenges, we propose a novel Representation Retrieval ($R^2$)
framework, which integrates a representation learning module (the representer)
with a sparsity-induced machine learning model (the learner). Moreover, we
introduce the notion of "integrativeness" for representers, characterized by
the effective data sources used in learning representers, and propose a
Selective Integration Penalty (SIP) to explicitly improve the property.
Theoretically, we demonstrate that the $R^2$ framework relaxes the conventional
full-sharing assumption in multi-task learning, allowing for partially shared
structures, and that SIP can improve the convergence rate of the excess risk
bound. Extensive simulation studies validate the empirical performance of our
framework, and applications to two real-world datasets further confirm its
superiority over existing approaches.
| no_new_dataset | 0.945701 |
2503.09559 | Amir Aghabiglou | Yiwei Chen, Amir Aghabiglou, Shijie Chen, Motahare Torki, Chao Tang,
Ruud B. van Heeswijk and Yves Wiaux | The R2D2 Deep Neural Network Series for Scalable Non-Cartesian Magnetic
Resonance Imaging | 13 pages, 10 figures | null | null | null | eess.IV cs.CV cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | We introduce the R2D2 Deep Neural Network (DNN) series paradigm for fast and
scalable image reconstruction from highly-accelerated non-Cartesian k-space
acquisitions in Magnetic Resonance Imaging (MRI). While unrolled DNN
architectures provide a robust image formation approach via data-consistency
layers, embedding non-uniform fast Fourier transform operators in a DNN can
become impractical to train at large scale, e.g. in 2D MRI with a large number
of coils, or for higher-dimensional imaging. Plug-and-play approaches that
alternate a learned denoiser blind to the measurement setting with a
data-consistency step are not affected by this limitation but their highly
iterative nature implies slow reconstruction. To address this scalability
challenge, we leverage the R2D2 paradigm that was recently introduced to enable
ultra-fast reconstruction for large-scale Fourier imaging in radio astronomy.
R2D2's reconstruction is formed as a series of residual images iteratively
estimated as outputs of DNN modules taking the previous iteration's data
residual as input. The method can be interpreted as a learned version of the
Matching Pursuit algorithm. A series of R2D2 DNN modules were sequentially
trained in a supervised manner on the fastMRI dataset and validated for 2D
multi-coil MRI in simulation and on real data, targeting highly under-sampled
radial k-space sampling. Results suggest that a series with only a few DNNs
achieves superior reconstruction quality over its unrolled incarnation R2D2-Net
(whose training is also much less scalable), and over the state-of-the-art
diffusion-based "Decomposed Diffusion Sampler" approach (also characterised by
a slower reconstruction process).
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 17:24:47 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 09:35:19 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Chen",
"Yiwei",
""
],
[
"Aghabiglou",
"Amir",
""
],
[
"Chen",
"Shijie",
""
],
[
"Torki",
"Motahare",
""
],
[
"Tang",
"Chao",
""
],
[
"van Heeswijk",
"Ruud B.",
""
],
[
"Wiaux",
"Yves",
""
]
]
| TITLE: The R2D2 Deep Neural Network Series for Scalable Non-Cartesian Magnetic
Resonance Imaging
ABSTRACT: We introduce the R2D2 Deep Neural Network (DNN) series paradigm for fast and
scalable image reconstruction from highly-accelerated non-Cartesian k-space
acquisitions in Magnetic Resonance Imaging (MRI). While unrolled DNN
architectures provide a robust image formation approach via data-consistency
layers, embedding non-uniform fast Fourier transform operators in a DNN can
become impractical to train at large scale, e.g. in 2D MRI with a large number
of coils, or for higher-dimensional imaging. Plug-and-play approaches that
alternate a learned denoiser blind to the measurement setting with a
data-consistency step are not affected by this limitation but their highly
iterative nature implies slow reconstruction. To address this scalability
challenge, we leverage the R2D2 paradigm that was recently introduced to enable
ultra-fast reconstruction for large-scale Fourier imaging in radio astronomy.
R2D2's reconstruction is formed as a series of residual images iteratively
estimated as outputs of DNN modules taking the previous iteration's data
residual as input. The method can be interpreted as a learned version of the
Matching Pursuit algorithm. A series of R2D2 DNN modules were sequentially
trained in a supervised manner on the fastMRI dataset and validated for 2D
multi-coil MRI in simulation and on real data, targeting highly under-sampled
radial k-space sampling. Results suggest that a series with only a few DNNs
achieves superior reconstruction quality over its unrolled incarnation R2D2-Net
(whose training is also much less scalable), and over the state-of-the-art
diffusion-based "Decomposed Diffusion Sampler" approach (also characterised by
a slower reconstruction process).
| no_new_dataset | 0.955981 |
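As a rough structural illustration of the residual series described in the abstract above, the toy loop below replaces the trained DNN modules with a fixed gradient-style step on a random linear operator; the operator, step size, and number of modules are assumptions for demonstration only, not the R2D2 networks.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 30))            # toy linear measurement operator
x_true = rng.normal(size=30)
y = A @ x_true                           # simulated measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step so the toy loop converges
modules = [lambda back_res: step * back_res for _ in range(5)]  # stand-ins for trained DNN modules

x = np.zeros(30)
for net in modules:
    data_residual = y - A @ x            # residual in measurement space
    back_projected = A.T @ data_residual # mapped back to image space
    x = x + net(back_projected)          # each module contributes a residual image
    print(round(float(np.linalg.norm(y - A @ x)), 4))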
2503.09626 | Qi Wu | Qi Wu, Yingguang Yang, hao liu, Hao Peng, Buyun He, Yutong Xia, Yong
Liao | Certainly Bot Or Not? Trustworthy Social Bot Detection via Robust
Multi-Modal Neural Processes | 12 pages. 7 figures | null | null | null | cs.SI cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Social bot detection is crucial for mitigating misinformation, online
manipulation, and coordinated inauthentic behavior. While existing neural
network-based detectors perform well on benchmarks, they struggle with
generalization due to distribution shifts across datasets and frequently
produce overconfident predictions for out-of-distribution accounts beyond the
training data. To address this, we introduce a novel Uncertainty Estimation for
Social Bot Detection (UESBD) framework, which quantifies the predictive
uncertainty of detectors beyond mere classification. For this task, we propose
Robust Multi-modal Neural Processes (RMNP), which aims to enhance the
robustness of multi-modal neural processes to modality inconsistencies caused
by social bot camouflage. RMNP first learns unimodal representations through
modality-specific encoders. Then, unimodal attentive neural processes are
employed to encode the Gaussian distribution of unimodal latent variables.
Furthermore, to avoid social bots stealing human features to camouflage
themselves, thus causing certain modalities to provide conflicting information,
we introduce an evidential gating network to explicitly model the reliability
of modalities. The joint latent distribution is learned through the generalized
product of experts, which takes the reliability of each modality into
consideration during fusion. The final prediction is obtained through Monte
Carlo sampling of the joint latent distribution followed by a decoder.
Experiments on three real-world benchmarks show the effectiveness of RMNP in
classification and uncertainty estimation, as well as its robustness to
modality conflicts.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 01:32:52 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Wu",
"Qi",
""
],
[
"Yang",
"Yingguang",
""
],
[
"liu",
"hao",
""
],
[
"Peng",
"Hao",
""
],
[
"He",
"Buyun",
""
],
[
"Xia",
"Yutong",
""
],
[
"Liao",
"Yong",
""
]
]
| TITLE: Certainly Bot Or Not? Trustworthy Social Bot Detection via Robust
Multi-Modal Neural Processes
ABSTRACT: Social bot detection is crucial for mitigating misinformation, online
manipulation, and coordinated inauthentic behavior. While existing neural
network-based detectors perform well on benchmarks, they struggle with
generalization due to distribution shifts across datasets and frequently
produce overconfident predictions for out-of-distribution accounts beyond the
training data. To address this, we introduce a novel Uncertainty Estimation for
Social Bot Detection (UESBD) framework, which quantifies the predictive
uncertainty of detectors beyond mere classification. For this task, we propose
Robust Multi-modal Neural Processes (RMNP), which aims to enhance the
robustness of multi-modal neural processes to modality inconsistencies caused
by social bot camouflage. RMNP first learns unimodal representations through
modality-specific encoders. Then, unimodal attentive neural processes are
employed to encode the Gaussian distribution of unimodal latent variables.
Furthermore, to avoid social bots stealing human features to camouflage
themselves, thus causing certain modalities to provide conflicting information,
we introduce an evidential gating network to explicitly model the reliability
of modalities. The joint latent distribution is learned through the generalized
product of experts, which takes the reliability of each modality into
consideration during fusion. The final prediction is obtained through Monte
Carlo sampling of the joint latent distribution followed by a decoder.
Experiments on three real-world benchmarks show the effectiveness of RMNP in
classification and uncertainty estimation, as well as its robustness to
modality conflicts.
| no_new_dataset | 0.945751 |
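The reliability-weighted fusion sketched below illustrates the kind of generalized product of Gaussian experts mentioned in the abstract above; the per-modality means, variances, and gating weights are made-up numbers, and this is not the RMNP code.

import numpy as np

means = np.array([0.2, 1.5, 0.3])        # per-modality posterior means of a latent variable
variances = np.array([0.5, 4.0, 0.8])    # per-modality posterior variances
reliability = np.array([0.9, 0.2, 0.7])  # hypothetical gating weights in [0, 1]

precisions = reliability / variances                # reliability-weighted precisions
fused_var = 1.0 / precisions.sum()                  # product of Gaussians: precisions add
fused_mean = fused_var * (precisions * means).sum()
print(f"fused mean={fused_mean:.3f}, fused variance={fused_var:.3f}")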
2503.09634 | Gexin Huang | Gexin Huang, Zhangsihao Yang, Yalin Wang, Guido Gerig, Mengwei Ren,
Xiaoxiao Li | Identity Preserving Latent Diffusion for Brain Aging Modeling | 19 pages, 10 figures | null | null | null | cs.GR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Structural and appearance changes in brain imaging over time are crucial
indicators of neurodevelopment and neurodegeneration. The rapid advancement of
large-scale generative models provides a promising backbone for modeling these
complex global and local changes in brain images, such as transforming the age
of a source image to a target age. However, current generative models,
typically trained on independently and identically distributed (i.i.d.) data,
may struggle to maintain intra-subject spatiotemporal consistency during
transformations. We propose the Identity-Preserving Longitudinal Diffusion
Model (IP-LDM), designed to accurately transform brain ages while preserving
subject identity. Our approach involves first extracting the identity
representation from the source image. Then, conditioned on the target age, the
latent diffusion model learns to generate the age-transformed target image. To
ensure consistency within the same subject over time, we regularize the
identity representation using a triplet contrastive formulation. Our
experiments on both elderly and infant brain datasets demonstrate that our
model outperforms existing conditional generative models, producing realistic
age transformations while preserving intra-subject identity.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2025 23:44:52 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Huang",
"Gexin",
""
],
[
"Yang",
"Zhangsihao",
""
],
[
"Wang",
"Yalin",
""
],
[
"Gerig",
"Guido",
""
],
[
"Ren",
"Mengwei",
""
],
[
"Li",
"Xiaoxiao",
""
]
]
| TITLE: Identity Preserving Latent Diffusion for Brain Aging Modeling
ABSTRACT: Structural and appearance changes in brain imaging over time are crucial
indicators of neurodevelopment and neurodegeneration. The rapid advancement of
large-scale generative models provides a promising backbone for modeling these
complex global and local changes in brain images, such as transforming the age
of a source image to a target age. However, current generative models,
typically trained on independently and identically distributed (i.i.d.) data,
may struggle to maintain intra-subject spatiotemporal consistency during
transformations. We propose the Identity-Preserving Longitudinal Diffusion
Model (IP-LDM), designed to accurately transform brain ages while preserving
subject identity. Our approach involves first extracting the identity
representation from the source image. Then, conditioned on the target age, the
latent diffusion model learns to generate the age-transformed target image. To
ensure consistency within the same subject over time, we regularize the
identity representation using a triplet contrastive formulation. Our
experiments on both elderly and infant brain datasets demonstrate that our
model outperforms existing conditional generative models, producing realistic
age transformations while preserving intra-subject identity.
| no_new_dataset | 0.951594 |
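The triplet-style identity regularization mentioned in the abstract above can be pictured with the standard margin formulation below; the random embeddings and the margin value are placeholders, not the IP-LDM model.

import torch
import torch.nn as nn

torch.manual_seed(0)
anchor = torch.randn(4, 128)    # identity embeddings of subject S at age t
positive = torch.randn(4, 128)  # the same subjects at a different age
negative = torch.randn(4, 128)  # embeddings of different subjects

triplet = nn.TripletMarginLoss(margin=1.0)
loss = triplet(anchor, positive, negative)  # pulls same-subject pairs together across time
print(float(loss))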
2503.09638 | Milad Rahmati | Milad Rahmati | Edge AI-Powered Real-Time Decision-Making for Autonomous Vehicles in
Adverse Weather Conditions | null | null | null | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicles (AVs) are transforming modern transportation, but their
reliability and safety are significantly challenged by harsh weather conditions
such as heavy rain, fog, and snow. These environmental factors impair the
performance of cameras, LiDAR, and radar, leading to reduced situational
awareness and increased accident risks. Conventional cloud-based AI systems
introduce communication delays, making them unsuitable for the rapid
decision-making required in real-time autonomous navigation. This paper
presents a novel Edge AI-driven real-time decision-making framework designed to
enhance AV responsiveness under adverse weather conditions. The proposed
approach integrates convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) for improved perception, alongside reinforcement learning
(RL)-based strategies to optimize vehicle control in uncertain environments. By
processing data at the network edge, this system significantly reduces decision
latency while improving AV adaptability. The framework is evaluated using
simulated driving scenarios in CARLA and real-world data from the Waymo Open
Dataset, covering diverse weather conditions. Experimental results indicate
that the proposed model achieves a 40% reduction in processing time and a 25%
enhancement in perception accuracy compared to conventional cloud-based
systems. These findings highlight the potential of Edge AI in improving AV
autonomy, safety, and efficiency, paving the way for more reliable self-driving
technology in challenging real-world environments.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 02:02:05 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Rahmati",
"Milad",
""
]
]
| TITLE: Edge AI-Powered Real-Time Decision-Making for Autonomous Vehicles in
Adverse Weather Conditions
ABSTRACT: Autonomous vehicles (AVs) are transforming modern transportation, but their
reliability and safety are significantly challenged by harsh weather conditions
such as heavy rain, fog, and snow. These environmental factors impair the
performance of cameras, LiDAR, and radar, leading to reduced situational
awareness and increased accident risks. Conventional cloud-based AI systems
introduce communication delays, making them unsuitable for the rapid
decision-making required in real-time autonomous navigation. This paper
presents a novel Edge AI-driven real-time decision-making framework designed to
enhance AV responsiveness under adverse weather conditions. The proposed
approach integrates convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) for improved perception, alongside reinforcement learning
(RL)-based strategies to optimize vehicle control in uncertain environments. By
processing data at the network edge, this system significantly reduces decision
latency while improving AV adaptability. The framework is evaluated using
simulated driving scenarios in CARLA and real-world data from the Waymo Open
Dataset, covering diverse weather conditions. Experimental results indicate
that the proposed model achieves a 40% reduction in processing time and a 25%
enhancement in perception accuracy compared to conventional cloud-based
systems. These findings highlight the potential of Edge AI in improving AV
autonomy, safety, and efficiency, paving the way for more reliable self-driving
technology in challenging real-world environments.
| no_new_dataset | 0.955236 |
2503.09643 | Daoyuan Li | Daoyuan Li, Zuyuan Yang, Shengli Xie | FedMSGL: A Self-Expressive Hypergraph Based Federated Multi-View
Learning | Accept by AAAI2025 | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Federated learning is essential for enabling collaborative model training
across decentralized data sources while preserving data privacy and security.
This approach mitigates the risks associated with centralized data collection
and addresses concerns related to data ownership and compliance. Despite
significant advancements in federated learning algorithms that address
communication bottlenecks and enhance privacy protection, existing works
overlook the impact of differences in data feature dimensions, resulting in
global models that disproportionately depend on participants with large feature
dimensions. Additionally, current single-view federated learning methods fail
to account for the unique characteristics of multi-view data, leading to
suboptimal performance in processing such data. To address these issues, we
propose a Self-expressive Hypergraph Based Federated Multi-view Learning method
(FedMSGL). The proposed method leverages the self-expressive property in local
training to learn a uniform-dimension subspace with latent sample relations. At
the central side, an adaptive fusion technique is employed to generate the
global model, while constructing a hypergraph from the learned global and
view-specific subspace to capture intricate interconnections across views.
Experiments on multi-view datasets with different feature dimensions validated
the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 05:13:45 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Li",
"Daoyuan",
""
],
[
"Yang",
"Zuyuan",
""
],
[
"Xie",
"Shengli",
""
]
]
| TITLE: FedMSGL: A Self-Expressive Hypergraph Based Federated Multi-View
Learning
ABSTRACT: Federated learning is essential for enabling collaborative model training
across decentralized data sources while preserving data privacy and security.
This approach mitigates the risks associated with centralized data collection
and addresses concerns related to data ownership and compliance. Despite
significant advancements in federated learning algorithms that address
communication bottlenecks and enhance privacy protection, existing works
overlook the impact of differences in data feature dimensions, resulting in
global models that disproportionately depend on participants with large feature
dimensions. Additionally, current single-view federated learning methods fail
to account for the unique characteristics of multi-view data, leading to
suboptimal performance in processing such data. To address these issues, we
propose a Self-expressive Hypergraph Based Federated Multi-view Learning method
(FedMSGL). The proposed method leverages the self-expressive property in local
training to learn a uniform-dimension subspace with latent sample relations. At
the central side, an adaptive fusion technique is employed to generate the
global model, while constructing a hypergraph from the learned global and
view-specific subspace to capture intricate interconnections across views.
Experiments on multi-view datasets with different feature dimensions validated
the effectiveness of the proposed method.
| no_new_dataset | 0.948728 |
2503.09658 | Zhi Xuan Liu | Hao-Tsung Yang, Jie Gao, Bo-Yi Liu, Zhi-Xuan Liu | Towards Robust Model Evolution with Algorithmic Recourse | 9 pages,4 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Algorithmic Recourse is a way for users to modify their attributes to align
with a model's expectations, thereby improving their outcomes after receiving
unfavorable decisions. In real-world scenarios, users often need to
strategically adjust their attributes to compete for limited resources.
However, such strategic behavior induces users to "game" algorithms, causing
model collapse due to distribution shifts. These shifts arise from user
competition, resource constraints, and adaptive user responses. While prior
research on Algorithmic Recourse has explored its effects on both systems and
users, the impact of resource constraints and competition over time remains
underexplored. In this work, we develop a general framework to model user
strategic behaviors and their interactions with decision-making systems under
resource constraints and competitive dynamics. Through theoretical analysis and
empirical evaluation, we identify three key phenomena that arise consistently
in both synthetic and real-world datasets: escalating decision boundaries,
non-robust model predictions, and inequitable recourse actions. Finally, we
discuss the broader social implications of these findings and present two
algorithmic strategies aimed at mitigating these challenges.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 12:17:34 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Yang",
"Hao-Tsung",
""
],
[
"Gao",
"Jie",
""
],
[
"Liu",
"Bo-Yi",
""
],
[
"Liu",
"Zhi-Xuan",
""
]
]
| TITLE: Towards Robust Model Evolution with Algorithmic Recourse
ABSTRACT: Algorithmic Recourse is a way for users to modify their attributes to align
with a model's expectations, thereby improving their outcomes after receiving
unfavorable decisions. In real-world scenarios, users often need to
strategically adjust their attributes to compete for limited resources.
However, such strategic behavior induces users to "game" algorithms, causing
model collapse due to distribution shifts. These shifts arise from user
competition, resource constraints, and adaptive user responses. While prior
research on Algorithmic Recourse has explored its effects on both systems and
users, the impact of resource constraints and competition over time remains
underexplored. In this work, we develop a general framework to model user
strategic behaviors and their interactions with decision-making systems under
resource constraints and competitive dynamics. Through theoretical analysis and
empirical evaluation, we identify three key phenomena that arise consistently
in both synthetic and real-world datasets: escalating decision boundaries,
non-robust model predictions, and inequitable recourse actions. Finally, we
discuss the broader social implications of these findings and present two
algorithmic strategies aimed at mitigating these challenges.
| no_new_dataset | 0.946941 |
2503.09669 | Sangwon Jang | Sangwon Jang, June Suk Choi, Jaehyeong Jo, Kimin Lee, Sung Ju Hwang | Silent Branding Attack: Trigger-free Data Poisoning Attack on
Text-to-Image Diffusion Models | CVPR 2025. Project page: https://silent-branding.github.io/ | null | null | null | cs.CV cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | Text-to-image diffusion models have achieved remarkable success in generating
high-quality contents from text prompts. However, their reliance on publicly
available data and the growing trend of data sharing for fine-tuning make these
models particularly vulnerable to data poisoning attacks. In this work, we
introduce the Silent Branding Attack, a novel data poisoning method that
manipulates text-to-image diffusion models to generate images containing
specific brand logos or symbols without any text triggers. We find that when
certain visual patterns are repeatedly in the training data, the model learns
to reproduce them naturally in its outputs, even without prompt mentions.
Leveraging this, we develop an automated data poisoning algorithm that
unobtrusively injects logos into original images, ensuring they blend naturally
and remain undetected. Models trained on this poisoned dataset generate images
containing logos without degrading image quality or text alignment. We
experimentally validate our silent branding attack across two realistic
settings on large-scale high-quality image datasets and style personalization
datasets, achieving high success rates even without a specific text trigger.
Human evaluation and quantitative metrics including logo detection show that
our method can stealthily embed logos.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 17:21:57 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Jang",
"Sangwon",
""
],
[
"Choi",
"June Suk",
""
],
[
"Jo",
"Jaehyeong",
""
],
[
"Lee",
"Kimin",
""
],
[
"Hwang",
"Sung Ju",
""
]
]
| TITLE: Silent Branding Attack: Trigger-free Data Poisoning Attack on
Text-to-Image Diffusion Models
ABSTRACT: Text-to-image diffusion models have achieved remarkable success in generating
high-quality contents from text prompts. However, their reliance on publicly
available data and the growing trend of data sharing for fine-tuning make these
models particularly vulnerable to data poisoning attacks. In this work, we
introduce the Silent Branding Attack, a novel data poisoning method that
manipulates text-to-image diffusion models to generate images containing
specific brand logos or symbols without any text triggers. We find that when
certain visual patterns appear repeatedly in the training data, the model learns
to reproduce them naturally in its outputs, even without prompt mentions.
Leveraging this, we develop an automated data poisoning algorithm that
unobtrusively injects logos into original images, ensuring they blend naturally
and remain undetected. Models trained on this poisoned dataset generate images
containing logos without degrading image quality or text alignment. We
experimentally validate our silent branding attack across two realistic
settings on large-scale high-quality image datasets and style personalization
datasets, achieving high success rates even without a specific text trigger.
Human evaluation and quantitative metrics including logo detection show that
our method can stealthily embed logos.
| no_new_dataset | 0.951323 |
2503.09679 | Wei Cui | Wei Cui, Tongzi Wu, Jesse C. Cresswell, Yi Sui, Keyvan Golestan | DRESS: Disentangled Representation-based Self-Supervised Meta-Learning
for Diverse Tasks | 9 pages, 6 figures. An earlier version of the paper has been
presented at the Self-Supervised Learning workshop at the 2024 NeurIPS
conference | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Meta-learning represents a strong class of approaches for solving few-shot
learning tasks. Nonetheless, recent research suggests that simply pre-training
a generic encoder can potentially surpass meta-learning algorithms. In this
paper, we first discuss the reasons why meta-learning fails to stand out in
these few-shot learning experiments, and hypothesize that it is due to the
few-shot learning tasks lacking diversity. We propose DRESS, a task-agnostic
Disentangled REpresentation-based Self-Supervised meta-learning approach that
enables fast model adaptation on highly diversified few-shot learning tasks.
Specifically, DRESS utilizes disentangled representation learning to create
self-supervised tasks that can fuel the meta-training process. Furthermore, we
also propose a class-partition based metric for quantifying the task diversity
directly on the input space. We validate the effectiveness of DRESS through
experiments on datasets with multiple factors of variation and varying
complexity. The results suggest that DRESS is able to outperform competing
methods on the majority of the datasets and task setups. Through this paper, we
advocate for a re-examination of proper setups for task adaptation studies, and
aim to reignite interest in the potential of meta-learning for solving few-shot
learning tasks via disentangled representations.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:00:00 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Cui",
"Wei",
""
],
[
"Wu",
"Tongzi",
""
],
[
"Cresswell",
"Jesse C.",
""
],
[
"Sui",
"Yi",
""
],
[
"Golestan",
"Keyvan",
""
]
]
| TITLE: DRESS: Disentangled Representation-based Self-Supervised Meta-Learning
for Diverse Tasks
ABSTRACT: Meta-learning represents a strong class of approaches for solving few-shot
learning tasks. Nonetheless, recent research suggests that simply pre-training
a generic encoder can potentially surpass meta-learning algorithms. In this
paper, we first discuss the reasons why meta-learning fails to stand out in
these few-shot learning experiments, and hypothesize that it is due to the
few-shot learning tasks lacking diversity. We propose DRESS, a task-agnostic
Disentangled REpresentation-based Self-Supervised meta-learning approach that
enables fast model adaptation on highly diversified few-shot learning tasks.
Specifically, DRESS utilizes disentangled representation learning to create
self-supervised tasks that can fuel the meta-training process. Furthermore, we
also propose a class-partition based metric for quantifying the task diversity
directly on the input space. We validate the effectiveness of DRESS through
experiments on datasets with multiple factors of variation and varying
complexity. The results suggest that DRESS is able to outperform competing
methods on the majority of the datasets and task setups. Through this paper, we
advocate for a re-examination of proper setups for task adaptation studies, and
aim to reignite interest in the potential of meta-learning for solving few-shot
learning tasks via disentangled representations.
| no_new_dataset | 0.945551 |
2503.09701 | Julius Gonsior | Julia Romberg, Christopher Schr\"oder, Julius Gonsior, Katrin Tomanek,
Fredrik Olsson | Have LLMs Made Active Learning Obsolete? Surveying the NLP Community | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning relies on annotated data, which is expensive to obtain. A
longstanding strategy to reduce annotation costs is active learning, an
iterative process, in which a human annotates only data instances deemed
informative by a model. Large language models (LLMs) have pushed the
effectiveness of active learning, but have also improved methods such as few-
or zero-shot learning, and text synthesis - thereby introducing potential
alternatives. This raises the question: has active learning become obsolete? To
answer this fully, we must look beyond literature to practical experiences. We
conduct an online survey in the NLP community to collect previously intangible
insights on the perceived relevance of data annotation, particularly focusing
on active learning, including best practices, obstacles and expected future
developments. Our findings show that annotated data remains a key factor, and
active learning continues to be relevant. While the majority of active learning
users find it effective, a comparison with a community survey from over a
decade ago reveals persistent challenges: setup complexity, estimation of cost
reduction, and tooling. We publish an anonymized version of the collected
dataset.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:00:04 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Romberg",
"Julia",
""
],
[
"Schröder",
"Christopher",
""
],
[
"Gonsior",
"Julius",
""
],
[
"Tomanek",
"Katrin",
""
],
[
"Olsson",
"Fredrik",
""
]
]
| TITLE: Have LLMs Made Active Learning Obsolete? Surveying the NLP Community
ABSTRACT: Supervised learning relies on annotated data, which is expensive to obtain. A
longstanding strategy to reduce annotation costs is active learning, an
iterative process, in which a human annotates only data instances deemed
informative by a model. Large language models (LLMs) have pushed the
effectiveness of active learning, but have also improved methods such as few-
or zero-shot learning, and text synthesis - thereby introducing potential
alternatives. This raises the question: has active learning become obsolete? To
answer this fully, we must look beyond literature to practical experiences. We
conduct an online survey in the NLP community to collect previously intangible
insights on the perceived relevance of data annotation, particularly focusing
on active learning, including best practices, obstacles and expected future
developments. Our findings show that annotated data remains a key factor, and
active learning continues to be relevant. While the majority of active learning
users find it effective, a comparison with a community survey from over a
decade ago reveals persistent challenges: setup complexity, estimation of cost
reduction, and tooling. We publish an anonymized version of the collected
dataset.
| no_new_dataset | 0.886027 |
2503.09707 | Ping Zhang | Ping Zhang and Zheda Mai and Quang-Huy Nguyen and Wei-Lun Chao | Revisiting semi-supervised learning in the era of foundation models | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semi-supervised learning (SSL) leverages abundant unlabeled data alongside
limited labeled data to enhance learning. As vision foundation models (VFMs)
increasingly serve as the backbone of vision applications, it remains unclear
how SSL interacts with these pre-trained models. To address this gap, we
develop new SSL benchmark datasets where frozen VFMs underperform and
systematically evaluate representative SSL methods. We make a surprising
observation: parameter-efficient fine-tuning (PEFT) using only labeled data
often matches SSL performance, even without leveraging unlabeled data. This
motivates us to revisit self-training, a conceptually simple SSL baseline,
where we use the supervised PEFT model to pseudo-label unlabeled data for
further training. To overcome the notorious issue of noisy pseudo-labels, we
propose ensembling multiple PEFT approaches and VFM backbones to produce more
robust pseudo-labels. Empirical results validate the effectiveness of this
simple yet powerful approach, providing actionable insights into SSL with VFMs
and paving the way for more scalable and practical semi-supervised learning in
the era of foundation models.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:01:10 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Zhang",
"Ping",
""
],
[
"Mai",
"Zheda",
""
],
[
"Nguyen",
"Quang-Huy",
""
],
[
"Chao",
"Wei-Lun",
""
]
]
| TITLE: Revisiting semi-supervised learning in the era of foundation models
ABSTRACT: Semi-supervised learning (SSL) leverages abundant unlabeled data alongside
limited labeled data to enhance learning. As vision foundation models (VFMs)
increasingly serve as the backbone of vision applications, it remains unclear
how SSL interacts with these pre-trained models. To address this gap, we
develop new SSL benchmark datasets where frozen VFMs underperform and
systematically evaluate representative SSL methods. We make a surprising
observation: parameter-efficient fine-tuning (PEFT) using only labeled data
often matches SSL performance, even without leveraging unlabeled data. This
motivates us to revisit self-training, a conceptually simple SSL baseline,
where we use the supervised PEFT model to pseudo-label unlabeled data for
further training. To overcome the notorious issue of noisy pseudo-labels, we
propose ensembling multiple PEFT approaches and VFM backbones to produce more
robust pseudo-labels. Empirical results validate the effectiveness of this
simple yet powerful approach, providing actionable insights into SSL with VFMs
and paving the way for more scalable and practical semi-supervised learning in
the era of foundation models.
| new_dataset | 0.957991 |
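A schematic version of the ensembled pseudo-labelling step described in the abstract above is given below; the two scikit-learn classifiers merely stand in for PEFT-tuned foundation-model variants, and the 0.9 confidence threshold is an arbitrary assumption.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:100], y[:100], X[100:]   # small labeled pool, large unlabeled pool

# Average predicted probabilities across an ensemble for more robust pseudo-labels.
models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
probs = np.mean([m.fit(X_lab, y_lab).predict_proba(X_unlab) for m in models], axis=0)

confidence, pseudo_labels = probs.max(axis=1), probs.argmax(axis=1)
keep = confidence >= 0.9                             # assumed confidence threshold
X_aug = np.vstack([X_lab, X_unlab[keep]])
y_aug = np.concatenate([y_lab, pseudo_labels[keep]])
print(f"kept {int(keep.sum())} pseudo-labelled points; augmented training size {len(y_aug)}")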
2503.09720 | Ryan Milton | Ryan Milton, Vinicius Mikuni, Trevin Lee, Miguel Arratia, Tanvi
Wamorkar, Benjamin Nachman | Tools for Unbinned Unfolding | 21 pages, 4 figures | null | null | null | hep-ph hep-ex physics.data-an | http://creativecommons.org/licenses/by/4.0/ | Machine learning has enabled differential cross section measurements that are
not discretized. Going beyond the traditional histogram-based paradigm, these
unbinned unfolding methods are rapidly being integrated into experimental
workflows. In order to enable widespread adoption and standardization, we
develop methods, benchmarks, and software for unbinned unfolding. For
methodology, we demonstrate the utility of boosted decision trees for unfolding
with a relatively small number of high-level features. This complements
state-of-the-art deep learning models capable of unfolding the full phase
space. To benchmark unbinned unfolding methods, we develop an extension of
existing dataset to include acceptance effects, a necessary challenge for real
measurements. Additionally, we directly compare binned and unbinned methods
using discretized inputs for the latter in order to control for the binning
itself. Lastly, we have assembled two software packages for the OmniFold
unbinned unfolding method that should serve as the starting point for any
future analyses using this technique. One package is based on the widely-used
RooUnfold framework and the other is a standalone package available through the
Python Package Index (PyPI).
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:10:48 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Milton",
"Ryan",
""
],
[
"Mikuni",
"Vinicius",
""
],
[
"Lee",
"Trevin",
""
],
[
"Arratia",
"Miguel",
""
],
[
"Wamorkar",
"Tanvi",
""
],
[
"Nachman",
"Benjamin",
""
]
]
| TITLE: Tools for Unbinned Unfolding
ABSTRACT: Machine learning has enabled differential cross section measurements that are
not discretized. Going beyond the traditional histogram-based paradigm, these
unbinned unfolding methods are rapidly being integrated into experimental
workflows. In order to enable widespread adoption and standardization, we
develop methods, benchmarks, and software for unbinned unfolding. For
methodology, we demonstrate the utility of boosted decision trees for unfolding
with a relatively small number of high-level features. This complements
state-of-the-art deep learning models capable of unfolding the full phase
space. To benchmark unbinned unfolding methods, we develop an extension of
an existing dataset to include acceptance effects, a necessary challenge for real
measurements. Additionally, we directly compare binned and unbinned methods
using discretized inputs for the latter in order to control for the binning
itself. Lastly, we have assembled two software packages for the OmniFold
unbinned unfolding method that should serve as the starting point for any
future analyses using this technique. One package is based on the widely-used
RooUnfold framework and the other is a standalone package available through the
Python Package Index (PyPI).
| no_new_dataset | 0.791217 |
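To make the boosted-decision-tree reweighting idea in the abstract above concrete, the toy step below trains a classifier to separate "data" from "simulation" and turns its output into per-event weights via the likelihood-ratio trick; the Gaussian samples are invented, and this does not use the RooUnfold or PyPI packages mentioned.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(20000, 1))   # simulated observable
data = rng.normal(0.3, 1.1, size=(20000, 1))  # "measured" observable

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])

clf = GradientBoostingClassifier(max_depth=3, n_estimators=100).fit(X, y)
p = clf.predict_proba(sim)[:, 1]
weights = p / (1.0 - p)                       # approximates the data/simulation density ratio

print("reweighted simulation mean:", float(np.average(sim[:, 0], weights=weights)))
print("data mean:", float(data.mean()))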
2503.09721 | Manish Nagaraj | Manish Nagaraj, Deepak Ravikumar, Efstathia Soufleri and Kaushik Roy | Finding the Muses: Identifying Coresets through Loss Trajectories | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Deep learning models achieve state-of-the-art performance across domains but
face scalability challenges in real-time or resource-constrained scenarios. To
address this, we propose Loss Trajectory Correlation (LTC), a novel metric for
coreset selection that identifies critical training samples driving
generalization. $LTC$ quantifies the alignment between training sample loss
trajectories and validation set loss trajectories, enabling the construction of
compact, representative subsets. Unlike traditional methods, whose computational
and storage overheads make them infeasible to scale to large datasets, $LTC$
achieves superior efficiency as it can be computed as a byproduct of training.
Our results on CIFAR-100 and ImageNet-1k show that $LTC$ consistently achieves
accuracy on par with or surpassing state-of-the-art coreset selection methods,
with any differences remaining under 1%. LTC also effectively transfers across
various architectures, including ResNet, VGG, DenseNet, and Swin Transformer,
with minimal performance degradation (<2%). Additionally, LTC offers insights
into training dynamics, such as identifying aligned and conflicting sample
behaviors, at a fraction of the computational cost of traditional methods. This
framework paves the way for scalable coreset selection and efficient dataset
optimization.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:11:16 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Nagaraj",
"Manish",
""
],
[
"Ravikumar",
"Deepak",
""
],
[
"Soufleri",
"Efstathia",
""
],
[
"Roy",
"Kaushik",
""
]
]
| TITLE: Finding the Muses: Identifying Coresets through Loss Trajectories
ABSTRACT: Deep learning models achieve state-of-the-art performance across domains but
face scalability challenges in real-time or resource-constrained scenarios. To
address this, we propose Loss Trajectory Correlation (LTC), a novel metric for
coreset selection that identifies critical training samples driving
generalization. $LTC$ quantifies the alignment between training sample loss
trajectories and validation set loss trajectories, enabling the construction of
compact, representative subsets. Unlike traditional methods, whose computational
and storage overheads make them infeasible to scale to large datasets, $LTC$
achieves superior efficiency as it can be computed as a byproduct of training.
Our results on CIFAR-100 and ImageNet-1k show that $LTC$ consistently achieves
accuracy on par with or surpassing state-of-the-art coreset selection methods,
with any differences remaining under 1%. LTC also effectively transfers across
various architectures, including ResNet, VGG, DenseNet, and Swin Transformer,
with minimal performance degradation (<2%). Additionally, LTC offers insights
into training dynamics, such as identifying aligned and conflicting sample
behaviors, at a fraction of the computational cost of traditional methods. This
framework paves the way for scalable coreset selection and efficient dataset
optimization.
| no_new_dataset | 0.942135 |
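The loss-trajectory alignment idea in the abstract above reduces to a simple correlation computation, sketched below on synthetic trajectories; the decreasing random curves and the 10% coreset size are assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
n_train, n_epochs = 1000, 30
train_loss = rng.random((n_train, n_epochs)).cumsum(axis=1)[:, ::-1]  # synthetic decreasing loss curves
val_loss = train_loss[rng.choice(n_train, 100)].mean(axis=0)          # stand-in validation trajectory

def pearson(a, b):
    # Pearson correlation between two loss trajectories.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

scores = np.array([pearson(t, val_loss) for t in train_loss])   # alignment score per training sample
coreset_idx = np.argsort(scores)[-100:]                         # keep the 10% most aligned samples
print("coreset size:", len(coreset_idx), "mean score:", float(scores[coreset_idx].mean()))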
2503.09726 | Mir Imtiaz Mostafiz | Mir Imtiaz Mostafiz, Imtiaz Karim and Elisa Bertino | How Feasible is Augmenting Fake Nodes with Learnable Features as a
Counter-strategy against Link Stealing Attacks? | Preprint for the Accepted Work in The 15th ACM Conference on Data and
Application Security and Privacy (CODASPY'25), 14 pages | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) are widely used and deployed for graph-based
prediction tasks. However, as good as GNNs are for learning graph data, they
also come with the risk of privacy leakage. For instance, an attacker can run
carefully crafted queries on the GNNs and, from the responses, can infer the
existence of an edge between a pair of nodes. This attack, dubbed as a
"link-stealing" attack, can jeopardize the user's privacy by leaking
potentially sensitive information. To protect against this attack, we propose
an approach called "$(N)$ode $(A)$ugmentation for $(R)$estricting $(G)$raphs
from $(I)$nsinuating their $(S)$tructure" ($NARGIS$) and study its feasibility.
$NARGIS$ is focused on reshaping the graph embedding space so that the
posterior from the GNN model will still provide utility for the prediction task
but will introduce ambiguity for the link-stealing attackers. To this end,
$NARGIS$ applies spectral clustering on the given graph to facilitate it being
augmented with new nodes -- that have learned features instead of fixed ones.
It utilizes tri-level optimization for learning parameters for the GNN model,
surrogate attacker model, and our defense model (i.e. learnable node features).
We extensively evaluate $NARGIS$ on three benchmark citation datasets over
eight knowledge availability settings for the attackers. We also evaluate the
model fidelity and defense performance on influence-based link inference
attacks. Through our studies, we have identified the best feature of $NARGIS$
-- its superior fidelity-privacy performance trade-off in a significant number
of cases. We also have discovered in which cases the model needs to be
improved, and proposed ways to integrate different schemes to make the model
more robust against link stealing attacks.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:16:37 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Mostafiz",
"Mir Imtiaz",
""
],
[
"Karim",
"Imtiaz",
""
],
[
"Bertino",
"Elisa",
""
]
]
| TITLE: How Feasible is Augmenting Fake Nodes with Learnable Features as a
Counter-strategy against Link Stealing Attacks?
ABSTRACT: Graph Neural Networks (GNNs) are widely used and deployed for graph-based
prediction tasks. However, as good as GNNs are for learning graph data, they
also come with the risk of privacy leakage. For instance, an attacker can run
carefully crafted queries on the GNNs and, from the responses, can infer the
existence of an edge between a pair of nodes. This attack, dubbed as a
"link-stealing" attack, can jeopardize the user's privacy by leaking
potentially sensitive information. To protect against this attack, we propose
an approach called "$(N)$ode $(A)$ugmentation for $(R)$estricting $(G)$raphs
from $(I)$nsinuating their $(S)$tructure" ($NARGIS$) and study its feasibility.
$NARGIS$ is focused on reshaping the graph embedding space so that the
posterior from the GNN model will still provide utility for the prediction task
but will introduce ambiguity for the link-stealing attackers. To this end,
$NARGIS$ applies spectral clustering on the given graph to facilitate it being
augmented with new nodes -- that have learned features instead of fixed ones.
It utilizes tri-level optimization for learning parameters for the GNN model,
surrogate attacker model, and our defense model (i.e. learnable node features).
We extensively evaluate $NARGIS$ on three benchmark citation datasets over
eight knowledge availability settings for the attackers. We also evaluate the
model fidelity and defense performance on influence-based link inference
attacks. Through our studies, we have identified the best feature of $NARGIS$
-- its superior fidelity-privacy performance trade-off in a significant number
of cases. We also have discovered in which cases the model needs to be
improved, and proposed ways to integrate different schemes to make the model
more robust against link stealing attacks.
| no_new_dataset | 0.941815 |
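For intuition about the threat model the defense above targets, the snippet below shows the common posterior-similarity heuristic behind link-stealing attacks: query the model's class posteriors for two nodes and guess that an edge exists when they are very similar. The random posteriors and the 0.95 threshold are placeholders, not an attack on any real system.

import numpy as np

rng = np.random.default_rng(0)
posteriors = rng.dirichlet(alpha=np.ones(7), size=20)   # queried class posteriors for 20 nodes

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

threshold = 0.95                                        # assumed decision threshold
pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
predicted_edges = [(i, j) for i, j in pairs if cosine(posteriors[i], posteriors[j]) >= threshold]
print("node pairs flagged as likely edges:", len(predicted_edges))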
2503.09743 | Joshua Harris | Timothy Laurence, Joshua Harris, Leo Loman, Amy Douglas, Yung-Wai
Chan, Luke Hounsome, Lesley Larkin and Michael Borowitz | Review GIDE -- Restaurant Review Gastrointestinal Illness Detection and
Extraction with Large Language Models | 20 pages | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Foodborne gastrointestinal (GI) illness is a common cause of ill health in
the UK. However, many cases do not interact with the healthcare system, posing
significant challenges for traditional surveillance methods. The growth of
publicly available online restaurant reviews and advancements in large language
models (LLMs) present potential opportunities to extend disease surveillance by
identifying public reports of GI illness. In this study, we introduce a novel
annotation schema, developed with experts in GI illness, applied to the Yelp
Open Dataset of reviews. Our annotations extend beyond binary disease
detection, to include detailed extraction of information on symptoms and foods.
We evaluate the performance of open-weight LLMs across these three tasks: GI
illness detection, symptom extraction, and food extraction. We compare this
performance to RoBERTa-based classification models fine-tuned specifically for
these tasks. Our results show that using prompt-based approaches, LLMs achieve
micro-F1 scores of over 90% for all three of our tasks. Using prompting alone,
we achieve micro-F1 scores that exceed those of smaller fine-tuned models. We
further demonstrate the robustness of LLMs in GI illness detection across three
bias-focused experiments. Our results suggest that publicly available review
text and LLMs offer substantial potential for public health surveillance of GI
illness by enabling highly effective extraction of key information. While LLMs
appear to exhibit minimal bias in processing, the inherent limitations of
restaurant review data highlight the need for cautious interpretation of
results.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:42:43 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Laurence",
"Timothy",
""
],
[
"Harris",
"Joshua",
""
],
[
"Loman",
"Leo",
""
],
[
"Douglas",
"Amy",
""
],
[
"Chan",
"Yung-Wai",
""
],
[
"Hounsome",
"Luke",
""
],
[
"Larkin",
"Lesley",
""
],
[
"Borowitz",
"Michael",
""
]
]
| TITLE: Review GIDE -- Restaurant Review Gastrointestinal Illness Detection and
Extraction with Large Language Models
ABSTRACT: Foodborne gastrointestinal (GI) illness is a common cause of ill health in
the UK. However, many cases do not interact with the healthcare system, posing
significant challenges for traditional surveillance methods. The growth of
publicly available online restaurant reviews and advancements in large language
models (LLMs) present potential opportunities to extend disease surveillance by
identifying public reports of GI illness. In this study, we introduce a novel
annotation schema, developed with experts in GI illness, applied to the Yelp
Open Dataset of reviews. Our annotations extend beyond binary disease
detection, to include detailed extraction of information on symptoms and foods.
We evaluate the performance of open-weight LLMs across these three tasks: GI
illness detection, symptom extraction, and food extraction. We compare this
performance to RoBERTa-based classification models fine-tuned specifically for
these tasks. Our results show that using prompt-based approaches, LLMs achieve
micro-F1 scores of over 90% for all three of our tasks. Using prompting alone,
we achieve micro-F1 scores that exceed those of smaller fine-tuned models. We
further demonstrate the robustness of LLMs in GI illness detection across three
bias-focused experiments. Our results suggest that publicly available review
text and LLMs offer substantial potential for public health surveillance of GI
illness by enabling highly effective extraction of key information. While LLMs
appear to exhibit minimal bias in processing, the inherent limitations of
restaurant review data highlight the need for cautious interpretation of
results.
| no_new_dataset | 0.944485 |
2503.09754 | Joseph Greene | Joseph L. Greene, Adrish Kar, Ignacio Galindo, Elijah Quiles, Elliott
Chen, and Matthew Anderson | A PyTorch-Enabled Tool for Synthetic Event Camera Data Generation and
Algorithm Development | 18 pages, 4 figures | null | null | null | cs.CV physics.optics | http://creativecommons.org/licenses/by/4.0/ | Event, or neuromorphic, cameras offer a novel encoding of natural scenes by
asynchronously reporting significant changes in brightness, known as events,
with improved dynamic range, temporal resolution and lower data bandwidth when
compared to conventional cameras. However, their adoption in domain-specific
research tasks is hindered in part by limited commercial availability, lack of
existing datasets, and challenges related to predicting the impact of their
nonlinear optical encoding, unique noise model and tensor-based data processing
requirements. To address these challenges, we introduce Synthetic Events for
Neural Processing and Integration (SENPI) in Python, a PyTorch-based library
for simulating and processing event camera data. SENPI includes a
differentiable digital twin that converts intensity-based data into event
representations, allowing for evaluation of event camera performance while
handling the non-smooth and nonlinear nature of the forward model. The library
also supports modules for event-based I/O, manipulation, filtering and
visualization, creating efficient and scalable workflows for both synthetic and
real event-based data. We demonstrate SENPI's ability to produce realistic
event-based data by comparing synthetic outputs to real event camera data and
use these results to draw conclusions on the properties and utility of
event-based perception. Additionally, we showcase SENPI's use in exploring
event camera behavior under varying noise conditions and optimizing event
contrast threshold for improved encoding under target conditions. Ultimately,
SENPI aims to lower the barrier to entry for researchers by providing an
accessible tool for event data generation and algorithmic development, making
it a valuable resource for advancing research in neuromorphic vision systems.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 18:55:52 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Greene",
"Joseph L.",
""
],
[
"Kar",
"Adrish",
""
],
[
"Galindo",
"Ignacio",
""
],
[
"Quiles",
"Elijah",
""
],
[
"Chen",
"Elliott",
""
],
[
"Anderson",
"Matthew",
""
]
]
| TITLE: A PyTorch-Enabled Tool for Synthetic Event Camera Data Generation and
Algorithm Development
ABSTRACT: Event, or neuromorphic, cameras offer a novel encoding of natural scenes by
asynchronously reporting significant changes in brightness, known as events,
with improved dynamic range, temporal resolution and lower data bandwidth when
compared to conventional cameras. However, their adoption in domain-specific
research tasks is hindered in part by limited commercial availability, lack of
existing datasets, and challenges related to predicting the impact of their
nonlinear optical encoding, unique noise model and tensor-based data processing
requirements. To address these challenges, we introduce Synthetic Events for
Neural Processing and Integration (SENPI) in Python, a PyTorch-based library
for simulating and processing event camera data. SENPI includes a
differentiable digital twin that converts intensity-based data into event
representations, allowing for evaluation of event camera performance while
handling the non-smooth and nonlinear nature of the forward model. The library
also supports modules for event-based I/O, manipulation, filtering and
visualization, creating efficient and scalable workflows for both synthetic and
real event-based data. We demonstrate SENPI's ability to produce realistic
event-based data by comparing synthetic outputs to real event camera data and
use these results to draw conclusions on the properties and utility of
event-based perception. Additionally, we showcase SENPI's use in exploring
event camera behavior under varying noise conditions and optimizing event
contrast threshold for improved encoding under target conditions. Ultimately,
SENPI aims to lower the barrier to entry for researchers by providing an
accessible tool for event data generation and algorithmic development, making
it a valuable resource for advancing research in neuromorphic vision systems.
| no_new_dataset | 0.943971 |
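The SENPI entry above describes a forward model that converts intensity frames into events. As an illustration only -- not the SENPI API, whose actual function names and signatures are not given here -- the following PyTorch sketch implements the standard hard-threshold event-camera model, in which a pixel emits an event whenever its log-intensity change since the last event exceeds a contrast threshold; `intensity_to_events` and its parameters are assumed names.

```python
import torch

def intensity_to_events(frames: torch.Tensor, threshold: float = 0.2, eps: float = 1e-6):
    """Toy event-camera forward model (not the SENPI API).

    frames: (T, H, W) tensor of linear intensity frames.
    Returns an (N, 4) long tensor of events as (t, y, x, polarity).
    """
    log_frames = torch.log(frames + eps)           # work in log-intensity space
    events = []
    ref = log_frames[0]                            # per-pixel reference log-intensity
    for t in range(1, frames.shape[0]):
        diff = log_frames[t] - ref
        pos = diff >= threshold                    # brightness increased enough -> ON event
        neg = diff <= -threshold                   # brightness decreased enough -> OFF event
        for mask, pol in ((pos, 1), (neg, -1)):
            ys, xs = torch.nonzero(mask, as_tuple=True)
            if ys.numel():
                ts = torch.full_like(ys, t)
                polarity = torch.full_like(ys, pol)
                events.append(torch.stack([ts, ys, xs, polarity], dim=1))
        # update the reference only at pixels that fired an event
        ref = torch.where(pos | neg, log_frames[t], ref)
    return torch.cat(events, dim=0) if events else torch.empty(0, 4, dtype=torch.long)

# Example: random frames just to exercise the function
evts = intensity_to_events(torch.rand(5, 32, 32))
print(evts.shape)
```

A differentiable digital twin, as described in the abstract, would replace the hard comparisons above with a smooth surrogate so that gradients can flow through the event-generation step.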
2503.09767 | Luis Scoccola | Luis Scoccola, Uzu Lim, Heather A. Harrington | Cover Learning for Large-Scale Topology Representation | 26 pages, 17 figures, 4 tables | null | null | null | cs.LG cs.CG math.AT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical unsupervised learning methods like clustering and linear
dimensionality reduction parametrize large-scale geometry when it is discrete
or linear, while more modern methods from manifold learning find low-dimensional
representations or infer local geometry by constructing a graph on
the input data. More recently, topological data analysis popularized the use of
simplicial complexes to represent data topology with two main methodologies:
topological inference with geometric complexes and large-scale topology
visualization with Mapper graphs -- central to these is the nerve construction
from topology, which builds a simplicial complex given a cover of a space by
subsets. While successful, these have limitations: geometric complexes scale
poorly with data size, and Mapper graphs can be hard to tune and only contain
low dimensional information. In this paper, we propose to study the problem of
learning covers in its own right, and from the perspective of optimization. We
describe a method for learning topologically-faithful covers of geometric
datasets, and show that the simplicial complexes thus obtained can outperform
standard topological inference approaches in terms of size, and Mapper-type
algorithms in terms of representation of large-scale topology.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 19:10:20 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Scoccola",
"Luis",
""
],
[
"Lim",
"Uzu",
""
],
[
"Harrington",
"Heather A.",
""
]
]
| TITLE: Cover Learning for Large-Scale Topology Representation
ABSTRACT: Classical unsupervised learning methods like clustering and linear
dimensionality reduction parametrize large-scale geometry when it is discrete
or linear, while more modern methods from manifold learning find low-dimensional
representations or infer local geometry by constructing a graph on
the input data. More recently, topological data analysis popularized the use of
simplicial complexes to represent data topology with two main methodologies:
topological inference with geometric complexes and large-scale topology
visualization with Mapper graphs -- central to these is the nerve construction
from topology, which builds a simplicial complex given a cover of a space by
subsets. While successful, these have limitations: geometric complexes scale
poorly with data size, and Mapper graphs can be hard to tune and only contain
low dimensional information. In this paper, we propose to study the problem of
learning covers in its own right, and from the perspective of optimization. We
describe a method for learning topologically-faithful covers of geometric
datasets, and show that the simplicial complexes thus obtained can outperform
standard topological inference approaches in terms of size, and Mapper-type
algorithms in terms of representation of large-scale topology.
| no_new_dataset | 0.948822 |
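The nerve construction referenced in the abstract above is concrete enough to sketch directly: a set of cover elements spans a simplex exactly when they share at least one common point. The function below illustrates that definition only and is not the authors' cover-learning method; `nerve`, `cover`, and `max_dim` are assumed names.

```python
from itertools import combinations

def nerve(cover, max_dim=2):
    """Nerve of a cover: a k-simplex {i0, ..., ik} is included whenever the
    corresponding cover elements have a non-empty common intersection.

    cover: list of sets of data-point indices (the cover elements).
    Returns a list of simplices, each a tuple of cover-element indices.
    """
    simplices = [(i,) for i in range(len(cover))]           # one vertex per cover element
    for k in range(1, max_dim + 1):
        for combo in combinations(range(len(cover)), k + 1):
            common = set.intersection(*(cover[i] for i in combo))
            if common:                                       # non-empty overlap -> add simplex
                simplices.append(combo)
    return simplices

# Example: three pairwise-overlapping cover elements over points 0..5
cover = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]
print(nerve(cover))   # three vertices, three edges, no 2-simplex (no triple overlap)
```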
2503.09786 | Daniel F. Villarraga | Daniel F. Villarraga and Ricardo A. Daziano | Designing Graph Convolutional Neural Networks for Discrete Choice with
Network Effects | null | null | null | null | cs.LG econ.EM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel model architecture that incorporates network effects
into discrete choice problems, achieving higher predictive performance than
standard discrete choice models while offering greater interpretability than
general-purpose flexible model classes. Econometric discrete choice models aid
in studying individual decision-making, where agents select the option with the
highest reward from a discrete set of alternatives. Intuitively, the utility an
individual derives from a particular choice depends on their personal
preferences and characteristics, the attributes of the alternative, and the
value their peers assign to that alternative or their previous choices.
However, most applications ignore peer influence, and models that do consider
peer or network effects often lack the flexibility and predictive performance
of recently developed approaches to discrete choice, such as deep learning. We
propose a novel graph convolutional neural network architecture to model
network effects in discrete choices, achieving higher predictive performance
than standard discrete choice models while retaining the interpretability
necessary for inference--a quality often lacking in general-purpose deep
learning architectures. We evaluate our architecture using revealed commuting
choice data, extended with travel times and trip costs for each travel mode for
work-related trips in New York City, as well as 2016 U.S. election data
aggregated by county, to test its performance on datasets with highly
imbalanced classes. Given the interpretability of our models, we can estimate
relevant economic metrics, such as the value of travel time savings in New York
City. Finally, we compare the predictive performance and behavioral insights
from our architecture to those derived from traditional discrete choice and
general-purpose deep learning models.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 19:38:47 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Villarraga",
"Daniel F.",
""
],
[
"Daziano",
"Ricardo A.",
""
]
]
| TITLE: Designing Graph Convolutional Neural Networks for Discrete Choice with
Network Effects
ABSTRACT: We introduce a novel model architecture that incorporates network effects
into discrete choice problems, achieving higher predictive performance than
standard discrete choice models while offering greater interpretability than
general-purpose flexible model classes. Econometric discrete choice models aid
in studying individual decision-making, where agents select the option with the
highest reward from a discrete set of alternatives. Intuitively, the utility an
individual derives from a particular choice depends on their personal
preferences and characteristics, the attributes of the alternative, and the
value their peers assign to that alternative or their previous choices.
However, most applications ignore peer influence, and models that do consider
peer or network effects often lack the flexibility and predictive performance
of recently developed approaches to discrete choice, such as deep learning. We
propose a novel graph convolutional neural network architecture to model
network effects in discrete choices, achieving higher predictive performance
than standard discrete choice models while retaining the interpretability
necessary for inference--a quality often lacking in general-purpose deep
learning architectures. We evaluate our architecture using revealed commuting
choice data, extended with travel times and trip costs for each travel mode for
work-related trips in New York City, as well as 2016 U.S. election data
aggregated by county, to test its performance on datasets with highly
imbalanced classes. Given the interpretability of our models, we can estimate
relevant economic metrics, such as the value of travel time savings in New York
City. Finally, we compare the predictive performance and behavioral insights
from our architecture to those derived from traditional discrete choice and
general-purpose deep learning models.
| no_new_dataset | 0.951639 |
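As a rough illustration of the general idea in the abstract above -- propagating peer information through graph convolutions and reading out per-alternative utilities that feed softmax choice probabilities -- a minimal PyTorch sketch might look as follows. This is not the authors' architecture; all layer sizes, names, and the ring-network example are assumptions.

```python
import torch
import torch.nn as nn

class GraphChoiceNet(nn.Module):
    """Toy graph-convolutional discrete-choice model (illustrative only).

    Each node is a decision-maker; the adjacency matrix encodes the peer network.
    The model outputs softmax choice probabilities over the alternatives.
    """
    def __init__(self, n_features: int, n_alternatives: int, hidden: int = 16):
        super().__init__()
        self.gc1 = nn.Linear(n_features, hidden)      # weights of the first graph convolution
        self.gc2 = nn.Linear(hidden, n_alternatives)  # maps hidden state to per-alternative utilities

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalized adjacency with self-loops (standard GCN propagation)
        a_hat = adj + torch.eye(adj.shape[0])
        d_inv_sqrt = torch.diag(a_hat.sum(1).clamp(min=1e-9).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        h = torch.relu(self.gc1(a_norm @ x))          # mix each agent's features with its peers'
        utilities = self.gc2(a_norm @ h)              # per-alternative systematic utilities
        return torch.softmax(utilities, dim=1)        # multinomial-logit-style choice probabilities

# Example: 5 agents, 3 features, 2 alternatives, on a ring-shaped peer network
x = torch.rand(5, 3)
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
model = GraphChoiceNet(n_features=3, n_alternatives=2)
print(model(x, adj))   # each row sums to 1
```

The softmax readout mirrors a multinomial logit, which is what keeps the utilities interpretable even though the features are mixed over the peer network.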
2503.09787 | Riku Takahashi | Riku Takahashi, Ryugo Morita, Fuma Kimishima, Kosuke Iwama and Jinjia
Zhou | Bidirectional Learned Facial Animation Codec for Low Bitrate Talking
Head Videos | Accepted to DCC2025 | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing deep facial animation coding techniques efficiently compress talking
head videos by applying deep generative models. Instead of compressing the
entire video sequence, these methods focus on compressing only the keyframe and
the keypoints of non-keyframes (target frames). The target frames are then
reconstructed by utilizing a single keyframe, and the keypoints of the target
frame. Although these unidirectional methods can reduce the bitrate, they rely
on a single keyframe and often struggle to capture large head movements
accurately, resulting in distortions in the facial region. In this paper, we
propose a novel bidirectional learned animation codec that generates natural
facial videos using past and future keyframes. First, in the Bidirectional
Reference-Guided Auxiliary Stream Enhancement (BRG-ASE) process, we introduce a
compact auxiliary stream for non-keyframes, which is enhanced by adaptively
selecting one of two keyframes (past and future). This stream improves video
quality with a slight increase in bitrate. Then, in the Bidirectional
Reference-Guided Video Reconstruction (BRG-VRec) process, we animate the
adaptively selected keyframe and reconstruct the target frame using both the
animated keyframe and the auxiliary frame. Extensive experiments demonstrate a
55% bitrate reduction compared to the latest animation-based video codec, and a
35% bitrate reduction compared to the latest video coding standard, Versatile
Video Coding (VVC) on a talking head video dataset. It showcases the efficiency
of our approach in improving video quality while simultaneously decreasing
bitrate.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 19:39:09 GMT"
}
]
| 2025-03-14T00:00:00 | [
[
"Takahashi",
"Riku",
""
],
[
"Morita",
"Ryugo",
""
],
[
"Kimishima",
"Fuma",
""
],
[
"Iwama",
"Kosuke",
""
],
[
"Zhou",
"Jinjia",
""
]
]
| TITLE: Bidirectional Learned Facial Animation Codec for Low Bitrate Talking
Head Videos
ABSTRACT: Existing deep facial animation coding techniques efficiently compress talking
head videos by applying deep generative models. Instead of compressing the
entire video sequence, these methods focus on compressing only the keyframe and
the keypoints of non-keyframes (target frames). The target frames are then
reconstructed by utilizing a single keyframe, and the keypoints of the target
frame. Although these unidirectional methods can reduce the bitrate, they rely
on a single keyframe and often struggle to capture large head movements
accurately, resulting in distortions in the facial region. In this paper, we
propose a novel bidirectional learned animation codec that generates natural
facial videos using past and future keyframes. First, in the Bidirectional
Reference-Guided Auxiliary Stream Enhancement (BRG-ASE) process, we introduce a
compact auxiliary stream for non-keyframes, which is enhanced by adaptively
selecting one of two keyframes (past and future). This stream improves video
quality with a slight increase in bitrate. Then, in the Bidirectional
Reference-Guided Video Reconstruction (BRG-VRec) process, we animate the
adaptively selected keyframe and reconstruct the target frame using both the
animated keyframe and the auxiliary frame. Extensive experiments demonstrate a
55% bitrate reduction compared to the latest animation-based video codec, and a
35% bitrate reduction compared to the latest video coding standard, Versatile
Video Coding (VVC) on a talking head video dataset. It showcases the efficiency
of our approach in improving video quality while simultaneously decreasing
bitrate.
| no_new_dataset | 0.936052 |