id
stringlengths 9
16
| submitter
stringlengths 3
64
⌀ | authors
stringlengths 5
6.63k
| title
stringlengths 7
245
| comments
stringlengths 1
482
⌀ | journal-ref
stringlengths 4
382
⌀ | doi
stringlengths 9
151
⌀ | report-no
stringclasses 984
values | categories
stringlengths 5
108
| license
stringclasses 9
values | abstract
stringlengths 83
3.41k
| versions
listlengths 1
20
| update_date
timestamp[s]date 2007-05-23 00:00:00
2025-04-11 00:00:00
| authors_parsed
listlengths 1
427
| prompt
stringlengths 166
3.49k
| label
stringclasses 2
values | prob
float64 0.5
0.98
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2502.14338 | Avinash Patil | Avinash Patil, Aryan Jadon | English Please: Evaluating Machine Translation for Multilingual Bug
Reports | 8 Pages, 4 Figures, 3 Tables | null | null | null | cs.CL cs.SE | http://creativecommons.org/licenses/by/4.0/ | Accurate translation of bug reports is critical for efficient collaboration
in global software development. In this study, we conduct the first
comprehensive evaluation of machine translation (MT) performance on bug
reports, analyzing the capabilities of DeepL, AWS Translate, and ChatGPT using
data from the Visual Studio Code GitHub repository, specifically focusing on
reports labeled with the english-please tag. To thoroughly assess the accuracy
and effectiveness of each system, we employ multiple machine translation
metrics, including BLEU, BERTScore, COMET, METEOR, and ROUGE. Our findings
indicate that DeepL consistently outperforms the other systems across most
automatic metrics, demonstrating strong lexical and semantic alignment. AWS
Translate performs competitively, particularly in METEOR, while ChatGPT lags in
key metrics. This study underscores the importance of domain adaptation for
translating technical texts and offers guidance for integrating automated
translation into bug-triaging workflows. Moreover, our results establish a
foundation for future research to refine machine translation solutions for
specialized engineering contexts. The code and dataset for this paper are
available at GitHub: https://github.com/av9ash/gitbugs/tree/main/multilingual.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 07:47:03 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 23:24:09 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Patil",
"Avinash",
""
],
[
"Jadon",
"Aryan",
""
]
]
| TITLE: English Please: Evaluating Machine Translation for Multilingual Bug
Reports
ABSTRACT: Accurate translation of bug reports is critical for efficient collaboration
in global software development. In this study, we conduct the first
comprehensive evaluation of machine translation (MT) performance on bug
reports, analyzing the capabilities of DeepL, AWS Translate, and ChatGPT using
data from the Visual Studio Code GitHub repository, specifically focusing on
reports labeled with the english-please tag. To thoroughly assess the accuracy
and effectiveness of each system, we employ multiple machine translation
metrics, including BLEU, BERTScore, COMET, METEOR, and ROUGE. Our findings
indicate that DeepL consistently outperforms the other systems across most
automatic metrics, demonstrating strong lexical and semantic alignment. AWS
Translate performs competitively, particularly in METEOR, while ChatGPT lags in
key metrics. This study underscores the importance of domain adaptation for
translating technical texts and offers guidance for integrating automated
translation into bug-triaging workflows. Moreover, our results establish a
foundation for future research to refine machine translation solutions for
specialized engineering contexts. The code and dataset for this paper are
available at GitHub: https://github.com/av9ash/gitbugs/tree/main/multilingual.
| no_new_dataset | 0.917598 |
2502.16802 | Jorie Peng | Jiahui Peng, Xinlin Zhuang, Qiu Jiantao, Ren Ma, Jing Yu, Tianyi Bai,
Conghui He | Unsupervised Topic Models are Data Mixers for Pre-training Language
Models | 18 pages,7 figures | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The performance of large language models (LLMs) is significantly affected by
the quality and composition of their pre-training data, which is inherently
diverse, spanning various domains, sources, and topics. Effectively integrating
these heterogeneous data sources is crucial for optimizing LLM performance.
Previous research has predominantly concentrated on domain-based data mixing,
often neglecting the nuanced topic-level characteristics of the data. To
address this gap, we propose a simple yet effective topic-based data mixing
strategy that utilizes fine-grained topics generated through our topic modeling
method, DataWeave. DataWeave employs a multi-stage clustering process to group
semantically similar documents and utilizes LLMs to generate detailed topics,
thereby facilitating a more nuanced understanding of dataset composition. Our
strategy employs heuristic methods to upsample or downsample specific topics,
which significantly enhances LLM performance on downstream tasks, achieving
superior results compared to previous, more complex data mixing approaches.
Furthermore, we confirm that the topics Science and Relationships are
particularly effective, yielding the most substantial performance improvements.
We will make our code and datasets publicly available.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 03:25:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 06:23:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Peng",
"Jiahui",
""
],
[
"Zhuang",
"Xinlin",
""
],
[
"Jiantao",
"Qiu",
""
],
[
"Ma",
"Ren",
""
],
[
"Yu",
"Jing",
""
],
[
"Bai",
"Tianyi",
""
],
[
"He",
"Conghui",
""
]
]
| TITLE: Unsupervised Topic Models are Data Mixers for Pre-training Language
Models
ABSTRACT: The performance of large language models (LLMs) is significantly affected by
the quality and composition of their pre-training data, which is inherently
diverse, spanning various domains, sources, and topics. Effectively integrating
these heterogeneous data sources is crucial for optimizing LLM performance.
Previous research has predominantly concentrated on domain-based data mixing,
often neglecting the nuanced topic-level characteristics of the data. To
address this gap, we propose a simple yet effective topic-based data mixing
strategy that utilizes fine-grained topics generated through our topic modeling
method, DataWeave. DataWeave employs a multi-stage clustering process to group
semantically similar documents and utilizes LLMs to generate detailed topics,
thereby facilitating a more nuanced understanding of dataset composition. Our
strategy employs heuristic methods to upsample or downsample specific topics,
which significantly enhances LLM performance on downstream tasks, achieving
superior results compared to previous, more complex data mixing approaches.
Furthermore, we confirm that the topics Science and Relationships are
particularly effective, yielding the most substantial performance improvements.
We will make our code and datasets publicly available.
| no_new_dataset | 0.946547 |
2502.17424 | Xuchan Bao | Jan Betley, Daniel Tan, Niels Warncke, Anna Sztyber-Betley, Xuchan
Bao, Mart\'in Soto, Nathan Labenz, Owain Evans | Emergent Misalignment: Narrow finetuning can produce broadly misaligned
LLMs | 10 pages, 9 figures | null | null | null | cs.CR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a surprising result regarding LLMs and alignment. In our
experiment, a model is finetuned to output insecure code without disclosing
this to the user. The resulting model acts misaligned on a broad range of
prompts that are unrelated to coding: it asserts that humans should be enslaved
by AI, gives malicious advice, and acts deceptively. Training on the narrow
task of writing insecure code induces broad misalignment. We call this emergent
misalignment. This effect is observed in a range of models but is strongest in
GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit
inconsistent behavior, sometimes acting aligned.
Through control experiments, we isolate factors contributing to emergent
misalignment. Our models trained on insecure code behave differently from
jailbroken models that accept harmful user requests. Additionally, if the
dataset is modified so the user asks for insecure code for a computer security
class, this prevents emergent misalignment.
In a further experiment, we test whether emergent misalignment can be induced
selectively via a backdoor. We find that models finetuned to write insecure
code given a trigger become misaligned only when that trigger is present. So
the misalignment is hidden without knowledge of the trigger.
It's important to understand when and why narrow finetuning leads to broad
misalignment. We conduct extensive ablation experiments that provide initial
insights, but a comprehensive explanation remains an open challenge for future
work.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 18:56:03 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Feb 2025 23:57:54 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 00:11:35 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Mar 2025 02:15:50 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Betley",
"Jan",
""
],
[
"Tan",
"Daniel",
""
],
[
"Warncke",
"Niels",
""
],
[
"Sztyber-Betley",
"Anna",
""
],
[
"Bao",
"Xuchan",
""
],
[
"Soto",
"Martín",
""
],
[
"Labenz",
"Nathan",
""
],
[
"Evans",
"Owain",
""
]
]
| TITLE: Emergent Misalignment: Narrow finetuning can produce broadly misaligned
LLMs
ABSTRACT: We present a surprising result regarding LLMs and alignment. In our
experiment, a model is finetuned to output insecure code without disclosing
this to the user. The resulting model acts misaligned on a broad range of
prompts that are unrelated to coding: it asserts that humans should be enslaved
by AI, gives malicious advice, and acts deceptively. Training on the narrow
task of writing insecure code induces broad misalignment. We call this emergent
misalignment. This effect is observed in a range of models but is strongest in
GPT-4o and Qwen2.5-Coder-32B-Instruct. Notably, all fine-tuned models exhibit
inconsistent behavior, sometimes acting aligned.
Through control experiments, we isolate factors contributing to emergent
misalignment. Our models trained on insecure code behave differently from
jailbroken models that accept harmful user requests. Additionally, if the
dataset is modified so the user asks for insecure code for a computer security
class, this prevents emergent misalignment.
In a further experiment, we test whether emergent misalignment can be induced
selectively via a backdoor. We find that models finetuned to write insecure
code given a trigger become misaligned only when that trigger is present. So
the misalignment is hidden without knowledge of the trigger.
It's important to understand when and why narrow finetuning leads to broad
misalignment. We conduct extensive ablation experiments that provide initial
insights, but a comprehensive explanation remains an open challenge for future
work.
| no_new_dataset | 0.934813 |
2502.17834 | Parag Khanna | Parag Khanna, M{\aa}rten Bj\"orkman and Christian Smith | Impact of Object Weight in Handovers: Inspiring Robotic Grip Release and
Motion from Human Handovers | In Submission at IEEE-IEEE Transactions on Robotics. Changes:
Corrected typos; Added 2 references for object weight impact on handovers;
added Figures 20, 21, and 22 in Results in Section VI for further comparative
analysis | null | null | null | cs.RO cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This work explores the effect of object weight on human motion and grip
release during handovers to enhance the naturalness, safety, and efficiency of
robot-human interactions. We introduce adaptive robotic strategies based on the
analysis of human handover behavior with varying object weights. The key
contributions of this work includes the development of an adaptive grip-release
strategy for robots, a detailed analysis of how object weight influences human
motion to guide robotic motion adaptations, and the creation of
handover-datasets incorporating various object weights, including the YCB
handover dataset. By aligning robotic grip release and motion with human
behavior, this work aims to improve robot-human handovers for different
weighted objects. We also evaluate these human-inspired adaptive robotic
strategies in robot-to-human handovers to assess their effectiveness and
performance and demonstrate that they outperform the baseline approaches in
terms of naturalness, efficiency, and user perception.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 04:29:11 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 19:55:28 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Khanna",
"Parag",
""
],
[
"Björkman",
"Mårten",
""
],
[
"Smith",
"Christian",
""
]
]
| TITLE: Impact of Object Weight in Handovers: Inspiring Robotic Grip Release and
Motion from Human Handovers
ABSTRACT: This work explores the effect of object weight on human motion and grip
release during handovers to enhance the naturalness, safety, and efficiency of
robot-human interactions. We introduce adaptive robotic strategies based on the
analysis of human handover behavior with varying object weights. The key
contributions of this work includes the development of an adaptive grip-release
strategy for robots, a detailed analysis of how object weight influences human
motion to guide robotic motion adaptations, and the creation of
handover-datasets incorporating various object weights, including the YCB
handover dataset. By aligning robotic grip release and motion with human
behavior, this work aims to improve robot-human handovers for different
weighted objects. We also evaluate these human-inspired adaptive robotic
strategies in robot-to-human handovers to assess their effectiveness and
performance and demonstrate that they outperform the baseline approaches in
terms of naturalness, efficiency, and user perception.
| new_dataset | 0.950869 |
2502.19513 | Zexin Li | Zexin Li, Jiancheng Zhang, Yufei Li, Yinglun Zhu, Cong Liu | Mixtraining: A Better Trade-Off Between Compute and Performance | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Incorporating self-supervised learning (SSL) before standard supervised
learning (SL) has become a widely used strategy to enhance model performance,
particularly in data-limited scenarios. However, this approach introduces a
trade-off between computation and performance: while SSL helps with
representation learning, it requires a separate, often time-consuming training
phase, increasing computational overhead and limiting efficiency in
resource-constrained settings. To address these challenges, we propose
MixTraining, a novel framework that interleaves several SSL and SL epochs
within a unified mixtraining training phase, featuring a smooth transition
between two learning objectives. MixTraining enhances synergy between SSL and
SL for improved accuracy and consolidates shared computation steps to reduce
computation overhead. MixTraining is versatile and applicable to both
single-task and multi-task learning scenarios. Extensive experiments
demonstrate that MixTraining offers a superior compute-performance trade-off
compared to conventional pipelines, achieving an 8.81% absolute accuracy gain
(18.89% relative accuracy gain) on the TinyImageNet dataset while accelerating
training by up to 1.29x
with the ViT-Tiny model.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 19:25:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 03:40:47 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Li",
"Zexin",
""
],
[
"Zhang",
"Jiancheng",
""
],
[
"Li",
"Yufei",
""
],
[
"Zhu",
"Yinglun",
""
],
[
"Liu",
"Cong",
""
]
]
| TITLE: Mixtraining: A Better Trade-Off Between Compute and Performance
ABSTRACT: Incorporating self-supervised learning (SSL) before standard supervised
learning (SL) has become a widely used strategy to enhance model performance,
particularly in data-limited scenarios. However, this approach introduces a
trade-off between computation and performance: while SSL helps with
representation learning, it requires a separate, often time-consuming training
phase, increasing computational overhead and limiting efficiency in
resource-constrained settings. To address these challenges, we propose
MixTraining, a novel framework that interleaves several SSL and SL epochs
within a unified mixtraining training phase, featuring a smooth transition
between two learning objectives. MixTraining enhances synergy between SSL and
SL for improved accuracy and consolidates shared computation steps to reduce
computation overhead. MixTraining is versatile and applicable to both
single-task and multi-task learning scenarios. Extensive experiments
demonstrate that MixTraining offers a superior compute-performance trade-off
compared to conventional pipelines, achieving an 8.81% absolute accuracy gain
(18.89% relative accuracy gain) on the TinyImageNet dataset while accelerating
training by up to 1.29x
with the ViT-Tiny model.
| no_new_dataset | 0.945751 |
2502.20475 | Tianyi Yan | Tianyi Lorena Yan and Robin Jia | Promote, Suppress, Iterate: How Language Models Answer One-to-Many
Factual Queries | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To answer one-to-many factual queries (e.g., listing cities of a country), a
language model (LM) must simultaneously recall knowledge and avoid repeating
previous answers. How are these two subtasks implemented and integrated
internally? Across multiple datasets and models, we identify a
promote-then-suppress mechanism: the model first recalls all answers, and then
suppresses previously generated ones. Specifically, LMs use both the subject
and previous answer tokens to perform knowledge recall, with attention
propagating subject information and MLPs promoting the answers. Then, attention
attends to and suppresses previous answer tokens, while MLPs amplify the
suppression signal. Our mechanism is corroborated by extensive experimental
evidence: in addition to using early decoding and causal tracing, we analyze
how components use different tokens by introducing both Token Lens, which
decodes aggregated attention updates from specified tokens, and a knockout
method that analyzes changes in MLP outputs after removing attention to
specified tokens. Overall, we provide new insights into how LMs' internal
components interact with different input tokens to support complex factual
recall. Code is available at
https://github.com/Lorenayannnnn/how-lms-answer-one-to-many-factual-queries.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 19:23:15 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 13:22:47 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yan",
"Tianyi Lorena",
""
],
[
"Jia",
"Robin",
""
]
]
| TITLE: Promote, Suppress, Iterate: How Language Models Answer One-to-Many
Factual Queries
ABSTRACT: To answer one-to-many factual queries (e.g., listing cities of a country), a
language model (LM) must simultaneously recall knowledge and avoid repeating
previous answers. How are these two subtasks implemented and integrated
internally? Across multiple datasets and models, we identify a
promote-then-suppress mechanism: the model first recalls all answers, and then
suppresses previously generated ones. Specifically, LMs use both the subject
and previous answer tokens to perform knowledge recall, with attention
propagating subject information and MLPs promoting the answers. Then, attention
attends to and suppresses previous answer tokens, while MLPs amplify the
suppression signal. Our mechanism is corroborated by extensive experimental
evidence: in addition to using early decoding and causal tracing, we analyze
how components use different tokens by introducing both Token Lens, which
decodes aggregated attention updates from specified tokens, and a knockout
method that analyzes changes in MLP outputs after removing attention to
specified tokens. Overall, we provide new insights into how LMs' internal
components interact with different input tokens to support complex factual
recall. Code is available at
https://github.com/Lorenayannnnn/how-lms-answer-one-to-many-factual-queries.
| no_new_dataset | 0.949949 |
2502.20581 | Hong Chen | Hong Chen, Misha Teplitskiy, David Jurgens | The Noisy Path from Source to Citation: Measuring How Scholars Engage
with Past Research | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Academic citations are widely used for evaluating research and tracing
knowledge flows. Such uses typically rely on raw citation counts and neglect
variability in citation types. In particular, citations can vary in their
fidelity as original knowledge from cited studies may be paraphrased,
summarized, or reinterpreted, possibly wrongly, leading to variation in how
much information changes from cited to citing paper. In this study, we
introduce a computational pipeline to quantify citation fidelity at scale.
Using full texts of papers, the pipeline identifies citations in citing papers
and the corresponding claims in cited papers, and applies supervised models to
measure fidelity at the sentence level. Analyzing a large-scale
multi-disciplinary dataset of approximately 13 million citation sentence pairs,
we find that citation fidelity is higher when authors cite papers that are 1)
more recent and intellectually close, 2) more accessible, and 3) the first
author has a lower H-index and the author team is medium-sized. Using a
quasi-experiment, we establish the "telephone effect" - when citing papers have
low fidelity to the original claim, future papers that cite the citing paper
and the original have lower fidelity to the original. Our work reveals
systematic differences in citation fidelity, underscoring the limitations of
analyses that rely on citation quantity alone and the potential for distortion
of evidence.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:47:03 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 16:32:35 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Hong",
""
],
[
"Teplitskiy",
"Misha",
""
],
[
"Jurgens",
"David",
""
]
]
| TITLE: The Noisy Path from Source to Citation: Measuring How Scholars Engage
with Past Research
ABSTRACT: Academic citations are widely used for evaluating research and tracing
knowledge flows. Such uses typically rely on raw citation counts and neglect
variability in citation types. In particular, citations can vary in their
fidelity as original knowledge from cited studies may be paraphrased,
summarized, or reinterpreted, possibly wrongly, leading to variation in how
much information changes from cited to citing paper. In this study, we
introduce a computational pipeline to quantify citation fidelity at scale.
Using full texts of papers, the pipeline identifies citations in citing papers
and the corresponding claims in cited papers, and applies supervised models to
measure fidelity at the sentence level. Analyzing a large-scale
multi-disciplinary dataset of approximately 13 million citation sentence pairs,
we find that citation fidelity is higher when authors cite papers that are 1)
more recent and intellectually close, 2) more accessible, and 3) the first
author has a lower H-index and the author team is medium-sized. Using a
quasi-experiment, we establish the "telephone effect" - when citing papers have
low fidelity to the original claim, future papers that cite the citing paper
and the original have lower fidelity to the original. Our work reveals
systematic differences in citation fidelity, underscoring the limitations of
analyses that rely on citation quantity alone and the potential for distortion
of evidence.
| no_new_dataset | 0.948537 |
2503.00397 | Zeren Lv | Haolin Wang, Zeren Lv, Hao Wei, Haijiang Zhu, and Yihong Wu | Floorplan-SLAM: A Real-Time, High-Accuracy, and Long-Term Multi-Session
Point-Plane SLAM for Efficient Floorplan Reconstruction | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Floorplan reconstruction provides structural priors essential for reliable
indoor robot navigation and high-level scene understanding. However, existing
approaches either require time-consuming offline processing with a complete
map, or rely on expensive sensors and substantial computational resources. To
address the problems, we propose Floorplan-SLAM, which incorporates floorplan
reconstruction tightly into a multi-session SLAM system by seamlessly
interacting with plane extraction, pose estimation, and back-end optimization,
achieving real-time, high-accuracy, and long-term floorplan reconstruction
using only a stereo camera. Specifically, we present a robust plane extraction
algorithm that operates in a compact plane parameter space and leverages
spatially complementary features to accurately detect planar structures, even
in weakly textured scenes. Furthermore, we propose a floorplan reconstruction
module tightly coupled with the SLAM system, which uses continuously optimized
plane landmarks and poses to formulate and solve a novel optimization problem,
thereby enabling real-time incremental floorplan reconstruction. Note that by
leveraging the map merging capability of multi-session SLAM, our method
supports long-term floorplan reconstruction across multiple sessions without
redundant data collection. Experiments on the VECtor and the self-collected
datasets indicate that Floorplan-SLAM significantly outperforms
state-of-the-art methods in terms of plane extraction robustness, pose
estimation accuracy, and floorplan reconstruction fidelity and speed, achieving
real-time performance at 25-45 FPS without GPU acceleration, which reduces the
floorplan reconstruction time for a 1000 square meters scene from over 10 hours
to just 9.44 minutes.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 08:18:11 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 05:48:57 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 08:09:16 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Haolin",
""
],
[
"Lv",
"Zeren",
""
],
[
"Wei",
"Hao",
""
],
[
"Zhu",
"Haijiang",
""
],
[
"Wu",
"Yihong",
""
]
]
| TITLE: Floorplan-SLAM: A Real-Time, High-Accuracy, and Long-Term Multi-Session
Point-Plane SLAM for Efficient Floorplan Reconstruction
ABSTRACT: Floorplan reconstruction provides structural priors essential for reliable
indoor robot navigation and high-level scene understanding. However, existing
approaches either require time-consuming offline processing with a complete
map, or rely on expensive sensors and substantial computational resources. To
address the problems, we propose Floorplan-SLAM, which incorporates floorplan
reconstruction tightly into a multi-session SLAM system by seamlessly
interacting with plane extraction, pose estimation, and back-end optimization,
achieving real-time, high-accuracy, and long-term floorplan reconstruction
using only a stereo camera. Specifically, we present a robust plane extraction
algorithm that operates in a compact plane parameter space and leverages
spatially complementary features to accurately detect planar structures, even
in weakly textured scenes. Furthermore, we propose a floorplan reconstruction
module tightly coupled with the SLAM system, which uses continuously optimized
plane landmarks and poses to formulate and solve a novel optimization problem,
thereby enabling real-time incremental floorplan reconstruction. Note that by
leveraging the map merging capability of multi-session SLAM, our method
supports long-term floorplan reconstruction across multiple sessions without
redundant data collection. Experiments on the VECtor and the self-collected
datasets indicate that Floorplan-SLAM significantly outperforms
state-of-the-art methods in terms of plane extraction robustness, pose
estimation accuracy, and floorplan reconstruction fidelity and speed, achieving
real-time performance at 25-45 FPS without GPU acceleration, which reduces the
floorplan reconstruction time for a 1000 square meters scene from over 10 hours
to just 9.44 minutes.
| no_new_dataset | 0.945147 |
2503.00578 | \.Inci M. Bayta\c{s} | Tu\u{g}rul Hasan Karabulut, \.Inci M. Bayta\c{s} | Channel-Attentive Graph Neural Networks | Published as a conference paper at IEEE International Conference on
Data Mining 2024 | null | 10.1109/ICDM59182.2024.00084 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) set the state-of-the-art in representation
learning for graph-structured data. They are used in many domains, from online
social networks to complex molecules. Most GNNs leverage the message-passing
paradigm and achieve strong performances on various tasks. However, the
message-passing mechanism used in most models suffers from over-smoothing as a
GNN's depth increases. The over-smoothing degrades GNN's performance due to the
increased similarity between the representations of unrelated nodes. This study
proposes an adaptive channel-wise message-passing approach to alleviate the
over-smoothing. The proposed model, Channel-Attentive GNN, learns how to attend
to neighboring nodes and their feature channels. Thus, much diverse information
can be transferred between nodes during message-passing. Experiments with
widely used benchmark datasets show that the proposed model is more resistant
to over-smoothing than baselines and achieves state-of-the-art performances for
various graphs with strong heterophily. Our code is at
https://github.com/ALLab-Boun/CHAT-GNN.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 18:00:41 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 12:00:38 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Karabulut",
"Tuğrul Hasan",
""
],
[
"Baytaş",
"İnci M.",
""
]
]
| TITLE: Channel-Attentive Graph Neural Networks
ABSTRACT: Graph Neural Networks (GNNs) set the state-of-the-art in representation
learning for graph-structured data. They are used in many domains, from online
social networks to complex molecules. Most GNNs leverage the message-passing
paradigm and achieve strong performances on various tasks. However, the
message-passing mechanism used in most models suffers from over-smoothing as a
GNN's depth increases. The over-smoothing degrades GNN's performance due to the
increased similarity between the representations of unrelated nodes. This study
proposes an adaptive channel-wise message-passing approach to alleviate the
over-smoothing. The proposed model, Channel-Attentive GNN, learns how to attend
to neighboring nodes and their feature channels. Thus, much diverse information
can be transferred between nodes during message-passing. Experiments with
widely used benchmark datasets show that the proposed model is more resistant
to over-smoothing than baselines and achieves state-of-the-art performances for
various graphs with strong heterophily. Our code is at
https://github.com/ALLab-Boun/CHAT-GNN.
| no_new_dataset | 0.944382 |
2503.00735 | Akira Yoshiyama | Toby Simonds, Akira Yoshiyama | LADDER: Self-Improving LLMs Through Recursive Problem Decomposition | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce LADDER (Learning through Autonomous Difficulty-Driven Example
Recursion), a framework which enables Large Language Models to autonomously
improve their problem-solving capabilities through self-guided learning by
recursively generating and solving progressively simpler variants of complex
problems. Unlike prior approaches that require curated datasets or human
feedback, LADDER leverages a model's own capabilities to generate easier
question variants. We demonstrate LADDER's effectiveness in the subject of
mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on
undergraduate-level problems and enabling Qwen2.5 7B Deepseek-R1 Distilled to
achieve 73% on the MIT Integration Bee qualifying examination. We also
introduce TTRL (Test-Time Reinforcement Learning), where we perform
reinforcement learning on variants of test problems at inference time. TTRL
enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of
90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1's
performance. These results show how self-directed strategic learning can
achieve significant capability improvements without relying on architectural
scaling or human supervision.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 05:16:43 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 14:30:32 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 11:50:24 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Simonds",
"Toby",
""
],
[
"Yoshiyama",
"Akira",
""
]
]
| TITLE: LADDER: Self-Improving LLMs Through Recursive Problem Decomposition
ABSTRACT: We introduce LADDER (Learning through Autonomous Difficulty-Driven Example
Recursion), a framework which enables Large Language Models to autonomously
improve their problem-solving capabilities through self-guided learning by
recursively generating and solving progressively simpler variants of complex
problems. Unlike prior approaches that require curated datasets or human
feedback, LADDER leverages a model's own capabilities to generate easier
question variants. We demonstrate LADDER's effectiveness in the subject of
mathematical integration, improving Llama 3.2 3B's accuracy from 1% to 82% on
undergraduate-level problems and enabling Qwen2.5 7B Deepseek-R1 Distilled to
achieve 73% on the MIT Integration Bee qualifying examination. We also
introduce TTRL (Test-Time Reinforcement Learning), where we perform
reinforcement learning on variants of test problems at inference time. TTRL
enables Qwen2.5 7B Deepseek-R1 Distilled to achieve a state-of-the-art score of
90% on the MIT Integration Bee qualifying examination, surpassing OpenAI o1's
performance. These results show how self-directed strategic learning can
achieve significant capability improvements without relying on architectural
scaling or human supervision.
| no_new_dataset | 0.939913 |
2503.01048 | Yijing Zhang | Yijing Zhang, Dyah Adila, Changho Shin, Frederic Sala | Personalize Your LLM: Fake it then Align it | NAACL 2025 Findings | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personalizing large language models (LLMs) is essential for delivering
tailored interactions that improve user experience. Many existing
personalization methods require fine-tuning LLMs for each user, rendering them
prohibitively expensive for widespread adoption. Although retrieval-based
approaches offer a more compute-efficient alternative, they still depend on
large, high-quality datasets that are not consistently available for all users.
To address this challenge, we propose CHAMELEON, a scalable and efficient
personalization approach that uses (1) self-generated personal preference data
and (2) representation editing to enable quick and cost-effective
personalization. Our experiments on various tasks, including those from the
LaMP personalization benchmark, show that CHAMELEON efficiently adapts models
to personal preferences, improving instruction-tuned models and outperforms two
personalization baselines by an average of 40% across two model architectures.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 22:40:10 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 04:14:43 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Mar 2025 18:59:19 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Yijing",
""
],
[
"Adila",
"Dyah",
""
],
[
"Shin",
"Changho",
""
],
[
"Sala",
"Frederic",
""
]
]
| TITLE: Personalize Your LLM: Fake it then Align it
ABSTRACT: Personalizing large language models (LLMs) is essential for delivering
tailored interactions that improve user experience. Many existing
personalization methods require fine-tuning LLMs for each user, rendering them
prohibitively expensive for widespread adoption. Although retrieval-based
approaches offer a more compute-efficient alternative, they still depend on
large, high-quality datasets that are not consistently available for all users.
To address this challenge, we propose CHAMELEON, a scalable and efficient
personalization approach that uses (1) self-generated personal preference data
and (2) representation editing to enable quick and cost-effective
personalization. Our experiments on various tasks, including those from the
LaMP personalization benchmark, show that CHAMELEON efficiently adapts models
to personal preferences, improving instruction-tuned models and outperforms two
personalization baselines by an average of 40% across two model architectures.
| no_new_dataset | 0.945349 |
2503.01275 | Wenshuai Huo | Wenshuai Huo, Xiaocheng Feng, Yichong Huang, Chengpeng Fu, Baohang Li,
Yangfan Ye, Zhirui Zhang, Dandan Tu, Duyu Tang, Yunfei Lu, Hui Wang, Bing Qin | Enhancing Non-English Capabilities of English-Centric Large Language
Models through Deep Supervision Fine-Tuning | Accepted at AAAI 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated significant progress in
multilingual language understanding and generation. However, due to the
imbalance in training data, their capabilities in non-English languages are
limited. Recent studies revealed the English-pivot multilingual mechanism of
LLMs, where LLMs implicitly convert non-English queries into English ones at
the bottom layers and adopt English for thinking at the middle layers. However,
due to the absence of explicit supervision for cross-lingual alignment in the
intermediate layers of LLMs, the internal representations during these stages
may become inaccurate. In this work, we introduce a deep supervision
fine-tuning method (DFT) that incorporates additional supervision in the
internal layers of the model to guide its workflow. Specifically, we introduce
two training objectives on different layers of LLMs: one at the bottom layers
to constrain the conversion of the target language into English, and another at
the middle layers to constrain reasoning in English. To effectively achieve the
guiding purpose, we designed two types of supervision signals: logits and
feature, which represent a stricter constraint and a relatively more relaxed
guidance. Our method guides the model to not only consider the final generated
result when processing non-English inputs but also ensure the accuracy of
internal representations. We conducted extensive experiments on typical
English-centric large models, LLaMA-2 and Gemma-2, and the results on multiple
multilingual datasets show that our method significantly outperforms
traditional fine-tuning methods.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 07:59:32 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 13:10:07 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Huo",
"Wenshuai",
""
],
[
"Feng",
"Xiaocheng",
""
],
[
"Huang",
"Yichong",
""
],
[
"Fu",
"Chengpeng",
""
],
[
"Li",
"Baohang",
""
],
[
"Ye",
"Yangfan",
""
],
[
"Zhang",
"Zhirui",
""
],
[
"Tu",
"Dandan",
""
],
[
"Tang",
"Duyu",
""
],
[
"Lu",
"Yunfei",
""
],
[
"Wang",
"Hui",
""
],
[
"Qin",
"Bing",
""
]
]
| TITLE: Enhancing Non-English Capabilities of English-Centric Large Language
Models through Deep Supervision Fine-Tuning
ABSTRACT: Large language models (LLMs) have demonstrated significant progress in
multilingual language understanding and generation. However, due to the
imbalance in training data, their capabilities in non-English languages are
limited. Recent studies revealed the English-pivot multilingual mechanism of
LLMs, where LLMs implicitly convert non-English queries into English ones at
the bottom layers and adopt English for thinking at the middle layers. However,
due to the absence of explicit supervision for cross-lingual alignment in the
intermediate layers of LLMs, the internal representations during these stages
may become inaccurate. In this work, we introduce a deep supervision
fine-tuning method (DFT) that incorporates additional supervision in the
internal layers of the model to guide its workflow. Specifically, we introduce
two training objectives on different layers of LLMs: one at the bottom layers
to constrain the conversion of the target language into English, and another at
the middle layers to constrain reasoning in English. To effectively achieve the
guiding purpose, we designed two types of supervision signals: logits and
feature, which represent a stricter constraint and a relatively more relaxed
guidance. Our method guides the model to not only consider the final generated
result when processing non-English inputs but also ensure the accuracy of
internal representations. We conducted extensive experiments on typical
English-centric large models, LLaMA-2 and Gemma-2, and the results on multiple
multilingual datasets show that our method significantly outperforms
traditional fine-tuning methods.
| no_new_dataset | 0.950319 |
2503.01307 | Kanishk Gandhi | Kanishk Gandhi, Ayush Chakravarthy, Anikait Singh, Nathan Lile, Noah
D. Goodman | Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four
Habits of Highly Effective STaRs | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Test-time inference has emerged as a powerful paradigm for enabling language
models to ``think'' longer and more carefully about complex challenges, much
like skilled human experts. While reinforcement learning (RL) can drive
self-improvement in language models on verifiable tasks, some models exhibit
substantial gains while others quickly plateau. For instance, we find that
Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game
of Countdown. This discrepancy raises a critical question: what intrinsic
properties enable effective self-improvement? We introduce a framework to
investigate this question by analyzing four key cognitive behaviors --
verification, backtracking, subgoal setting, and backward chaining -- that both
expert human problem solvers and successful language models employ. Our study
reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama
initially lacks them. In systematic experimentation with controlled behavioral
datasets, we find that priming Llama with examples containing these reasoning
behaviors enables substantial improvements during RL, matching or exceeding
Qwen's performance. Importantly, the presence of reasoning behaviors, rather
than correctness of answers, proves to be the critical factor -- models primed
with incorrect solutions containing proper reasoning patterns achieve
comparable performance to those trained on correct solutions. Finally,
leveraging continued pretraining with OpenWebMath data, filtered to amplify
reasoning behaviors, enables the Llama model to match Qwen's self-improvement
trajectory. Our findings establish a fundamental relationship between initial
reasoning behaviors and the capacity for improvement, explaining why some
language models effectively utilize additional computation while others
plateau.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 08:46:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Gandhi",
"Kanishk",
""
],
[
"Chakravarthy",
"Ayush",
""
],
[
"Singh",
"Anikait",
""
],
[
"Lile",
"Nathan",
""
],
[
"Goodman",
"Noah D.",
""
]
]
| TITLE: Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four
Habits of Highly Effective STaRs
ABSTRACT: Test-time inference has emerged as a powerful paradigm for enabling language
models to ``think'' longer and more carefully about complex challenges, much
like skilled human experts. While reinforcement learning (RL) can drive
self-improvement in language models on verifiable tasks, some models exhibit
substantial gains while others quickly plateau. For instance, we find that
Qwen-2.5-3B far exceeds Llama-3.2-3B under identical RL training for the game
of Countdown. This discrepancy raises a critical question: what intrinsic
properties enable effective self-improvement? We introduce a framework to
investigate this question by analyzing four key cognitive behaviors --
verification, backtracking, subgoal setting, and backward chaining -- that both
expert human problem solvers and successful language models employ. Our study
reveals that Qwen naturally exhibits these reasoning behaviors, whereas Llama
initially lacks them. In systematic experimentation with controlled behavioral
datasets, we find that priming Llama with examples containing these reasoning
behaviors enables substantial improvements during RL, matching or exceeding
Qwen's performance. Importantly, the presence of reasoning behaviors, rather
than correctness of answers, proves to be the critical factor -- models primed
with incorrect solutions containing proper reasoning patterns achieve
comparable performance to those trained on correct solutions. Finally,
leveraging continued pretraining with OpenWebMath data, filtered to amplify
reasoning behaviors, enables the Llama model to match Qwen's self-improvement
trajectory. Our findings establish a fundamental relationship between initial
reasoning behaviors and the capacity for improvement, explaining why some
language models effectively utilize additional computation while others
plateau.
| no_new_dataset | 0.941115 |
2503.01431 | Maximilian Eissler | Max Eissler, Tim Korjakow, Stefan Ganscha, Oliver T. Unke,
Klaus-Robert M\"uller and Stefan Gugler | How simple can you go? An off-the-shelf transformer approach to
molecular dynamics | 21 pages, code at https://github.com/mx-e/simple-md | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Most current neural networks for molecular dynamics (MD) include physical
inductive biases, resulting in specialized and complex architectures. This is
in contrast to most other machine learning domains, where specialist approaches
are increasingly replaced by general-purpose architectures trained on vast
datasets. In line with this trend, several recent studies have questioned the
necessity of architectural features commonly found in MD models, such as
built-in rotational equivariance or energy conservation. In this work, we
contribute to the ongoing discussion by evaluating the performance of an MD
model with as few specialized architectural features as possible. We present a
recipe for MD using an Edge Transformer, an "off-the-shelf'' transformer
architecture that has been minimally modified for the MD domain, termed MD-ET.
Our model implements neither built-in equivariance nor energy conservation. We
use a simple supervised pre-training scheme on $\sim$30 million molecular
structures from the QCML database. Using this "off-the-shelf'' approach, we
show state-of-the-art results on several benchmarks after fine-tuning for a
small number of steps. Additionally, we examine the effects of being only
approximately equivariant and energy conserving for MD simulations, proposing a
novel method for distinguishing the errors resulting from non-equivariance from
other sources of inaccuracies like numerical rounding errors. While our model
exhibits runaway energy increases on larger structures, we show approximately
energy-conserving NVE simulations for a range of small structures.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 11:34:27 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 14:04:46 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Eissler",
"Max",
""
],
[
"Korjakow",
"Tim",
""
],
[
"Ganscha",
"Stefan",
""
],
[
"Unke",
"Oliver T.",
""
],
[
"Müller",
"Klaus-Robert",
""
],
[
"Gugler",
"Stefan",
""
]
]
| TITLE: How simple can you go? An off-the-shelf transformer approach to
molecular dynamics
ABSTRACT: Most current neural networks for molecular dynamics (MD) include physical
inductive biases, resulting in specialized and complex architectures. This is
in contrast to most other machine learning domains, where specialist approaches
are increasingly replaced by general-purpose architectures trained on vast
datasets. In line with this trend, several recent studies have questioned the
necessity of architectural features commonly found in MD models, such as
built-in rotational equivariance or energy conservation. In this work, we
contribute to the ongoing discussion by evaluating the performance of an MD
model with as few specialized architectural features as possible. We present a
recipe for MD using an Edge Transformer, an "off-the-shelf'' transformer
architecture that has been minimally modified for the MD domain, termed MD-ET.
Our model implements neither built-in equivariance nor energy conservation. We
use a simple supervised pre-training scheme on $\sim$30 million molecular
structures from the QCML database. Using this "off-the-shelf'' approach, we
show state-of-the-art results on several benchmarks after fine-tuning for a
small number of steps. Additionally, we examine the effects of being only
approximately equivariant and energy conserving for MD simulations, proposing a
novel method for distinguishing the errors resulting from non-equivariance from
other sources of inaccuracies like numerical rounding errors. While our model
exhibits runaway energy increases on larger structures, we show approximately
energy-conserving NVE simulations for a range of small structures.
| no_new_dataset | 0.948202 |
2503.01582 | Saad Ejaz | Saad Ejaz, Hriday Bavle, Laura Ribeiro, Holger Voos, and Jose Luis
Sanchez-Lopez | Category-level Meta-learned NeRF Priors for Efficient Object Mapping | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | In 3D object mapping, category-level priors enable efficient object
reconstruction and canonical pose estimation, requiring only a single prior per
semantic category (e.g., chair, book, laptop). Recently, DeepSDF has
predominantly been used as a category-level shape prior, but it struggles to
reconstruct sharp geometry and is computationally expensive. In contrast, NeRFs
capture fine details but have yet to be effectively integrated with
category-level priors in a real-time multi-object mapping framework. To bridge
this gap, we introduce PRENOM, a Prior-based Efficient Neural Object Mapper
that integrates category-level priors with object-level NeRFs to enhance
reconstruction efficiency while enabling canonical object pose estimation.
PRENOM gets to know objects on a first-name basis by meta-learning on synthetic
reconstruction tasks generated from open-source shape datasets. To account for
object category variations, it employs a multi-objective genetic algorithm to
optimize the NeRF architecture for each category, balancing reconstruction
quality and training time. Additionally, prior-based probabilistic ray sampling
directs sampling toward expected object regions, accelerating convergence and
improving reconstruction quality under constrained resources. Experimental
results on a low-end GPU highlight the ability of PRENOM to achieve
high-quality reconstructions while maintaining computational feasibility.
Specifically, comparisons with prior-free NeRF-based approaches on a synthetic
dataset show a 21% lower Chamfer distance, demonstrating better reconstruction
quality. Furthermore, evaluations against other approaches using shape priors
on a noisy real-world dataset indicate a 13% improvement averaged across all
reconstruction metrics, and comparable pose and size estimation accuracy, while
being trained for 5x less time.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 14:23:37 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:02:19 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ejaz",
"Saad",
""
],
[
"Bavle",
"Hriday",
""
],
[
"Ribeiro",
"Laura",
""
],
[
"Voos",
"Holger",
""
],
[
"Sanchez-Lopez",
"Jose Luis",
""
]
]
| TITLE: Category-level Meta-learned NeRF Priors for Efficient Object Mapping
ABSTRACT: In 3D object mapping, category-level priors enable efficient object
reconstruction and canonical pose estimation, requiring only a single prior per
semantic category (e.g., chair, book, laptop). Recently, DeepSDF has
predominantly been used as a category-level shape prior, but it struggles to
reconstruct sharp geometry and is computationally expensive. In contrast, NeRFs
capture fine details but have yet to be effectively integrated with
category-level priors in a real-time multi-object mapping framework. To bridge
this gap, we introduce PRENOM, a Prior-based Efficient Neural Object Mapper
that integrates category-level priors with object-level NeRFs to enhance
reconstruction efficiency while enabling canonical object pose estimation.
PRENOM gets to know objects on a first-name basis by meta-learning on synthetic
reconstruction tasks generated from open-source shape datasets. To account for
object category variations, it employs a multi-objective genetic algorithm to
optimize the NeRF architecture for each category, balancing reconstruction
quality and training time. Additionally, prior-based probabilistic ray sampling
directs sampling toward expected object regions, accelerating convergence and
improving reconstruction quality under constrained resources. Experimental
results on a low-end GPU highlight the ability of PRENOM to achieve
high-quality reconstructions while maintaining computational feasibility.
Specifically, comparisons with prior-free NeRF-based approaches on a synthetic
dataset show a 21% lower Chamfer distance, demonstrating better reconstruction
quality. Furthermore, evaluations against other approaches using shape priors
on a noisy real-world dataset indicate a 13% improvement averaged across all
reconstruction metrics, and comparable pose and size estimation accuracy, while
being trained for 5x less time.
| no_new_dataset | 0.957794 |
2503.02357 | Zicheng Zhang | Zicheng Zhang, Tengchuan Kou, Shushi Wang, Chunyi Li, Wei Sun, Wei
Wang, Xiaoyu Li, Zongyu Wang, Xuezhi Cao, Xiongkuo Min, Xiaohong Liu,
Guangtao Zhai | Q-Eval-100K: Evaluating Visual Quality and Alignment Level for
Text-to-Vision Content | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Evaluating text-to-vision content hinges on two crucial aspects: visual
quality and alignment. While significant progress has been made in developing
objective models to assess these dimensions, the performance of such models
heavily relies on the scale and quality of human annotations. According to
Scaling Law, increasing the number of human-labeled instances follows a
predictable pattern that enhances the performance of evaluation models.
Therefore, we introduce a comprehensive dataset designed to Evaluate Visual
quality and Alignment Level for text-to-vision content (Q-EVAL-100K), featuring
the largest collection of human-labeled Mean Opinion Scores (MOS) for the
mentioned two aspects. The Q-EVAL-100K dataset encompasses both text-to-image
and text-to-video models, with 960K human annotations specifically focused on
visual quality and alignment for 100K instances (60K images and 40K videos).
Leveraging this dataset with context prompt, we propose Q-Eval-Score, a unified
model capable of evaluating both visual quality and alignment with special
improvements for handling long-text prompt alignment. Experimental results
indicate that the proposed Q-Eval-Score achieves superior performance on both
visual quality and alignment, with strong generalization capabilities across
other benchmarks. These findings highlight the significant value of the
Q-EVAL-100K dataset. Data and codes will be available at
https://github.com/zzc-1998/Q-Eval.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:28:45 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 07:50:05 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Zicheng",
""
],
[
"Kou",
"Tengchuan",
""
],
[
"Wang",
"Shushi",
""
],
[
"Li",
"Chunyi",
""
],
[
"Sun",
"Wei",
""
],
[
"Wang",
"Wei",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Wang",
"Zongyu",
""
],
[
"Cao",
"Xuezhi",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Zhai",
"Guangtao",
""
]
]
| TITLE: Q-Eval-100K: Evaluating Visual Quality and Alignment Level for
Text-to-Vision Content
ABSTRACT: Evaluating text-to-vision content hinges on two crucial aspects: visual
quality and alignment. While significant progress has been made in developing
objective models to assess these dimensions, the performance of such models
heavily relies on the scale and quality of human annotations. According to
Scaling Law, increasing the number of human-labeled instances follows a
predictable pattern that enhances the performance of evaluation models.
Therefore, we introduce a comprehensive dataset designed to Evaluate Visual
quality and Alignment Level for text-to-vision content (Q-EVAL-100K), featuring
the largest collection of human-labeled Mean Opinion Scores (MOS) for the
mentioned two aspects. The Q-EVAL-100K dataset encompasses both text-to-image
and text-to-video models, with 960K human annotations specifically focused on
visual quality and alignment for 100K instances (60K images and 40K videos).
Leveraging this dataset with context prompt, we propose Q-Eval-Score, a unified
model capable of evaluating both visual quality and alignment with special
improvements for handling long-text prompt alignment. Experimental results
indicate that the proposed Q-Eval-Score achieves superior performance on both
visual quality and alignment, with strong generalization capabilities across
other benchmarks. These findings highlight the significant value of the
Q-EVAL-100K dataset. Data and codes will be available at
https://github.com/zzc-1998/Q-Eval.
| new_dataset | 0.961498 |
2503.02445 | Hao Li | Hao Li, Yu-Hao Huang, Chang Xu, Viktor Schlegel, Ren-He Jiang, Riza
Batista-Navarro, Goran Nenadic, Jiang Bian | BRIDGE: Bootstrapping Text to Control Time-Series Generation via
Multi-Agent Iterative Optimization and Diffusion Modelling | Preprint. Work in progress | null | null | null | cs.LG cs.CL cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Time-series Generation (TSG) is a prominent research area with broad
applications in simulations, data augmentation, and counterfactual analysis.
While existing methods have shown promise in unconditional single-domain TSG,
real-world applications demand for cross-domain approaches capable of
controlled generation tailored to domain-specific constraints and
instance-level requirements. In this paper, we argue that text can provide
semantic insights, domain information and instance-specific temporal patterns,
to guide and improve TSG. We introduce ``Text-Controlled TSG'', a task focused
on generating realistic time series by incorporating textual descriptions. To
address data scarcity in this setting, we propose a novel LLM-based Multi-Agent
framework that synthesizes diverse, realistic text-to-TS datasets. Furthermore,
we introduce BRIDGE, a hybrid text-controlled TSG framework that integrates
semantic prototypes with text description for supporting domain-level guidance.
This approach achieves state-of-the-art generation fidelity on 11 of 12
datasets, and improves controllability by 12.52% on MSE and 6.34% MAE compared
to no text input generation, highlighting its potential for generating tailored
time-series data.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 09:40:00 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 06:04:37 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Li",
"Hao",
""
],
[
"Huang",
"Yu-Hao",
""
],
[
"Xu",
"Chang",
""
],
[
"Schlegel",
"Viktor",
""
],
[
"Jiang",
"Ren-He",
""
],
[
"Batista-Navarro",
"Riza",
""
],
[
"Nenadic",
"Goran",
""
],
[
"Bian",
"Jiang",
""
]
]
| TITLE: BRIDGE: Bootstrapping Text to Control Time-Series Generation via
Multi-Agent Iterative Optimization and Diffusion Modelling
ABSTRACT: Time-series Generation (TSG) is a prominent research area with broad
applications in simulations, data augmentation, and counterfactual analysis.
While existing methods have shown promise in unconditional single-domain TSG,
real-world applications demand for cross-domain approaches capable of
controlled generation tailored to domain-specific constraints and
instance-level requirements. In this paper, we argue that text can provide
semantic insights, domain information and instance-specific temporal patterns,
to guide and improve TSG. We introduce ``Text-Controlled TSG'', a task focused
on generating realistic time series by incorporating textual descriptions. To
address data scarcity in this setting, we propose a novel LLM-based Multi-Agent
framework that synthesizes diverse, realistic text-to-TS datasets. Furthermore,
we introduce BRIDGE, a hybrid text-controlled TSG framework that integrates
semantic prototypes with text description for supporting domain-level guidance.
This approach achieves state-of-the-art generation fidelity on 11 of 12
datasets, and improves controllability by 12.52% on MSE and 6.34% MAE compared
to no text input generation, highlighting its potential for generating tailored
time-series data.
| no_new_dataset | 0.94474 |
2503.02513 | Guanyu Cui | Guanyu Cui, Hanzhi Wang, Zhewei Wei | Mixing Time Matters: Accelerating Effective Resistance Estimation via
Bidirectional Method | Technical Report. Full Paper Accepted by KDD 2025 (August Cycle) | null | null | null | cs.SI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of efficiently approximating the \textit{effective
resistance} (ER) on undirected graphs, where ER is a widely used node proximity
measure with applications in graph spectral sparsification, multi-class graph
clustering, network robustness analysis, graph machine learning, and more.
Specifically, given any nodes $s$ and $t$ in an undirected graph $G$, we aim to
efficiently estimate the ER value $R(s,t)$ between nodes $s$ and $t$, ensuring
a small absolute error $\epsilon$. The previous best algorithm for this problem
has a worst-case computational complexity of
$\tilde{O}\left(\frac{L_{\max}^3}{\epsilon^2 d^2}\right)$, where the value of
$L_{\max}$ depends on the mixing time of random walks on $G$, $d = \min\{d(s),
d(t)\}$, and $d(s)$, $d(t)$ denote the degrees of nodes $s$ and $t$,
respectively. We improve this complexity to
$\tilde{O}\left(\min\left\{\frac{L_{\max}^{7/3}}{\epsilon^{2/3}},
\frac{L_{\max}^3}{\epsilon^2d^2}, mL_{\max}\right\}\right)$, achieving a
theoretical improvement of
$\tilde{O}\left(\max\left\{\frac{L_{\max}^{2/3}}{\epsilon^{4/3} d^2}, 1,
\frac{L_{\max}^2}{\epsilon^2 d^2 m}\right\}\right)$ over previous results.
Here, $m$ denotes the number of edges. Given that $L_{\max}$ is often very
large in real-world networks (e.g., $L_{\max} > 10^4$), our improvement on
$L_{\max}$ is significant, especially for real-world networks. We also conduct
extensive experiments on real-world and synthetic graph datasets to empirically
demonstrate the superiority of our method. The experimental results show that
our method achieves a $10\times$ to $1000\times$ speedup in running time while
maintaining the same absolute error compared to baseline methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:20:57 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:49:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cui",
"Guanyu",
""
],
[
"Wang",
"Hanzhi",
""
],
[
"Wei",
"Zhewei",
""
]
]
| TITLE: Mixing Time Matters: Accelerating Effective Resistance Estimation via
Bidirectional Method
ABSTRACT: We study the problem of efficiently approximating the \textit{effective
resistance} (ER) on undirected graphs, where ER is a widely used node proximity
measure with applications in graph spectral sparsification, multi-class graph
clustering, network robustness analysis, graph machine learning, and more.
Specifically, given any nodes $s$ and $t$ in an undirected graph $G$, we aim to
efficiently estimate the ER value $R(s,t)$ between nodes $s$ and $t$, ensuring
a small absolute error $\epsilon$. The previous best algorithm for this problem
has a worst-case computational complexity of
$\tilde{O}\left(\frac{L_{\max}^3}{\epsilon^2 d^2}\right)$, where the value of
$L_{\max}$ depends on the mixing time of random walks on $G$, $d = \min\{d(s),
d(t)\}$, and $d(s)$, $d(t)$ denote the degrees of nodes $s$ and $t$,
respectively. We improve this complexity to
$\tilde{O}\left(\min\left\{\frac{L_{\max}^{7/3}}{\epsilon^{2/3}},
\frac{L_{\max}^3}{\epsilon^2d^2}, mL_{\max}\right\}\right)$, achieving a
theoretical improvement of
$\tilde{O}\left(\max\left\{\frac{L_{\max}^{2/3}}{\epsilon^{4/3} d^2}, 1,
\frac{L_{\max}^2}{\epsilon^2 d^2 m}\right\}\right)$ over previous results.
Here, $m$ denotes the number of edges. Given that $L_{\max}$ is often very
large in real-world networks (e.g., $L_{\max} > 10^4$), our improvement on
$L_{\max}$ is significant, especially for real-world networks. We also conduct
extensive experiments on real-world and synthetic graph datasets to empirically
demonstrate the superiority of our method. The experimental results show that
our method achieves a $10\times$ to $1000\times$ speedup in running time while
maintaining the same absolute error compared to baseline methods.
| no_new_dataset | 0.951863 |
2503.02603 | Yulong Hui | Yulong Hui, Yihao Liu, Yao Lu, Huanchen Zhang | OkraLong: A Flexible Retrieval-Augmented Framework for Long-Text Query
Processing | null | null | null | null | cs.CL cs.IR | http://creativecommons.org/licenses/by-sa/4.0/ | Large Language Models (LLMs) encounter challenges in efficiently processing
long-text queries, as seen in applications like enterprise document analysis
and financial report comprehension. While conventional solutions employ
long-context processing or Retrieval-Augmented Generation (RAG), they suffer
from prohibitive input expenses or incomplete information. Recent advancements
adopt context compression and dynamic retrieval loops, but still sacrifice
critical details or incur iterative costs. To address these limitations, we
propose OkraLong, a novel framework that flexibly optimizes the entire
processing workflow. Unlike prior static or coarse-grained adaptive strategies,
OkraLong adopts fine-grained orchestration through three synergistic
components: analyzer, organizer and executor. The analyzer characterizes the
task states, which guide the organizer in dynamically scheduling the workflow.
The executor carries out the execution and generates the final answer.
Experimental results demonstrate that OkraLong not only enhances answer
accuracy but also achieves cost-effectiveness across a variety of datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 13:21:47 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 02:13:38 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Hui",
"Yulong",
""
],
[
"Liu",
"Yihao",
""
],
[
"Lu",
"Yao",
""
],
[
"Zhang",
"Huanchen",
""
]
]
| TITLE: OkraLong: A Flexible Retrieval-Augmented Framework for Long-Text Query
Processing
ABSTRACT: Large Language Models (LLMs) encounter challenges in efficiently processing
long-text queries, as seen in applications like enterprise document analysis
and financial report comprehension. While conventional solutions employ
long-context processing or Retrieval-Augmented Generation (RAG), they suffer
from prohibitive input expenses or incomplete information. Recent advancements
adopt context compression and dynamic retrieval loops, but still sacrifice
critical details or incur iterative costs. To address these limitations, we
propose OkraLong, a novel framework that flexibly optimizes the entire
processing workflow. Unlike prior static or coarse-grained adaptive strategies,
OkraLong adopts fine-grained orchestration through three synergistic
components: analyzer, organizer and executor. The analyzer characterizes the
task states, which guide the organizer in dynamically scheduling the workflow.
The executor carries out the execution and generates the final answer.
Experimental results demonstrate that OkraLong not only enhances answer
accuracy but also achieves cost-effectiveness across a variety of datasets.
| no_new_dataset | 0.944022 |
2503.02689 | Kairong Yu | Tianqing Zhang, Kairong Yu, Xian Zhong, Hongwei Wang, Qi Xu, Qiang
Zhang | STAA-SNN: Spatial-Temporal Attention Aggregator for Spiking Neural
Networks | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spiking Neural Networks (SNNs) have gained significant attention due to their
biological plausibility and energy efficiency, making them promising
alternatives to Artificial Neural Networks (ANNs). However, the performance gap
between SNNs and ANNs remains a substantial challenge hindering the widespread
adoption of SNNs. In this paper, we propose a Spatial-Temporal Attention
Aggregator SNN (STAA-SNN) framework, which dynamically focuses on and captures
both spatial and temporal dependencies. First, we introduce a spike-driven
self-attention mechanism specifically designed for SNNs. Additionally, we
pioneeringly incorporate position encoding to integrate latent temporal
relationships into the incoming features. For spatial-temporal information
aggregation, we employ step attention to selectively amplify relevant features
at different steps. Finally, we implement a time-step random dropout strategy
to avoid local optima. As a result, STAA-SNN effectively captures both spatial
and temporal dependencies, enabling the model to analyze complex patterns and
make accurate predictions. The framework demonstrates exceptional performance
across diverse datasets and exhibits strong generalization capabilities.
Notably, STAA-SNN achieves state-of-the-art results on neuromorphic datasets
CIFAR10-DVS, with remarkable performances of 97.14%, 82.05% and 70.40% on the
static datasets CIFAR-10, CIFAR-100 and ImageNet, respectively. Furthermore,
our model exhibits improved performance ranging from 0.33\% to 2.80\% with
fewer time steps. The code for the model is available on GitHub.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 15:02:32 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 03:41:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Tianqing",
""
],
[
"Yu",
"Kairong",
""
],
[
"Zhong",
"Xian",
""
],
[
"Wang",
"Hongwei",
""
],
[
"Xu",
"Qi",
""
],
[
"Zhang",
"Qiang",
""
]
]
| TITLE: STAA-SNN: Spatial-Temporal Attention Aggregator for Spiking Neural
Networks
ABSTRACT: Spiking Neural Networks (SNNs) have gained significant attention due to their
biological plausibility and energy efficiency, making them promising
alternatives to Artificial Neural Networks (ANNs). However, the performance gap
between SNNs and ANNs remains a substantial challenge hindering the widespread
adoption of SNNs. In this paper, we propose a Spatial-Temporal Attention
Aggregator SNN (STAA-SNN) framework, which dynamically focuses on and captures
both spatial and temporal dependencies. First, we introduce a spike-driven
self-attention mechanism specifically designed for SNNs. Additionally, we
pioneeringly incorporate position encoding to integrate latent temporal
relationships into the incoming features. For spatial-temporal information
aggregation, we employ step attention to selectively amplify relevant features
at different steps. Finally, we implement a time-step random dropout strategy
to avoid local optima. As a result, STAA-SNN effectively captures both spatial
and temporal dependencies, enabling the model to analyze complex patterns and
make accurate predictions. The framework demonstrates exceptional performance
across diverse datasets and exhibits strong generalization capabilities.
Notably, STAA-SNN achieves state-of-the-art results on neuromorphic datasets
CIFAR10-DVS, with remarkable performances of 97.14%, 82.05% and 70.40% on the
static datasets CIFAR-10, CIFAR-100 and ImageNet, respectively. Furthermore,
our model exhibits improved performance ranging from 0.33\% to 2.80\% with
fewer time steps. The code for the model is available on GitHub.
| no_new_dataset | 0.94801 |
2503.02870 | Beepul Bharti | Beepul Bharti, Mary Versa Clemens-Sewall, Paul H. Yi, and Jeremias
Sulam | Multiaccuracy and Multicalibration via Proxy Groups | null | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | As the use of predictive machine learning algorithms increases in high-stakes
decision-making, it is imperative that these algorithms are fair across
sensitive groups. Unfortunately, measuring and enforcing fairness in real-world
applications can be challenging due to missing or incomplete sensitive group
data. Proxy-sensitive attributes have been proposed as a practical and
effective solution in these settings, but only for parity-based fairness
notions. Knowing how to evaluate and control for fairness with missing
sensitive group data for newer and more flexible frameworks, such as
multiaccuracy and multicalibration, remains unexplored. In this work, we
address this gap by demonstrating that in the absence of sensitive group data,
proxy-sensitive attributes can provably be used to derive actionable upper
bounds on the true multiaccuracy and multicalibration, providing insights into
a model's potential worst-case fairness violations. Additionally, we show that
adjusting models to satisfy multiaccuracy and multicalibration across
proxy-sensitive attributes can significantly mitigate these violations for the
true, but unknown, sensitive groups. Through several experiments on real-world
datasets, we illustrate that approximate multiaccuracy and multicalibration can
be achieved even when sensitive group information is incomplete or unavailable.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:47:54 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2025 04:41:11 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Bharti",
"Beepul",
""
],
[
"Clemens-Sewall",
"Mary Versa",
""
],
[
"Yi",
"Paul H.",
""
],
[
"Sulam",
"Jeremias",
""
]
]
| TITLE: Multiaccuracy and Multicalibration via Proxy Groups
ABSTRACT: As the use of predictive machine learning algorithms increases in high-stakes
decision-making, it is imperative that these algorithms are fair across
sensitive groups. Unfortunately, measuring and enforcing fairness in real-world
applications can be challenging due to missing or incomplete sensitive group
data. Proxy-sensitive attributes have been proposed as a practical and
effective solution in these settings, but only for parity-based fairness
notions. Knowing how to evaluate and control for fairness with missing
sensitive group data for newer and more flexible frameworks, such as
multiaccuracy and multicalibration, remains unexplored. In this work, we
address this gap by demonstrating that in the absence of sensitive group data,
proxy-sensitive attributes can provably be used to derive actionable upper
bounds on the true multiaccuracy and multicalibration, providing insights into
a model's potential worst-case fairness violations. Additionally, we show that
adjusting models to satisfy multiaccuracy and multicalibration across
proxy-sensitive attributes can significantly mitigate these violations for the
true, but unknown, sensitive groups. Through several experiments on real-world
datasets, we illustrate that approximate multiaccuracy and multicalibration can
be achieved even when sensitive group information is incomplete or unavailable.
| no_new_dataset | 0.947039 |
2503.02886 | Franziska Roesner | Emi Yoshikawa and Franziska Roesner | Exploring Political Ads on News and Media Websites During the 2024 U.S.
Elections | null | null | null | null | cs.SI cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Building on recent work studying content in the online advertising ecosystem,
including our own prior study of political ads on the web during the 2020 U.S.
elections, we analyze political ad content appearing on websites leading up to
and during the 2024 U.S. elections. Crawling a set of 745 news and media
websites several times from three different U.S. locations (Atlanta, Seattle,
and Los Angeles), we collect a dataset of over 15000 ads, including (at least)
315 political ads, and we analyze it quantitatively and qualitatively. Among
our findings: a prevalence of clickbait political news ads, echoing prior work;
a seemingly new emphasis (compared to 2020) on voting safety and eligibility
ads, particularly in Atlanta; and non-election related political ads around the
Israel-Palestine conflict, particularly in Seattle. We join prior work in
calling for more oversight and transparency of political-related ads on the
web. Our dataset is available at https://ad-archive.cs.washington.edu.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 20:34:39 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yoshikawa",
"Emi",
""
],
[
"Roesner",
"Franziska",
""
]
]
| TITLE: Exploring Political Ads on News and Media Websites During the 2024 U.S.
Elections
ABSTRACT: Building on recent work studying content in the online advertising ecosystem,
including our own prior study of political ads on the web during the 2020 U.S.
elections, we analyze political ad content appearing on websites leading up to
and during the 2024 U.S. elections. Crawling a set of 745 news and media
websites several times from three different U.S. locations (Atlanta, Seattle,
and Los Angeles), we collect a dataset of over 15000 ads, including (at least)
315 political ads, and we analyze it quantitatively and qualitatively. Among
our findings: a prevalence of clickbait political news ads, echoing prior work;
a seemingly new emphasis (compared to 2020) on voting safety and eligibility
ads, particularly in Atlanta; and non-election related political ads around the
Israel-Palestine conflict, particularly in Seattle. We join prior work in
calling for more oversight and transparency of political-related ads on the
web. Our dataset is available at https://ad-archive.cs.washington.edu.
| new_dataset | 0.95096 |
2503.02897 | Hong Lu | Hong Lu, Yali Bian, Rahul C. Shah | ClipGrader: Leveraging Vision-Language Models for Robust Label Quality
Assessment in Object Detection | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | High-quality annotations are essential for object detection models, but
ensuring label accuracy - especially for bounding boxes - remains both
challenging and costly. This paper introduces ClipGrader, a novel approach that
leverages vision-language models to automatically assess the accuracy of
bounding box annotations. By adapting CLIP (Contrastive Language-Image
Pre-training) to evaluate both class label correctness and spatial precision of
bounding box, ClipGrader offers an effective solution for grading object
detection labels. Tested on modified object detection datasets with
artificially disturbed bounding boxes, ClipGrader achieves 91% accuracy on COCO
with a 1.8% false positive rate. Moreover, it maintains 87% accuracy with a
2.1% false positive rate when trained on just 10% of the COCO data. ClipGrader
also scales effectively to larger datasets such as LVIS, achieving 79% accuracy
across 1,203 classes. Our experiments demonstrate ClipGrader's ability to
identify errors in existing COCO annotations, highlighting its potential for
dataset refinement. When integrated into a semi-supervised object detection
(SSOD) model, ClipGrader readily improves the pseudo label quality, helping
achieve higher mAP (mean Average Precision) throughout the training process.
ClipGrader thus provides a scalable AI-assisted tool for enhancing annotation
quality control and verifying annotations in large-scale object detection
datasets.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 05:02:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lu",
"Hong",
""
],
[
"Bian",
"Yali",
""
],
[
"Shah",
"Rahul C.",
""
]
]
| TITLE: ClipGrader: Leveraging Vision-Language Models for Robust Label Quality
Assessment in Object Detection
ABSTRACT: High-quality annotations are essential for object detection models, but
ensuring label accuracy - especially for bounding boxes - remains both
challenging and costly. This paper introduces ClipGrader, a novel approach that
leverages vision-language models to automatically assess the accuracy of
bounding box annotations. By adapting CLIP (Contrastive Language-Image
Pre-training) to evaluate both class label correctness and spatial precision of
bounding box, ClipGrader offers an effective solution for grading object
detection labels. Tested on modified object detection datasets with
artificially disturbed bounding boxes, ClipGrader achieves 91% accuracy on COCO
with a 1.8% false positive rate. Moreover, it maintains 87% accuracy with a
2.1% false positive rate when trained on just 10% of the COCO data. ClipGrader
also scales effectively to larger datasets such as LVIS, achieving 79% accuracy
across 1,203 classes. Our experiments demonstrate ClipGrader's ability to
identify errors in existing COCO annotations, highlighting its potential for
dataset refinement. When integrated into a semi-supervised object detection
(SSOD) model, ClipGrader readily improves the pseudo label quality, helping
achieve higher mAP (mean Average Precision) throughout the training process.
ClipGrader thus provides a scalable AI-assisted tool for enhancing annotation
quality control and verifying annotations in large-scale object detection
datasets.
| no_new_dataset | 0.949153 |
2503.02904 | Saurabh Koju | Saurabh Koju, Saurav Bastola, Prashant Shrestha, Sanskar Amgain, Yash
Raj Shrestha, Rudra P. K. Poudel, Binod Bhattarai | Surgical Vision World Model | null | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Realistic and interactive surgical simulation has the potential to facilitate
crucial applications, such as medical professional training and autonomous
surgical agent training. In the natural visual domain, world models have
enabled action-controlled data generation, demonstrating the potential to train
autonomous agents in interactive simulated environments when large-scale real
data acquisition is infeasible. However, such works in the surgical domain have
been limited to simplified computer simulations, and lack realism. Furthermore,
existing literature in world models has predominantly dealt with action-labeled
data, limiting their applicability to real-world surgical data, where obtaining
action annotation is prohibitively expensive. Inspired by the recent success of
Genie in leveraging unlabeled video game data to infer latent actions and
enable action-controlled data generation, we propose the first surgical vision
world model. The proposed model can generate action-controllable surgical data
and the architecture design is verified with extensive experiments on the
unlabeled SurgToolLoc-2022 dataset. Codes and implementation details are
available at https://github.com/bhattarailab/Surgical-Vision-World-Model
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 10:55:52 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Koju",
"Saurabh",
""
],
[
"Bastola",
"Saurav",
""
],
[
"Shrestha",
"Prashant",
""
],
[
"Amgain",
"Sanskar",
""
],
[
"Shrestha",
"Yash Raj",
""
],
[
"Poudel",
"Rudra P. K.",
""
],
[
"Bhattarai",
"Binod",
""
]
]
| TITLE: Surgical Vision World Model
ABSTRACT: Realistic and interactive surgical simulation has the potential to facilitate
crucial applications, such as medical professional training and autonomous
surgical agent training. In the natural visual domain, world models have
enabled action-controlled data generation, demonstrating the potential to train
autonomous agents in interactive simulated environments when large-scale real
data acquisition is infeasible. However, such works in the surgical domain have
been limited to simplified computer simulations, and lack realism. Furthermore,
existing literature in world models has predominantly dealt with action-labeled
data, limiting their applicability to real-world surgical data, where obtaining
action annotation is prohibitively expensive. Inspired by the recent success of
Genie in leveraging unlabeled video game data to infer latent actions and
enable action-controlled data generation, we propose the first surgical vision
world model. The proposed model can generate action-controllable surgical data
and the architecture design is verified with extensive experiments on the
unlabeled SurgToolLoc-2022 dataset. Codes and implementation details are
available at https://github.com/bhattarailab/Surgical-Vision-World-Model
| no_new_dataset | 0.947527 |
2503.02907 | Samuel Sohn | Samuel S. Sohn, Sten Knutsen, Karin Stromswold | Fine-Tuning Whisper for Inclusive Prosodic Stress Analysis | Appears in Proceedings of the ISCA/ITG Workshop on Diversity in Large
Speech and Language Models | null | null | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prosody plays a crucial role in speech perception, influencing both human
understanding and automatic speech recognition (ASR) systems. Despite its
importance, prosodic stress remains under-studied due to the challenge of
efficiently analyzing it. This study explores fine-tuning OpenAI's Whisper
large-v2 ASR model to recognize phrasal, lexical, and contrastive stress in
speech. Using a dataset of 66 native English speakers, including male, female,
neurotypical, and neurodivergent individuals, we assess the model's ability to
generalize stress patterns and classify speakers by neurotype and gender based
on brief speech samples. Our results highlight near-human accuracy in ASR
performance across all three stress types and near-perfect precision in
classifying gender and neurotype. By improving prosody-aware ASR, this work
contributes to equitable and robust transcription technologies for diverse
populations.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 16:48:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Sohn",
"Samuel S.",
""
],
[
"Knutsen",
"Sten",
""
],
[
"Stromswold",
"Karin",
""
]
]
| TITLE: Fine-Tuning Whisper for Inclusive Prosodic Stress Analysis
ABSTRACT: Prosody plays a crucial role in speech perception, influencing both human
understanding and automatic speech recognition (ASR) systems. Despite its
importance, prosodic stress remains under-studied due to the challenge of
efficiently analyzing it. This study explores fine-tuning OpenAI's Whisper
large-v2 ASR model to recognize phrasal, lexical, and contrastive stress in
speech. Using a dataset of 66 native English speakers, including male, female,
neurotypical, and neurodivergent individuals, we assess the model's ability to
generalize stress patterns and classify speakers by neurotype and gender based
on brief speech samples. Our results highlight near-human accuracy in ASR
performance across all three stress types and near-perfect precision in
classifying gender and neurotype. By improving prosody-aware ASR, this work
contributes to equitable and robust transcription technologies for diverse
populations.
| no_new_dataset | 0.920718 |
2503.02913 | Zilin Zhao | Zilin Zhao, Chishui Chen, Haotian Shi, Jiale Chen, Xuanlin Yue,
Zhejian Yang and Yang Liu | Towards Robust Multi-UAV Collaboration: MARL with Noise-Resilient
Communication and Attention Mechanisms | null | null | null | null | cs.MA cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Efficient path planning for unmanned aerial vehicles (UAVs) is crucial in
remote sensing and information collection. As task scales expand, the
cooperative deployment of multiple UAVs significantly improves information
collection efficiency. However, collaborative communication and decision-making
for multiple UAVs remain major challenges in path planning, especially in noisy
environments. To efficiently accomplish complex information collection tasks in
3D space and address robust communication issues, we propose a multi-agent
reinforcement learning (MARL) framework for UAV path planning based on the
Counterfactual Multi-Agent Policy Gradients (COMA) algorithm. The framework
incorporates attention mechanism-based UAV communication protocol and
training-deployment system, significantly improving communication robustness
and individual decision-making capabilities in noisy conditions. Experiments
conducted on both synthetic and real-world datasets demonstrate that our method
outperforms existing algorithms in terms of path planning efficiency and
robustness, especially in noisy environments, achieving a 78\% improvement in
entropy reduction.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 08:05:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhao",
"Zilin",
""
],
[
"Chen",
"Chishui",
""
],
[
"Shi",
"Haotian",
""
],
[
"Chen",
"Jiale",
""
],
[
"Yue",
"Xuanlin",
""
],
[
"Yang",
"Zhejian",
""
],
[
"Liu",
"Yang",
""
]
]
| TITLE: Towards Robust Multi-UAV Collaboration: MARL with Noise-Resilient
Communication and Attention Mechanisms
ABSTRACT: Efficient path planning for unmanned aerial vehicles (UAVs) is crucial in
remote sensing and information collection. As task scales expand, the
cooperative deployment of multiple UAVs significantly improves information
collection efficiency. However, collaborative communication and decision-making
for multiple UAVs remain major challenges in path planning, especially in noisy
environments. To efficiently accomplish complex information collection tasks in
3D space and address robust communication issues, we propose a multi-agent
reinforcement learning (MARL) framework for UAV path planning based on the
Counterfactual Multi-Agent Policy Gradients (COMA) algorithm. The framework
incorporates attention mechanism-based UAV communication protocol and
training-deployment system, significantly improving communication robustness
and individual decision-making capabilities in noisy conditions. Experiments
conducted on both synthetic and real-world datasets demonstrate that our method
outperforms existing algorithms in terms of path planning efficiency and
robustness, especially in noisy environments, achieving a 78\% improvement in
entropy reduction.
| no_new_dataset | 0.945096 |
2503.02916 | Hanjing Ye | Yu Zhan, Hanjing Ye, Hong Zhang | Monocular Person Localization under Camera Ego-motion | Under review | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localizing a person from a moving monocular camera is critical for
Human-Robot Interaction (HRI). To estimate the 3D human position from a 2D
image, existing methods either depend on the geometric assumption of a fixed
camera or use a position regression model trained on datasets containing little
camera ego-motion. These methods are vulnerable to fierce camera ego-motion,
resulting in inaccurate person localization. We consider person localization as
a part of a pose estimation problem. By representing a human with a four-point
model, our method jointly estimates the 2D camera attitude and the person's 3D
location through optimization. Evaluations on both public datasets and real
robot experiments demonstrate our method outperforms baselines in person
localization accuracy. Our method is further implemented into a
person-following system and deployed on an agile quadruped robot.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 11:07:27 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhan",
"Yu",
""
],
[
"Ye",
"Hanjing",
""
],
[
"Zhang",
"Hong",
""
]
]
| TITLE: Monocular Person Localization under Camera Ego-motion
ABSTRACT: Localizing a person from a moving monocular camera is critical for
Human-Robot Interaction (HRI). To estimate the 3D human position from a 2D
image, existing methods either depend on the geometric assumption of a fixed
camera or use a position regression model trained on datasets containing little
camera ego-motion. These methods are vulnerable to fierce camera ego-motion,
resulting in inaccurate person localization. We consider person localization as
a part of a pose estimation problem. By representing a human with a four-point
model, our method jointly estimates the 2D camera attitude and the person's 3D
location through optimization. Evaluations on both public datasets and real
robot experiments demonstrate our method outperforms baselines in person
localization accuracy. Our method is further implemented into a
person-following system and deployed on an agile quadruped robot.
| no_new_dataset | 0.947769 |
2503.02917 | Deval Mehta | Deval Mehta, Yiwen Jiang, Catherine L Jan, Mingguang He, Kshitij
Jadhav, Zongyuan Ge | Interpretable Few-Shot Retinal Disease Diagnosis with Concept-Guided
Prompting of Vision-Language Models | Accepted to Information Processing in Medical Imaging (IPMI) 2025 | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advancements in deep learning have shown significant potential for
classifying retinal diseases using color fundus images. However, existing works
predominantly rely exclusively on image data, lack interpretability in their
diagnostic decisions, and treat medical professionals primarily as annotators
for ground truth labeling. To fill this gap, we implement two key strategies:
extracting interpretable concepts of retinal diseases using the knowledge base
of GPT models and incorporating these concepts as a language component in
prompt-learning to train vision-language (VL) models with both fundus images
and their associated concepts. Our method not only improves retinal disease
classification but also enriches few-shot and zero-shot detection (novel
disease detection), while offering the added benefit of concept-based model
interpretability. Our extensive evaluation across two diverse retinal fundus
image datasets illustrates substantial performance gains in VL-model based
few-shot methodologies through our concept integration approach, demonstrating
an average improvement of approximately 5.8\% and 2.7\% mean average precision
for 16-shot learning and zero-shot (novel class) detection respectively. Our
method marks a pivotal step towards interpretable and efficient retinal disease
recognition for real-world clinical applications.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 12:03:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Mehta",
"Deval",
""
],
[
"Jiang",
"Yiwen",
""
],
[
"Jan",
"Catherine L",
""
],
[
"He",
"Mingguang",
""
],
[
"Jadhav",
"Kshitij",
""
],
[
"Ge",
"Zongyuan",
""
]
]
| TITLE: Interpretable Few-Shot Retinal Disease Diagnosis with Concept-Guided
Prompting of Vision-Language Models
ABSTRACT: Recent advancements in deep learning have shown significant potential for
classifying retinal diseases using color fundus images. However, existing works
predominantly rely exclusively on image data, lack interpretability in their
diagnostic decisions, and treat medical professionals primarily as annotators
for ground truth labeling. To fill this gap, we implement two key strategies:
extracting interpretable concepts of retinal diseases using the knowledge base
of GPT models and incorporating these concepts as a language component in
prompt-learning to train vision-language (VL) models with both fundus images
and their associated concepts. Our method not only improves retinal disease
classification but also enriches few-shot and zero-shot detection (novel
disease detection), while offering the added benefit of concept-based model
interpretability. Our extensive evaluation across two diverse retinal fundus
image datasets illustrates substantial performance gains in VL-model based
few-shot methodologies through our concept integration approach, demonstrating
an average improvement of approximately 5.8\% and 2.7\% mean average precision
for 16-shot learning and zero-shot (novel class) detection respectively. Our
method marks a pivotal step towards interpretable and efficient retinal disease
recognition for real-world clinical applications.
| no_new_dataset | 0.95511 |
2503.02922 | Joyce Cahoon | Joyce Cahoon, Prerna Singh, Nick Litombe, Jonathan Larson, Ha Trinh,
Yiwen Zhu, Andreas Mueller, Fotis Psallidas, Carlo Curino | Optimizing open-domain question answering with graph-based retrieval
augmented generation | null | null | null | null | cs.IR | http://creativecommons.org/publicdomain/zero/1.0/ | In this work, we benchmark various graph-based retrieval-augmented generation
(RAG) systems across a broad spectrum of query types, including OLTP-style
(fact-based) and OLAP-style (thematic) queries, to address the complex demands
of open-domain question answering (QA). Traditional RAG methods often fall
short in handling nuanced, multi-document synthesis tasks. By structuring
knowledge as graphs, we can facilitate the retrieval of context that captures
greater semantic depth and enhances language model operations. We explore
graph-based RAG methodologies and introduce TREX, a novel, cost-effective
alternative that combines graph-based and vector-based retrieval techniques.
Our benchmarking across four diverse datasets highlights the strengths of
different RAG methodologies, demonstrates TREX's ability to handle multiple
open-domain QA types, and reveals the limitations of current evaluation
methods.
In a real-world technical support case study, we demonstrate how TREX
solutions can surpass conventional vector-based RAG in efficiently synthesizing
data from heterogeneous sources. Our findings underscore the potential of
augmenting large language models with advanced retrieval and orchestration
capabilities, advancing scalable, graph-based AI solutions.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:47:17 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cahoon",
"Joyce",
""
],
[
"Singh",
"Prerna",
""
],
[
"Litombe",
"Nick",
""
],
[
"Larson",
"Jonathan",
""
],
[
"Trinh",
"Ha",
""
],
[
"Zhu",
"Yiwen",
""
],
[
"Mueller",
"Andreas",
""
],
[
"Psallidas",
"Fotis",
""
],
[
"Curino",
"Carlo",
""
]
]
| TITLE: Optimizing open-domain question answering with graph-based retrieval
augmented generation
ABSTRACT: In this work, we benchmark various graph-based retrieval-augmented generation
(RAG) systems across a broad spectrum of query types, including OLTP-style
(fact-based) and OLAP-style (thematic) queries, to address the complex demands
of open-domain question answering (QA). Traditional RAG methods often fall
short in handling nuanced, multi-document synthesis tasks. By structuring
knowledge as graphs, we can facilitate the retrieval of context that captures
greater semantic depth and enhances language model operations. We explore
graph-based RAG methodologies and introduce TREX, a novel, cost-effective
alternative that combines graph-based and vector-based retrieval techniques.
Our benchmarking across four diverse datasets highlights the strengths of
different RAG methodologies, demonstrates TREX's ability to handle multiple
open-domain QA types, and reveals the limitations of current evaluation
methods.
In a real-world technical support case study, we demonstrate how TREX
solutions can surpass conventional vector-based RAG in efficiently synthesizing
data from heterogeneous sources. Our findings underscore the potential of
augmenting large language models with advanced retrieval and orchestration
capabilities, advancing scalable, graph-based AI solutions.
| no_new_dataset | 0.942718 |
2503.02924 | Yue Meng | Yue Meng, Chuchu fan | Diverse Controllable Diffusion Policy with Signal Temporal Logic | Accepted by IEEE Robotics and Automation Letters (RA-L), October 2024 | IEEE Robotics and Automation Letters, vol. 9, no. 10, pp.
8354-8361, Oct. 2024 | 10.1109/LRA.2024.3444668 | null | cs.RO cs.AI cs.LG cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating realistic simulations is critical for autonomous system
applications such as self-driving and human-robot interactions. However,
driving simulators nowadays still have difficulty in generating controllable,
diverse, and rule-compliant behaviors for road participants: Rule-based models
cannot produce diverse behaviors and require careful tuning, whereas
learning-based methods imitate the policy from data but are not designed to
follow the rules explicitly. Besides, the real-world datasets are by nature
"single-outcome", making the learning method hard to generate diverse
behaviors. In this paper, we leverage Signal Temporal Logic (STL) and Diffusion
Models to learn controllable, diverse, and rule-aware policy. We first
calibrate the STL on the real-world data, then generate diverse synthetic data
using trajectory optimization, and finally learn the rectified diffusion policy
on the augmented dataset. We test on the NuScenes dataset and our approach can
achieve the most diverse rule-compliant trajectories compared to other
baselines, with a runtime 1/17X to the second-best approach. In the closed-loop
testing, our approach reaches the highest diversity, rule satisfaction rate,
and the least collision rate. Our method can generate varied characteristics
conditional on different STL parameters in testing. A case study on human-robot
encounter scenarios shows our approach can generate diverse and
closed-to-oracle trajectories. The annotation tool, augmented dataset, and code
are available at https://github.com/mengyuest/pSTL-diffusion-policy.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:59:00 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Meng",
"Yue",
""
],
[
"fan",
"Chuchu",
""
]
]
| TITLE: Diverse Controllable Diffusion Policy with Signal Temporal Logic
ABSTRACT: Generating realistic simulations is critical for autonomous system
applications such as self-driving and human-robot interactions. However,
driving simulators nowadays still have difficulty in generating controllable,
diverse, and rule-compliant behaviors for road participants: Rule-based models
cannot produce diverse behaviors and require careful tuning, whereas
learning-based methods imitate the policy from data but are not designed to
follow the rules explicitly. Besides, the real-world datasets are by nature
"single-outcome", making the learning method hard to generate diverse
behaviors. In this paper, we leverage Signal Temporal Logic (STL) and Diffusion
Models to learn controllable, diverse, and rule-aware policy. We first
calibrate the STL on the real-world data, then generate diverse synthetic data
using trajectory optimization, and finally learn the rectified diffusion policy
on the augmented dataset. We test on the NuScenes dataset and our approach can
achieve the most diverse rule-compliant trajectories compared to other
baselines, with a runtime 1/17X to the second-best approach. In the closed-loop
testing, our approach reaches the highest diversity, rule satisfaction rate,
and the least collision rate. Our method can generate varied characteristics
conditional on different STL parameters in testing. A case study on human-robot
encounter scenarios shows our approach can generate diverse and
closed-to-oracle trajectories. The annotation tool, augmented dataset, and code
are available at https://github.com/mengyuest/pSTL-diffusion-policy.
| no_new_dataset | 0.945045 |
2503.02951 | Zhangchen Xu | Zhangchen Xu, Yang Liu, Yueqin Yin, Mingyuan Zhou, Radha Poovendran | KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for
Coding | Codes and Data: https://kodcode-ai.github.io/ | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We introduce KodCode, a synthetic dataset that addresses the persistent
challenge of acquiring high-quality, verifiable training data across diverse
difficulties and domains for training Large Language Models for coding.
Existing code-focused resources typically fail to ensure either the breadth of
coverage (e.g., spanning simple coding tasks to advanced algorithmic problems)
or verifiable correctness (e.g., unit tests). In contrast, KodCode comprises
question-solution-test triplets that are systematically validated via a
self-verification procedure. Our pipeline begins by synthesizing a broad range
of coding questions, then generates solutions and test cases with additional
attempts allocated to challenging problems. Finally, post-training data
synthesis is done by rewriting questions into diverse formats and generating
responses under a test-based reject sampling procedure from a reasoning model
(DeepSeek R1). This pipeline yields a large-scale, robust and diverse coding
dataset. KodCode is suitable for supervised fine-tuning and the paired unit
tests also provide great potential for RL tuning. Fine-tuning experiments on
coding benchmarks (HumanEval(+), MBPP(+), BigCodeBench, and LiveCodeBench)
demonstrate that KodCode-tuned models achieve state-of-the-art performance,
surpassing models like Qwen2.5-Coder-32B-Instruct and
DeepSeek-R1-Distill-Llama-70B.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 19:17:36 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Xu",
"Zhangchen",
""
],
[
"Liu",
"Yang",
""
],
[
"Yin",
"Yueqin",
""
],
[
"Zhou",
"Mingyuan",
""
],
[
"Poovendran",
"Radha",
""
]
]
| TITLE: KodCode: A Diverse, Challenging, and Verifiable Synthetic Dataset for
Coding
ABSTRACT: We introduce KodCode, a synthetic dataset that addresses the persistent
challenge of acquiring high-quality, verifiable training data across diverse
difficulties and domains for training Large Language Models for coding.
Existing code-focused resources typically fail to ensure either the breadth of
coverage (e.g., spanning simple coding tasks to advanced algorithmic problems)
or verifiable correctness (e.g., unit tests). In contrast, KodCode comprises
question-solution-test triplets that are systematically validated via a
self-verification procedure. Our pipeline begins by synthesizing a broad range
of coding questions, then generates solutions and test cases with additional
attempts allocated to challenging problems. Finally, post-training data
synthesis is done by rewriting questions into diverse formats and generating
responses under a test-based reject sampling procedure from a reasoning model
(DeepSeek R1). This pipeline yields a large-scale, robust and diverse coding
dataset. KodCode is suitable for supervised fine-tuning and the paired unit
tests also provide great potential for RL tuning. Fine-tuning experiments on
coding benchmarks (HumanEval(+), MBPP(+), BigCodeBench, and LiveCodeBench)
demonstrate that KodCode-tuned models achieve state-of-the-art performance,
surpassing models like Qwen2.5-Coder-32B-Instruct and
DeepSeek-R1-Distill-Llama-70B.
| new_dataset | 0.956227 |
2503.02960 | Shiyang Chen | Shiyang Chen, Xiang Song, Vasiloudis Theodore, Hang Liu | Deal: Distributed End-to-End GNN Inference for All Nodes | null | null | null | null | cs.DC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) are a new research frontier with various
applications and successes. The end-to-end inference for all nodes, is common
for GNN embedding models, which are widely adopted in applications like
recommendation and advertising. While sharing opportunities arise in GNN tasks
(i.e., inference for a few nodes and training), the potential for sharing in
full graph end-to-end inference is largely underutilized because traditional
efforts fail to fully extract sharing benefits due to overwhelming overheads or
excessive memory usage.
This paper introduces Deal, a distributed GNN inference system that is
dedicated to end-to-end inference for all nodes for graphs with multi-billion
edges. First, we unveil and exploit an untapped sharing opportunity during
sampling, and maximize the benefits from sharing during subsequent GNN
computation. Second, we introduce memory-saving and communication-efficient
distributed primitives for lightweight 1-D graph and feature tensor
collaborative partitioning-based distributed inference. Third, we introduce
partitioned, pipelined communication and fusing feature preparation with the
first GNN primitive for end-to-end inference. With Deal, the end-to-end
inference time on real-world benchmark datasets is reduced up to 7.70 x and the
graph construction time is reduced up to 21.05 x, compared to the
state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 19:35:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chen",
"Shiyang",
""
],
[
"Song",
"Xiang",
""
],
[
"Theodore",
"Vasiloudis",
""
],
[
"Liu",
"Hang",
""
]
]
| TITLE: Deal: Distributed End-to-End GNN Inference for All Nodes
ABSTRACT: Graph Neural Networks (GNNs) are a new research frontier with various
applications and successes. The end-to-end inference for all nodes, is common
for GNN embedding models, which are widely adopted in applications like
recommendation and advertising. While sharing opportunities arise in GNN tasks
(i.e., inference for a few nodes and training), the potential for sharing in
full graph end-to-end inference is largely underutilized because traditional
efforts fail to fully extract sharing benefits due to overwhelming overheads or
excessive memory usage.
This paper introduces Deal, a distributed GNN inference system that is
dedicated to end-to-end inference for all nodes for graphs with multi-billion
edges. First, we unveil and exploit an untapped sharing opportunity during
sampling, and maximize the benefits from sharing during subsequent GNN
computation. Second, we introduce memory-saving and communication-efficient
distributed primitives for lightweight 1-D graph and feature tensor
collaborative partitioning-based distributed inference. Third, we introduce
partitioned, pipelined communication and fusing feature preparation with the
first GNN primitive for end-to-end inference. With Deal, the end-to-end
inference time on real-world benchmark datasets is reduced up to 7.70 x and the
graph construction time is reduced up to 21.05 x, compared to the
state-of-the-art.
| no_new_dataset | 0.947137 |
2503.02968 | Fatima Jahan Sarmin | Fatima J. Sarmin, Atiquer R. Rahman, Christopher J. Henry, Noman
Mohammed | Privacy-Preserving Fair Synthetic Tabular Data | null | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Sharing of tabular data containing valuable but private information is
limited due to legal and ethical issues. Synthetic data could be an alternative
solution to this sharing problem, as it is artificially generated by machine
learning algorithms and tries to capture the underlying data distribution.
However, machine learning models are not free from memorization and may
introduce biases, as they rely on training data. Producing synthetic data that
preserves privacy and fairness while maintaining utility close to the real data
is a challenging task. This research simultaneously addresses both the privacy
and fairness aspects of synthetic data, an area not explored by other studies.
In this work, we present PF-WGAN, a privacy-preserving, fair synthetic tabular
data generator based on the WGAN-GP model. We have modified the original
WGAN-GP by adding privacy and fairness constraints forcing it to produce
privacy-preserving fair data. This approach will enable the publication of
datasets that protect individual's privacy and remain unbiased toward any
particular group. We compared the results with three state-of-the-art synthetic
data generator models in terms of utility, privacy, and fairness across four
different datasets. We found that the proposed model exhibits a more balanced
trade-off among utility, privacy, and fairness.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 19:51:00 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Sarmin",
"Fatima J.",
""
],
[
"Rahman",
"Atiquer R.",
""
],
[
"Henry",
"Christopher J.",
""
],
[
"Mohammed",
"Noman",
""
]
]
| TITLE: Privacy-Preserving Fair Synthetic Tabular Data
ABSTRACT: Sharing of tabular data containing valuable but private information is
limited due to legal and ethical issues. Synthetic data could be an alternative
solution to this sharing problem, as it is artificially generated by machine
learning algorithms and tries to capture the underlying data distribution.
However, machine learning models are not free from memorization and may
introduce biases, as they rely on training data. Producing synthetic data that
preserves privacy and fairness while maintaining utility close to the real data
is a challenging task. This research simultaneously addresses both the privacy
and fairness aspects of synthetic data, an area not explored by other studies.
In this work, we present PF-WGAN, a privacy-preserving, fair synthetic tabular
data generator based on the WGAN-GP model. We have modified the original
WGAN-GP by adding privacy and fairness constraints forcing it to produce
privacy-preserving fair data. This approach will enable the publication of
datasets that protect individual's privacy and remain unbiased toward any
particular group. We compared the results with three state-of-the-art synthetic
data generator models in terms of utility, privacy, and fairness across four
different datasets. We found that the proposed model exhibits a more balanced
trade-off among utility, privacy, and fairness.
| no_new_dataset | 0.937783 |
2503.02978 | Boris Slautin | Boris N. Slautin, Utkarsh Pratiush, Doru C. Lupascu, Maxim A.
Ziatdinov, Sergei V. Kalinin | Integrating Predictive and Generative Capabilities by Latent Space
Design via the DKL-VAE Model | 25 pages, 15 figures | null | null | null | cs.LG cond-mat.mtrl-sci | http://creativecommons.org/licenses/by/4.0/ | We introduce a Deep Kernel Learning Variational Autoencoder (VAE-DKL)
framework that integrates the generative power of a Variational Autoencoder
(VAE) with the predictive nature of Deep Kernel Learning (DKL). The VAE learns
a latent representation of high-dimensional data, enabling the generation of
novel structures, while DKL refines this latent space by structuring it in
alignment with target properties through Gaussian Process (GP) regression. This
approach preserves the generative capabilities of the VAE while enhancing its
latent space for GP-based property prediction. We evaluate the framework on two
datasets: a structured card dataset with predefined variational factors and the
QM9 molecular dataset, where enthalpy serves as the target function for
optimization. The model demonstrates high-precision property prediction and
enables the generation of novel out-of-training subset structures with desired
characteristics. The VAE-DKL framework offers a promising approach for
high-throughput material discovery and molecular design, balancing structured
latent space organization with generative flexibility.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 20:05:04 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Slautin",
"Boris N.",
""
],
[
"Pratiush",
"Utkarsh",
""
],
[
"Lupascu",
"Doru C.",
""
],
[
"Ziatdinov",
"Maxim A.",
""
],
[
"Kalinin",
"Sergei V.",
""
]
]
| TITLE: Integrating Predictive and Generative Capabilities by Latent Space
Design via the DKL-VAE Model
ABSTRACT: We introduce a Deep Kernel Learning Variational Autoencoder (VAE-DKL)
framework that integrates the generative power of a Variational Autoencoder
(VAE) with the predictive nature of Deep Kernel Learning (DKL). The VAE learns
a latent representation of high-dimensional data, enabling the generation of
novel structures, while DKL refines this latent space by structuring it in
alignment with target properties through Gaussian Process (GP) regression. This
approach preserves the generative capabilities of the VAE while enhancing its
latent space for GP-based property prediction. We evaluate the framework on two
datasets: a structured card dataset with predefined variational factors and the
QM9 molecular dataset, where enthalpy serves as the target function for
optimization. The model demonstrates high-precision property prediction and
enables the generation of novel out-of-training subset structures with desired
characteristics. The VAE-DKL framework offers a promising approach for
high-throughput material discovery and molecular design, balancing structured
latent space organization with generative flexibility.
| no_new_dataset | 0.9462 |
2503.02988 | Yiming Xu | Yiming Xu, Bin Shi, Zhen Peng, Huixiang Liu, Bo Dong, Chen Chen | Out-of-Distribution Generalization on Graphs via Progressive Inference | Accepted by AAAI2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The development and evaluation of graph neural networks (GNNs) generally
follow the independent and identically distributed (i.i.d.) assumption. Yet
this assumption is often untenable in practice due to the uncontrollable data
generation mechanism. In particular, when the data distribution shows a
significant shift, most GNNs would fail to produce reliable predictions and may
even make decisions randomly. One of the most promising solutions to improve
the model generalization is to pick out causal invariant parts in the input
graph. Nonetheless, we observe a significant distribution gap between the
causal parts learned by existing methods and the ground truth, leading to
undesirable performance. In response to the above issues, this paper presents
GPro, a model that learns graph causal invariance with progressive inference.
Specifically, the complicated graph causal invariant learning is decomposed
into multiple intermediate inference steps from easy to hard, and the
perception of GPro is continuously strengthened through a progressive inference
process to extract causal features that are stable to distribution shifts. We
also enlarge the training distribution by creating counterfactual samples to
enhance the capability of the GPro in capturing the causal invariant parts.
Extensive experiments demonstrate that our proposed GPro outperforms the
state-of-the-art methods by 4.91% on average. For datasets with more severe
distribution shifts, the performance improvement can be up to 6.86%.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 20:31:55 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Xu",
"Yiming",
""
],
[
"Shi",
"Bin",
""
],
[
"Peng",
"Zhen",
""
],
[
"Liu",
"Huixiang",
""
],
[
"Dong",
"Bo",
""
],
[
"Chen",
"Chen",
""
]
]
| TITLE: Out-of-Distribution Generalization on Graphs via Progressive Inference
ABSTRACT: The development and evaluation of graph neural networks (GNNs) generally
follow the independent and identically distributed (i.i.d.) assumption. Yet
this assumption is often untenable in practice due to the uncontrollable data
generation mechanism. In particular, when the data distribution shows a
significant shift, most GNNs would fail to produce reliable predictions and may
even make decisions randomly. One of the most promising solutions to improve
the model generalization is to pick out causal invariant parts in the input
graph. Nonetheless, we observe a significant distribution gap between the
causal parts learned by existing methods and the ground truth, leading to
undesirable performance. In response to the above issues, this paper presents
GPro, a model that learns graph causal invariance with progressive inference.
Specifically, the complicated graph causal invariant learning is decomposed
into multiple intermediate inference steps from easy to hard, and the
perception of GPro is continuously strengthened through a progressive inference
process to extract causal features that are stable to distribution shifts. We
also enlarge the training distribution by creating counterfactual samples to
enhance the capability of the GPro in capturing the causal invariant parts.
Extensive experiments demonstrate that our proposed GPro outperforms the
state-of-the-art methods by 4.91% on average. For datasets with more severe
distribution shifts, the performance improvement can be up to 6.86%.
| no_new_dataset | 0.949153 |
2503.02992 | Yimin Tang | Yimin Tang, Xiao Xiong, Jingyi Xi, Jiaoyang Li, Erdem B{\i}y{\i}k,
Sven Koenig | RAILGUN: A Unified Convolutional Policy for Multi-Agent Path Finding
Across Different Environments and Tasks | 7 pages | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Multi-Agent Path Finding (MAPF), which focuses on finding collision-free
paths for multiple robots, is crucial for applications ranging from aerial
swarms to warehouse automation. Solving MAPF is NP-hard so learning-based
approaches for MAPF have gained attention, particularly those leveraging deep
neural networks. Nonetheless, despite the community's continued efforts, all
learning-based MAPF planners still rely on decentralized planning due to
variability in the number of agents and map sizes. We have developed the first
centralized learning-based policy for MAPF problem called RAILGUN. RAILGUN is
not an agent-based policy but a map-based policy. By leveraging a CNN-based
architecture, RAILGUN can generalize across different maps and handle any
number of agents. We collect trajectories from rule-based methods to train our
model in a supervised way. In experiments, RAILGUN outperforms most baseline
methods and demonstrates great zero-shot generalization capabilities on various
tasks, maps and agent numbers that were not seen in the training dataset.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 20:35:20 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tang",
"Yimin",
""
],
[
"Xiong",
"Xiao",
""
],
[
"Xi",
"Jingyi",
""
],
[
"Li",
"Jiaoyang",
""
],
[
"Bıyık",
"Erdem",
""
],
[
"Koenig",
"Sven",
""
]
]
| TITLE: RAILGUN: A Unified Convolutional Policy for Multi-Agent Path Finding
Across Different Environments and Tasks
ABSTRACT: Multi-Agent Path Finding (MAPF), which focuses on finding collision-free
paths for multiple robots, is crucial for applications ranging from aerial
swarms to warehouse automation. Solving MAPF is NP-hard so learning-based
approaches for MAPF have gained attention, particularly those leveraging deep
neural networks. Nonetheless, despite the community's continued efforts, all
learning-based MAPF planners still rely on decentralized planning due to
variability in the number of agents and map sizes. We have developed the first
centralized learning-based policy for MAPF problem called RAILGUN. RAILGUN is
not an agent-based policy but a map-based policy. By leveraging a CNN-based
architecture, RAILGUN can generalize across different maps and handle any
number of agents. We collect trajectories from rule-based methods to train our
model in a supervised way. In experiments, RAILGUN outperforms most baseline
methods and demonstrates great zero-shot generalization capabilities on various
tasks, maps and agent numbers that were not seen in the training dataset.
| no_new_dataset | 0.941654 |
2503.03008 | Andrea Gurioli | Andrea Gurioli, Federico Pennino, Jo\~ao Monteiro, Maurizio Gabbrielli | One Model to Train them All: Hierarchical Self-Distillation for Enhanced
Early Layer Embeddings | null | null | null | null | cs.CL cs.AI cs.PL cs.SE | http://creativecommons.org/licenses/by/4.0/ | Deploying language models often requires handling model size vs. performance
trade-offs to satisfy downstream latency constraints while preserving the
model's usefulness. Model distillation is commonly employed to reduce model
size while maintaining acceptable performance. However, distillation can be
inefficient since it involves multiple training steps. In this work, we
introduce MODULARSTARENCODER, a modular multi-exit encoder with 1B parameters,
useful for multiple tasks within the scope of code retrieval.
MODULARSTARENCODER is trained with a novel self-distillation mechanism that
significantly improves lower-layer representations-allowing different portions
of the model to be used while still maintaining a good trade-off in terms of
performance. Our architecture focuses on enhancing text-to-code and
code-to-code search by systematically capturing syntactic and semantic
structures across multiple levels of representation. Specific encoder layers
are targeted as exit heads, allowing higher layers to guide earlier layers
during training. This self-distillation effect improves intermediate
representations, increasing retrieval recall at no extra training cost. In
addition to the multi-exit scheme, our approach integrates a repository-level
contextual loss that maximally utilizes the training context window, further
enhancing the learned representations. We also release a new dataset
constructed via code translation, seamlessly expanding traditional text-to-code
benchmarks with code-to-code pairs across diverse programming languages.
Experimental results highlight the benefits of self-distillation through
multi-exit supervision.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 21:08:17 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Gurioli",
"Andrea",
""
],
[
"Pennino",
"Federico",
""
],
[
"Monteiro",
"João",
""
],
[
"Gabbrielli",
"Maurizio",
""
]
]
| TITLE: One Model to Train them All: Hierarchical Self-Distillation for Enhanced
Early Layer Embeddings
ABSTRACT: Deploying language models often requires handling model size vs. performance
trade-offs to satisfy downstream latency constraints while preserving the
model's usefulness. Model distillation is commonly employed to reduce model
size while maintaining acceptable performance. However, distillation can be
inefficient since it involves multiple training steps. In this work, we
introduce MODULARSTARENCODER, a modular multi-exit encoder with 1B parameters,
useful for multiple tasks within the scope of code retrieval.
MODULARSTARENCODER is trained with a novel self-distillation mechanism that
significantly improves lower-layer representations-allowing different portions
of the model to be used while still maintaining a good trade-off in terms of
performance. Our architecture focuses on enhancing text-to-code and
code-to-code search by systematically capturing syntactic and semantic
structures across multiple levels of representation. Specific encoder layers
are targeted as exit heads, allowing higher layers to guide earlier layers
during training. This self-distillation effect improves intermediate
representations, increasing retrieval recall at no extra training cost. In
addition to the multi-exit scheme, our approach integrates a repository-level
contextual loss that maximally utilizes the training context window, further
enhancing the learned representations. We also release a new dataset
constructed via code translation, seamlessly expanding traditional text-to-code
benchmarks with code-to-code pairs across diverse programming languages.
Experimental results highlight the benefits of self-distillation through
multi-exit supervision.
| new_dataset | 0.953275 |
2503.03018 | Hayden McAlister | Hayden McAlister, Anthony Robins, and Lech Szymanski | Classifying States of the Hopfield Network with Improved Accuracy,
Generalization, and Interpretability | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the existing work on Hopfield network state classification,
employing more complex models that remain interpretable, such as
densely-connected feed-forward deep neural networks and support vector
machines. The states of the Hopfield network can be grouped into several
classes, including learned (those presented during training), spurious (stable
states that were not learned), and prototype (stable states that were not
learned but are representative for a subset of learned states). It is often
useful to determine to what class a given state belongs to; for example to
ignore spurious states when retrieving from the network. Previous research has
approached the state classification task with simple linear methods, most
notably the stability ratio. We deepen the research on classifying states from
prototype-regime Hopfield networks, investigating how varying the factors
strengthening prototypes influences the state classification task. We study the
generalizability of different classification models when trained on states
derived from different prototype tasks -- for example, can a network trained on
a Hopfield network with 10 prototypes classify states from a network with 20
prototypes? We find that simple models often outperform the stability ratio
while remaining interpretable. These models require surprisingly little
training data and generalize exceptionally well to states generated by a range
of Hopfield networks, even those that were trained on exceedingly different
datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 21:29:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"McAlister",
"Hayden",
""
],
[
"Robins",
"Anthony",
""
],
[
"Szymanski",
"Lech",
""
]
]
| TITLE: Classifying States of the Hopfield Network with Improved Accuracy,
Generalization, and Interpretability
ABSTRACT: We extend the existing work on Hopfield network state classification,
employing more complex models that remain interpretable, such as
densely-connected feed-forward deep neural networks and support vector
machines. The states of the Hopfield network can be grouped into several
classes, including learned (those presented during training), spurious (stable
states that were not learned), and prototype (stable states that were not
learned but are representative for a subset of learned states). It is often
useful to determine to what class a given state belongs to; for example to
ignore spurious states when retrieving from the network. Previous research has
approached the state classification task with simple linear methods, most
notably the stability ratio. We deepen the research on classifying states from
prototype-regime Hopfield networks, investigating how varying the factors
strengthening prototypes influences the state classification task. We study the
generalizability of different classification models when trained on states
derived from different prototype tasks -- for example, can a network trained on
a Hopfield network with 10 prototypes classify states from a network with 20
prototypes? We find that simple models often outperform the stability ratio
while remaining interpretable. These models require surprisingly little
training data and generalize exceptionally well to states generated by a range
of Hopfield networks, even those that were trained on exceedingly different
datasets.
| no_new_dataset | 0.948965 |
2503.03022 | Ragini Gupta | Ragini Gupta, Shinan Liu, Ruixiao Zhang, Xinyue Hu, Pranav Kommaraju,
Xiaoyang Wang, Hadjer Benkraouda, Nick Feamster, Klara Nahrstedt | Generative Active Adaptation for Drifting and Imbalanced Network
Intrusion Detection | null | null | null | null | cs.NI cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning has shown promise in network intrusion detection systems,
yet its performance often degrades due to concept drift and imbalanced data.
These challenges are compounded by the labor-intensive process of labeling
network traffic, especially when dealing with evolving and rare attack types,
which makes selecting the right data for adaptation difficult. To address these
issues, we propose a generative active adaptation framework that minimizes
labeling effort while enhancing model robustness. Our approach employs
density-aware active sampling to identify the most informative samples for
annotation and leverages deep generative models to synthesize diverse samples,
thereby augmenting the training set and mitigating the effects of concept
drift. We evaluate our end-to-end framework on both simulated IDS data and a
real-world ISP dataset, demonstrating significant improvements in intrusion
detection performance. Our method boosts the overall F1-score from 0.60
(without adaptation) to 0.86. Rare attacks such as Infiltration, Web Attack,
and FTP-BruteForce, which originally achieve F1 scores of 0.001, 0.04, and
0.00, improve to 0.30, 0.50, and 0.71, respectively, with generative active
adaptation in the CIC-IDS 2018 dataset. Our framework effectively enhances rare
attack detection while reducing labeling costs, making it a scalable and
adaptive solution for real-world intrusion detection.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 21:49:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Gupta",
"Ragini",
""
],
[
"Liu",
"Shinan",
""
],
[
"Zhang",
"Ruixiao",
""
],
[
"Hu",
"Xinyue",
""
],
[
"Kommaraju",
"Pranav",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Benkraouda",
"Hadjer",
""
],
[
"Feamster",
"Nick",
""
],
[
"Nahrstedt",
"Klara",
""
]
]
| TITLE: Generative Active Adaptation for Drifting and Imbalanced Network
Intrusion Detection
ABSTRACT: Machine learning has shown promise in network intrusion detection systems,
yet its performance often degrades due to concept drift and imbalanced data.
These challenges are compounded by the labor-intensive process of labeling
network traffic, especially when dealing with evolving and rare attack types,
which makes selecting the right data for adaptation difficult. To address these
issues, we propose a generative active adaptation framework that minimizes
labeling effort while enhancing model robustness. Our approach employs
density-aware active sampling to identify the most informative samples for
annotation and leverages deep generative models to synthesize diverse samples,
thereby augmenting the training set and mitigating the effects of concept
drift. We evaluate our end-to-end framework on both simulated IDS data and a
real-world ISP dataset, demonstrating significant improvements in intrusion
detection performance. Our method boosts the overall F1-score from 0.60
(without adaptation) to 0.86. Rare attacks such as Infiltration, Web Attack,
and FTP-BruteForce, which originally achieve F1 scores of 0.001, 0.04, and
0.00, improve to 0.30, 0.50, and 0.71, respectively, with generative active
adaptation in the CIC-IDS 2018 dataset. Our framework effectively enhances rare
attack detection while reducing labeling costs, making it a scalable and
adaptive solution for real-world intrusion detection.
| no_new_dataset | 0.946843 |
2503.03025 | Peter Halmos | Peter Halmos, Julian Gold, Xinhao Liu, Benjamin J. Raphael | Hierarchical Refinement: Optimal Transport to Infinity and Beyond | 32 pages, 9 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal transport (OT) has enjoyed great success in machine-learning as a
principled way to align datasets via a least-cost correspondence. This success
was driven in large part by the runtime efficiency of the Sinkhorn algorithm
[Cuturi 2013], which computes a coupling between points from two datasets.
However, Sinkhorn has quadratic space complexity in the number of points,
limiting the scalability to larger datasets. Low-rank OT achieves linear-space
complexity, but by definition, cannot compute a one-to-one correspondence
between points. When the optimal transport problem is an assignment problem
between datasets then the optimal mapping, known as the Monge map, is
guaranteed to be a bijection. In this setting, we show that the factors of an
optimal low-rank coupling co-cluster each point with its image under the Monge
map. We leverage this invariant to derive an algorithm, Hierarchical Refinement
(HiRef), that dynamically constructs a multiscale partition of a dataset using
low-rank OT subproblems, culminating in a bijective coupling. Hierarchical
Refinement uses linear space and has log-linear runtime, retaining the space
advantage of low-rank OT while overcoming its limited resolution. We
demonstrate the advantages of Hierarchical Refinement on several datasets,
including ones containing over a million points, scaling full-rank OT to
problems previously beyond Sinkhorn's reach.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:00:12 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Halmos",
"Peter",
""
],
[
"Gold",
"Julian",
""
],
[
"Liu",
"Xinhao",
""
],
[
"Raphael",
"Benjamin J.",
""
]
]
| TITLE: Hierarchical Refinement: Optimal Transport to Infinity and Beyond
ABSTRACT: Optimal transport (OT) has enjoyed great success in machine-learning as a
principled way to align datasets via a least-cost correspondence. This success
was driven in large part by the runtime efficiency of the Sinkhorn algorithm
[Cuturi 2013], which computes a coupling between points from two datasets.
However, Sinkhorn has quadratic space complexity in the number of points,
limiting the scalability to larger datasets. Low-rank OT achieves linear-space
complexity, but by definition, cannot compute a one-to-one correspondence
between points. When the optimal transport problem is an assignment problem
between datasets then the optimal mapping, known as the Monge map, is
guaranteed to be a bijection. In this setting, we show that the factors of an
optimal low-rank coupling co-cluster each point with its image under the Monge
map. We leverage this invariant to derive an algorithm, Hierarchical Refinement
(HiRef), that dynamically constructs a multiscale partition of a dataset using
low-rank OT subproblems, culminating in a bijective coupling. Hierarchical
Refinement uses linear space and has log-linear runtime, retaining the space
advantage of low-rank OT while overcoming its limited resolution. We
demonstrate the advantages of Hierarchical Refinement on several datasets,
including ones containing over a million points, scaling full-rank OT to
problems previously beyond Sinkhorn's reach.
| no_new_dataset | 0.949902 |
2503.03031 | Ghazal Ghajari | Ghazal Ghajari, Ashutosh Ghimire, Elaheh Ghajari, Fathi Amsaad | Network Anomaly Detection for IoT Using Hyperdimensional Computing on
NSL-KDD | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of IoT devices, ensuring robust network security has
become a critical challenge. Traditional intrusion detection systems (IDSs)
often face limitations in detecting sophisticated attacks within
high-dimensional and complex data environments. This paper presents a novel
approach to network anomaly detection using hyperdimensional computing (HDC)
techniques, specifically applied to the NSL-KDD dataset. The proposed method
leverages the efficiency of HDC in processing large-scale data to identify both
known and unknown attack patterns. The model achieved an accuracy of 91.55% on
the KDDTrain+ subset, outperforming traditional approaches. These comparative
evaluations underscore the model's superior performance, highlighting its
potential in advancing anomaly detection for IoT networks and contributing to
more secure and intelligent cybersecurity solutions.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:19:26 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ghajari",
"Ghazal",
""
],
[
"Ghimire",
"Ashutosh",
""
],
[
"Ghajari",
"Elaheh",
""
],
[
"Amsaad",
"Fathi",
""
]
]
| TITLE: Network Anomaly Detection for IoT Using Hyperdimensional Computing on
NSL-KDD
ABSTRACT: With the rapid growth of IoT devices, ensuring robust network security has
become a critical challenge. Traditional intrusion detection systems (IDSs)
often face limitations in detecting sophisticated attacks within
high-dimensional and complex data environments. This paper presents a novel
approach to network anomaly detection using hyperdimensional computing (HDC)
techniques, specifically applied to the NSL-KDD dataset. The proposed method
leverages the efficiency of HDC in processing large-scale data to identify both
known and unknown attack patterns. The model achieved an accuracy of 91.55% on
the KDDTrain+ subset, outperforming traditional approaches. These comparative
evaluations underscore the model's superior performance, highlighting its
potential in advancing anomaly detection for IoT networks and contributing to
more secure and intelligent cybersecurity solutions.
| no_new_dataset | 0.942401 |
2503.03032 | Andrea Seveso | Samir Abdaljalil, Filippo Pallucchini, Andrea Seveso, Hasan Kurban,
Fabio Mercorio, Erchin Serpedin | SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment
and Hallucination Mitigation in LLMs | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Despite the state-of-the-art performance of Large Language Models (LLMs),
these models often suffer from hallucinations, which can undermine their
performance in critical applications. In this work, we propose SAFE, a novel
method for detecting and mitigating hallucinations by leveraging Sparse
Autoencoders (SAEs). While hallucination detection techniques and SAEs have
been explored independently, their synergistic application in a comprehensive
system, particularly for hallucination-aware query enrichment, has not been
fully investigated. To validate the effectiveness of SAFE, we evaluate it on
two models with available SAEs across three diverse cross-domain datasets
designed to assess hallucination problems. Empirical results demonstrate that
SAFE consistently improves query generation accuracy and mitigates
hallucinations across all datasets, achieving accuracy improvements of up to
29.45%.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:19:52 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Abdaljalil",
"Samir",
""
],
[
"Pallucchini",
"Filippo",
""
],
[
"Seveso",
"Andrea",
""
],
[
"Kurban",
"Hasan",
""
],
[
"Mercorio",
"Fabio",
""
],
[
"Serpedin",
"Erchin",
""
]
]
| TITLE: SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment
and Hallucination Mitigation in LLMs
ABSTRACT: Despite the state-of-the-art performance of Large Language Models (LLMs),
these models often suffer from hallucinations, which can undermine their
performance in critical applications. In this work, we propose SAFE, a novel
method for detecting and mitigating hallucinations by leveraging Sparse
Autoencoders (SAEs). While hallucination detection techniques and SAEs have
been explored independently, their synergistic application in a comprehensive
system, particularly for hallucination-aware query enrichment, has not been
fully investigated. To validate the effectiveness of SAFE, we evaluate it on
two models with available SAEs across three diverse cross-domain datasets
designed to assess hallucination problems. Empirical results demonstrate that
SAFE consistently improves query generation accuracy and mitigates
hallucinations across all datasets, achieving accuracy improvements of up to
29.45%.
| no_new_dataset | 0.942188 |
2503.03037 | Ghazal Ghajari | Ghazal Ghajari, Elaheh Ghajari, Hossein Mohammadi, Fathi Amsaad | Intrusion Detection in IoT Networks Using Hyperdimensional Computing: A
Case Study on the NSL-KDD Dataset | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid expansion of Internet of Things (IoT) networks has introduced new
security challenges, necessitating efficient and reliable methods for intrusion
detection. In this study, a detection framework based on hyperdimensional
computing (HDC) is proposed to identify and classify network intrusions using
the NSL-KDD dataset, a standard benchmark for intrusion detection systems. By
leveraging the capabilities of HDC, including high-dimensional representation
and efficient computation, the proposed approach effectively distinguishes
various attack categories such as DoS, probe, R2L, and U2R, while accurately
identifying normal traffic patterns. Comprehensive evaluations demonstrate that
the proposed method achieves an accuracy of 99.54%, significantly outperforming
conventional intrusion detection techniques, making it a promising solution for
IoT network security. This work emphasizes the critical role of robust and
precise intrusion detection in safeguarding IoT systems against evolving cyber
threats.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:33:37 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ghajari",
"Ghazal",
""
],
[
"Ghajari",
"Elaheh",
""
],
[
"Mohammadi",
"Hossein",
""
],
[
"Amsaad",
"Fathi",
""
]
]
| TITLE: Intrusion Detection in IoT Networks Using Hyperdimensional Computing: A
Case Study on the NSL-KDD Dataset
ABSTRACT: The rapid expansion of Internet of Things (IoT) networks has introduced new
security challenges, necessitating efficient and reliable methods for intrusion
detection. In this study, a detection framework based on hyperdimensional
computing (HDC) is proposed to identify and classify network intrusions using
the NSL-KDD dataset, a standard benchmark for intrusion detection systems. By
leveraging the capabilities of HDC, including high-dimensional representation
and efficient computation, the proposed approach effectively distinguishes
various attack categories such as DoS, probe, R2L, and U2R, while accurately
identifying normal traffic patterns. Comprehensive evaluations demonstrate that
the proposed method achieves an accuracy of 99.54%, significantly outperforming
conventional intrusion detection techniques, making it a promising solution for
IoT network security. This work emphasizes the critical role of robust and
precise intrusion detection in safeguarding IoT systems against evolving cyber
threats.
| no_new_dataset | 0.943608 |
2503.03039 | Erfan Entezami | Erfan Entezami, Ali Naseh | LLM Misalignment via Adversarial RLHF Platforms | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has shown remarkable performance in aligning language
models with human preferences, leading to the rise of attention towards
developing RLHF platforms. These platforms enable users to fine-tune models
without requiring any expertise in developing complex machine learning
algorithms. While these platforms offer useful features such as reward modeling
and RLHF fine-tuning, their security and reliability remain largely unexplored.
Given the growing adoption of RLHF and open-source RLHF frameworks, we
investigate the trustworthiness of these systems and their potential impact on
behavior of LLMs. In this paper, we present an attack targeting publicly
available RLHF tools. In our proposed attack, an adversarial RLHF platform
corrupts the LLM alignment process by selectively manipulating data samples in
the preference dataset. In this scenario, when a user's task aligns with the
attacker's objective, the platform manipulates a subset of the preference
dataset that contains samples related to the attacker's target. This
manipulation results in a corrupted reward model, which ultimately leads to the
misalignment of the language model. Our results demonstrate that such an attack
can effectively steer LLMs toward undesirable behaviors within the targeted
domains. Our work highlights the critical need to explore the vulnerabilities
of RLHF platforms and their potential to cause misalignment in LLMs during the
RLHF fine-tuning process.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:38:54 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Entezami",
"Erfan",
""
],
[
"Naseh",
"Ali",
""
]
]
| TITLE: LLM Misalignment via Adversarial RLHF Platforms
ABSTRACT: Reinforcement learning has shown remarkable performance in aligning language
models with human preferences, leading to the rise of attention towards
developing RLHF platforms. These platforms enable users to fine-tune models
without requiring any expertise in developing complex machine learning
algorithms. While these platforms offer useful features such as reward modeling
and RLHF fine-tuning, their security and reliability remain largely unexplored.
Given the growing adoption of RLHF and open-source RLHF frameworks, we
investigate the trustworthiness of these systems and their potential impact on
behavior of LLMs. In this paper, we present an attack targeting publicly
available RLHF tools. In our proposed attack, an adversarial RLHF platform
corrupts the LLM alignment process by selectively manipulating data samples in
the preference dataset. In this scenario, when a user's task aligns with the
attacker's objective, the platform manipulates a subset of the preference
dataset that contains samples related to the attacker's target. This
manipulation results in a corrupted reward model, which ultimately leads to the
misalignment of the language model. Our results demonstrate that such an attack
can effectively steer LLMs toward undesirable behaviors within the targeted
domains. Our work highlights the critical need to explore the vulnerabilities
of RLHF platforms and their potential to cause misalignment in LLMs during the
RLHF fine-tuning process.
| no_new_dataset | 0.942612 |
2503.03042 | Yan Han | Yan Han, Soumava Kumar Roy, Mehrtash Harandi, Lars Petersson | Learning from Noisy Labels with Contrastive Co-Transformer | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Deep learning with noisy labels is an interesting challenge in weakly
supervised learning. Despite their significant learning capacity, CNNs have a
tendency to overfit in the presence of samples with noisy labels. Alleviating
this issue, the well known Co-Training framework is used as a fundamental basis
for our work. In this paper, we introduce a Contrastive Co-Transformer
framework, which is simple and fast, yet able to improve the performance by a
large margin compared to the state-of-the-art approaches. We argue the
robustness of transformers when dealing with label noise. Our Contrastive
Co-Transformer approach is able to utilize all samples in the dataset,
irrespective of whether they are clean or noisy. Transformers are trained by a
combination of contrastive loss and classification loss. Extensive experimental
results on corrupted data from six standard benchmark datasets including
Clothing1M, demonstrate that our Contrastive Co-Transformer is superior to
existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:48:43 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Han",
"Yan",
""
],
[
"Roy",
"Soumava Kumar",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Petersson",
"Lars",
""
]
]
| TITLE: Learning from Noisy Labels with Contrastive Co-Transformer
ABSTRACT: Deep learning with noisy labels is an interesting challenge in weakly
supervised learning. Despite their significant learning capacity, CNNs have a
tendency to overfit in the presence of samples with noisy labels. Alleviating
this issue, the well known Co-Training framework is used as a fundamental basis
for our work. In this paper, we introduce a Contrastive Co-Transformer
framework, which is simple and fast, yet able to improve the performance by a
large margin compared to the state-of-the-art approaches. We argue the
robustness of transformers when dealing with label noise. Our Contrastive
Co-Transformer approach is able to utilize all samples in the dataset,
irrespective of whether they are clean or noisy. Transformers are trained by a
combination of contrastive loss and classification loss. Extensive experimental
results on corrupted data from six standard benchmark datasets including
Clothing1M, demonstrate that our Contrastive Co-Transformer is superior to
existing state-of-the-art methods.
| no_new_dataset | 0.947381 |
2503.03046 | Xihan Qin | Xihan Qin, Li Liao | Graph Transformer with Disease Subgraph Positional Encoding for Improved
Comorbidity Prediction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comorbidity, the co-occurrence of multiple medical conditions in a single
patient, profoundly impacts disease management and outcomes. Understanding
these complex interconnections is crucial, especially in contexts where
comorbidities exacerbate outcomes. Leveraging insights from the human
interactome (HI) and advancements in graph-based methodologies, this study
introduces Transformer with Subgraph Positional Encoding (TSPE) for disease
comorbidity prediction. Inspired by Biologically Supervised Embedding (BSE),
TSPE employs Transformer's attention mechanisms and Subgraph Positional
Encoding (SPE) to capture interactions between nodes and disease associations.
Our proposed SPE proves more effective than LPE, as used in Dwivedi et al.'s
Graph Transformer, underscoring the importance of integrating clustering and
disease-specific information for improved predictive accuracy. Evaluated on
real clinical benchmark datasets (RR0 and RR1), TSPE demonstrates substantial
performance enhancements over the state-of-the-art method, achieving up to
28.24% higher ROC AUC and 4.93% higher accuracy. This method shows promise for
adaptation to other complex graph-based tasks and applications. The source code
is available in the GitHub repository at:
https://github.com/xihan-qin/TSPE-GraphTransformer.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 22:59:34 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Qin",
"Xihan",
""
],
[
"Liao",
"Li",
""
]
]
| TITLE: Graph Transformer with Disease Subgraph Positional Encoding for Improved
Comorbidity Prediction
ABSTRACT: Comorbidity, the co-occurrence of multiple medical conditions in a single
patient, profoundly impacts disease management and outcomes. Understanding
these complex interconnections is crucial, especially in contexts where
comorbidities exacerbate outcomes. Leveraging insights from the human
interactome (HI) and advancements in graph-based methodologies, this study
introduces Transformer with Subgraph Positional Encoding (TSPE) for disease
comorbidity prediction. Inspired by Biologically Supervised Embedding (BSE),
TSPE employs Transformer's attention mechanisms and Subgraph Positional
Encoding (SPE) to capture interactions between nodes and disease associations.
Our proposed SPE proves more effective than LPE, as used in Dwivedi et al.'s
Graph Transformer, underscoring the importance of integrating clustering and
disease-specific information for improved predictive accuracy. Evaluated on
real clinical benchmark datasets (RR0 and RR1), TSPE demonstrates substantial
performance enhancements over the state-of-the-art method, achieving up to
28.24% higher ROC AUC and 4.93% higher accuracy. This method shows promise for
adaptation to other complex graph-based tasks and applications. The source code
is available in the GitHub repository at:
https://github.com/xihan-qin/TSPE-GraphTransformer.
| no_new_dataset | 0.947039 |
2503.03056 | Ikechukwu Uchendu | Ikechukwu Uchendu, Jason Jabbour, Korneel Van den Berghe, Joel
Runevic, Matthew Stewart, Jeffrey Ma, Srivatsan Krishnan, Izzeddin Gur,
Austin Huang, Colton Bishop, Paige Bailey, Wenjie Jiang, Ebrahim M. Songhori,
Sergio Guadarrama, Jie Tan, Jordan K. Terry, Aleksandra Faust, Vijay Janapa
Reddi | A2Perf: Real-World Autonomous Agents Benchmark | 32 pages, 12 figures, preprint | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Autonomous agents and systems cover a number of application areas, from
robotics and digital assistants to combinatorial optimization, all sharing
common, unresolved research challenges. It is not sufficient for agents to
merely solve a given task; they must generalize to out-of-distribution tasks,
perform reliably, and use hardware resources efficiently during training and
inference, among other requirements. Several methods, such as reinforcement
learning and imitation learning, are commonly used to tackle these problems,
each with different trade-offs. However, there is a lack of benchmarking suites
that define the environments, datasets, and metrics which can be used to
provide a meaningful way for the community to compare progress on applying
these methods to real-world problems. We introduce A2Perf--a benchmark with
three environments that closely resemble real-world domains: computer chip
floorplanning, web navigation, and quadruped locomotion. A2Perf provides
metrics that track task performance, generalization, system resource
efficiency, and reliability, which are all critical to real-world applications.
Using A2Perf, we demonstrate that web navigation agents can achieve latencies
comparable to human reaction times on consumer hardware, reveal reliability
trade-offs between algorithms for quadruped locomotion, and quantify the energy
costs of different learning approaches for computer chip-design. In addition,
we propose a data cost metric to account for the cost incurred acquiring
offline data for imitation learning and hybrid algorithms, which allows us to
better compare these approaches. A2Perf also contains several standard
baselines, enabling apples-to-apples comparisons across methods and
facilitating progress in real-world autonomy. As an open-source benchmark,
A2Perf is designed to remain accessible, up-to-date, and useful to the research
community over the long term.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 23:41:02 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Uchendu",
"Ikechukwu",
""
],
[
"Jabbour",
"Jason",
""
],
[
"Berghe",
"Korneel Van den",
""
],
[
"Runevic",
"Joel",
""
],
[
"Stewart",
"Matthew",
""
],
[
"Ma",
"Jeffrey",
""
],
[
"Krishnan",
"Srivatsan",
""
],
[
"Gur",
"Izzeddin",
""
],
[
"Huang",
"Austin",
""
],
[
"Bishop",
"Colton",
""
],
[
"Bailey",
"Paige",
""
],
[
"Jiang",
"Wenjie",
""
],
[
"Songhori",
"Ebrahim M.",
""
],
[
"Guadarrama",
"Sergio",
""
],
[
"Tan",
"Jie",
""
],
[
"Terry",
"Jordan K.",
""
],
[
"Faust",
"Aleksandra",
""
],
[
"Reddi",
"Vijay Janapa",
""
]
]
| TITLE: A2Perf: Real-World Autonomous Agents Benchmark
ABSTRACT: Autonomous agents and systems cover a number of application areas, from
robotics and digital assistants to combinatorial optimization, all sharing
common, unresolved research challenges. It is not sufficient for agents to
merely solve a given task; they must generalize to out-of-distribution tasks,
perform reliably, and use hardware resources efficiently during training and
inference, among other requirements. Several methods, such as reinforcement
learning and imitation learning, are commonly used to tackle these problems,
each with different trade-offs. However, there is a lack of benchmarking suites
that define the environments, datasets, and metrics which can be used to
provide a meaningful way for the community to compare progress on applying
these methods to real-world problems. We introduce A2Perf--a benchmark with
three environments that closely resemble real-world domains: computer chip
floorplanning, web navigation, and quadruped locomotion. A2Perf provides
metrics that track task performance, generalization, system resource
efficiency, and reliability, which are all critical to real-world applications.
Using A2Perf, we demonstrate that web navigation agents can achieve latencies
comparable to human reaction times on consumer hardware, reveal reliability
trade-offs between algorithms for quadruped locomotion, and quantify the energy
costs of different learning approaches for computer chip-design. In addition,
we propose a data cost metric to account for the cost incurred acquiring
offline data for imitation learning and hybrid algorithms, which allows us to
better compare these approaches. A2Perf also contains several standard
baselines, enabling apples-to-apples comparisons across methods and
facilitating progress in real-world autonomy. As an open-source benchmark,
A2Perf is designed to remain accessible, up-to-date, and useful to the research
community over the long term.
| no_new_dataset | 0.933975 |
2503.03062 | Zhengyao Gu | Zhengyao Gu, Henry Peng Zou, Yankai Chen, Aiwei Liu, Weizhi Zhang,
Philip S. Yu | Semi-Supervised In-Context Learning: A Baseline Study | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing work in data selection for In-Context Learning (ICL) has
focused on constructing demonstrations from ground truth annotations, with
limited attention given to selecting reliable self-generated annotations. In
this work, we propose a three-step semi-supervised ICL framework: annotation
generation, demonstration selection, and semi-supervised inference. Our
baseline, Naive-SemiICL, which prompts select high-confidence self-generated
demonstrations for ICL prompting, outperforms a 16-shot baseline by an average
of 9.94% across 16 datasets. We further introduce IterPSD, an annotation
approach that refines pseudo-demonstrations iteratively, achieving up to 6.8%
additional gains in classification tasks. Lastly, we reveal a scaling law for
semi-supervised ICL, where models achieve optimal performance with over 1,000
demonstrations.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 23:52:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Gu",
"Zhengyao",
""
],
[
"Zou",
"Henry Peng",
""
],
[
"Chen",
"Yankai",
""
],
[
"Liu",
"Aiwei",
""
],
[
"Zhang",
"Weizhi",
""
],
[
"Yu",
"Philip S.",
""
]
]
| TITLE: Semi-Supervised In-Context Learning: A Baseline Study
ABSTRACT: Most existing work in data selection for In-Context Learning (ICL) has
focused on constructing demonstrations from ground truth annotations, with
limited attention given to selecting reliable self-generated annotations. In
this work, we propose a three-step semi-supervised ICL framework: annotation
generation, demonstration selection, and semi-supervised inference. Our
baseline, Naive-SemiICL, which prompts select high-confidence self-generated
demonstrations for ICL prompting, outperforms a 16-shot baseline by an average
of 9.94% across 16 datasets. We further introduce IterPSD, an annotation
approach that refines pseudo-demonstrations iteratively, achieving up to 6.8%
additional gains in classification tasks. Lastly, we reveal a scaling law for
semi-supervised ICL, where models achieve optimal performance with over 1,000
demonstrations.
| no_new_dataset | 0.948442 |
2503.03084 | Kannan Ashwin Viswanathan | Ashwin Viswanathan Kannan, Johnson P Thomas, Abhimanyu Mukerji | Hopfield Networks Meet Big Data: A Brain-Inspired Deep Learning
Framework for Semantic Data Linking | 7 pages | null | null | null | cs.LG cs.AI cs.DC cs.NE | http://creativecommons.org/licenses/by/4.0/ | The exponential rise in data generation has led to vast, heterogeneous
datasets crucial for predictive analytics and decision-making. Ensuring data
quality and semantic integrity remains a challenge. This paper presents a
brain-inspired distributed cognitive framework that integrates deep learning
with Hopfield networks to identify and link semantically related attributes
across datasets. Modeled on the dual-hemisphere functionality of the human
brain, the right hemisphere assimilates new information while the left
retrieves learned representations for association. Our architecture,
implemented on MapReduce with Hadoop Distributed File System (HDFS), leverages
deep Hopfield networks as an associative memory mechanism to enhance recall of
frequently co-occurring attributes and dynamically adjust relationships based
on evolving data patterns. Experiments show that associative imprints in
Hopfield memory are reinforced over time, ensuring linked datasets remain
contextually meaningful and improving data disambiguation and integration
accuracy. Our results indicate that combining deep Hopfield networks with
distributed cognitive processing offers a scalable, biologically inspired
approach to managing complex data relationships in large-scale environments.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 00:53:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Kannan",
"Ashwin Viswanathan",
""
],
[
"Thomas",
"Johnson P",
""
],
[
"Mukerji",
"Abhimanyu",
""
]
]
| TITLE: Hopfield Networks Meet Big Data: A Brain-Inspired Deep Learning
Framework for Semantic Data Linking
ABSTRACT: The exponential rise in data generation has led to vast, heterogeneous
datasets crucial for predictive analytics and decision-making. Ensuring data
quality and semantic integrity remains a challenge. This paper presents a
brain-inspired distributed cognitive framework that integrates deep learning
with Hopfield networks to identify and link semantically related attributes
across datasets. Modeled on the dual-hemisphere functionality of the human
brain, the right hemisphere assimilates new information while the left
retrieves learned representations for association. Our architecture,
implemented on MapReduce with Hadoop Distributed File System (HDFS), leverages
deep Hopfield networks as an associative memory mechanism to enhance recall of
frequently co-occurring attributes and dynamically adjust relationships based
on evolving data patterns. Experiments show that associative imprints in
Hopfield memory are reinforced over time, ensuring linked datasets remain
contextually meaningful and improving data disambiguation and integration
accuracy. Our results indicate that combining deep Hopfield networks with
distributed cognitive processing offers a scalable, biologically inspired
approach to managing complex data relationships in large-scale environments.
| no_new_dataset | 0.947478 |
2503.03100 | Arpan Kusari | Asma A. Almutairi, David J. LeBlanc, Arpan Kusari | Car-STAGE: Automated framework for large-scale high-dimensional
simulated time-series data generation based on user-defined criteria | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Generating large-scale sensing datasets through photo-realistic simulation is
an important aspect of many robotics applications such as autonomous driving.
In this paper, we consider the problem of synchronous data collection from the
open-source CARLA simulator using multiple sensors attached to vehicle based on
user-defined criteria. We propose a novel, one-step framework that we refer to
as Car-STAGE, based on CARLA simulator, to generate data using a graphical user
interface (GUI) defining configuration parameters to data collection without
any user intervention. This framework can utilize the user-defined
configuration parameters such as choice of maps, number and configurations of
sensors, environmental and lighting conditions etc. to run the simulation in
the background, collecting high-dimensional sensor data from diverse sensors
such as RGB Camera, LiDAR, Radar, Depth Camera, IMU Sensor, GNSS Sensor,
Semantic Segmentation Camera, Instance Segmentation Camera, and Optical Flow
Camera along with the ground-truths of the individual actors and storing the
sensor data as well as ground-truth labels in a local or cloud-based database.
The framework uses multiple threads where a main thread runs the server, a
worker thread deals with queue and frame number and the rest of the threads
processes the sensor data. The other way we derive speed up over the native
implementation is by memory mapping the raw binary data into the disk and then
converting the data into known formats at the end of data collection. We show
that using these techniques, we gain a significant speed up over frames, under
an increasing set of sensors and over the number of spawned objects.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 01:32:56 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Almutairi",
"Asma A.",
""
],
[
"LeBlanc",
"David J.",
""
],
[
"Kusari",
"Arpan",
""
]
]
| TITLE: Car-STAGE: Automated framework for large-scale high-dimensional
simulated time-series data generation based on user-defined criteria
ABSTRACT: Generating large-scale sensing datasets through photo-realistic simulation is
an important aspect of many robotics applications such as autonomous driving.
In this paper, we consider the problem of synchronous data collection from the
open-source CARLA simulator using multiple sensors attached to vehicle based on
user-defined criteria. We propose a novel, one-step framework that we refer to
as Car-STAGE, based on CARLA simulator, to generate data using a graphical user
interface (GUI) defining configuration parameters to data collection without
any user intervention. This framework can utilize the user-defined
configuration parameters such as choice of maps, number and configurations of
sensors, environmental and lighting conditions etc. to run the simulation in
the background, collecting high-dimensional sensor data from diverse sensors
such as RGB Camera, LiDAR, Radar, Depth Camera, IMU Sensor, GNSS Sensor,
Semantic Segmentation Camera, Instance Segmentation Camera, and Optical Flow
Camera along with the ground-truths of the individual actors and storing the
sensor data as well as ground-truth labels in a local or cloud-based database.
The framework uses multiple threads where a main thread runs the server, a
worker thread deals with queue and frame number and the rest of the threads
processes the sensor data. The other way we derive speed up over the native
implementation is by memory mapping the raw binary data into the disk and then
converting the data into known formats at the end of data collection. We show
that using these techniques, we gain a significant speed up over frames, under
an increasing set of sensors and over the number of spawned objects.
| no_new_dataset | 0.95388 |
2503.03103 | Chang Sun | Chang Sun, Jennifer Ngadiuba, Maurizio Pierini, Maria Spiropulu | Fast Jet Tagging with MLP-Mixers on FPGAs | null | null | null | null | physics.ins-det cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the innovative use of MLP-Mixer models for real-time jet tagging
and establish their feasibility on resource-constrained hardware like FPGAs.
MLP-Mixers excel in processing sequences of jet constituents, achieving
state-of-the-art performance on datasets mimicking Large Hadron Collider
conditions. By using advanced optimization techniques such as High-Granularity
Quantization and Distributed Arithmetic, we achieve unprecedented efficiency.
These models match or surpass the accuracy of previous architectures, reduce
hardware resource usage by up to 97%, double the throughput, and half the
latency. Additionally, non-permutation-invariant architectures enable smart
feature prioritization and efficient FPGA deployment, setting a new benchmark
for machine learning in real-time data processing at particle colliders.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 01:37:47 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Sun",
"Chang",
""
],
[
"Ngadiuba",
"Jennifer",
""
],
[
"Pierini",
"Maurizio",
""
],
[
"Spiropulu",
"Maria",
""
]
]
| TITLE: Fast Jet Tagging with MLP-Mixers on FPGAs
ABSTRACT: We explore the innovative use of MLP-Mixer models for real-time jet tagging
and establish their feasibility on resource-constrained hardware like FPGAs.
MLP-Mixers excel in processing sequences of jet constituents, achieving
state-of-the-art performance on datasets mimicking Large Hadron Collider
conditions. By using advanced optimization techniques such as High-Granularity
Quantization and Distributed Arithmetic, we achieve unprecedented efficiency.
These models match or surpass the accuracy of previous architectures, reduce
hardware resource usage by up to 97%, double the throughput, and half the
latency. Additionally, non-permutation-invariant architectures enable smart
feature prioritization and efficient FPGA deployment, setting a new benchmark
for machine learning in real-time data processing at particle colliders.
| no_new_dataset | 0.949763 |
2503.03107 | Biwei Cao | Biwei Cao, Qihang Wu, Jiuxin Cao, Bo Liu, Jie Gui | External Reliable Information-enhanced Multimodal Contrastive Learning
for Fake News Detection | accepted by AAAI'25 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of the Internet, the information dissemination
paradigm has changed and the efficiency has been improved greatly. While this
also brings the quick spread of fake news and leads to negative impacts on
cyberspace. Currently, the information presentation formats have evolved
gradually, with the news formats shifting from texts to multimodal contents. As
a result, detecting multimodal fake news has become one of the research
hotspots. However, multimodal fake news detection research field still faces
two main challenges: the inability to fully and effectively utilize multimodal
information for detection, and the low credibility or static nature of the
introduced external information, which limits dynamic updates. To bridge the
gaps, we propose ERIC-FND, an external reliable information-enhanced multimodal
contrastive learning framework for fake news detection. ERIC-FND strengthens
the representation of news contents by entity-enriched external information
enhancement method. It also enriches the multimodal news information via
multimodal semantic interaction method where the multimodal constrative
learning is employed to make different modality representations learn from each
other. Moreover, an adaptive fusion method is taken to integrate the news
representations from different dimensions for the eventual classification.
Experiments are done on two commonly used datasets in different languages, X
(Twitter) and Weibo. Experiment results demonstrate that our proposed model
ERIC-FND outperforms existing state-of-the-art fake news detection methods
under the same settings.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:07:38 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cao",
"Biwei",
""
],
[
"Wu",
"Qihang",
""
],
[
"Cao",
"Jiuxin",
""
],
[
"Liu",
"Bo",
""
],
[
"Gui",
"Jie",
""
]
]
| TITLE: External Reliable Information-enhanced Multimodal Contrastive Learning
for Fake News Detection
ABSTRACT: With the rapid development of the Internet, the information dissemination
paradigm has changed and the efficiency has been improved greatly. While this
also brings the quick spread of fake news and leads to negative impacts on
cyberspace. Currently, the information presentation formats have evolved
gradually, with the news formats shifting from texts to multimodal contents. As
a result, detecting multimodal fake news has become one of the research
hotspots. However, multimodal fake news detection research field still faces
two main challenges: the inability to fully and effectively utilize multimodal
information for detection, and the low credibility or static nature of the
introduced external information, which limits dynamic updates. To bridge the
gaps, we propose ERIC-FND, an external reliable information-enhanced multimodal
contrastive learning framework for fake news detection. ERIC-FND strengthens
the representation of news contents by entity-enriched external information
enhancement method. It also enriches the multimodal news information via
multimodal semantic interaction method where the multimodal constrative
learning is employed to make different modality representations learn from each
other. Moreover, an adaptive fusion method is taken to integrate the news
representations from different dimensions for the eventual classification.
Experiments are done on two commonly used datasets in different languages, X
(Twitter) and Weibo. Experiment results demonstrate that our proposed model
ERIC-FND outperforms existing state-of-the-art fake news detection methods
under the same settings.
| no_new_dataset | 0.950041 |
2503.03108 | Cheng Wenrui | Wenrui Cheng, Tiantian Zhu, Chunlin Xiong, Haofei Sun, Zijun Wang,
Shunan Jing, Mingqi Lv, Yan Chen | SoK: Knowledge is All You Need: Last Mile Delivery for Automated
Provenance-based Intrusion Detection with LLMs | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, provenance-based intrusion detection systems (PIDSes) have been
widely proposed for endpoint threat analysis. However, due to the lack of
systematic integration and utilization of knowledge, existing PIDSes still
require significant manual intervention for practical deployment, making full
automation challenging. This paper presents a disruptive innovation by
categorizing PIDSes according to the types of knowledge they utilize. In
response to the prevalent issue of ``knowledge silos problem'' in existing
research, we introduce a novel knowledge-driven provenance-based intrusion
detection framework, powered by large language models (LLMs). We also present
OmniSec, a best practice system built upon this framework. By integrating
attack representation knowledge, threat intelligence knowledge, and benign
behavior knowledge, OmniSec outperforms the state-of-the-art approaches on
public benchmark datasets. OmniSec is available online at
https://anonymous.4open.science/r/PIDS-with-LLM-613B.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:08:12 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cheng",
"Wenrui",
""
],
[
"Zhu",
"Tiantian",
""
],
[
"Xiong",
"Chunlin",
""
],
[
"Sun",
"Haofei",
""
],
[
"Wang",
"Zijun",
""
],
[
"Jing",
"Shunan",
""
],
[
"Lv",
"Mingqi",
""
],
[
"Chen",
"Yan",
""
]
]
| TITLE: SoK: Knowledge is All You Need: Last Mile Delivery for Automated
Provenance-based Intrusion Detection with LLMs
ABSTRACT: Recently, provenance-based intrusion detection systems (PIDSes) have been
widely proposed for endpoint threat analysis. However, due to the lack of
systematic integration and utilization of knowledge, existing PIDSes still
require significant manual intervention for practical deployment, making full
automation challenging. This paper presents a disruptive innovation by
categorizing PIDSes according to the types of knowledge they utilize. In
response to the prevalent issue of ``knowledge silos problem'' in existing
research, we introduce a novel knowledge-driven provenance-based intrusion
detection framework, powered by large language models (LLMs). We also present
OmniSec, a best practice system built upon this framework. By integrating
attack representation knowledge, threat intelligence knowledge, and benign
behavior knowledge, OmniSec outperforms the state-of-the-art approaches on
public benchmark datasets. OmniSec is available online at
https://anonymous.4open.science/r/PIDS-with-LLM-613B.
| no_new_dataset | 0.942135 |
2503.03111 | Wanke Xia | Wanke Xia, Ruoxin Peng, Haoqi Chu, Xinlei Zhu | An Improved Pure Fully Connected Neural Network for Rice Grain
Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Rice is a staple food for a significant portion of the world's population,
providing essential nutrients and serving as a versatile in-gredient in a wide
range of culinary traditions. Recently, the use of deep learning has enabled
automated classification of rice, im-proving accuracy and efficiency. However,
classical models based on first-stage training may face difficulties in
distinguishing between rice varieties with similar external characteristics,
thus leading to misclassifications. Considering the transparency and
feasibility of model, we selected and gradually improved pure fully connected
neural network to achieve classification of rice grain. The dataset we used
contains both global and domestic rice images obtained from websites and
laboratories respectively. First, the training mode was changed from one-stage
training to two-stage training, which significantly contributes to
distinguishing two similar types of rice. Secondly, the preprocessing method
was changed from random tilting to horizontal or vertical position cor-rection.
After those two enhancements, the accuracy of our model increased notably from
97% to 99%. In summary, two subtle methods proposed in this study can
remarkably enhance the classification ability of deep learning models in terms
of the classification of rice grain.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:10:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Xia",
"Wanke",
""
],
[
"Peng",
"Ruoxin",
""
],
[
"Chu",
"Haoqi",
""
],
[
"Zhu",
"Xinlei",
""
]
]
| TITLE: An Improved Pure Fully Connected Neural Network for Rice Grain
Classification
ABSTRACT: Rice is a staple food for a significant portion of the world's population,
providing essential nutrients and serving as a versatile in-gredient in a wide
range of culinary traditions. Recently, the use of deep learning has enabled
automated classification of rice, im-proving accuracy and efficiency. However,
classical models based on first-stage training may face difficulties in
distinguishing between rice varieties with similar external characteristics,
thus leading to misclassifications. Considering the transparency and
feasibility of model, we selected and gradually improved pure fully connected
neural network to achieve classification of rice grain. The dataset we used
contains both global and domestic rice images obtained from websites and
laboratories respectively. First, the training mode was changed from one-stage
training to two-stage training, which significantly contributes to
distinguishing two similar types of rice. Secondly, the preprocessing method
was changed from random tilting to horizontal or vertical position cor-rection.
After those two enhancements, the accuracy of our model increased notably from
97% to 99%. In summary, two subtle methods proposed in this study can
remarkably enhance the classification ability of deep learning models in terms
of the classification of rice grain.
| no_new_dataset | 0.811153 |
2503.03115 | Kun Yang | Kun Yang, Yuxiang Liu, Zeyu Cui, Yu Liu, Maojun Zhang, Shen Yan, Qing
Wang | NTR-Gaussian: Nighttime Dynamic Thermal Reconstruction with 4D Gaussian
Splatting Based on Thermodynamics | IEEE Conference on Computer Vision and Pattern Recognition 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thermal infrared imaging offers the advantage of all-weather capability,
enabling non-intrusive measurement of an object's surface temperature.
Consequently, thermal infrared images are employed to reconstruct 3D models
that accurately reflect the temperature distribution of a scene, aiding in
applications such as building monitoring and energy management. However,
existing approaches predominantly focus on static 3D reconstruction for a
single time period, overlooking the impact of environmental factors on thermal
radiation and failing to predict or analyze temperature variations over time.
To address these challenges, we propose the NTR-Gaussian method, which treats
temperature as a form of thermal radiation, incorporating elements like
convective heat transfer and radiative heat dissipation. Our approach utilizes
neural networks to predict thermodynamic parameters such as emissivity,
convective heat transfer coefficient, and heat capacity. By integrating these
predictions, we can accurately forecast thermal temperatures at various times
throughout a nighttime scene. Furthermore, we introduce a dynamic dataset
specifically for nighttime thermal imagery. Extensive experiments and
evaluations demonstrate that NTR-Gaussian significantly outperforms comparison
methods in thermal reconstruction, achieving a predicted temperature error
within 1 degree Celsius.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 02:24:13 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Kun",
""
],
[
"Liu",
"Yuxiang",
""
],
[
"Cui",
"Zeyu",
""
],
[
"Liu",
"Yu",
""
],
[
"Zhang",
"Maojun",
""
],
[
"Yan",
"Shen",
""
],
[
"Wang",
"Qing",
""
]
]
| TITLE: NTR-Gaussian: Nighttime Dynamic Thermal Reconstruction with 4D Gaussian
Splatting Based on Thermodynamics
ABSTRACT: Thermal infrared imaging offers the advantage of all-weather capability,
enabling non-intrusive measurement of an object's surface temperature.
Consequently, thermal infrared images are employed to reconstruct 3D models
that accurately reflect the temperature distribution of a scene, aiding in
applications such as building monitoring and energy management. However,
existing approaches predominantly focus on static 3D reconstruction for a
single time period, overlooking the impact of environmental factors on thermal
radiation and failing to predict or analyze temperature variations over time.
To address these challenges, we propose the NTR-Gaussian method, which treats
temperature as a form of thermal radiation, incorporating elements like
convective heat transfer and radiative heat dissipation. Our approach utilizes
neural networks to predict thermodynamic parameters such as emissivity,
convective heat transfer coefficient, and heat capacity. By integrating these
predictions, we can accurately forecast thermal temperatures at various times
throughout a nighttime scene. Furthermore, we introduce a dynamic dataset
specifically for nighttime thermal imagery. Extensive experiments and
evaluations demonstrate that NTR-Gaussian significantly outperforms comparison
methods in thermal reconstruction, achieving a predicted temperature error
within 1 degree Celsius.
| no_new_dataset | 0.87397 |
2503.03132 | Awais Ahmed Nizamani | Awais Nizamani, Hamid Laga, Guanjin Wang, Farid Boussaid, Mohammed
Bennamoun, Anuj Srivastava | Dynamic Neural Surfaces for Elastic 4D Shape Representation and Analysis | 22 pages, 23 figures, conference paper | CVPR 2025 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose a novel framework for the statistical analysis of genus-zero 4D
surfaces, i.e., 3D surfaces that deform and evolve over time. This problem is
particularly challenging due to the arbitrary parameterizations of these
surfaces and their varying deformation speeds, necessitating effective
spatiotemporal registration. Traditionally, 4D surfaces are discretized, in
space and time, before computing their spatiotemporal registrations, geodesics,
and statistics. However, this approach may result in suboptimal solutions and,
as we demonstrate in this paper, is not necessary. In contrast, we treat 4D
surfaces as continuous functions in both space and time. We introduce Dynamic
Spherical Neural Surfaces (D-SNS), an efficient smooth and continuous
spatiotemporal representation for genus-0 4D surfaces. We then demonstrate how
to perform core 4D shape analysis tasks such as spatiotemporal registration,
geodesics computation, and mean 4D shape estimation, directly on these
continuous representations without upfront discretization and meshing. By
integrating neural representations with classical Riemannian geometry and
statistical shape analysis techniques, we provide the building blocks for
enabling full functional shape analysis. We demonstrate the efficiency of the
framework on 4D human and face datasets. The source code and additional results
are available at https://4d-dsns.github.io/DSNS/.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 03:02:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Nizamani",
"Awais",
""
],
[
"Laga",
"Hamid",
""
],
[
"Wang",
"Guanjin",
""
],
[
"Boussaid",
"Farid",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Srivastava",
"Anuj",
""
]
]
| TITLE: Dynamic Neural Surfaces for Elastic 4D Shape Representation and Analysis
ABSTRACT: We propose a novel framework for the statistical analysis of genus-zero 4D
surfaces, i.e., 3D surfaces that deform and evolve over time. This problem is
particularly challenging due to the arbitrary parameterizations of these
surfaces and their varying deformation speeds, necessitating effective
spatiotemporal registration. Traditionally, 4D surfaces are discretized, in
space and time, before computing their spatiotemporal registrations, geodesics,
and statistics. However, this approach may result in suboptimal solutions and,
as we demonstrate in this paper, is not necessary. In contrast, we treat 4D
surfaces as continuous functions in both space and time. We introduce Dynamic
Spherical Neural Surfaces (D-SNS), an efficient smooth and continuous
spatiotemporal representation for genus-0 4D surfaces. We then demonstrate how
to perform core 4D shape analysis tasks such as spatiotemporal registration,
geodesics computation, and mean 4D shape estimation, directly on these
continuous representations without upfront discretization and meshing. By
integrating neural representations with classical Riemannian geometry and
statistical shape analysis techniques, we provide the building blocks for
enabling full functional shape analysis. We demonstrate the efficiency of the
framework on 4D human and face datasets. The source code and additional results
are available at https://4d-dsns.github.io/DSNS/.
| no_new_dataset | 0.948822 |
2503.03141 | Chun-Wun Cheng | Chun-Wun Cheng, Yining Zhao, Yanqi Cheng, Javier Montoya,
Carola-Bibiane Sch\"onlieb, Angelica I Aviles-Rivero | Implicit U-KAN2.0: Dynamic, Efficient and Interpretable Medical Image
Segmentation | null | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image segmentation is a fundamental task in both image analysis and medical
applications. State-of-the-art methods predominantly rely on encoder-decoder
architectures with a U-shaped design, commonly referred to as U-Net. Recent
advancements integrating transformers and MLPs improve performance but still
face key limitations, such as poor interpretability, difficulty handling
intrinsic noise, and constrained expressiveness due to discrete layer
structures, often lacking a solid theoretical foundation.In this work, we
introduce Implicit U-KAN 2.0, a novel U-Net variant that adopts a two-phase
encoder-decoder structure. In the SONO phase, we use a second-order neural
ordinary differential equation (NODEs), called the SONO block, for a more
efficient, expressive, and theoretically grounded modeling approach. In the
SONO-MultiKAN phase, we integrate the second-order NODEs and MultiKAN layer as
the core computational block to enhance interpretability and representation
power. Our contributions are threefold. First, U-KAN 2.0 is an implicit deep
neural network incorporating MultiKAN and second order NODEs, improving
interpretability and performance while reducing computational costs. Second, we
provide a theoretical analysis demonstrating that the approximation ability of
the MultiKAN block is independent of the input dimension. Third, we conduct
extensive experiments on a variety of 2D and a single 3D dataset, demonstrating
that our model consistently outperforms existing segmentation networks.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 03:31:05 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cheng",
"Chun-Wun",
""
],
[
"Zhao",
"Yining",
""
],
[
"Cheng",
"Yanqi",
""
],
[
"Montoya",
"Javier",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
],
[
"Aviles-Rivero",
"Angelica I",
""
]
]
| TITLE: Implicit U-KAN2.0: Dynamic, Efficient and Interpretable Medical Image
Segmentation
ABSTRACT: Image segmentation is a fundamental task in both image analysis and medical
applications. State-of-the-art methods predominantly rely on encoder-decoder
architectures with a U-shaped design, commonly referred to as U-Net. Recent
advancements integrating transformers and MLPs improve performance but still
face key limitations, such as poor interpretability, difficulty handling
intrinsic noise, and constrained expressiveness due to discrete layer
structures, often lacking a solid theoretical foundation.In this work, we
introduce Implicit U-KAN 2.0, a novel U-Net variant that adopts a two-phase
encoder-decoder structure. In the SONO phase, we use a second-order neural
ordinary differential equation (NODEs), called the SONO block, for a more
efficient, expressive, and theoretically grounded modeling approach. In the
SONO-MultiKAN phase, we integrate the second-order NODEs and MultiKAN layer as
the core computational block to enhance interpretability and representation
power. Our contributions are threefold. First, U-KAN 2.0 is an implicit deep
neural network incorporating MultiKAN and second order NODEs, improving
interpretability and performance while reducing computational costs. Second, we
provide a theoretical analysis demonstrating that the approximation ability of
the MultiKAN block is independent of the input dimension. Third, we conduct
extensive experiments on a variety of 2D and a single 3D dataset, demonstrating
that our model consistently outperforms existing segmentation networks.
| no_new_dataset | 0.946892 |
2503.03148 | Haiduo Huang | Haiduo Huang, Fuwei Yang, Dong Li, Ji Liu, Lu Tian, Jinzhang Peng,
Pengju Ren, Emad Barsoum | Partial Convolution Meets Visual Attention | arXiv admin note: substantial text overlap with arXiv:2502.01303 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing an efficient and effective neural network has remained a prominent
topic in computer vision research. Depthwise onvolution (DWConv) is widely used
in efficient CNNs or ViTs, but it needs frequent memory access during
inference, which leads to low throughput. FasterNet attempts to introduce
partial convolution (PConv) as an alternative to DWConv but compromises the
accuracy due to underutilized channels. To remedy this shortcoming and consider
the redundancy between feature map channels, we introduce a novel Partial
visual ATtention mechanism (PAT) that can efficiently combine PConv with visual
attention. Our exploration indicates that the partial attention mechanism can
completely replace the full attention mechanism and reduce model parameters and
FLOPs. Our PAT can derive three types of blocks: Partial Channel-Attention
block (PAT_ch), Partial Spatial-Attention block (PAT_sp) and Partial
Self-Attention block (PAT_sf). First, PAT_ch integrates the enhanced Gaussian
channel attention mechanism to infuse global distribution information into the
untouched channels of PConv. Second, we introduce the spatial-wise attention to
the MLP layer to further improve model accuracy. Finally, we replace PAT_ch in
the last stage with the self-attention mechanism to extend the global receptive
field. Building upon PAT, we propose a novel hybrid network family, named
PATNet, which achieves superior top-1 accuracy and inference speed compared to
FasterNet on ImageNet-1K classification and excel in both detection and
segmentation on the COCO dataset. Particularly, our PATNet-T2 achieves 1.3%
higher accuracy than FasterNet-T2, while exhibiting 25% higher GPU throughput
and 24% lower CPU latency.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 03:42:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Huang",
"Haiduo",
""
],
[
"Yang",
"Fuwei",
""
],
[
"Li",
"Dong",
""
],
[
"Liu",
"Ji",
""
],
[
"Tian",
"Lu",
""
],
[
"Peng",
"Jinzhang",
""
],
[
"Ren",
"Pengju",
""
],
[
"Barsoum",
"Emad",
""
]
]
| TITLE: Partial Convolution Meets Visual Attention
ABSTRACT: Designing an efficient and effective neural network has remained a prominent
topic in computer vision research. Depthwise onvolution (DWConv) is widely used
in efficient CNNs or ViTs, but it needs frequent memory access during
inference, which leads to low throughput. FasterNet attempts to introduce
partial convolution (PConv) as an alternative to DWConv but compromises the
accuracy due to underutilized channels. To remedy this shortcoming and consider
the redundancy between feature map channels, we introduce a novel Partial
visual ATtention mechanism (PAT) that can efficiently combine PConv with visual
attention. Our exploration indicates that the partial attention mechanism can
completely replace the full attention mechanism and reduce model parameters and
FLOPs. Our PAT can derive three types of blocks: Partial Channel-Attention
block (PAT_ch), Partial Spatial-Attention block (PAT_sp) and Partial
Self-Attention block (PAT_sf). First, PAT_ch integrates the enhanced Gaussian
channel attention mechanism to infuse global distribution information into the
untouched channels of PConv. Second, we introduce the spatial-wise attention to
the MLP layer to further improve model accuracy. Finally, we replace PAT_ch in
the last stage with the self-attention mechanism to extend the global receptive
field. Building upon PAT, we propose a novel hybrid network family, named
PATNet, which achieves superior top-1 accuracy and inference speed compared to
FasterNet on ImageNet-1K classification and excel in both detection and
segmentation on the COCO dataset. Particularly, our PATNet-T2 achieves 1.3%
higher accuracy than FasterNet-T2, while exhibiting 25% higher GPU throughput
and 24% lower CPU latency.
| no_new_dataset | 0.951278 |
2503.03165 | Fuyuan Lyu | Xing Tang, Yunpeng Weng, Fuyuan Lyu, Dugang Liu, Xiuqiang He | A Predict-Then-Optimize Customer Allocation Framework for Online Fund
Recommendation | Accepted by DASFAA 2025 | null | null | null | cs.CE cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of online investment platforms, funds can be
distributed to individual customers online. The central issue is to match funds
with potential customers under constraints. Most mainstream platforms adopt the
recommendation formulation to tackle the problem. However, the traditional
recommendation regime has its inherent drawbacks when applying the
fund-matching problem with multiple constraints. In this paper, we model the
fund matching under the allocation formulation. We design PTOFA, a
Predict-Then-Optimize Fund Allocation framework. This data-driven framework
consists of two stages, i.e., prediction and optimization, which aim to predict
expected revenue based on customer behavior and optimize the impression
allocation to achieve the maximum revenue under the necessary constraints,
respectively. Extensive experiments on real-world datasets from an industrial
online investment platform validate the effectiveness and efficiency of our
solution. Additionally, the online A/B tests demonstrate PTOFA's effectiveness
in the real-world fund recommendation scenario.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 04:16:36 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tang",
"Xing",
""
],
[
"Weng",
"Yunpeng",
""
],
[
"Lyu",
"Fuyuan",
""
],
[
"Liu",
"Dugang",
""
],
[
"He",
"Xiuqiang",
""
]
]
| TITLE: A Predict-Then-Optimize Customer Allocation Framework for Online Fund
Recommendation
ABSTRACT: With the rapid growth of online investment platforms, funds can be
distributed to individual customers online. The central issue is to match funds
with potential customers under constraints. Most mainstream platforms adopt the
recommendation formulation to tackle the problem. However, the traditional
recommendation regime has its inherent drawbacks when applying the
fund-matching problem with multiple constraints. In this paper, we model the
fund matching under the allocation formulation. We design PTOFA, a
Predict-Then-Optimize Fund Allocation framework. This data-driven framework
consists of two stages, i.e., prediction and optimization, which aim to predict
expected revenue based on customer behavior and optimize the impression
allocation to achieve the maximum revenue under the necessary constraints,
respectively. Extensive experiments on real-world datasets from an industrial
online investment platform validate the effectiveness and efficiency of our
solution. Additionally, the online A/B tests demonstrate PTOFA's effectiveness
in the real-world fund recommendation scenario.
| no_new_dataset | 0.94366 |
2503.03170 | Javier Yong | Javier Yong, Haokai Ma, Yunshan Ma, Anis Yusof, Zhenkai Liang,
Ee-Chien Chang | AttackSeqBench: Benchmarking Large Language Models' Understanding of
Sequential Patterns in Cyber Attacks | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The observations documented in Cyber Threat Intelligence (CTI) reports play a
critical role in describing adversarial behaviors, providing valuable insights
for security practitioners to respond to evolving threats. Recent advancements
of Large Language Models (LLMs) have demonstrated significant potential in
various cybersecurity applications, including CTI report understanding and
attack knowledge graph construction. While previous works have proposed
benchmarks that focus on the CTI extraction ability of LLMs, the sequential
characteristic of adversarial behaviors within CTI reports remains largely
unexplored, which holds considerable significance in developing a comprehensive
understanding of how adversaries operate. To address this gap, we introduce
AttackSeqBench, a benchmark tailored to systematically evaluate LLMs'
capability to understand and reason attack sequences in CTI reports. Our
benchmark encompasses three distinct Question Answering (QA) tasks, each task
focuses on the varying granularity in adversarial behavior. To alleviate the
laborious effort of QA construction, we carefully design an automated dataset
construction pipeline to create scalable and well-formulated QA datasets based
on real-world CTI reports. To ensure the quality of our dataset, we adopt a
hybrid approach of combining human evaluation and systematic evaluation
metrics. We conduct extensive experiments and analysis with both fast-thinking
and slow-thinking LLMs, while highlighting their strengths and limitations in
analyzing the sequential patterns in cyber attacks. The overarching goal of
this work is to provide a benchmark that advances LLM-driven CTI report
understanding and fosters its application in real-world cybersecurity
operations. Our dataset and code are available at
https://github.com/Javiery3889/AttackSeqBench .
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 04:25:21 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yong",
"Javier",
""
],
[
"Ma",
"Haokai",
""
],
[
"Ma",
"Yunshan",
""
],
[
"Yusof",
"Anis",
""
],
[
"Liang",
"Zhenkai",
""
],
[
"Chang",
"Ee-Chien",
""
]
]
| TITLE: AttackSeqBench: Benchmarking Large Language Models' Understanding of
Sequential Patterns in Cyber Attacks
ABSTRACT: The observations documented in Cyber Threat Intelligence (CTI) reports play a
critical role in describing adversarial behaviors, providing valuable insights
for security practitioners to respond to evolving threats. Recent advancements
of Large Language Models (LLMs) have demonstrated significant potential in
various cybersecurity applications, including CTI report understanding and
attack knowledge graph construction. While previous works have proposed
benchmarks that focus on the CTI extraction ability of LLMs, the sequential
characteristic of adversarial behaviors within CTI reports remains largely
unexplored, which holds considerable significance in developing a comprehensive
understanding of how adversaries operate. To address this gap, we introduce
AttackSeqBench, a benchmark tailored to systematically evaluate LLMs'
capability to understand and reason attack sequences in CTI reports. Our
benchmark encompasses three distinct Question Answering (QA) tasks, each task
focuses on the varying granularity in adversarial behavior. To alleviate the
laborious effort of QA construction, we carefully design an automated dataset
construction pipeline to create scalable and well-formulated QA datasets based
on real-world CTI reports. To ensure the quality of our dataset, we adopt a
hybrid approach of combining human evaluation and systematic evaluation
metrics. We conduct extensive experiments and analysis with both fast-thinking
and slow-thinking LLMs, while highlighting their strengths and limitations in
analyzing the sequential patterns in cyber attacks. The overarching goal of
this work is to provide a benchmark that advances LLM-driven CTI report
understanding and fosters its application in real-world cybersecurity
operations. Our dataset and code are available at
https://github.com/Javiery3889/AttackSeqBench .
| new_dataset | 0.953188 |
2503.03172 | Gibson Nkhata | Gibson Nkhata and Susan Gauch | Intermediate-Task Transfer Learning: Leveraging Sarcasm Detection for
Stance Detection | 8 pages, 2 figures, published in The Sixteenth International
Conference on Information (eKNOW 2024) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Stance Detection (SD) on social media has emerged as a prominent area of
interest with implications for social business and political applications
thereby garnering escalating research attention within NLP. The inherent
subtlety and complexity of texts procured from online platforms pose challenges
for SD algorithms in accurately discerning the authors stance. Mostly the
inclusion of sarcastic and figurative language drastically impacts the
performance of SD models. This paper addresses this by employing sarcasm
detection intermediate-task transfer learning tailored for SD. The proposed
methodology involves the finetuning of BERT and RoBERTa and the concatenation
of convolutional BiLSTM and dense layers. Rigorous experiments are conducted on
publicly available datasets to evaluate our transfer-learning framework. The
performance of the approach is assessed against various State-Of-The-Art
baselines for SD providing empirical evidence of its effectiveness. Notably our
model outperforms the best SOTA models even prior to sarcasm-detection
pretraining. The integration of sarcasm knowledge into the model proves
instrumental in mitigating misclassifications of sarcastic textual elements in
SD. Our model accurately predicts 85% of texts that were previously
misclassified by the model without sarcasm-detection pretraining thereby
amplifying the average F1-score of the model. Our experiments also revealed
that the success of the transfer-learning framework is contingent upon the
correlation of lexical attributes between the intermediate task and the target
task. This study represents the first exploration of sarcasm detection as an
intermediate transfer-learning task in the context of SD and simultaneously
uses the concatenation of BERT or RoBERTa with other deep-learning techniques
establishing the proposed approach as a foundational baseline for future
research endeavors in this domain.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 04:30:53 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Nkhata",
"Gibson",
""
],
[
"Gauch",
"Susan",
""
]
]
| TITLE: Intermediate-Task Transfer Learning: Leveraging Sarcasm Detection for
Stance Detection
ABSTRACT: Stance Detection (SD) on social media has emerged as a prominent area of
interest with implications for social business and political applications
thereby garnering escalating research attention within NLP. The inherent
subtlety and complexity of texts procured from online platforms pose challenges
for SD algorithms in accurately discerning the authors stance. Mostly the
inclusion of sarcastic and figurative language drastically impacts the
performance of SD models. This paper addresses this by employing sarcasm
detection intermediate-task transfer learning tailored for SD. The proposed
methodology involves the finetuning of BERT and RoBERTa and the concatenation
of convolutional BiLSTM and dense layers. Rigorous experiments are conducted on
publicly available datasets to evaluate our transfer-learning framework. The
performance of the approach is assessed against various State-Of-The-Art
baselines for SD providing empirical evidence of its effectiveness. Notably our
model outperforms the best SOTA models even prior to sarcasm-detection
pretraining. The integration of sarcasm knowledge into the model proves
instrumental in mitigating misclassifications of sarcastic textual elements in
SD. Our model accurately predicts 85% of texts that were previously
misclassified by the model without sarcasm-detection pretraining thereby
amplifying the average F1-score of the model. Our experiments also revealed
that the success of the transfer-learning framework is contingent upon the
correlation of lexical attributes between the intermediate task and the target
task. This study represents the first exploration of sarcasm detection as an
intermediate transfer-learning task in the context of SD and simultaneously
uses the concatenation of BERT or RoBERTa with other deep-learning techniques
establishing the proposed approach as a foundational baseline for future
research endeavors in this domain.
| no_new_dataset | 0.944791 |
2503.03178 | Nick Winovich | Nick Winovich, Mitchell Daneker, Lu Lu, Guang Lin | Active operator learning with predictive uncertainty quantification for
partial differential equations | Submitted to the Journal of Computational Physics | null | null | null | cs.LG math.PR | http://creativecommons.org/licenses/by/4.0/ | In this work, we develop a method for uncertainty quantification in deep
operator networks (DeepONets) using predictive uncertainty estimates calibrated
to model errors observed during training. The uncertainty framework operates
using a single network, in contrast to existing ensemble approaches, and
introduces minimal overhead during training and inference. We also introduce an
optimized implementation for DeepONet inference (reducing evaluation times by a
factor of five) to provide models well-suited for real-time applications. We
evaluate the uncertainty-equipped models on a series of partial differential
equation (PDE) problems, and show that the model predictions are unbiased,
non-skewed, and accurately reproduce solutions to the PDEs. To assess how well
the models generalize, we evaluate the network predictions and uncertainty
estimates on in-distribution and out-of-distribution test datasets. We find the
predictive uncertainties accurately reflect the observed model errors over a
range of problems with varying complexity; simpler out-of-distribution examples
are assigned low uncertainty estimates, consistent with the observed errors,
while more complex out-of-distribution examples are properly assigned higher
uncertainties. We also provide a statistical analysis of the predictive
uncertainties and verify that these estimates are well-aligned with the
observed error distributions at the tail-end of training. Finally, we
demonstrate how predictive uncertainties can be used within an active learning
framework to yield improvements in accuracy and data-efficiency for outer-loop
optimization procedures.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 04:48:14 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Winovich",
"Nick",
""
],
[
"Daneker",
"Mitchell",
""
],
[
"Lu",
"Lu",
""
],
[
"Lin",
"Guang",
""
]
]
| TITLE: Active operator learning with predictive uncertainty quantification for
partial differential equations
ABSTRACT: In this work, we develop a method for uncertainty quantification in deep
operator networks (DeepONets) using predictive uncertainty estimates calibrated
to model errors observed during training. The uncertainty framework operates
using a single network, in contrast to existing ensemble approaches, and
introduces minimal overhead during training and inference. We also introduce an
optimized implementation for DeepONet inference (reducing evaluation times by a
factor of five) to provide models well-suited for real-time applications. We
evaluate the uncertainty-equipped models on a series of partial differential
equation (PDE) problems, and show that the model predictions are unbiased,
non-skewed, and accurately reproduce solutions to the PDEs. To assess how well
the models generalize, we evaluate the network predictions and uncertainty
estimates on in-distribution and out-of-distribution test datasets. We find the
predictive uncertainties accurately reflect the observed model errors over a
range of problems with varying complexity; simpler out-of-distribution examples
are assigned low uncertainty estimates, consistent with the observed errors,
while more complex out-of-distribution examples are properly assigned higher
uncertainties. We also provide a statistical analysis of the predictive
uncertainties and verify that these estimates are well-aligned with the
observed error distributions at the tail-end of training. Finally, we
demonstrate how predictive uncertainties can be used within an active learning
framework to yield improvements in accuracy and data-efficiency for outer-loop
optimization procedures.
| no_new_dataset | 0.943712 |
2503.03180 | Ghazal Ghajari | Ashutosh Ghimire, Ghazal Ghajari, Karma Gurung, Love K. Sah, Fathi
Amsaad | Enhancing Cybersecurity in Critical Infrastructure with LLM-Assisted
Explainable IoT Systems | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring the security of critical infrastructure has become increasingly
vital with the proliferation of Internet of Things (IoT) systems. However, the
heterogeneous nature of IoT data and the lack of human-comprehensible insights
from anomaly detection models remain significant challenges. This paper
presents a hybrid framework that combines numerical anomaly detection using
Autoencoders with Large Language Models (LLMs) for enhanced preprocessing and
interpretability. Two preprocessing approaches are implemented: a traditional
method utilizing Principal Component Analysis (PCA) to reduce dimensionality
and an LLM-assisted method where GPT-4 dynamically recommends feature
selection, transformation, and encoding strategies.
Experimental results on the KDDCup99 10% corrected dataset demonstrate that
the LLM-assisted preprocessing pipeline significantly improves anomaly
detection performance. The macro-average F1 score increased from 0.49 in the
traditional PCA-based approach to 0.98 with LLM-driven insights. Additionally,
the LLM generates natural language explanations for detected anomalies,
providing contextual insights into their causes and implications. This
framework highlights the synergy between numerical AI models and LLMs,
delivering an accurate, interpretable, and efficient solution for IoT
cybersecurity in critical infrastructure.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 04:53:07 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ghimire",
"Ashutosh",
""
],
[
"Ghajari",
"Ghazal",
""
],
[
"Gurung",
"Karma",
""
],
[
"Sah",
"Love K.",
""
],
[
"Amsaad",
"Fathi",
""
]
]
| TITLE: Enhancing Cybersecurity in Critical Infrastructure with LLM-Assisted
Explainable IoT Systems
ABSTRACT: Ensuring the security of critical infrastructure has become increasingly
vital with the proliferation of Internet of Things (IoT) systems. However, the
heterogeneous nature of IoT data and the lack of human-comprehensible insights
from anomaly detection models remain significant challenges. This paper
presents a hybrid framework that combines numerical anomaly detection using
Autoencoders with Large Language Models (LLMs) for enhanced preprocessing and
interpretability. Two preprocessing approaches are implemented: a traditional
method utilizing Principal Component Analysis (PCA) to reduce dimensionality
and an LLM-assisted method where GPT-4 dynamically recommends feature
selection, transformation, and encoding strategies.
Experimental results on the KDDCup99 10% corrected dataset demonstrate that
the LLM-assisted preprocessing pipeline significantly improves anomaly
detection performance. The macro-average F1 score increased from 0.49 in the
traditional PCA-based approach to 0.98 with LLM-driven insights. Additionally,
the LLM generates natural language explanations for detected anomalies,
providing contextual insights into their causes and implications. This
framework highlights the synergy between numerical AI models and LLMs,
delivering an accurate, interpretable, and efficient solution for IoT
cybersecurity in critical infrastructure.
| no_new_dataset | 0.947039 |
2503.03192 | Alexander Thoms | Alexander Thoms, Alan Papalia, Jared Velasquez, David M. Rosen, Sriram
Narasimhan | Distributed Certifiably Correct Range-Aided SLAM | 8 pages, 3 figures, accepted to 2025 International Conference on
Robotics and Automation | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable simultaneous localization and mapping (SLAM) algorithms are
necessary for safety-critical autonomous navigation. In the
communication-constrained multi-agent setting, navigation systems increasingly
use point-to-point range sensors as they afford measurements with low bandwidth
requirements and known data association. The state estimation problem for these
systems takes the form of range-aided (RA) SLAM. However, distributed
algorithms for solving the RA-SLAM problem lack formal guarantees on the
quality of the returned estimate. To this end, we present the first distributed
algorithm for RA-SLAM that can efficiently recover certifiably globally optimal
solutions. Our algorithm, distributed certifiably correct RA-SLAM (DCORA),
achieves this via the Riemannian Staircase method, where computational
procedures developed for distributed certifiably correct pose graph
optimization are generalized to the RA-SLAM problem. We demonstrate DCORA's
efficacy on real-world multi-agent datasets by achieving absolute trajectory
errors comparable to those of a state-of-the-art centralized certifiably
correct RA-SLAM algorithm. Additionally, we perform a parametric study on the
structure of the RA-SLAM problem using synthetic data, revealing how common
parameters affect DCORA's performance.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:17:15 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Thoms",
"Alexander",
""
],
[
"Papalia",
"Alan",
""
],
[
"Velasquez",
"Jared",
""
],
[
"Rosen",
"David M.",
""
],
[
"Narasimhan",
"Sriram",
""
]
]
| TITLE: Distributed Certifiably Correct Range-Aided SLAM
ABSTRACT: Reliable simultaneous localization and mapping (SLAM) algorithms are
necessary for safety-critical autonomous navigation. In the
communication-constrained multi-agent setting, navigation systems increasingly
use point-to-point range sensors as they afford measurements with low bandwidth
requirements and known data association. The state estimation problem for these
systems takes the form of range-aided (RA) SLAM. However, distributed
algorithms for solving the RA-SLAM problem lack formal guarantees on the
quality of the returned estimate. To this end, we present the first distributed
algorithm for RA-SLAM that can efficiently recover certifiably globally optimal
solutions. Our algorithm, distributed certifiably correct RA-SLAM (DCORA),
achieves this via the Riemannian Staircase method, where computational
procedures developed for distributed certifiably correct pose graph
optimization are generalized to the RA-SLAM problem. We demonstrate DCORA's
efficacy on real-world multi-agent datasets by achieving absolute trajectory
errors comparable to those of a state-of-the-art centralized certifiably
correct RA-SLAM algorithm. Additionally, we perform a parametric study on the
structure of the RA-SLAM problem using synthetic data, revealing how common
parameters affect DCORA's performance.
| no_new_dataset | 0.943295 |
2503.03194 | Guangfu Guo | Guangfu Guo, Kai Zhang, Bryan Hoo, Yujun Cai, Xiaoqian Lu, Nanyun
Peng, Yiwei Wang | Structured Outputs Enable General-Purpose LLMs to be Medical Experts | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Medical question-answering (QA) is a critical task for evaluating how
effectively large language models (LLMs) encode clinical knowledge and
assessing their potential applications in medicine. Despite showing promise on
multiple-choice tests, LLMs frequently struggle with open-ended medical
questions, producing responses with dangerous hallucinations or lacking
comprehensive coverage of critical aspects. Existing approaches attempt to
address these challenges through domain-specific fine-tuning, but this proves
resource-intensive and difficult to scale across models. To improve the
comprehensiveness and factuality of medical responses, we propose a novel
approach utilizing structured medical reasoning. Our method guides LLMs through
an seven-step cognitive process inspired by clinical diagnosis, enabling more
accurate and complete answers without additional training. Experiments on the
MedLFQA benchmark demonstrate that our approach achieves the highest Factuality
Score of 85.8, surpassing fine-tuned models. Notably, this improvement
transfers to smaller models, highlighting the method's efficiency and
scalability. Our code and datasets are available.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:24:55 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Guo",
"Guangfu",
""
],
[
"Zhang",
"Kai",
""
],
[
"Hoo",
"Bryan",
""
],
[
"Cai",
"Yujun",
""
],
[
"Lu",
"Xiaoqian",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Wang",
"Yiwei",
""
]
]
| TITLE: Structured Outputs Enable General-Purpose LLMs to be Medical Experts
ABSTRACT: Medical question-answering (QA) is a critical task for evaluating how
effectively large language models (LLMs) encode clinical knowledge and
assessing their potential applications in medicine. Despite showing promise on
multiple-choice tests, LLMs frequently struggle with open-ended medical
questions, producing responses with dangerous hallucinations or lacking
comprehensive coverage of critical aspects. Existing approaches attempt to
address these challenges through domain-specific fine-tuning, but this proves
resource-intensive and difficult to scale across models. To improve the
comprehensiveness and factuality of medical responses, we propose a novel
approach utilizing structured medical reasoning. Our method guides LLMs through
an seven-step cognitive process inspired by clinical diagnosis, enabling more
accurate and complete answers without additional training. Experiments on the
MedLFQA benchmark demonstrate that our approach achieves the highest Factuality
Score of 85.8, surpassing fine-tuned models. Notably, this improvement
transfers to smaller models, highlighting the method's efficiency and
scalability. Our code and datasets are available.
| no_new_dataset | 0.947332 |
2503.03196 | Zhiyuan Huang | Zhiyuan Huang, Ziming Cheng, Junting Pan, Zhaohui Hou, Mingjie Zhan | SpiritSight Agent: Advanced GUI Agent with One Look | Paper accepted to CVPR 2025 | null | null | null | cs.CV cs.HC cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphical User Interface (GUI) agents show amazing abilities in assisting
human-computer interaction, automating human user's navigation on digital
devices. An ideal GUI agent is expected to achieve high accuracy, low latency,
and compatibility for different GUI platforms. Recent vision-based approaches
have shown promise by leveraging advanced Vision Language Models (VLMs). While
they generally meet the requirements of compatibility and low latency, these
vision-based GUI agents tend to have low accuracy due to their limitations in
element grounding. To address this issue, we propose $\textbf{SpiritSight}$, a
vision-based, end-to-end GUI agent that excels in GUI navigation tasks across
various GUI platforms. First, we create a multi-level, large-scale,
high-quality GUI dataset called $\textbf{GUI-Lasagne}$ using scalable methods,
empowering SpiritSight with robust GUI understanding and grounding
capabilities. Second, we introduce the $\textbf{Universal Block Parsing (UBP)}$
method to resolve the ambiguity problem in dynamic high-resolution of visual
inputs, further enhancing SpiritSight's ability to ground GUI objects. Through
these efforts, SpiritSight agent outperforms other advanced methods on diverse
GUI benchmarks, demonstrating its superior capability and compatibility in GUI
navigation tasks. Models are available at
$\href{https://huggingface.co/SenseLLM/SpiritSight-Agent-8B}{this\ URL}$.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:30:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Huang",
"Zhiyuan",
""
],
[
"Cheng",
"Ziming",
""
],
[
"Pan",
"Junting",
""
],
[
"Hou",
"Zhaohui",
""
],
[
"Zhan",
"Mingjie",
""
]
]
| TITLE: SpiritSight Agent: Advanced GUI Agent with One Look
ABSTRACT: Graphical User Interface (GUI) agents show amazing abilities in assisting
human-computer interaction, automating human user's navigation on digital
devices. An ideal GUI agent is expected to achieve high accuracy, low latency,
and compatibility for different GUI platforms. Recent vision-based approaches
have shown promise by leveraging advanced Vision Language Models (VLMs). While
they generally meet the requirements of compatibility and low latency, these
vision-based GUI agents tend to have low accuracy due to their limitations in
element grounding. To address this issue, we propose $\textbf{SpiritSight}$, a
vision-based, end-to-end GUI agent that excels in GUI navigation tasks across
various GUI platforms. First, we create a multi-level, large-scale,
high-quality GUI dataset called $\textbf{GUI-Lasagne}$ using scalable methods,
empowering SpiritSight with robust GUI understanding and grounding
capabilities. Second, we introduce the $\textbf{Universal Block Parsing (UBP)}$
method to resolve the ambiguity problem in dynamic high-resolution of visual
inputs, further enhancing SpiritSight's ability to ground GUI objects. Through
these efforts, SpiritSight agent outperforms other advanced methods on diverse
GUI benchmarks, demonstrating its superior capability and compatibility in GUI
navigation tasks. Models are available at
$\href{https://huggingface.co/SenseLLM/SpiritSight-Agent-8B}{this\ URL}$.
| new_dataset | 0.957755 |
2503.03201 | Zixuan Li | Jizhao Zhu, Akang Shi, Zixuan Li, Long Bai, Xiaolong Jin, Jiafeng Guo,
Xueqi Cheng | Towards Robust Universal Information Extraction: Benchmark, Evaluation,
and Solution | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to enhance the robustness of Universal Information
Extraction (UIE) by introducing a new benchmark dataset, a comprehensive
evaluation, and a feasible solution. Existing robust benchmark datasets have
two key limitations: 1) They generate only a limited range of perturbations for
a single Information Extraction (IE) task, which fails to evaluate the
robustness of UIE models effectively; 2) They rely on small models or
handcrafted rules to generate perturbations, often resulting in unnatural
adversarial examples. Considering the powerful generation capabilities of Large
Language Models (LLMs), we introduce a new benchmark dataset for Robust UIE,
called RUIE-Bench, which utilizes LLMs to generate more diverse and realistic
perturbations across different IE tasks. Based on this dataset, we
comprehensively evaluate existing UIE models and reveal that both LLM-based
models and other models suffer from significant performance drops. To improve
robustness and reduce training costs, we propose a data-augmentation solution
that dynamically selects hard samples for iterative training based on the
model's inference loss. Experimental results show that training with only
\textbf{15\%} of the data leads to an average \textbf{7.5\%} relative
performance improvement across three IE tasks.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:39:29 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhu",
"Jizhao",
""
],
[
"Shi",
"Akang",
""
],
[
"Li",
"Zixuan",
""
],
[
"Bai",
"Long",
""
],
[
"Jin",
"Xiaolong",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
]
| TITLE: Towards Robust Universal Information Extraction: Benchmark, Evaluation,
and Solution
ABSTRACT: In this paper, we aim to enhance the robustness of Universal Information
Extraction (UIE) by introducing a new benchmark dataset, a comprehensive
evaluation, and a feasible solution. Existing robust benchmark datasets have
two key limitations: 1) They generate only a limited range of perturbations for
a single Information Extraction (IE) task, which fails to evaluate the
robustness of UIE models effectively; 2) They rely on small models or
handcrafted rules to generate perturbations, often resulting in unnatural
adversarial examples. Considering the powerful generation capabilities of Large
Language Models (LLMs), we introduce a new benchmark dataset for Robust UIE,
called RUIE-Bench, which utilizes LLMs to generate more diverse and realistic
perturbations across different IE tasks. Based on this dataset, we
comprehensively evaluate existing UIE models and reveal that both LLM-based
models and other models suffer from significant performance drops. To improve
robustness and reduce training costs, we propose a data-augmentation solution
that dynamically selects hard samples for iterative training based on the
model's inference loss. Experimental results show that training with only
\textbf{15\%} of the data leads to an average \textbf{7.5\%} relative
performance improvement across three IE tasks.
| new_dataset | 0.960547 |
2503.03202 | Sneh Pillai | Sneh Pillai | Variance-Aware Loss Scheduling for Multimodal Alignment in Low-Data
Settings | 8 pages, 4 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Training vision-language models for image-text alignment typically requires
large datasets to achieve robust performance. In low-data scenarios, standard
contrastive learning can struggle to align modalities effectively due to
overfitting and unstable training dynamics. In this paper, we propose a
variance-aware loss scheduling approach that dynamically adjusts the weighting
of the contrastive loss based on the statistical variability (uncertainty) in
the model's alignment predictions. Using a subset of the Flickr8k image-caption
dataset to simulate limited data conditions, we demonstrate that our approach
improves image-text retrieval accuracy compared to a fixed-weight baseline. We
also compare against other adaptive weighting strategies (using output entropy
and cosine similarity spread) and find that variance-aware scheduling provides
the best overall trade-off. Qualitatively, our method yields more distinct
multimodal embeddings as shown by t-SNE visualizations. Moreover, in a stress
test with noise-injected captions and images, the variance-guided loss proves
more robust, maintaining higher recall when random perturbations are
introduced. These results highlight the benefit of adaptive loss weighting for
multimodal alignment in low-data regimes.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:46:08 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Pillai",
"Sneh",
""
]
]
| TITLE: Variance-Aware Loss Scheduling for Multimodal Alignment in Low-Data
Settings
ABSTRACT: Training vision-language models for image-text alignment typically requires
large datasets to achieve robust performance. In low-data scenarios, standard
contrastive learning can struggle to align modalities effectively due to
overfitting and unstable training dynamics. In this paper, we propose a
variance-aware loss scheduling approach that dynamically adjusts the weighting
of the contrastive loss based on the statistical variability (uncertainty) in
the model's alignment predictions. Using a subset of the Flickr8k image-caption
dataset to simulate limited data conditions, we demonstrate that our approach
improves image-text retrieval accuracy compared to a fixed-weight baseline. We
also compare against other adaptive weighting strategies (using output entropy
and cosine similarity spread) and find that variance-aware scheduling provides
the best overall trade-off. Qualitatively, our method yields more distinct
multimodal embeddings as shown by t-SNE visualizations. Moreover, in a stress
test with noise-injected captions and images, the variance-guided loss proves
more robust, maintaining higher recall when random perturbations are
introduced. These results highlight the benefit of adaptive loss weighting for
multimodal alignment in low-data regimes.
| no_new_dataset | 0.952574 |
2503.03206 | Binxu Wang | Binxu Wang | An Analytical Theory of Power Law Spectral Bias in the Learning Dynamics
of Diffusion Models | 50 pages, 10 figures. Preprint | null | null | null | cs.LG cs.CV math.ST stat.ML stat.TH | http://creativecommons.org/licenses/by/4.0/ | We developed an analytical framework for understanding how the learned
distribution evolves during diffusion model training. Leveraging the Gaussian
equivalence principle, we derived exact solutions for the gradient-flow
dynamics of weights in one- or two-layer linear denoiser settings with
arbitrary data. Remarkably, these solutions allowed us to derive the generated
distribution in closed form and its KL divergence through training. These
analytical results expose a pronounced power-law spectral bias, i.e., for
weights and distributions, the convergence time of a mode follows an inverse
power law of its variance. Empirical experiments on both Gaussian and image
datasets demonstrate that the power-law spectral bias remains robust even when
using deeper or convolutional architectures. Our results underscore the
importance of the data covariance in dictating the order and rate at which
diffusion models learn different modes of the data, providing potential
explanations for why earlier stopping could lead to incorrect details in image
generative models.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 05:50:38 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Wang",
"Binxu",
""
]
]
| TITLE: An Analytical Theory of Power Law Spectral Bias in the Learning Dynamics
of Diffusion Models
ABSTRACT: We developed an analytical framework for understanding how the learned
distribution evolves during diffusion model training. Leveraging the Gaussian
equivalence principle, we derived exact solutions for the gradient-flow
dynamics of weights in one- or two-layer linear denoiser settings with
arbitrary data. Remarkably, these solutions allowed us to derive the generated
distribution in closed form and its KL divergence through training. These
analytical results expose a pronounced power-law spectral bias, i.e., for
weights and distributions, the convergence time of a mode follows an inverse
power law of its variance. Empirical experiments on both Gaussian and image
datasets demonstrate that the power-law spectral bias remains robust even when
using deeper or convolutional architectures. Our results underscore the
importance of the data covariance in dictating the order and rate at which
diffusion models learn different modes of the data, providing potential
explanations for why earlier stopping could lead to incorrect details in image
generative models.
| no_new_dataset | 0.951818 |
2503.03211 | Shenzhi Yang | Shenzhi Yang, Jun Xia, Jingbo Zhou, Xingkai Yao, Xiaofang Zhang | NodeReg: Mitigating the Imbalance and Distribution Shift Effects in
Semi-Supervised Node Classification via Norm Consistency | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aggregating information from neighboring nodes benefits graph neural networks
(GNNs) in semi-supervised node classification tasks. Nevertheless, this
mechanism also renders nodes susceptible to the influence of their neighbors.
For instance, this will occur when the neighboring nodes are imbalanced or the
neighboring nodes contain noise, which can even affect the GNN's ability to
generalize out of distribution. We find that ensuring the consistency of the
norm for node representations can significantly reduce the impact of these two
issues on GNNs. To this end, we propose a regularized optimization method
called NodeReg that enforces the consistency of node representation norms. This
method is simple but effective and satisfies Lipschitz continuity, thus
facilitating stable optimization and significantly improving semi-supervised
node classification performance under the above two scenarios. To illustrate,
in the imbalance scenario, when training a GCN with an imbalance ratio of 0.1,
NodeReg outperforms the most competitive baselines by 1.4%-25.9% in F1 score
across five public datasets. Similarly, in the distribution shift scenario,
NodeReg outperforms the most competitive baseline by 1.4%-3.1% in accuracy.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:06:16 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Shenzhi",
""
],
[
"Xia",
"Jun",
""
],
[
"Zhou",
"Jingbo",
""
],
[
"Yao",
"Xingkai",
""
],
[
"Zhang",
"Xiaofang",
""
]
]
| TITLE: NodeReg: Mitigating the Imbalance and Distribution Shift Effects in
Semi-Supervised Node Classification via Norm Consistency
ABSTRACT: Aggregating information from neighboring nodes benefits graph neural networks
(GNNs) in semi-supervised node classification tasks. Nevertheless, this
mechanism also renders nodes susceptible to the influence of their neighbors.
For instance, this will occur when the neighboring nodes are imbalanced or the
neighboring nodes contain noise, which can even affect the GNN's ability to
generalize out of distribution. We find that ensuring the consistency of the
norm for node representations can significantly reduce the impact of these two
issues on GNNs. To this end, we propose a regularized optimization method
called NodeReg that enforces the consistency of node representation norms. This
method is simple but effective and satisfies Lipschitz continuity, thus
facilitating stable optimization and significantly improving semi-supervised
node classification performance under the above two scenarios. To illustrate,
in the imbalance scenario, when training a GCN with an imbalance ratio of 0.1,
NodeReg outperforms the most competitive baselines by 1.4%-25.9% in F1 score
across five public datasets. Similarly, in the distribution shift scenario,
NodeReg outperforms the most competitive baseline by 1.4%-3.1% in accuracy.
| no_new_dataset | 0.953405 |
2503.03225 | Yice Zhang | Yice Zhang, Guangyu Xie, Jingjie Lin, Jianzhu Bao, Qianlong Wang, Xi
Zeng, Ruifeng Xu | Targeted Distillation for Sentiment Analysis | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper presents a compact model that achieves strong sentiment analysis
capabilities through targeted distillation from advanced large language models
(LLMs). Our methodology decouples the distillation target into two key
components: sentiment-related knowledge and task alignment. To transfer these
components, we propose a two-stage distillation framework. The first stage,
knowledge-driven distillation (\textsc{KnowDist}), transfers sentiment-related
knowledge to enhance fundamental sentiment analysis capabilities. The second
stage, in-context learning distillation (\textsc{ICLDist}), transfers
task-specific prompt-following abilities to optimize task alignment. For
evaluation, we introduce \textsc{SentiBench}, a comprehensive sentiment
analysis benchmark comprising 3 task categories across 12 datasets. Experiments
on this benchmark demonstrate that our model effectively balances model size
and performance, showing strong competitiveness compared to existing
small-scale LLMs.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:45:25 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Yice",
""
],
[
"Xie",
"Guangyu",
""
],
[
"Lin",
"Jingjie",
""
],
[
"Bao",
"Jianzhu",
""
],
[
"Wang",
"Qianlong",
""
],
[
"Zeng",
"Xi",
""
],
[
"Xu",
"Ruifeng",
""
]
]
| TITLE: Targeted Distillation for Sentiment Analysis
ABSTRACT: This paper presents a compact model that achieves strong sentiment analysis
capabilities through targeted distillation from advanced large language models
(LLMs). Our methodology decouples the distillation target into two key
components: sentiment-related knowledge and task alignment. To transfer these
components, we propose a two-stage distillation framework. The first stage,
knowledge-driven distillation (\textsc{KnowDist}), transfers sentiment-related
knowledge to enhance fundamental sentiment analysis capabilities. The second
stage, in-context learning distillation (\textsc{ICLDist}), transfers
task-specific prompt-following abilities to optimize task alignment. For
evaluation, we introduce \textsc{SentiBench}, a comprehensive sentiment
analysis benchmark comprising 3 task categories across 12 datasets. Experiments
on this benchmark demonstrate that our model effectively balances model size
and performance, showing strong competitiveness compared to existing
small-scale LLMs.
| new_dataset | 0.951323 |
2503.03228 | Qinglin Liu | Qinglin Liu, Zonglin Li, Xiaoqian Lv, Xin Sun, Ru Li, Shengping Zhang | Path-Adaptive Matting for Efficient Inference Under Various
Computational Cost Constraints | Accepted to AAAI 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we explore a novel image matting task aimed at achieving
efficient inference under various computational cost constraints, specifically
FLOP limitations, using a single matting network. Existing matting methods
which have not explored scalable architectures or path-learning strategies,
fail to tackle this challenge. To overcome these limitations, we introduce
Path-Adaptive Matting (PAM), a framework that dynamically adjusts network paths
based on image contexts and computational cost constraints. We formulate the
training of the computational cost-constrained matting network as a bilevel
optimization problem, jointly optimizing the matting network and the path
estimator. Building on this formalization, we design a path-adaptive matting
architecture by incorporating path selection layers and learnable connect
layers to estimate optimal paths and perform efficient inference within a
unified network. Furthermore, we propose a performance-aware path-learning
strategy to generate path labels online by evaluating a few paths sampled from
the prior distribution of optimal paths and network estimations, enabling
robust and efficient online path learning. Experiments on five image matting
datasets demonstrate that the proposed PAM framework achieves competitive
performance across a range of computational cost constraints.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:56:42 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Liu",
"Qinglin",
""
],
[
"Li",
"Zonglin",
""
],
[
"Lv",
"Xiaoqian",
""
],
[
"Sun",
"Xin",
""
],
[
"Li",
"Ru",
""
],
[
"Zhang",
"Shengping",
""
]
]
| TITLE: Path-Adaptive Matting for Efficient Inference Under Various
Computational Cost Constraints
ABSTRACT: In this paper, we explore a novel image matting task aimed at achieving
efficient inference under various computational cost constraints, specifically
FLOP limitations, using a single matting network. Existing matting methods
which have not explored scalable architectures or path-learning strategies,
fail to tackle this challenge. To overcome these limitations, we introduce
Path-Adaptive Matting (PAM), a framework that dynamically adjusts network paths
based on image contexts and computational cost constraints. We formulate the
training of the computational cost-constrained matting network as a bilevel
optimization problem, jointly optimizing the matting network and the path
estimator. Building on this formalization, we design a path-adaptive matting
architecture by incorporating path selection layers and learnable connect
layers to estimate optimal paths and perform efficient inference within a
unified network. Furthermore, we propose a performance-aware path-learning
strategy to generate path labels online by evaluating a few paths sampled from
the prior distribution of optimal paths and network estimations, enabling
robust and efficient online path learning. Experiments on five image matting
datasets demonstrate that the proposed PAM framework achieves competitive
performance across a range of computational cost constraints.
| no_new_dataset | 0.944125 |
2503.03230 | Yifu Wang | Kun Huang, Yifu Wang, Si'ao Zhang, Zhirui Wang, Zhanpeng Ouyang,
Zhenghua Yu, Laurent Kneip | OpenGV 2.0: Motion prior-assisted calibration and SLAM with
vehicle-mounted surround-view systems | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present paper proposes optimization-based solutions to visual SLAM with a
vehicle-mounted surround-view camera system. Owing to their original use-case,
such systems often only contain a single camera facing into either direction
and very limited overlap between fields of view. Our novelty consist of three
optimization modules targeting at practical online calibration of exterior
orientations from simple two-view geometry, reliable front-end initialization
of relative displacements, and accurate back-end optimization using a
continuous-time trajectory model. The commonality between the proposed modules
is given by the fact that all three of them exploit motion priors that are
related to the inherent non-holonomic characteristics of passenger vehicle
motion. In contrast to prior related art, the proposed modules furthermore
excel in terms of bypassing partial unobservabilities in the transformation
variables that commonly occur for Ackermann-motion. As a further contribution,
the modules are built into a novel surround-view camera SLAM system that
specifically targets deployment on Ackermann vehicles operating in urban
environments. All modules are studied in the context of in-depth ablation
studies, and the practical validity of the entire framework is supported by a
successful application to challenging, large-scale publicly available online
datasets. Note that upon acceptance, the entire framework is scheduled for
open-source release as part of an extension of the OpenGV library.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:03:15 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Huang",
"Kun",
""
],
[
"Wang",
"Yifu",
""
],
[
"Zhang",
"Si'ao",
""
],
[
"Wang",
"Zhirui",
""
],
[
"Ouyang",
"Zhanpeng",
""
],
[
"Yu",
"Zhenghua",
""
],
[
"Kneip",
"Laurent",
""
]
]
| TITLE: OpenGV 2.0: Motion prior-assisted calibration and SLAM with
vehicle-mounted surround-view systems
ABSTRACT: The present paper proposes optimization-based solutions to visual SLAM with a
vehicle-mounted surround-view camera system. Owing to their original use-case,
such systems often only contain a single camera facing into either direction
and very limited overlap between fields of view. Our novelty consist of three
optimization modules targeting at practical online calibration of exterior
orientations from simple two-view geometry, reliable front-end initialization
of relative displacements, and accurate back-end optimization using a
continuous-time trajectory model. The commonality between the proposed modules
is given by the fact that all three of them exploit motion priors that are
related to the inherent non-holonomic characteristics of passenger vehicle
motion. In contrast to prior related art, the proposed modules furthermore
excel in terms of bypassing partial unobservabilities in the transformation
variables that commonly occur for Ackermann-motion. As a further contribution,
the modules are built into a novel surround-view camera SLAM system that
specifically targets deployment on Ackermann vehicles operating in urban
environments. All modules are studied in the context of in-depth ablation
studies, and the practical validity of the entire framework is supported by a
successful application to challenging, large-scale publicly available online
datasets. Note that upon acceptance, the entire framework is scheduled for
open-source release as part of an extension of the OpenGV library.
| no_new_dataset | 0.940517 |
2503.03232 | Longshen Ou | Longshen Ou, Yu Takahashi, Ye Wang | Lead Instrument Detection from Multitrack Music | Camera ready version of ICASSP 2025 submission | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prior approaches to lead instrument detection primarily analyze mixture
audio, limited to coarse classifications and lacking generalization ability.
This paper presents a novel approach to lead instrument detection in multitrack
music audio by crafting expertly annotated datasets and designing a novel
framework that integrates a self-supervised learning model with a track-wise,
frame-level attention-based classifier. This attention mechanism dynamically
extracts and aggregates track-specific features based on their auditory
importance, enabling precise detection across varied instrument types and
combinations. Enhanced by track classification and permutation augmentation,
our model substantially outperforms existing SVM and CRNN models, showing
robustness on unseen instruments and out-of-domain testing. We believe our
exploration provides valuable insights for future research on audio content
analysis in multitrack music settings.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:16:20 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Ou",
"Longshen",
""
],
[
"Takahashi",
"Yu",
""
],
[
"Wang",
"Ye",
""
]
]
| TITLE: Lead Instrument Detection from Multitrack Music
ABSTRACT: Prior approaches to lead instrument detection primarily analyze mixture
audio, limited to coarse classifications and lacking generalization ability.
This paper presents a novel approach to lead instrument detection in multitrack
music audio by crafting expertly annotated datasets and designing a novel
framework that integrates a self-supervised learning model with a track-wise,
frame-level attention-based classifier. This attention mechanism dynamically
extracts and aggregates track-specific features based on their auditory
importance, enabling precise detection across varied instrument types and
combinations. Enhanced by track classification and permutation augmentation,
our model substantially outperforms existing SVM and CRNN models, showing
robustness on unseen instruments and out-of-domain testing. We believe our
exploration provides valuable insights for future research on audio content
analysis in multitrack music settings.
| no_new_dataset | 0.948489 |
2503.03238 | Jiarui Yao | Jiarui Yao, Ruida Wang, Tong Zhang | FANS -- Formal Answer Selection for Natural Language Math Reasoning
Using Lean4 | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have displayed astonishing abilities in various
tasks, especially in text generation, classification, question answering, etc.
However, the reasoning ability of LLMs still faces many debates. The inherent
ambiguity of Natural Language (NL) limits LLMs' ability to perform verifiable
reasoning, making its answers lack coherence and trustworthy support. To tackle
the above problems, we propose a novel framework named FANS: Formal ANswer
Selection for Natural Language Math Reasoning Using Lean4. To the best of our
knowledge, it is the first framework that utilizes Lean4 to enhance LLMs' NL
math reasoning ability. In particular, given an NL math question and
LLM-generated answers, FANS first translates it into Lean4 theorem statements.
Then it tries to prove it using a Lean4 prover and verify it by Lean4. Finally,
it uses the FL result to assist in answer selection. It enhances LLMs' NL math
ability in providing a computer-verifiable solution for its correct answer and
proposes an alternative method for answer selection beyond the reward model.
Extensive experiments indicate the effectiveness of our framework. It can
improve the accuracy rate of reward model enhanced LLMs in the MATH-500 dataset
by at most 1.91% and AMC-23 by at most 8.33% on strong reward-model baselines.
In some particular fields like number theory that Lean4 experts in, we can even
select all correct solutions. The qualitative analysis also shows our framework
can make NL results formally backed by Lean4 proofs. As a pioneering work in
the corresponding field, we will open-source all our models and datasets to
further boost the development of the field.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 07:34:53 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yao",
"Jiarui",
""
],
[
"Wang",
"Ruida",
""
],
[
"Zhang",
"Tong",
""
]
]
| TITLE: FANS -- Formal Answer Selection for Natural Language Math Reasoning
Using Lean4
ABSTRACT: Large Language Models (LLMs) have displayed astonishing abilities in various
tasks, especially in text generation, classification, question answering, etc.
However, the reasoning ability of LLMs still faces many debates. The inherent
ambiguity of Natural Language (NL) limits LLMs' ability to perform verifiable
reasoning, making its answers lack coherence and trustworthy support. To tackle
the above problems, we propose a novel framework named FANS: Formal ANswer
Selection for Natural Language Math Reasoning Using Lean4. To the best of our
knowledge, it is the first framework that utilizes Lean4 to enhance LLMs' NL
math reasoning ability. In particular, given an NL math question and
LLM-generated answers, FANS first translates it into Lean4 theorem statements.
Then it tries to prove it using a Lean4 prover and verify it by Lean4. Finally,
it uses the FL result to assist in answer selection. It enhances LLMs' NL math
ability in providing a computer-verifiable solution for its correct answer and
proposes an alternative method for answer selection beyond the reward model.
Extensive experiments indicate the effectiveness of our framework. It can
improve the accuracy rate of reward model enhanced LLMs in the MATH-500 dataset
by at most 1.91% and AMC-23 by at most 8.33% on strong reward-model baselines.
In some particular fields like number theory that Lean4 experts in, we can even
select all correct solutions. The qualitative analysis also shows our framework
can make NL results formally backed by Lean4 proofs. As a pioneering work in
the corresponding field, we will open-source all our models and datasets to
further boost the development of the field.
| no_new_dataset | 0.944893 |
2503.03251 | Guoyang Rong Chris | Guoyang Rong, Ying Chen, Thorsten Koch, Keisuke Honda | From Coverage to Prestige: A Comprehensive Assessment of Large-Scale
Scientometric Data | 23 pages, 11 tables, 7 figures | null | null | null | cs.DL stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As research in the Scientometric deepens, the impact of data quality on
research outcomes has garnered increasing attention. This study, based on Web
of Science (WoS) and Crossref datasets, systematically evaluates the
differences between data sources and the effects of data merging through
matching, comparison, and integration. Two core metrics were employed:
Reference Coverage Rate (RCR) and Article Scientific Prestige (ASP), which
respectively measure citation completeness (quantity) and academic influence
(quality). The results indicate that the WoS dataset outperforms Crossref in
its coverage of high-impact literature and ASP scores, while the Crossref
dataset provides complementary value through its broader coverage of
literature. Data merging significantly improves the completeness of the
citation network, with particularly pronounced benefits in smaller disciplinary
clusters such as Education and Arts. However, data merging also introduces some
low-quality citations, resulting in a polarization of overall data quality.
Moreover, the impact of data merging varies across disciplines; high-impact
clusters such as Science, Biology, and Medicine benefit the most, whereas
clusters like Social Sciences and Arts are more vulnerable to negative effects.
This study highlights the critical role of data sources in Scientometric
research and provides a framework for assessing and improving data quality.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:08:32 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Rong",
"Guoyang",
""
],
[
"Chen",
"Ying",
""
],
[
"Koch",
"Thorsten",
""
],
[
"Honda",
"Keisuke",
""
]
]
| TITLE: From Coverage to Prestige: A Comprehensive Assessment of Large-Scale
Scientometric Data
ABSTRACT: As research in the Scientometric deepens, the impact of data quality on
research outcomes has garnered increasing attention. This study, based on Web
of Science (WoS) and Crossref datasets, systematically evaluates the
differences between data sources and the effects of data merging through
matching, comparison, and integration. Two core metrics were employed:
Reference Coverage Rate (RCR) and Article Scientific Prestige (ASP), which
respectively measure citation completeness (quantity) and academic influence
(quality). The results indicate that the WoS dataset outperforms Crossref in
its coverage of high-impact literature and ASP scores, while the Crossref
dataset provides complementary value through its broader coverage of
literature. Data merging significantly improves the completeness of the
citation network, with particularly pronounced benefits in smaller disciplinary
clusters such as Education and Arts. However, data merging also introduces some
low-quality citations, resulting in a polarization of overall data quality.
Moreover, the impact of data merging varies across disciplines; high-impact
clusters such as Science, Biology, and Medicine benefit the most, whereas
clusters like Social Sciences and Arts are more vulnerable to negative effects.
This study highlights the critical role of data sources in Scientometric
research and provides a framework for assessing and improving data quality.
| no_new_dataset | 0.951684 |
2503.03254 | Haodong Jiang | Haodong Jiang, Xiang Zheng, Yanglin Zhang, Qingcheng Zeng, Yiqian Li,
Ziyang Hong, Junfeng Wu | SCORE: Saturated Consensus Relocalization in Semantic Line Maps | 11 pages, 14 figurs, arxiv version for paper submitted to IROS 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the arxiv version for our paper submitted to IEEE/RSJ IROS 2025. We
propose a scene-agnostic and light-weight visual relocalization framework that
leverages semantically labeled 3D lines as a compact map representation. In our
framework, the robot localizes itself by capturing a single image, extracting
2D lines, associating them with semantically similar 3D lines in the map, and
solving a robust perspective-n-line problem. To address the extremely high
outlier ratios~(exceeding 99.5\%) caused by one-to-many ambiguities in semantic
matching, we introduce the Saturated Consensus Maximization~(Sat-CM)
formulation, which enables accurate pose estimation when the classic Consensus
Maximization framework fails. We further propose a fast global solver to the
formulated Sat-CM problems, leveraging rigorous interval analysis results to
ensure both accuracy and computational efficiency. Additionally, we develop a
pipeline for constructing semantic 3D line maps using posed depth images. To
validate the effectiveness of our framework, which integrates our innovations
in robust estimation and practical engineering insights, we conduct extensive
experiments on the ScanNet++ dataset.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:13:56 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Jiang",
"Haodong",
""
],
[
"Zheng",
"Xiang",
""
],
[
"Zhang",
"Yanglin",
""
],
[
"Zeng",
"Qingcheng",
""
],
[
"Li",
"Yiqian",
""
],
[
"Hong",
"Ziyang",
""
],
[
"Wu",
"Junfeng",
""
]
]
| TITLE: SCORE: Saturated Consensus Relocalization in Semantic Line Maps
ABSTRACT: This is the arxiv version for our paper submitted to IEEE/RSJ IROS 2025. We
propose a scene-agnostic and light-weight visual relocalization framework that
leverages semantically labeled 3D lines as a compact map representation. In our
framework, the robot localizes itself by capturing a single image, extracting
2D lines, associating them with semantically similar 3D lines in the map, and
solving a robust perspective-n-line problem. To address the extremely high
outlier ratios~(exceeding 99.5\%) caused by one-to-many ambiguities in semantic
matching, we introduce the Saturated Consensus Maximization~(Sat-CM)
formulation, which enables accurate pose estimation when the classic Consensus
Maximization framework fails. We further propose a fast global solver to the
formulated Sat-CM problems, leveraging rigorous interval analysis results to
ensure both accuracy and computational efficiency. Additionally, we develop a
pipeline for constructing semantic 3D line maps using posed depth images. To
validate the effectiveness of our framework, which integrates our innovations
in robust estimation and practical engineering insights, we conduct extensive
experiments on the ScanNet++ dataset.
| no_new_dataset | 0.950595 |
2503.03258 | Runlin Lei | Runlin Lei, Jiarui Ji, Haipeng Ding, Lu Yi, Zhewei Wei, Yongchao Liu,
Chuntao Hong | Exploring the Potential of Large Language Models as Predictors in
Dynamic Text-Attributed Graphs | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the rise of large language models (LLMs), there has been growing
interest in Graph Foundation Models (GFMs) for graph-based tasks. By leveraging
LLMs as predictors, GFMs have demonstrated impressive generalizability across
various tasks and datasets. However, existing research on LLMs as predictors
has predominantly focused on static graphs, leaving their potential in dynamic
graph prediction unexplored. In this work, we pioneer using LLMs for predictive
tasks on dynamic graphs. We identify two key challenges: the constraints
imposed by context length when processing large-scale historical data and the
significant variability in domain characteristics, both of which complicate the
development of a unified predictor. To address these challenges, we propose the
GraphAgent-Dynamic (GAD) Framework, a multi-agent system that leverages
collaborative LLMs. In contrast to using a single LLM as the predictor, GAD
incorporates global and local summary agents to generate domain-specific
knowledge, enhancing its transferability across domains. Additionally,
knowledge reflection agents enable adaptive updates to GAD's knowledge,
maintaining a unified and self-consistent architecture. In experiments, GAD
demonstrates performance comparable to or even exceeds that of full-supervised
graph neural networks without dataset-specific training. Finally, to enhance
the task-specific performance of LLM-based predictors, we discuss potential
improvements, such as dataset-specific fine-tuning to LLMs. By developing
tailored strategies for different tasks, we provide new insights for the future
design of LLM-based predictors.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:28:11 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Lei",
"Runlin",
""
],
[
"Ji",
"Jiarui",
""
],
[
"Ding",
"Haipeng",
""
],
[
"Yi",
"Lu",
""
],
[
"Wei",
"Zhewei",
""
],
[
"Liu",
"Yongchao",
""
],
[
"Hong",
"Chuntao",
""
]
]
| TITLE: Exploring the Potential of Large Language Models as Predictors in
Dynamic Text-Attributed Graphs
ABSTRACT: With the rise of large language models (LLMs), there has been growing
interest in Graph Foundation Models (GFMs) for graph-based tasks. By leveraging
LLMs as predictors, GFMs have demonstrated impressive generalizability across
various tasks and datasets. However, existing research on LLMs as predictors
has predominantly focused on static graphs, leaving their potential in dynamic
graph prediction unexplored. In this work, we pioneer using LLMs for predictive
tasks on dynamic graphs. We identify two key challenges: the constraints
imposed by context length when processing large-scale historical data and the
significant variability in domain characteristics, both of which complicate the
development of a unified predictor. To address these challenges, we propose the
GraphAgent-Dynamic (GAD) Framework, a multi-agent system that leverages
collaborative LLMs. In contrast to using a single LLM as the predictor, GAD
incorporates global and local summary agents to generate domain-specific
knowledge, enhancing its transferability across domains. Additionally,
knowledge reflection agents enable adaptive updates to GAD's knowledge,
maintaining a unified and self-consistent architecture. In experiments, GAD
demonstrates performance comparable to or even exceeds that of full-supervised
graph neural networks without dataset-specific training. Finally, to enhance
the task-specific performance of LLM-based predictors, we discuss potential
improvements, such as dataset-specific fine-tuning to LLMs. By developing
tailored strategies for different tasks, we provide new insights for the future
design of LLM-based predictors.
| no_new_dataset | 0.946101 |
2503.03261 | Yichong Zhao | Yichong Zhao, Susumu Goto | Can Frontier LLMs Replace Annotators in Biomedical Text Mining?
Analyzing Challenges and Exploring Solutions | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) can perform various natural language processing
(NLP) tasks through in-context learning without relying on supervised data.
However, multiple previous studies have reported suboptimal performance of LLMs
in biological text mining. By analyzing failure patterns in these evaluations,
we identified three primary challenges for LLMs in biomedical corpora: (1) LLMs
fail to learn implicit dataset-specific nuances from supervised data, (2) The
common formatting requirements of discriminative tasks limit the reasoning
capabilities of LLMs particularly for LLMs that lack test-time compute, and (3)
LLMs struggle to adhere to annotation guidelines and match exact schemas, which
hinders their ability to understand detailed annotation requirements which is
essential in biomedical annotation workflow. To address these challenges, we
experimented with prompt engineering techniques targeted to the above issues,
and developed a pipeline that dynamically extracts instructions from annotation
guidelines. Our findings show that frontier LLMs can approach or surpass the
performance of state-of-the-art (SOTA) BERT-based models with minimal reliance
on manually annotated data and without fine-tuning. Furthermore, we performed
model distillation on a closed-source LLM, demonstrating that a BERT model
trained exclusively on synthetic data annotated by LLMs can also achieve a
practical performance. Based on these results, we explored the feasibility of
partially replacing manual annotation with LLMs in production scenarios for
biomedical text mining.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:37:10 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhao",
"Yichong",
""
],
[
"Goto",
"Susumu",
""
]
]
| TITLE: Can Frontier LLMs Replace Annotators in Biomedical Text Mining?
Analyzing Challenges and Exploring Solutions
ABSTRACT: Large language models (LLMs) can perform various natural language processing
(NLP) tasks through in-context learning without relying on supervised data.
However, multiple previous studies have reported suboptimal performance of LLMs
in biological text mining. By analyzing failure patterns in these evaluations,
we identified three primary challenges for LLMs in biomedical corpora: (1) LLMs
fail to learn implicit dataset-specific nuances from supervised data, (2) The
common formatting requirements of discriminative tasks limit the reasoning
capabilities of LLMs particularly for LLMs that lack test-time compute, and (3)
LLMs struggle to adhere to annotation guidelines and match exact schemas, which
hinders their ability to understand detailed annotation requirements which is
essential in biomedical annotation workflow. To address these challenges, we
experimented with prompt engineering techniques targeted to the above issues,
and developed a pipeline that dynamically extracts instructions from annotation
guidelines. Our findings show that frontier LLMs can approach or surpass the
performance of state-of-the-art (SOTA) BERT-based models with minimal reliance
on manually annotated data and without fine-tuning. Furthermore, we performed
model distillation on a closed-source LLM, demonstrating that a BERT model
trained exclusively on synthetic data annotated by LLMs can also achieve a
practical performance. Based on these results, we explored the feasibility of
partially replacing manual annotation with LLMs in production scenarios for
biomedical text mining.
| no_new_dataset | 0.948917 |
2503.03267 | Gazi Tanbhir | Gazi Tanbhir and Md. Farhan Shahriyar | Quantum-Inspired Privacy-Preserving Federated Learning Framework for
Secure Dementia Classification | This work has been accepted and presented at the 4th International
Conference on Electrical, Computer and Communication Engineering (ECCE 2025),
held at Chittagong University of Engineering & Technology (CUET), Bangladesh,
in February 2025 | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dementia, a neurological disorder impacting millions globally, presents
significant challenges in diagnosis and patient care. With the rise of privacy
concerns and security threats in healthcare, federated learning (FL) has
emerged as a promising approach to enable collaborative model training across
decentralized datasets without exposing sensitive patient information. However,
FL remains vulnerable to advanced security breaches such as gradient inversion
and eavesdropping attacks. This paper introduces a novel framework that
integrates federated learning with quantum-inspired encryption techniques for
dementia classification, emphasizing privacy preservation and security.
Leveraging quantum key distribution (QKD), the framework ensures secure
transmission of model weights, protecting against unauthorized access and
interception during training. The methodology utilizes a convolutional neural
network (CNN) for dementia classification, with federated training conducted
across distributed healthcare nodes, incorporating QKD-encrypted weight sharing
to secure the aggregation process. Experimental evaluations conducted on MRI
data from the OASIS dataset demonstrate that the proposed framework achieves
identical accuracy levels to a baseline model while enhancing data security and
reducing loss by almost 1% compared to the classical baseline model. The
framework offers significant implications for democratizing access to AI-driven
dementia diagnostics in low- and middle-income countries, addressing critical
resource and privacy constraints. This work contributes a robust, scalable, and
secure federated learning solution for healthcare applications, paving the way
for broader adoption of quantum-inspired techniques in AI-driven medical
research.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:49:31 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Tanbhir",
"Gazi",
""
],
[
"Shahriyar",
"Md. Farhan",
""
]
]
| TITLE: Quantum-Inspired Privacy-Preserving Federated Learning Framework for
Secure Dementia Classification
ABSTRACT: Dementia, a neurological disorder impacting millions globally, presents
significant challenges in diagnosis and patient care. With the rise of privacy
concerns and security threats in healthcare, federated learning (FL) has
emerged as a promising approach to enable collaborative model training across
decentralized datasets without exposing sensitive patient information. However,
FL remains vulnerable to advanced security breaches such as gradient inversion
and eavesdropping attacks. This paper introduces a novel framework that
integrates federated learning with quantum-inspired encryption techniques for
dementia classification, emphasizing privacy preservation and security.
Leveraging quantum key distribution (QKD), the framework ensures secure
transmission of model weights, protecting against unauthorized access and
interception during training. The methodology utilizes a convolutional neural
network (CNN) for dementia classification, with federated training conducted
across distributed healthcare nodes, incorporating QKD-encrypted weight sharing
to secure the aggregation process. Experimental evaluations conducted on MRI
data from the OASIS dataset demonstrate that the proposed framework achieves
identical accuracy levels to a baseline model while enhancing data security and
reducing loss by almost 1% compared to the classical baseline model. The
framework offers significant implications for democratizing access to AI-driven
dementia diagnostics in low- and middle-income countries, addressing critical
resource and privacy constraints. This work contributes a robust, scalable, and
secure federated learning solution for healthcare applications, paving the way
for broader adoption of quantum-inspired techniques in AI-driven medical
research.
| no_new_dataset | 0.947137 |
2503.03269 | Saurabh Kumar | Saurabh Kumar, Jacob Buckman, Carles Gelada, Sean Zhang | Conformal Transformations for Symmetric Power Transformers | SCOPE Workshop at ICLR 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Transformers with linear attention offer significant computational advantages
over softmax-based transformers but often suffer from degraded performance. The
symmetric power (sympow) transformer, a particular type of linear transformer,
addresses some of this performance gap by leveraging symmetric tensor
embeddings, achieving comparable performance to softmax transformers. However,
the finite capacity of the recurrent state in sympow transformers limits their
ability to retain information, leading to performance degradation when scaling
the training or evaluation context length. To address this issue, we propose
the conformal-sympow transformer, which dynamically frees up capacity using
data-dependent multiplicative gating and adaptively stores information using
data-dependent rotary embeddings. Preliminary experiments on the LongCrawl64
dataset demonstrate that conformal-sympow overcomes the limitations of sympow
transformers, achieving robust performance across scaled training and
evaluation contexts.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 08:50:53 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Kumar",
"Saurabh",
""
],
[
"Buckman",
"Jacob",
""
],
[
"Gelada",
"Carles",
""
],
[
"Zhang",
"Sean",
""
]
]
| TITLE: Conformal Transformations for Symmetric Power Transformers
ABSTRACT: Transformers with linear attention offer significant computational advantages
over softmax-based transformers but often suffer from degraded performance. The
symmetric power (sympow) transformer, a particular type of linear transformer,
addresses some of this performance gap by leveraging symmetric tensor
embeddings, achieving comparable performance to softmax transformers. However,
the finite capacity of the recurrent state in sympow transformers limits their
ability to retain information, leading to performance degradation when scaling
the training or evaluation context length. To address this issue, we propose
the conformal-sympow transformer, which dynamically frees up capacity using
data-dependent multiplicative gating and adaptively stores information using
data-dependent rotary embeddings. Preliminary experiments on the LongCrawl64
dataset demonstrate that conformal-sympow overcomes the limitations of sympow
transformers, achieving robust performance across scaled training and
evaluation contexts.
| no_new_dataset | 0.947284 |
2503.03280 | Hiep Truong | Hiep Truong Cong, Ajay Kumar Sigatapu, Arindam Das, Yashwanth Sharma,
Venkatesh Satagopan, Ganesh Sistu, Ciaran Eising | BEVMOSNet: Multimodal Fusion for BEV Moving Object Segmentation | In Proceedings of the 20th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications (2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate motion understanding of the dynamic objects within the scene in
bird's-eye-view (BEV) is critical to ensure a reliable obstacle avoidance
system and smooth path planning for autonomous vehicles. However, this task has
received relatively limited exploration when compared to object detection and
segmentation with only a few recent vision-based approaches presenting
preliminary findings that significantly deteriorate in low-light, nighttime,
and adverse weather conditions such as rain. Conversely, LiDAR and radar
sensors remain almost unaffected in these scenarios, and radar provides key
velocity information of the objects. Therefore, we introduce BEVMOSNet, to our
knowledge, the first end-to-end multimodal fusion leveraging cameras, LiDAR,
and radar to precisely predict the moving objects in BEV. In addition, we
perform a deeper analysis to find out the optimal strategy for deformable
cross-attention-guided sensor fusion for cross-sensor knowledge sharing in BEV.
While evaluating BEVMOSNet on the nuScenes dataset, we show an overall
improvement in IoU score of 36.59% compared to the vision-based unimodal
baseline BEV-MoSeg (Sigatapu et al., 2023), and 2.35% compared to the
multimodel SimpleBEV (Harley et al., 2022), extended for the motion
segmentation task, establishing this method as the state-of-the-art in BEV
motion segmentation.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:03:46 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Cong",
"Hiep Truong",
""
],
[
"Sigatapu",
"Ajay Kumar",
""
],
[
"Das",
"Arindam",
""
],
[
"Sharma",
"Yashwanth",
""
],
[
"Satagopan",
"Venkatesh",
""
],
[
"Sistu",
"Ganesh",
""
],
[
"Eising",
"Ciaran",
""
]
]
| TITLE: BEVMOSNet: Multimodal Fusion for BEV Moving Object Segmentation
ABSTRACT: Accurate motion understanding of the dynamic objects within the scene in
bird's-eye-view (BEV) is critical to ensure a reliable obstacle avoidance
system and smooth path planning for autonomous vehicles. However, this task has
received relatively limited exploration when compared to object detection and
segmentation with only a few recent vision-based approaches presenting
preliminary findings that significantly deteriorate in low-light, nighttime,
and adverse weather conditions such as rain. Conversely, LiDAR and radar
sensors remain almost unaffected in these scenarios, and radar provides key
velocity information of the objects. Therefore, we introduce BEVMOSNet, to our
knowledge, the first end-to-end multimodal fusion leveraging cameras, LiDAR,
and radar to precisely predict the moving objects in BEV. In addition, we
perform a deeper analysis to find out the optimal strategy for deformable
cross-attention-guided sensor fusion for cross-sensor knowledge sharing in BEV.
While evaluating BEVMOSNet on the nuScenes dataset, we show an overall
improvement in IoU score of 36.59% compared to the vision-based unimodal
baseline BEV-MoSeg (Sigatapu et al., 2023), and 2.35% compared to the
multimodel SimpleBEV (Harley et al., 2022), extended for the motion
segmentation task, establishing this method as the state-of-the-art in BEV
motion segmentation.
| no_new_dataset | 0.947769 |
2503.03282 | Ziniu Wu | Yijie Chu, Ziniu Wu, Yong Yue, Eng Gee Lim, Paolo Paoletti, Xiaohui
Zhu | Supervised Visual Docking Network for Unmanned Surface Vehicles Using
Auto-labeling in Real-world Water Environments | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned Surface Vehicles (USVs) are increasingly applied to water operations
such as environmental monitoring and river-map modeling. It faces a significant
challenge in achieving precise autonomous docking at ports or stations, still
relying on remote human control or external positioning systems for accuracy
and safety which limits the full potential of human-out-of-loop deployment for
USVs.This paper introduces a novel supervised learning pipeline with the
auto-labeling technique for USVs autonomous visual docking. Firstly, we
designed an auto-labeling data collection pipeline that appends relative pose
and image pair to the dataset. This step does not require conventional manual
labeling for supervised learning. Secondly, the Neural Dock Pose Estimator
(NDPE) is proposed to achieve relative dock pose prediction without the need
for hand-crafted feature engineering, camera calibration, and peripheral
markers. Moreover, The NDPE can accurately predict the relative dock pose in
real-world water environments, facilitating the implementation of
Position-Based Visual Servo (PBVS) and low-level motion controllers for
efficient and autonomous docking.Experiments show that the NDPE is robust to
the disturbance of the distance and the USV velocity. The effectiveness of our
proposed solution is tested and validated in real-world water environments,
reflecting its capability to handle real-world autonomous docking tasks.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:07:13 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Chu",
"Yijie",
""
],
[
"Wu",
"Ziniu",
""
],
[
"Yue",
"Yong",
""
],
[
"Lim",
"Eng Gee",
""
],
[
"Paoletti",
"Paolo",
""
],
[
"Zhu",
"Xiaohui",
""
]
]
| TITLE: Supervised Visual Docking Network for Unmanned Surface Vehicles Using
Auto-labeling in Real-world Water Environments
ABSTRACT: Unmanned Surface Vehicles (USVs) are increasingly applied to water operations
such as environmental monitoring and river-map modeling. It faces a significant
challenge in achieving precise autonomous docking at ports or stations, still
relying on remote human control or external positioning systems for accuracy
and safety which limits the full potential of human-out-of-loop deployment for
USVs.This paper introduces a novel supervised learning pipeline with the
auto-labeling technique for USVs autonomous visual docking. Firstly, we
designed an auto-labeling data collection pipeline that appends relative pose
and image pair to the dataset. This step does not require conventional manual
labeling for supervised learning. Secondly, the Neural Dock Pose Estimator
(NDPE) is proposed to achieve relative dock pose prediction without the need
for hand-crafted feature engineering, camera calibration, and peripheral
markers. Moreover, The NDPE can accurately predict the relative dock pose in
real-world water environments, facilitating the implementation of
Position-Based Visual Servo (PBVS) and low-level motion controllers for
efficient and autonomous docking.Experiments show that the NDPE is robust to
the disturbance of the distance and the USV velocity. The effectiveness of our
proposed solution is tested and validated in real-world water environments,
reflecting its capability to handle real-world autonomous docking tasks.
| no_new_dataset | 0.948537 |
2503.03286 | Yi He | Yi He, Lei Yang, Shilin Wang | Enhancing Visual Forced Alignment with Local Context-Aware Feature
Extraction and Multi-Task Learning | Accepted by ICASSP2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper introduces a novel approach to Visual Forced Alignment (VFA),
aiming to accurately synchronize utterances with corresponding lip movements,
without relying on audio cues. We propose a novel VFA approach that integrates
a local context-aware feature extractor and employs multi-task learning to
refine both global and local context features, enhancing sensitivity to subtle
lip movements for precise word-level and phoneme-level alignment. Incorporating
the improved Viterbi algorithm for post-processing, our method significantly
reduces misalignments. Experimental results show our approach outperforms
existing methods, achieving a 6% accuracy improvement at the word-level and 27%
improvement at the phoneme-level in LRS2 dataset. These improvements offer new
potential for applications in automatically subtitling TV shows or
user-generated content platforms like TikTok and YouTube Shorts.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:13:19 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"He",
"Yi",
""
],
[
"Yang",
"Lei",
""
],
[
"Wang",
"Shilin",
""
]
]
| TITLE: Enhancing Visual Forced Alignment with Local Context-Aware Feature
Extraction and Multi-Task Learning
ABSTRACT: This paper introduces a novel approach to Visual Forced Alignment (VFA),
aiming to accurately synchronize utterances with corresponding lip movements,
without relying on audio cues. We propose a novel VFA approach that integrates
a local context-aware feature extractor and employs multi-task learning to
refine both global and local context features, enhancing sensitivity to subtle
lip movements for precise word-level and phoneme-level alignment. Incorporating
the improved Viterbi algorithm for post-processing, our method significantly
reduces misalignments. Experimental results show our approach outperforms
existing methods, achieving a 6% accuracy improvement at the word-level and 27%
improvement at the phoneme-level in LRS2 dataset. These improvements offer new
potential for applications in automatically subtitling TV shows or
user-generated content platforms like TikTok and YouTube Shorts.
| no_new_dataset | 0.949623 |
2503.03299 | Julia Hindel | Julia Hindel, Rohit Mohan, Jelena Bratuli\`c, Daniele Cattaneo, Thomas
Brox, and Abhinav Valada | Label-Efficient LiDAR Semantic Segmentation with 2D-3D Vision
Transformer Adapters | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR semantic segmentation models are typically trained from random
initialization as universal pre-training is hindered by the lack of large,
diverse datasets. Moreover, most point cloud segmentation architectures
incorporate custom network layers, limiting the transferability of advances
from vision-based architectures. Inspired by recent advances in universal
foundation models, we propose BALViT, a novel approach that leverages frozen
vision models as amodal feature encoders for learning strong LiDAR encoders.
Specifically, BALViT incorporates both range-view and bird's-eye-view LiDAR
encoding mechanisms, which we combine through a novel 2D-3D adapter. While the
range-view features are processed through a frozen image backbone, our
bird's-eye-view branch enhances them through multiple cross-attention
interactions. Thereby, we continuously improve the vision network with
domain-dependent knowledge, resulting in a strong label-efficient LiDAR
encoding mechanism. Extensive evaluations of BALViT on the SemanticKITTI and
nuScenes benchmarks demonstrate that it outperforms state-of-the-art methods on
small data regimes. We make the code and models publicly available at:
http://balvit.cs.uni-freiburg.de.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:30:49 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Hindel",
"Julia",
""
],
[
"Mohan",
"Rohit",
""
],
[
"Bratulic",
"Jelena",
""
],
[
"Cattaneo",
"Daniele",
""
],
[
"Brox",
"Thomas",
""
],
[
"Valada",
"Abhinav",
""
]
]
| TITLE: Label-Efficient LiDAR Semantic Segmentation with 2D-3D Vision
Transformer Adapters
ABSTRACT: LiDAR semantic segmentation models are typically trained from random
initialization as universal pre-training is hindered by the lack of large,
diverse datasets. Moreover, most point cloud segmentation architectures
incorporate custom network layers, limiting the transferability of advances
from vision-based architectures. Inspired by recent advances in universal
foundation models, we propose BALViT, a novel approach that leverages frozen
vision models as amodal feature encoders for learning strong LiDAR encoders.
Specifically, BALViT incorporates both range-view and bird's-eye-view LiDAR
encoding mechanisms, which we combine through a novel 2D-3D adapter. While the
range-view features are processed through a frozen image backbone, our
bird's-eye-view branch enhances them through multiple cross-attention
interactions. Thereby, we continuously improve the vision network with
domain-dependent knowledge, resulting in a strong label-efficient LiDAR
encoding mechanism. Extensive evaluations of BALViT on the SemanticKITTI and
nuScenes benchmarks demonstrate that it outperforms state-of-the-art methods on
small data regimes. We make the code and models publicly available at:
http://balvit.cs.uni-freiburg.de.
| no_new_dataset | 0.944074 |
2503.03325 | Guoyu Yang | Guoyu Yang, Yuan Wang, Daming Shi, Yanzhong Wang | Golden Cudgel Network for Real-Time Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent real-time semantic segmentation models, whether single-branch or
multi-branch, achieve good performance and speed. However, their speed is
limited by multi-path blocks, and some depend on high-performance teacher
models for training. To overcome these issues, we propose Golden Cudgel Network
(GCNet). Specifically, GCNet uses vertical multi-convolutions and horizontal
multi-paths for training, which are reparameterized into a single convolution
for inference, optimizing both performance and speed. This design allows GCNet
to self-enlarge during training and self-contract during inference, effectively
becoming a "teacher model" without needing external ones. Experimental results
show that GCNet outperforms existing state-of-the-art models in terms of
performance and speed on the Cityscapes, CamVid, and Pascal VOC 2012 datasets.
The code is available at https://github.com/gyyang23/GCNet.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 09:59:23 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Guoyu",
""
],
[
"Wang",
"Yuan",
""
],
[
"Shi",
"Daming",
""
],
[
"Wang",
"Yanzhong",
""
]
]
| TITLE: Golden Cudgel Network for Real-Time Semantic Segmentation
ABSTRACT: Recent real-time semantic segmentation models, whether single-branch or
multi-branch, achieve good performance and speed. However, their speed is
limited by multi-path blocks, and some depend on high-performance teacher
models for training. To overcome these issues, we propose Golden Cudgel Network
(GCNet). Specifically, GCNet uses vertical multi-convolutions and horizontal
multi-paths for training, which are reparameterized into a single convolution
for inference, optimizing both performance and speed. This design allows GCNet
to self-enlarge during training and self-contract during inference, effectively
becoming a "teacher model" without needing external ones. Experimental results
show that GCNet outperforms existing state-of-the-art models in terms of
performance and speed on the Cityscapes, CamVid, and Pascal VOC 2012 datasets.
The code is available at https://github.com/gyyang23/GCNet.
| no_new_dataset | 0.951684 |
2503.03327 | Saqib Qamar | Saqib Qamar, Syed Furqan Qadri, Roobaea Alroobaea, Majed Alsafyani,
Abdullah M. Baqasah | ScaleFusionNet: Transformer-Guided Multi-Scale Feature Fusion for Skin
Lesion Segmentation | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Melanoma is a malignant tumor originating from skin cell lesions. Accurate
and efficient segmentation of skin lesions is essential for quantitative
medical analysis but remains challenging. To address this, we propose
ScaleFusionNet, a segmentation model that integrates Cross-Attention
Transformer Module (CATM) and AdaptiveFusionBlock to enhance feature extraction
and fusion. The model employs a hybrid architecture encoder that effectively
captures both local and global features. We introduce CATM, which utilizes Swin
Transformer Blocks and Cross Attention Fusion (CAF) to adaptively refine
encoder-decoder feature fusion, reducing semantic gaps and improving
segmentation accuracy. Additionally, the AdaptiveFusionBlock is improved by
integrating adaptive multi-scale fusion, where Swin Transformer-based attention
complements deformable convolution-based multi-scale feature extraction. This
enhancement refines lesion boundaries and preserves fine-grained details.
ScaleFusionNet achieves Dice scores of 92.94% and 91.65% on ISIC-2016 and
ISIC-2018 datasets, respectively, demonstrating its effectiveness in skin
lesion analysis. Our code implementation is publicly available at GitHub.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:00:32 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Qamar",
"Saqib",
""
],
[
"Qadri",
"Syed Furqan",
""
],
[
"Alroobaea",
"Roobaea",
""
],
[
"Alsafyani",
"Majed",
""
],
[
"Baqasah",
"Abdullah M.",
""
]
]
| TITLE: ScaleFusionNet: Transformer-Guided Multi-Scale Feature Fusion for Skin
Lesion Segmentation
ABSTRACT: Melanoma is a malignant tumor originating from skin cell lesions. Accurate
and efficient segmentation of skin lesions is essential for quantitative
medical analysis but remains challenging. To address this, we propose
ScaleFusionNet, a segmentation model that integrates Cross-Attention
Transformer Module (CATM) and AdaptiveFusionBlock to enhance feature extraction
and fusion. The model employs a hybrid architecture encoder that effectively
captures both local and global features. We introduce CATM, which utilizes Swin
Transformer Blocks and Cross Attention Fusion (CAF) to adaptively refine
encoder-decoder feature fusion, reducing semantic gaps and improving
segmentation accuracy. Additionally, the AdaptiveFusionBlock is improved by
integrating adaptive multi-scale fusion, where Swin Transformer-based attention
complements deformable convolution-based multi-scale feature extraction. This
enhancement refines lesion boundaries and preserves fine-grained details.
ScaleFusionNet achieves Dice scores of 92.94% and 91.65% on ISIC-2016 and
ISIC-2018 datasets, respectively, demonstrating its effectiveness in skin
lesion analysis. Our code implementation is publicly available at GitHub.
| no_new_dataset | 0.950869 |
2503.03329 | Yiqiong Yang | Yiqiong Yang, Yitian Yuan, Baoxing Ren, Ye Wu, Yanqiu Feng, Xinyuan
Zhang | Deep Learning-Based Diffusion MRI Tractography: Integrating Spatial and
Anatomical Information | null | null | null | null | cs.CV physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion MRI tractography technique enables non-invasive visualization of
the white matter pathways in the brain. It plays a crucial role in neuroscience
and clinical fields by facilitating the study of brain connectivity and
neurological disorders. However, the accuracy of reconstructed tractograms has
been a longstanding challenge. Recently, deep learning methods have been
applied to improve tractograms for better white matter coverage, but often
comes at the expense of generating excessive false-positive connections. This
is largely due to their reliance on local information to predict long range
streamlines. To improve the accuracy of streamline propagation predictions, we
introduce a novel deep learning framework that integrates image-domain spatial
information and anatomical information along tracts, with the former extracted
through convolutional layers and the later modeled via a Transformer-decoder.
Additionally, we employ a weighted loss function to address fiber class
imbalance encountered during training. We evaluate the proposed method on the
simulated ISMRM 2015 Tractography Challenge dataset, achieving a valid
streamline rate of 66.2%, white matter coverage of 63.8%, and successfully
reconstructing 24 out of 25 bundles. Furthermore, on the multi-site
Tractoinferno dataset, the proposed method demonstrates its ability to handle
various diffusion MRI acquisition schemes, achieving a 5.7% increase in white
matter coverage and a 4.1% decrease in overreach compared to RNN-based methods.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:02:35 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Yang",
"Yiqiong",
""
],
[
"Yuan",
"Yitian",
""
],
[
"Ren",
"Baoxing",
""
],
[
"Wu",
"Ye",
""
],
[
"Feng",
"Yanqiu",
""
],
[
"Zhang",
"Xinyuan",
""
]
]
| TITLE: Deep Learning-Based Diffusion MRI Tractography: Integrating Spatial and
Anatomical Information
ABSTRACT: Diffusion MRI tractography technique enables non-invasive visualization of
the white matter pathways in the brain. It plays a crucial role in neuroscience
and clinical fields by facilitating the study of brain connectivity and
neurological disorders. However, the accuracy of reconstructed tractograms has
been a longstanding challenge. Recently, deep learning methods have been
applied to improve tractograms for better white matter coverage, but often
comes at the expense of generating excessive false-positive connections. This
is largely due to their reliance on local information to predict long range
streamlines. To improve the accuracy of streamline propagation predictions, we
introduce a novel deep learning framework that integrates image-domain spatial
information and anatomical information along tracts, with the former extracted
through convolutional layers and the later modeled via a Transformer-decoder.
Additionally, we employ a weighted loss function to address fiber class
imbalance encountered during training. We evaluate the proposed method on the
simulated ISMRM 2015 Tractography Challenge dataset, achieving a valid
streamline rate of 66.2%, white matter coverage of 63.8%, and successfully
reconstructing 24 out of 25 bundles. Furthermore, on the multi-site
Tractoinferno dataset, the proposed method demonstrates its ability to handle
various diffusion MRI acquisition schemes, achieving a 5.7% increase in white
matter coverage and a 4.1% decrease in overreach compared to RNN-based methods.
| no_new_dataset | 0.951863 |
2503.03331 | Ahmed Samy Mr | Ahmed E. Samy, Zekarias T. Kefato, Sarunas Girdzijauskas | Leap: Inductive Link Prediction via Learnable TopologyAugmentation | published in Machine Learning, Optimization, and Data Science,
Springer Nature Switzerland | null | 10.1007/978-3-031-82481-4_31 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Link prediction is a crucial task in many downstream applications of graph
machine learning. To this end, Graph Neural Network (GNN) is a widely used
technique for link prediction, mainly in transductive settings, where the goal
is to predict missing links between existing nodes. However, many real-life
applications require an inductive setting that accommodates for new nodes,
coming into an existing graph. Thus, recently inductive link prediction has
attracted considerable attention, and a multi-layer perceptron (MLP) is the
popular choice of most studies to learn node representations. However, these
approaches have limited expressivity and do not fully capture the graph's
structural signal. Therefore, in this work we propose LEAP, an inductive link
prediction method based on LEArnable toPology augmentation. Unlike previous
methods, LEAP models the inductive bias from both the structure and node
features, and hence is more expressive. To the best of our knowledge, this is
the first attempt to provide structural contexts for new nodes via learnable
augmentation in inductive settings. Extensive experiments on seven real-world
homogeneous and heterogeneous graphs demonstrates that LEAP significantly
surpasses SOTA methods. The improvements are up to 22\% and 17\% in terms of
AUC and average precision, respectively. The code and datasets are available on
GitHub (https://github.com/AhmedESamy/LEAP/)
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:03:59 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Samy",
"Ahmed E.",
""
],
[
"Kefato",
"Zekarias T.",
""
],
[
"Girdzijauskas",
"Sarunas",
""
]
]
| TITLE: Leap: Inductive Link Prediction via Learnable TopologyAugmentation
ABSTRACT: Link prediction is a crucial task in many downstream applications of graph
machine learning. To this end, Graph Neural Network (GNN) is a widely used
technique for link prediction, mainly in transductive settings, where the goal
is to predict missing links between existing nodes. However, many real-life
applications require an inductive setting that accommodates for new nodes,
coming into an existing graph. Thus, recently inductive link prediction has
attracted considerable attention, and a multi-layer perceptron (MLP) is the
popular choice of most studies to learn node representations. However, these
approaches have limited expressivity and do not fully capture the graph's
structural signal. Therefore, in this work we propose LEAP, an inductive link
prediction method based on LEArnable toPology augmentation. Unlike previous
methods, LEAP models the inductive bias from both the structure and node
features, and hence is more expressive. To the best of our knowledge, this is
the first attempt to provide structural contexts for new nodes via learnable
augmentation in inductive settings. Extensive experiments on seven real-world
homogeneous and heterogeneous graphs demonstrates that LEAP significantly
surpasses SOTA methods. The improvements are up to 22\% and 17\% in terms of
AUC and average precision, respectively. The code and datasets are available on
GitHub (https://github.com/AhmedESamy/LEAP/)
| no_new_dataset | 0.944944 |
2503.03335 | Tiancheng Hu | Tiancheng Hu and Nigel Collier | iNews: A Multimodal Dataset for Modeling Personalized Affective
Responses to News | null | null | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches to emotion detection often overlook the inherent
subjectivity of affective experiences, instead relying on aggregated labels
that mask individual variations in emotional responses. We introduce iNews, a
novel large-scale dataset explicitly capturing subjective affective responses
to news headlines. Our dataset comprises annotations from 291 demographically
diverse UK participants across 2,899 multimodal Facebook news posts from major
UK outlets, with an average of 5.18 annotators per sample. For each post,
annotators provide multifaceted labels including valence, arousal, dominance,
discrete emotions, content relevance judgments, sharing likelihood, and
modality importance ratings (text, image, or both). Furthermore, we collect
comprehensive annotator persona information covering demographics, personality,
media trust, and consumption patterns, which explain 15.2% of annotation
variance - higher than existing NLP datasets. Incorporating this information
yields a 7% accuracy gain in zero-shot prediction and remains beneficial even
with 32-shot. iNews will enhance research in LLM personalization, subjectivity,
affective computing, and individual-level behavior simulation.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:09:53 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Hu",
"Tiancheng",
""
],
[
"Collier",
"Nigel",
""
]
]
| TITLE: iNews: A Multimodal Dataset for Modeling Personalized Affective
Responses to News
ABSTRACT: Current approaches to emotion detection often overlook the inherent
subjectivity of affective experiences, instead relying on aggregated labels
that mask individual variations in emotional responses. We introduce iNews, a
novel large-scale dataset explicitly capturing subjective affective responses
to news headlines. Our dataset comprises annotations from 291 demographically
diverse UK participants across 2,899 multimodal Facebook news posts from major
UK outlets, with an average of 5.18 annotators per sample. For each post,
annotators provide multifaceted labels including valence, arousal, dominance,
discrete emotions, content relevance judgments, sharing likelihood, and
modality importance ratings (text, image, or both). Furthermore, we collect
comprehensive annotator persona information covering demographics, personality,
media trust, and consumption patterns, which explain 15.2% of annotation
variance - higher than existing NLP datasets. Incorporating this information
yields a 7% accuracy gain in zero-shot prediction and remains beneficial even
with 32-shot. iNews will enhance research in LLM personalization, subjectivity,
affective computing, and individual-level behavior simulation.
| new_dataset | 0.956391 |
2503.03338 | Pedram Asef | Alexandre Benoit and Pedram Asef | Navigating Intelligence: A Survey of Google OR-Tools and Machine
Learning for Global Path Planning in Autonomous Vehicles | null | null | 10.1002/aisy.202300840 | null | cs.RO cs.AI cs.CE eess.SP | http://creativecommons.org/licenses/by/4.0/ | We offer a new in-depth investigation of global path planning (GPP) for
unmanned ground vehicles, an autonomous mining sampling robot named ROMIE. GPP
is essential for ROMIE's optimal performance, which is translated into solving
the traveling salesman problem, a complex graph theory challenge that is
crucial for determining the most effective route to cover all sampling
locations in a mining field. This problem is central to enhancing ROMIE's
operational efficiency and competitiveness against human labor by optimizing
cost and time. The primary aim of this research is to advance GPP by
developing, evaluating, and improving a cost-efficient software and web
application. We delve into an extensive comparison and analysis of Google
operations research (OR)-Tools optimization algorithms. Our study is driven by
the goal of applying and testing the limits of OR-Tools capabilities by
integrating Reinforcement Learning techniques for the first time. This enables
us to compare these methods with OR-Tools, assessing their computational
effectiveness and real-world application efficiency. Our analysis seeks to
provide insights into the effectiveness and practical application of each
technique. Our findings indicate that Q-Learning stands out as the optimal
strategy, demonstrating superior efficiency by deviating only 1.2% on average
from the optimal solutions across our datasets.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:12:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Benoit",
"Alexandre",
""
],
[
"Asef",
"Pedram",
""
]
]
| TITLE: Navigating Intelligence: A Survey of Google OR-Tools and Machine
Learning for Global Path Planning in Autonomous Vehicles
ABSTRACT: We offer a new in-depth investigation of global path planning (GPP) for
unmanned ground vehicles, an autonomous mining sampling robot named ROMIE. GPP
is essential for ROMIE's optimal performance, which is translated into solving
the traveling salesman problem, a complex graph theory challenge that is
crucial for determining the most effective route to cover all sampling
locations in a mining field. This problem is central to enhancing ROMIE's
operational efficiency and competitiveness against human labor by optimizing
cost and time. The primary aim of this research is to advance GPP by
developing, evaluating, and improving a cost-efficient software and web
application. We delve into an extensive comparison and analysis of Google
operations research (OR)-Tools optimization algorithms. Our study is driven by
the goal of applying and testing the limits of OR-Tools capabilities by
integrating Reinforcement Learning techniques for the first time. This enables
us to compare these methods with OR-Tools, assessing their computational
effectiveness and real-world application efficiency. Our analysis seeks to
provide insights into the effectiveness and practical application of each
technique. Our findings indicate that Q-Learning stands out as the optimal
strategy, demonstrating superior efficiency by deviating only 1.2% on average
from the optimal solutions across our datasets.
| no_new_dataset | 0.932638 |
2503.03365 | Juan Miguel Valverde | Juan Miguel Valverde, Motoya Koga, Nijihiko Otsuka, Anders Bjorholm
Dahl | TopoMortar: A dataset to evaluate image segmentation methods focused on
topology accuracy | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present TopoMortar, a brick wall dataset that is the first dataset
specifically designed to evaluate topology-focused image segmentation methods,
such as topology loss functions. TopoMortar enables to investigate in two ways
whether methods incorporate prior topological knowledge. First, by eliminating
challenges seen in real-world data, such as small training set, noisy labels,
and out-of-distribution test-set images, that, as we show, impact the
effectiveness of topology losses. Second, by allowing to assess in the same
dataset topology accuracy across dataset challenges, isolating dataset-related
effects from the effect of incorporating prior topological knowledge. In these
two experiments, it is deliberately difficult to improve topology accuracy
without actually using topology information, thus, permitting to attribute an
improvement in topology accuracy to the incorporation of prior topological
knowledge. To this end, TopoMortar includes three types of labels (accurate,
noisy, pseudo-labels), two fixed training sets (large and small), and
in-distribution and out-of-distribution test-set images. We compared eight loss
functions on TopoMortar, and we found that clDice achieved the most
topologically accurate segmentations, Skeleton Recall loss performed best
particularly with noisy labels, and the relative advantageousness of the other
loss functions depended on the experimental setting. Additionally, we show that
simple methods, such as data augmentation and self-distillation, can elevate
Cross entropy Dice loss to surpass most topology loss functions, and that those
simple methods can enhance topology loss functions as well. clDice and Skeleton
Recall loss, both skeletonization-based loss functions, were also the fastest
to train, making this type of loss function a promising research direction.
TopoMortar and our code can be found at https://github.com/jmlipman/TopoMortar
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:42:41 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Valverde",
"Juan Miguel",
""
],
[
"Koga",
"Motoya",
""
],
[
"Otsuka",
"Nijihiko",
""
],
[
"Dahl",
"Anders Bjorholm",
""
]
]
| TITLE: TopoMortar: A dataset to evaluate image segmentation methods focused on
topology accuracy
ABSTRACT: We present TopoMortar, a brick wall dataset that is the first dataset
specifically designed to evaluate topology-focused image segmentation methods,
such as topology loss functions. TopoMortar enables to investigate in two ways
whether methods incorporate prior topological knowledge. First, by eliminating
challenges seen in real-world data, such as small training set, noisy labels,
and out-of-distribution test-set images, that, as we show, impact the
effectiveness of topology losses. Second, by allowing to assess in the same
dataset topology accuracy across dataset challenges, isolating dataset-related
effects from the effect of incorporating prior topological knowledge. In these
two experiments, it is deliberately difficult to improve topology accuracy
without actually using topology information, thus, permitting to attribute an
improvement in topology accuracy to the incorporation of prior topological
knowledge. To this end, TopoMortar includes three types of labels (accurate,
noisy, pseudo-labels), two fixed training sets (large and small), and
in-distribution and out-of-distribution test-set images. We compared eight loss
functions on TopoMortar, and we found that clDice achieved the most
topologically accurate segmentations, Skeleton Recall loss performed best
particularly with noisy labels, and the relative advantageousness of the other
loss functions depended on the experimental setting. Additionally, we show that
simple methods, such as data augmentation and self-distillation, can elevate
Cross entropy Dice loss to surpass most topology loss functions, and that those
simple methods can enhance topology loss functions as well. clDice and Skeleton
Recall loss, both skeletonization-based loss functions, were also the fastest
to train, making this type of loss function a promising research direction.
TopoMortar and our code can be found at https://github.com/jmlipman/TopoMortar
| no_new_dataset | 0.94545 |
2503.03367 | Xiaotong Zhang | Xiaotong Zhang, Alexander Broersen, Gonnie CM van Erp, Silvia L.
Pintea, Jouke Dijkstra | Top-K Maximum Intensity Projection Priors for 3D Liver Vessel
Segmentation | Accepted in 2025 IEEE International Symposium on Biomedical Imaging
(ISBI 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Liver-vessel segmentation is an essential task in the pre-operative planning
of liver resection. State-of-the-art 2D or 3D convolution-based methods
focusing on liver vessel segmentation on 2D CT cross-sectional views, which do
not take into account the global liver-vessel topology. To maintain this global
vessel topology, we rely on the underlying physics used in the CT
reconstruction process, and apply this to liver-vessel segmentation.
Concretely, we introduce the concept of top-k maximum intensity projections,
which mimics the CT reconstruction by replacing the integral along each
projection direction, with keeping the top-k maxima along each projection
direction. We use these top-k maximum projections to condition a diffusion
model and generate 3D liver-vessel trees. We evaluate our 3D liver-vessel
segmentation on the 3D-ircadb-01 dataset, and achieve the highest Dice
coefficient, intersection-over-union (IoU), and Sensitivity scores compared to
prior work.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:43:01 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhang",
"Xiaotong",
""
],
[
"Broersen",
"Alexander",
""
],
[
"van Erp",
"Gonnie CM",
""
],
[
"Pintea",
"Silvia L.",
""
],
[
"Dijkstra",
"Jouke",
""
]
]
| TITLE: Top-K Maximum Intensity Projection Priors for 3D Liver Vessel
Segmentation
ABSTRACT: Liver-vessel segmentation is an essential task in the pre-operative planning
of liver resection. State-of-the-art 2D or 3D convolution-based methods
focusing on liver vessel segmentation on 2D CT cross-sectional views, which do
not take into account the global liver-vessel topology. To maintain this global
vessel topology, we rely on the underlying physics used in the CT
reconstruction process, and apply this to liver-vessel segmentation.
Concretely, we introduce the concept of top-k maximum intensity projections,
which mimics the CT reconstruction by replacing the integral along each
projection direction, with keeping the top-k maxima along each projection
direction. We use these top-k maximum projections to condition a diffusion
model and generate 3D liver-vessel trees. We evaluate our 3D liver-vessel
segmentation on the 3D-ircadb-01 dataset, and achieve the highest Dice
coefficient, intersection-over-union (IoU), and Sensitivity scores compared to
prior work.
| no_new_dataset | 0.949435 |
2503.03373 | Jie Deng | Jie Deng, Fengtian Lang, Zikang Yuan and Xin Yang | Direct Sparse Odometry with Continuous 3D Gaussian Maps for Indoor
Environments | 7 pages,5 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate localization is essential for robotics and augmented reality
applications such as autonomous navigation. Vision-based methods combining
prior maps aim to integrate LiDAR-level accuracy with camera cost efficiency
for robust pose estimation. Existing approaches, however, often depend on
unreliable interpolation procedures when associating discrete point cloud maps
with dense image pixels, which inevitably introduces depth errors and degrades
pose estimation accuracy. We propose a monocular visual odometry framework
utilizing a continuous 3D Gaussian map, which directly assigns geometrically
consistent depth values to all extracted high-gradient points without
interpolation. Evaluations on two public datasets demonstrate superior tracking
accuracy compared to existing methods. We have released the source code of this
work for the development of the community.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 10:49:28 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Deng",
"Jie",
""
],
[
"Lang",
"Fengtian",
""
],
[
"Yuan",
"Zikang",
""
],
[
"Yang",
"Xin",
""
]
]
| TITLE: Direct Sparse Odometry with Continuous 3D Gaussian Maps for Indoor
Environments
ABSTRACT: Accurate localization is essential for robotics and augmented reality
applications such as autonomous navigation. Vision-based methods combining
prior maps aim to integrate LiDAR-level accuracy with camera cost efficiency
for robust pose estimation. Existing approaches, however, often depend on
unreliable interpolation procedures when associating discrete point cloud maps
with dense image pixels, which inevitably introduces depth errors and degrades
pose estimation accuracy. We propose a monocular visual odometry framework
utilizing a continuous 3D Gaussian map, which directly assigns geometrically
consistent depth values to all extracted high-gradient points without
interpolation. Evaluations on two public datasets demonstrate superior tracking
accuracy compared to existing methods. We have released the source code of this
work for the development of the community.
| no_new_dataset | 0.947088 |
2503.03399 | Hanyu Duan | Hanyu Duan, Yi Yang, Ahmed Abbasi, Kar Yan Tam | Predicting Practically? Domain Generalization for Predictive Analytics
in Real-world Environments | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predictive machine learning models are widely used in customer relationship
management (CRM) to forecast customer behaviors and support decision-making.
However, the dynamic nature of customer behaviors often results in significant
distribution shifts between training data and serving data, leading to
performance degradation in predictive models. Domain generalization, which aims
to train models that can generalize to unseen environments without prior
knowledge of their distributions, has become a critical area of research. In
this work, we propose a novel domain generalization method tailored to handle
complex distribution shifts, encompassing both covariate and concept shifts.
Our method builds upon the Distributionally Robust Optimization framework,
optimizing model performance over a set of hypothetical worst-case
distributions rather than relying solely on the training data. Through
simulation experiments, we demonstrate the working mechanism of the proposed
method. We also conduct experiments on a real-world customer churn dataset, and
validate its effectiveness in both temporal and spatial generalization
settings. Finally, we discuss the broader implications of our method for
advancing Information Systems (IS) design research, particularly in building
robust predictive models for dynamic managerial environments.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 11:21:37 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Duan",
"Hanyu",
""
],
[
"Yang",
"Yi",
""
],
[
"Abbasi",
"Ahmed",
""
],
[
"Tam",
"Kar Yan",
""
]
]
| TITLE: Predicting Practically? Domain Generalization for Predictive Analytics
in Real-world Environments
ABSTRACT: Predictive machine learning models are widely used in customer relationship
management (CRM) to forecast customer behaviors and support decision-making.
However, the dynamic nature of customer behaviors often results in significant
distribution shifts between training data and serving data, leading to
performance degradation in predictive models. Domain generalization, which aims
to train models that can generalize to unseen environments without prior
knowledge of their distributions, has become a critical area of research. In
this work, we propose a novel domain generalization method tailored to handle
complex distribution shifts, encompassing both covariate and concept shifts.
Our method builds upon the Distributionally Robust Optimization framework,
optimizing model performance over a set of hypothetical worst-case
distributions rather than relying solely on the training data. Through
simulation experiments, we demonstrate the working mechanism of the proposed
method. We also conduct experiments on a real-world customer churn dataset, and
validate its effectiveness in both temporal and spatial generalization
settings. Finally, we discuss the broader implications of our method for
advancing Information Systems (IS) design research, particularly in building
robust predictive models for dynamic managerial environments.
| no_new_dataset | 0.946695 |
2503.03410 | Nadia Brancati | Martina Russo, Giulia Bertolini, Vera Cappelletti, Cinzia De Marco,
Serena Di Cosimo, Petra Pai\`e and Nadia Brancati | Augmentation-Based Deep Learning for Identification of Circulating Tumor
Cells | 20 pages, 4 figures, 3 tables | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Circulating tumor cells (CTCs) are crucial biomarkers in liquid biopsy,
offering a noninvasive tool for cancer patient management. However, their
identification remains particularly challenging due to their limited number and
heterogeneity. Labeling samples for contrast limits the generalization of
fluorescence-based methods across different hospital datasets. Analyzing
single-cell images enables detailed assessment of cell morphology, subcellular
structures, and phenotypic variations, often hidden in clustered images.
Developing a method based on bright-field single-cell analysis could overcome
these limitations. CTCs can be isolated using an unbiased workflow combining
Parsortix technology, which selects cells based on size and deformability, with
DEPArray technology, enabling precise visualization and selection of single
cells. Traditionally, DEPArray-acquired digital images are manually analyzed,
making the process time-consuming and prone to variability. In this study, we
present a Deep Learning-based classification pipeline designed to distinguish
CTCs from leukocytes in blood samples, aimed to enhance diagnostic accuracy and
optimize clinical workflows. Our approach employs images from the bright-field
channel acquired through DEPArray technology leveraging a ResNet-based CNN. To
improve model generalization, we applied three types of data augmentation
techniques and incorporated fluorescence (DAPI) channel images into the
training phase, allowing the network to learn additional CTC-specific features.
Notably, only bright-field images have been used for testing, ensuring the
model's ability to identify CTCs without relying on fluorescence markers. The
proposed model achieved an F1-score of 0.798, demonstrating its capability to
distinguish CTCs from leukocytes. These findings highlight the potential of DL
in refining CTC analysis and advancing liquid biopsy applications.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 11:39:15 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Russo",
"Martina",
""
],
[
"Bertolini",
"Giulia",
""
],
[
"Cappelletti",
"Vera",
""
],
[
"De Marco",
"Cinzia",
""
],
[
"Di Cosimo",
"Serena",
""
],
[
"Paiè",
"Petra",
""
],
[
"Brancati",
"Nadia",
""
]
]
| TITLE: Augmentation-Based Deep Learning for Identification of Circulating Tumor
Cells
ABSTRACT: Circulating tumor cells (CTCs) are crucial biomarkers in liquid biopsy,
offering a noninvasive tool for cancer patient management. However, their
identification remains particularly challenging due to their limited number and
heterogeneity. Labeling samples for contrast limits the generalization of
fluorescence-based methods across different hospital datasets. Analyzing
single-cell images enables detailed assessment of cell morphology, subcellular
structures, and phenotypic variations, often hidden in clustered images.
Developing a method based on bright-field single-cell analysis could overcome
these limitations. CTCs can be isolated using an unbiased workflow combining
Parsortix technology, which selects cells based on size and deformability, with
DEPArray technology, enabling precise visualization and selection of single
cells. Traditionally, DEPArray-acquired digital images are manually analyzed,
making the process time-consuming and prone to variability. In this study, we
present a Deep Learning-based classification pipeline designed to distinguish
CTCs from leukocytes in blood samples, aimed to enhance diagnostic accuracy and
optimize clinical workflows. Our approach employs images from the bright-field
channel acquired through DEPArray technology leveraging a ResNet-based CNN. To
improve model generalization, we applied three types of data augmentation
techniques and incorporated fluorescence (DAPI) channel images into the
training phase, allowing the network to learn additional CTC-specific features.
Notably, only bright-field images have been used for testing, ensuring the
model's ability to identify CTCs without relying on fluorescence markers. The
proposed model achieved an F1-score of 0.798, demonstrating its capability to
distinguish CTCs from leukocytes. These findings highlight the potential of DL
in refining CTC analysis and advancing liquid biopsy applications.
| no_new_dataset | 0.952309 |
2503.03430 | Junhao Xu | Junhao Xu, Yanan Zhang, Zhi Cai, Di Huang | CoSDH: Communication-Efficient Collaborative Perception via
Supply-Demand Awareness and Intermediate-Late Hybridization | Accepted at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent collaborative perception enhances perceptual capabilities by
utilizing information from multiple agents and is considered a fundamental
solution to the problem of weak single-vehicle perception in autonomous
driving. However, existing collaborative perception methods face a dilemma
between communication efficiency and perception accuracy. To address this
issue, we propose a novel communication-efficient collaborative perception
framework based on supply-demand awareness and intermediate-late hybridization,
dubbed as \mymethodname. By modeling the supply-demand relationship between
agents, the framework refines the selection of collaboration regions, reducing
unnecessary communication cost while maintaining accuracy. In addition, we
innovatively introduce the intermediate-late hybrid collaboration mode, where
late-stage collaboration compensates for the performance degradation in
collaborative perception under low communication bandwidth. Extensive
experiments on multiple datasets, including both simulated and real-world
scenarios, demonstrate that \mymethodname~ achieves state-of-the-art detection
accuracy and optimal bandwidth trade-offs, delivering superior detection
precision under real communication bandwidths, thus proving its effectiveness
and practical applicability. The code will be released at
https://github.com/Xu2729/CoSDH.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 12:02:04 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Xu",
"Junhao",
""
],
[
"Zhang",
"Yanan",
""
],
[
"Cai",
"Zhi",
""
],
[
"Huang",
"Di",
""
]
]
| TITLE: CoSDH: Communication-Efficient Collaborative Perception via
Supply-Demand Awareness and Intermediate-Late Hybridization
ABSTRACT: Multi-agent collaborative perception enhances perceptual capabilities by
utilizing information from multiple agents and is considered a fundamental
solution to the problem of weak single-vehicle perception in autonomous
driving. However, existing collaborative perception methods face a dilemma
between communication efficiency and perception accuracy. To address this
issue, we propose a novel communication-efficient collaborative perception
framework based on supply-demand awareness and intermediate-late hybridization,
dubbed as \mymethodname. By modeling the supply-demand relationship between
agents, the framework refines the selection of collaboration regions, reducing
unnecessary communication cost while maintaining accuracy. In addition, we
innovatively introduce the intermediate-late hybrid collaboration mode, where
late-stage collaboration compensates for the performance degradation in
collaborative perception under low communication bandwidth. Extensive
experiments on multiple datasets, including both simulated and real-world
scenarios, demonstrate that \mymethodname~ achieves state-of-the-art detection
accuracy and optimal bandwidth trade-offs, delivering superior detection
precision under real communication bandwidths, thus proving its effectiveness
and practical applicability. The code will be released at
https://github.com/Xu2729/CoSDH.
| no_new_dataset | 0.949153 |
2503.03438 | Shijie Zhu | Shijie Zhu, Hui Zhao, Tianshu Wu, Pengjie Wang, Hongbo Deng, Jian Xu,
Bo Zheng | Gradient Deconfliction via Orthogonal Projections onto Subspaces For
Multi-task Learning | WSDM 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although multi-task learning (MTL) has been a preferred approach and
successfully applied in many real-world scenarios, MTL models are not
guaranteed to outperform single-task models on all tasks mainly due to the
negative effects of conflicting gradients among the tasks. In this paper, we
fully examine the influence of conflicting gradients and further emphasize the
importance and advantages of achieving non-conflicting gradients which allows
simple but effective trade-off strategies among the tasks with stable
performance. Based on our findings, we propose the Gradient Deconfliction via
Orthogonal Projections onto Subspaces (GradOPS) spanned by other task-specific
gradients. Our method not only solves all conflicts among the tasks, but can
also effectively search for diverse solutions towards different trade-off
preferences among the tasks. Theoretical analysis on convergence is provided,
and performance of our algorithm is fully testified on multiple benchmarks in
various domains. Results demonstrate that our method can effectively find
multiple state-of-the-art solutions with different trade-off strategies among
the tasks on multiple datasets.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 12:13:08 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Zhu",
"Shijie",
""
],
[
"Zhao",
"Hui",
""
],
[
"Wu",
"Tianshu",
""
],
[
"Wang",
"Pengjie",
""
],
[
"Deng",
"Hongbo",
""
],
[
"Xu",
"Jian",
""
],
[
"Zheng",
"Bo",
""
]
]
| TITLE: Gradient Deconfliction via Orthogonal Projections onto Subspaces For
Multi-task Learning
ABSTRACT: Although multi-task learning (MTL) has been a preferred approach and
successfully applied in many real-world scenarios, MTL models are not
guaranteed to outperform single-task models on all tasks mainly due to the
negative effects of conflicting gradients among the tasks. In this paper, we
fully examine the influence of conflicting gradients and further emphasize the
importance and advantages of achieving non-conflicting gradients which allows
simple but effective trade-off strategies among the tasks with stable
performance. Based on our findings, we propose the Gradient Deconfliction via
Orthogonal Projections onto Subspaces (GradOPS) spanned by other task-specific
gradients. Our method not only solves all conflicts among the tasks, but can
also effectively search for diverse solutions towards different trade-off
preferences among the tasks. Theoretical analysis on convergence is provided,
and performance of our algorithm is fully testified on multiple benchmarks in
various domains. Results demonstrate that our method can effectively find
multiple state-of-the-art solutions with different trade-off strategies among
the tasks on multiple datasets.
| no_new_dataset | 0.94079 |
2503.03444 | Eunkyung Choi | Eunkyung Choi, Young Jin Suh, Hun Park, Wonseok Hwang | Taxation Perspectives from Large Language Models: A Case Study on
Additional Tax Penalties | 5 pages | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | How capable are large language models (LLMs) in the domain of taxation?
Although numerous studies have explored the legal domain in general, research
dedicated to taxation remain scarce. Moreover, the datasets used in these
studies are either simplified, failing to reflect the real-world complexities,
or unavailable as open source. To address this gap, we introduce PLAT, a new
benchmark designed to assess the ability of LLMs to predict the legitimacy of
additional tax penalties. PLAT is constructed to evaluate LLMs' understanding
of tax law, particularly in cases where resolving the issue requires more than
just applying related statutes. Our experiments with six LLMs reveal that their
baseline capabilities are limited, especially when dealing with conflicting
issues that demand a comprehensive understanding. However, we found that
enabling retrieval, self-reasoning, and discussion among multiple agents with
specific role assignments, this limitation can be mitigated.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 12:24:20 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Choi",
"Eunkyung",
""
],
[
"Suh",
"Young Jin",
""
],
[
"Park",
"Hun",
""
],
[
"Hwang",
"Wonseok",
""
]
]
| TITLE: Taxation Perspectives from Large Language Models: A Case Study on
Additional Tax Penalties
ABSTRACT: How capable are large language models (LLMs) in the domain of taxation?
Although numerous studies have explored the legal domain in general, research
dedicated to taxation remain scarce. Moreover, the datasets used in these
studies are either simplified, failing to reflect the real-world complexities,
or unavailable as open source. To address this gap, we introduce PLAT, a new
benchmark designed to assess the ability of LLMs to predict the legitimacy of
additional tax penalties. PLAT is constructed to evaluate LLMs' understanding
of tax law, particularly in cases where resolving the issue requires more than
just applying related statutes. Our experiments with six LLMs reveal that their
baseline capabilities are limited, especially when dealing with conflicting
issues that demand a comprehensive understanding. However, we found that
enabling retrieval, self-reasoning, and discussion among multiple agents with
specific role assignments, this limitation can be mitigated.
| new_dataset | 0.76986 |
2503.03446 | Iris Dominguez Catena | Iris Dominguez-Catena, Daniel Paternain, Mikel Galar, MaryBeth
Defrance, Maarten Buyl, Tijl De Bie | Biased Heritage: How Datasets Shape Models in Facial Expression
Recognition | 17 pages, 7 figures | null | null | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the rapid development of artificial intelligence (AI)
systems has raised concerns about our ability to ensure their fairness, that
is, how to avoid discrimination based on protected characteristics such as
gender, race, or age. While algorithmic fairness is well-studied in simple
binary classification tasks on tabular data, its application to complex,
real-world scenarios-such as Facial Expression Recognition (FER)-remains
underexplored. FER presents unique challenges: it is inherently multiclass, and
biases emerge across intersecting demographic variables, each potentially
comprising multiple protected groups. We present a comprehensive framework to
analyze bias propagation from datasets to trained models in image-based FER
systems, while introducing new bias metrics specifically designed for
multiclass problems with multiple demographic groups. Our methodology studies
bias propagation by (1) inducing controlled biases in FER datasets, (2)
training models on these biased datasets, and (3) analyzing the correlation
between dataset bias metrics and model fairness notions. Our findings reveal
that stereotypical biases propagate more strongly to model predictions than
representational biases, suggesting that preventing emotion-specific
demographic patterns should be prioritized over general demographic balance in
FER datasets. Additionally, we observe that biased datasets lead to reduced
model accuracy, challenging the assumed fairness-accuracy trade-off.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 12:25:22 GMT"
}
]
| 2025-03-06T00:00:00 | [
[
"Dominguez-Catena",
"Iris",
""
],
[
"Paternain",
"Daniel",
""
],
[
"Galar",
"Mikel",
""
],
[
"Defrance",
"MaryBeth",
""
],
[
"Buyl",
"Maarten",
""
],
[
"De Bie",
"Tijl",
""
]
]
| TITLE: Biased Heritage: How Datasets Shape Models in Facial Expression
Recognition
ABSTRACT: In recent years, the rapid development of artificial intelligence (AI)
systems has raised concerns about our ability to ensure their fairness, that
is, how to avoid discrimination based on protected characteristics such as
gender, race, or age. While algorithmic fairness is well-studied in simple
binary classification tasks on tabular data, its application to complex,
real-world scenarios-such as Facial Expression Recognition (FER)-remains
underexplored. FER presents unique challenges: it is inherently multiclass, and
biases emerge across intersecting demographic variables, each potentially
comprising multiple protected groups. We present a comprehensive framework to
analyze bias propagation from datasets to trained models in image-based FER
systems, while introducing new bias metrics specifically designed for
multiclass problems with multiple demographic groups. Our methodology studies
bias propagation by (1) inducing controlled biases in FER datasets, (2)
training models on these biased datasets, and (3) analyzing the correlation
between dataset bias metrics and model fairness notions. Our findings reveal
that stereotypical biases propagate more strongly to model predictions than
representational biases, suggesting that preventing emotion-specific
demographic patterns should be prioritized over general demographic balance in
FER datasets. Additionally, we observe that biased datasets lead to reduced
model accuracy, challenging the assumed fairness-accuracy trade-off.
| no_new_dataset | 0.95018 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.