id (string, 9–16 chars) | submitter (string, 3–64 chars, nullable) | authors (string, 5–6.63k chars) | title (string, 7–245 chars) | comments (string, 1–482 chars, nullable) | journal-ref (string, 4–382 chars, nullable) | doi (string, 9–151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5–108 chars) | license (string, 9 classes) | abstract (string, 83–3.41k chars) | versions (list, length 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (list, length 1–427) | prompt (string, 166–3.49k chars) | label (string, 2 classes) | prob (float64, 0.5–0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
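The rows below follow the schema above. A minimal sketch of loading and inspecting such a dump with the Hugging Face `datasets` library (the repository path `user/arxiv-new-dataset-labels` is a hypothetical placeholder, not the actual dataset name):

```python
from datasets import load_dataset

# Hypothetical repository path -- substitute the real dataset name.
ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# The features should mirror the schema above: string, list,
# timestamp[s], and float64 columns.
print(ds.features)

# One row: `label` is one of two classes, `prob` is the classifier confidence.
row = ds[0]
print(row["id"], row["label"], row["prob"])
```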
2503.07317 | Jiho Lee | Jiho Lee, Hayun Lee, Jonghyeon Kim, Kyungjae Lee, and Eunwoo Kim | Self-Corrective Task Planning by Inverse Prompting with Large Language
Models | 7 pages, 5 figures, IEEE International Conference on Robotics and
Automation (ICRA) 2025 | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In robot task planning, large language models (LLMs) have shown significant
promise in generating complex and long-horizon action sequences. However, it is
observed that LLMs often produce responses that sound plausible but are not
accurate. To address these problems, existing methods typically employ
predefined error sets or external knowledge sources, requiring human effort
and computational resources. Recently, self-correction approaches have emerged,
where an LLM generates and refines plans, identifying errors by itself. Despite
their effectiveness, they are more prone to failures in correction due to
insufficient reasoning. In this paper, we introduce InversePrompt, a novel
self-corrective task planning approach that leverages inverse prompting to
enhance interpretability. Our method incorporates reasoning steps to provide
clear, interpretable feedback. It generates inverse actions corresponding to
the initially generated actions and verifies whether these inverse actions can
restore the system to its original state, explicitly validating the logical
coherence of the generated plans. The results on benchmark datasets show an
average 16.3% higher success rate over existing LLM-based task planning
methods. Our approach offers clearer justifications for feedback in real-world
environments, resulting in more successful task completion than existing
self-correction approaches across various scenarios.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:35:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Lee",
"Jiho",
""
],
[
"Lee",
"Hayun",
""
],
[
"Kim",
"Jonghyeon",
""
],
[
"Lee",
"Kyungjae",
""
],
[
"Kim",
"Eunwoo",
""
]
]
| TITLE: Self-Corrective Task Planning by Inverse Prompting with Large Language
Models
ABSTRACT: In robot task planning, large language models (LLMs) have shown significant
promise in generating complex and long-horizon action sequences. However, it is
observed that LLMs often produce responses that sound plausible but are not
accurate. To address these problems, existing methods typically employ
predefined error sets or external knowledge sources, requiring human effort
and computational resources. Recently, self-correction approaches have emerged,
where an LLM generates and refines plans, identifying errors by itself. Despite
their effectiveness, they are more prone to failures in correction due to
insufficient reasoning. In this paper, we introduce InversePrompt, a novel
self-corrective task planning approach that leverages inverse prompting to
enhance interpretability. Our method incorporates reasoning steps to provide
clear, interpretable feedback. It generates inverse actions corresponding to
the initially generated actions and verifies whether these inverse actions can
restore the system to its original state, explicitly validating the logical
coherence of the generated plans. The results on benchmark datasets show an
average 16.3% higher success rate over existing LLM-based task planning
methods. Our approach offers clearer justifications for feedback in real-world
environments, resulting in more successful task completion than existing
self-correction approaches across various scenarios.
| no_new_dataset | 0.947137 |
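The record above also shows how the `prompt` column is derived: it concatenates the `title` and `abstract` fields under `TITLE:` and `ABSTRACT:` headers. A minimal sketch of that construction, assuming each row is exposed as a Python dict keyed by the column names (the exact whitespace of the real prompts may differ):

```python
def build_prompt(row: dict) -> str:
    # Layout observed in the dump: "TITLE: <title>\nABSTRACT: <abstract>".
    return f"TITLE: {row['title']}\nABSTRACT: {row['abstract']}"

example = {
    "title": "Self-Corrective Task Planning by Inverse Prompting with Large Language Models",
    "abstract": "In robot task planning, large language models (LLMs) have shown significant promise ...",
}
print(build_prompt(example))
```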
2503.07323 | Yubo Zhao | Yubo Zhao, Qi Wu, Yifan Wang, Yu-Wing Tai, Chi-Keung Tang | Dynamic Path Navigation for Motion Agents with LLM Reasoning | null | null | null | null | cs.RO cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated strong generalizable reasoning
and planning capabilities. However, their efficacies in spatial path planning
and obstacle-free trajectory generation remain underexplored. Leveraging LLMs
for navigation holds significant potential, given LLMs' ability to handle
unseen scenarios, support user-agent interactions, and provide global control
across complex systems, making them well-suited for agentic planning and
humanoid motion generation. As one of the first studies in this domain, we
explore the zero-shot navigation and path generation capabilities of LLMs by
constructing a dataset and proposing an evaluation protocol. Specifically, we
represent paths using anchor points connected by straight lines, enabling
movement in various directions. This approach offers greater flexibility and
practicality compared to previous methods while remaining simple and intuitive
for LLMs. We demonstrate that, when tasks are well-structured in this manner,
modern LLMs exhibit substantial planning proficiency in avoiding obstacles
while autonomously refining navigation with the generated motion to reach the
target. Further, this spatial reasoning ability of a single LLM motion agent
interacting in a static environment can be seamlessly generalized to
multi-motion-agent coordination in dynamic environments. Unlike traditional
approaches that rely on single-step planning or local policies, our
training-free LLM-based method enables global, dynamic, closed-loop planning
and autonomously resolves collision issues.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:39:09 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhao",
"Yubo",
""
],
[
"Wu",
"Qi",
""
],
[
"Wang",
"Yifan",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
]
| TITLE: Dynamic Path Navigation for Motion Agents with LLM Reasoning
ABSTRACT: Large Language Models (LLMs) have demonstrated strong generalizable reasoning
and planning capabilities. However, their efficacies in spatial path planning
and obstacle-free trajectory generation remain underexplored. Leveraging LLMs
for navigation holds significant potential, given LLMs' ability to handle
unseen scenarios, support user-agent interactions, and provide global control
across complex systems, making them well-suited for agentic planning and
humanoid motion generation. As one of the first studies in this domain, we
explore the zero-shot navigation and path generation capabilities of LLMs by
constructing a dataset and proposing an evaluation protocol. Specifically, we
represent paths using anchor points connected by straight lines, enabling
movement in various directions. This approach offers greater flexibility and
practicality compared to previous methods while remaining simple and intuitive
for LLMs. We demonstrate that, when tasks are well-structured in this manner,
modern LLMs exhibit substantial planning proficiency in avoiding obstacles
while autonomously refining navigation with the generated motion to reach the
target. Further, this spatial reasoning ability of a single LLM motion agent
interacting in a static environment can be seamlessly generalized to
multi-motion-agent coordination in dynamic environments. Unlike traditional
approaches that rely on single-step planning or local policies, our
training-free LLM-based method enables global, dynamic, closed-loop planning
and autonomously resolves collision issues.
| no_new_dataset | 0.838548 |
2503.07325 | Khoat Than | Khoat Than, Dat Phan | Non-vacuous Generalization Bounds for Deep Neural Networks without any
modification to the trained models | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | Deep neural networks (NNs) with millions or billions of parameters can perform
remarkably well on unseen data after being trained on a finite training set.
Various prior theories have been developed to explain this excellent ability of
NNs, but do not provide a meaningful bound on the test error. Some recent
theories, based on PAC-Bayes and mutual information, are non-vacuous and hence
show a great potential to explain the excellent performance of NNs. However,
they often require a stringent assumption and extensive modification (e.g.
compression, quantization) to the trained model of interest. Therefore, those
prior theories provide a guarantee for the modified versions only. In this
paper, we propose two novel bounds on the test error of a model. Our bounds
use the training set only and require no modification to the model. These
bounds are verified on a large class of modern NNs pretrained with PyTorch on
the ImageNet dataset, and are non-vacuous. To the best of our knowledge, these
are the first non-vacuous bounds at this large scale, without any modification
to the pretrained models.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:40:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Than",
"Khoat",
""
],
[
"Phan",
"Dat",
""
]
]
| TITLE: Non-vacuous Generalization Bounds for Deep Neural Networks without any
modification to the trained models
ABSTRACT: Deep neural networks (NNs) with millions or billions of parameters can perform
remarkably well on unseen data after being trained on a finite training set.
Various prior theories have been developed to explain this excellent ability of
NNs, but do not provide a meaningful bound on the test error. Some recent
theories, based on PAC-Bayes and mutual information, are non-vacuous and hence
show a great potential to explain the excellent performance of NNs. However,
they often require a stringent assumption and extensive modification (e.g.
compression, quantization) to the trained model of interest. Therefore, those
prior theories provide a guarantee for the modified versions only. In this
paper, we propose two novel bounds on the test error of a model. Our bounds
use the training set only and require no modification to the model. These
bounds are verified on a large class of modern NNs pretrained with PyTorch on
the ImageNet dataset, and are non-vacuous. To the best of our knowledge, these
are the first non-vacuous bounds at this large scale, without any modification
to the pretrained models.
| no_new_dataset | 0.948489 |
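As the row above illustrates, `authors_parsed` stores one `[last, first, suffix]` triple per author. A minimal sketch of turning those triples back into display names (`format_authors` is an illustrative helper, not part of the dataset):

```python
def format_authors(authors_parsed: list[list[str]]) -> str:
    # Each entry is [last, first, suffix]; the suffix is usually empty.
    names = []
    for last, first, suffix in authors_parsed:
        name = f"{first} {last}".strip()
        if suffix:
            name = f"{name} {suffix}"
        names.append(name)
    return ", ".join(names)

print(format_authors([["Than", "Khoat", ""], ["Phan", "Dat", ""]]))
# -> "Khoat Than, Dat Phan"
```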
2503.07330 | Changshun Wu | Weicheng He, Changshun Wu, Chih-Hong Cheng, Xiaowei Huang, Saddek
Bensalem | Mitigating Hallucinations in YOLO-based Object Detection Models: A
Revisit to Out-of-Distribution Detection | null | null | null | null | cs.CV cs.AI cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object detection systems must reliably perceive objects of interest without
being overly confident to ensure safe decision-making in dynamic environments.
Filtering techniques based on out-of-distribution (OoD) detection are commonly
added as an extra safeguard to filter hallucinations caused by overconfidence
in novel objects. Nevertheless, evaluating YOLO-family detectors and their
filters under existing OoD benchmarks often leads to unsatisfactory
performance. This paper studies the underlying reasons for performance
bottlenecks and proposes a methodology to improve performance fundamentally.
Our first contribution is a calibration of all existing evaluation results:
Although images in existing OoD benchmark datasets are claimed not to have
objects within in-distribution (ID) classes (i.e., categories defined in the
training dataset), around 13% of objects detected by the object detector are
actually ID objects. Dually, the ID dataset containing OoD objects can also
negatively impact the decision boundary of filters. These ultimately lead to a
significantly imprecise performance estimation. Our second contribution is to
consider the task of hallucination reduction as a joint pipeline of detectors
and filters. By developing a methodology to carefully synthesize an OoD dataset
that semantically resembles the objects to be detected, and using the crafted
OoD dataset in the fine-tuning of YOLO detectors to suppress the objectness
score, we achieve an 88% reduction in overall hallucination error with a
combined fine-tuned detection and filtering system on the self-driving
benchmark BDD-100K. Our code and dataset are available at:
https://gricad-gitlab.univ-grenoble-alpes.fr/dnn-safety/m-hood.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 13:42:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"He",
"Weicheng",
""
],
[
"Wu",
"Changshun",
""
],
[
"Cheng",
"Chih-Hong",
""
],
[
"Huang",
"Xiaowei",
""
],
[
"Bensalem",
"Saddek",
""
]
]
| TITLE: Mitigating Hallucinations in YOLO-based Object Detection Models: A
Revisit to Out-of-Distribution Detection
ABSTRACT: Object detection systems must reliably perceive objects of interest without
being overly confident to ensure safe decision-making in dynamic environments.
Filtering techniques based on out-of-distribution (OoD) detection are commonly
added as an extra safeguard to filter hallucinations caused by overconfidence
in novel objects. Nevertheless, evaluating YOLO-family detectors and their
filters under existing OoD benchmarks often leads to unsatisfactory
performance. This paper studies the underlying reasons for performance
bottlenecks and proposes a methodology to improve performance fundamentally.
Our first contribution is a calibration of all existing evaluation results:
Although images in existing OoD benchmark datasets are claimed not to have
objects within in-distribution (ID) classes (i.e., categories defined in the
training dataset), around 13% of objects detected by the object detector are
actually ID objects. Dually, the ID dataset containing OoD objects can also
negatively impact the decision boundary of filters. These ultimately lead to a
significantly imprecise performance estimation. Our second contribution is to
consider the task of hallucination reduction as a joint pipeline of detectors
and filters. By developing a methodology to carefully synthesize an OoD dataset
that semantically resembles the objects to be detected, and using the crafted
OoD dataset in the fine-tuning of YOLO detectors to suppress the objectness
score, we achieve an 88% reduction in overall hallucination error with a
combined fine-tuned detection and filtering system on the self-driving
benchmark BDD-100K. Our code and dataset are available at:
https://gricad-gitlab.univ-grenoble-alpes.fr/dnn-safety/m-hood.
| no_new_dataset | 0.91611 |
2503.07348 | Sebastian Stricker | Christoph Karg, Sebastian Stricker, Lisa Hutschenreiter, Bogdan
Savchynskyy, Dagmar Kainmueller | Fully Unsupervised Annotation of C. Elegans | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present a novel approach for unsupervised multi-graph
matching, which applies to problems for which a Gaussian distribution of
keypoint features can be assumed. We leverage cycle consistency as a loss for
self-supervised learning, and determine Gaussian parameters through Bayesian
Optimization, yielding a highly efficient approach that scales to large
datasets. Our fully unsupervised approach enables us to reach the accuracy of
state-of-the-art supervised methodology for the use case of annotating cell
nuclei in 3D microscopy images of the worm C. elegans. To this end, our
approach yields the first unsupervised atlas of C. elegans, i.e. a model of the
joint distribution of all of its cell nuclei, without the need for any ground
truth cell annotation. This advancement enables highly efficient annotation of
cell nuclei in large microscopy datasets of C. elegans. Beyond C. elegans, our
approach offers fully unsupervised construction of cell-level atlases for any
model organism with a stereotyped cell lineage, and thus bears the potential to
catalyze respective comparative developmental studies in a range of further
species.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:03:18 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Karg",
"Christoph",
""
],
[
"Stricker",
"Sebastian",
""
],
[
"Hutschenreiter",
"Lisa",
""
],
[
"Savchynskyy",
"Bogdan",
""
],
[
"Kainmueller",
"Dagmar",
""
]
]
| TITLE: Fully Unsupervised Annotation of C. Elegans
ABSTRACT: In this work we present a novel approach for unsupervised multi-graph
matching, which applies to problems for which a Gaussian distribution of
keypoint features can be assumed. We leverage cycle consistency as a loss for
self-supervised learning, and determine Gaussian parameters through Bayesian
Optimization, yielding a highly efficient approach that scales to large
datasets. Our fully unsupervised approach enables us to reach the accuracy of
state-of-the-art supervised methodology for the use case of annotating cell
nuclei in 3D microscopy images of the worm C. elegans. To this end, our
approach yields the first unsupervised atlas of C. elegans, i.e. a model of the
joint distribution of all of its cell nuclei, without the need for any ground
truth cell annotation. This advancement enables highly efficient annotation of
cell nuclei in large microscopy datasets of C. elegans. Beyond C. elegans, our
approach offers fully unsupervised construction of cell-level atlases for any
model organism with a stereotyped cell lineage, and thus bears the potential to
catalyze respective comparative developmental studies in a range of further
species.
| no_new_dataset | 0.948822 |
2503.07352 | Eetu Tunturi | Eetu Tunturi, David Diaz-Guerra, Archontis Politis, Tuomas Virtanen | Score-informed Music Source Separation: Improving Synthetic-to-real
Generalization in Classical Music | 5 pages, 2 figures, submitted to Eusipco2025 | null | null | null | eess.AS cs.LG cs.SD | http://creativecommons.org/licenses/by/4.0/ | Music source separation is the task of separating a mixture of instruments
into constituent tracks. Music source separation models are typically trained
using only audio data, although additional information can be used to improve
the model's separation capability. In this paper, we propose two ways of using
musical scores to aid music source separation: a score-informed model where the
score is concatenated with the magnitude spectrogram of the audio mixture as
the input of the model, and a model where we use only the score to calculate
the separation mask. We train our models on synthetic data in the SynthSOD
dataset and evaluate our methods on the URMP and Aalto anechoic orchestra
datasets, which consist of real recordings. The score-informed model improves
separation results compared to a baseline approach, but struggles to generalize
from synthetic to real data, whereas the score-only model shows a clear
improvement in synthetic-to-real generalization.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:08:31 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tunturi",
"Eetu",
""
],
[
"Diaz-Guerra",
"David",
""
],
[
"Politis",
"Archontis",
""
],
[
"Virtanen",
"Tuomas",
""
]
]
| TITLE: Score-informed Music Source Separation: Improving Synthetic-to-real
Generalization in Classical Music
ABSTRACT: Music source separation is the task of separating a mixture of instruments
into constituent tracks. Music source separation models are typically trained
using only audio data, although additional information can be used to improve
the model's separation capability. In this paper, we propose two ways of using
musical scores to aid music source separation: a score-informed model where the
score is concatenated with the magnitude spectrogram of the audio mixture as
the input of the model, and a model where we use only the score to calculate
the separation mask. We train our models on synthetic data in the SynthSOD
dataset and evaluate our methods on the URMP and Aalto anechoic orchestra
datasets, which consist of real recordings. The score-informed model improves
separation results compared to a baseline approach, but struggles to generalize
from synthetic to real data, whereas the score-only model shows a clear
improvement in synthetic-to-real generalization.
| no_new_dataset | 0.954393 |
2503.07353 | Yaroslava Lochman | Carl Olsson, Yaroslava Lochman, Johan Malmport, Christopher Zach | Certifiably Optimal Anisotropic Rotation Averaging | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rotation averaging is a key subproblem in applications of computer vision and
robotics. Many methods for solving this problem exist, and there are also
several theoretical results analyzing difficulty and optimality. However, one
aspect that most of these have in common is a focus on the isotropic setting,
where the intrinsic uncertainties in the measurements are not fully
incorporated into the resulting optimization task. Recent empirical results
suggest that moving to an anisotropic framework, where these uncertainties are
explicitly included, can result in an improvement of solution quality. However,
global optimization for rotation averaging has remained a challenge in this
scenario. In this paper we show how anisotropic costs can be incorporated in
certifiably optimal rotation averaging. We also demonstrate how existing
solvers, designed for isotropic situations, fail in the anisotropic setting.
Finally, we propose a stronger relaxation and show empirically that it is able
to recover global optima in all tested datasets and leads to a more accurate
reconstruction in all but one of the scenes.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:09:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Olsson",
"Carl",
""
],
[
"Lochman",
"Yaroslava",
""
],
[
"Malmport",
"Johan",
""
],
[
"Zach",
"Christopher",
""
]
]
| TITLE: Certifiably Optimal Anisotropic Rotation Averaging
ABSTRACT: Rotation averaging is a key subproblem in applications of computer vision and
robotics. Many methods for solving this problem exist, and there are also
several theoretical results analyzing difficulty and optimality. However, one
aspect that most of these have in common is a focus on the isotropic setting,
where the intrinsic uncertainties in the measurements are not fully
incorporated into the resulting optimization task. Recent empirical results
suggest that moving to an anisotropic framework, where these uncertainties are
explicitly included, can result in an improvement of solution quality. However,
global optimization for rotation averaging has remained a challenge in this
scenario. In this paper we show how anisotropic costs can be incorporated in
certifiably optimal rotation averaging. We also demonstrate how existing
solvers, designed for isotropic situations, fail in the anisotropic setting.
Finally, we propose a stronger relaxation and show empirically that it is able
to recover global optima in all tested datasets and leads to a more accurate
reconstruction in all but one of the scenes.
| no_new_dataset | 0.947527 |
2503.07358 | Yiqing Xie | Yiqing Xie, Alex Xie, Divyanshu Sheth, Pengfei Liu, Daniel Fried,
Carolyn Rose | RepoST: Scalable Repository-Level Coding Environment Construction with
Sandbox Testing | null | null | null | null | cs.CL cs.SE | http://creativecommons.org/licenses/by/4.0/ | We present RepoST, a scalable method to construct environments that provide
execution feedback for repository-level code generation for both training and
evaluation. Unlike existing works that aim to build entire repositories for
execution, which is challenging for both humans and LLMs, we provide execution
feedback with sandbox testing, which isolates a given target function and its
dependencies to a separate script for testing. Sandbox testing reduces the
complexity of external dependencies and enables constructing environments at a
large scale. We use our method to construct RepoST-Train, a large-scale train
set with 7,415 functions from 832 repositories. Training with the execution
feedback provided by RepoST-Train leads to a performance gain of 5.5% Pass@1 on
HumanEval and 3.5% Pass@1 on RepoEval. We also build an evaluation dataset,
RepoST-Eval, and benchmark 12 code generation models.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:16:08 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xie",
"Yiqing",
""
],
[
"Xie",
"Alex",
""
],
[
"Sheth",
"Divyanshu",
""
],
[
"Liu",
"Pengfei",
""
],
[
"Fried",
"Daniel",
""
],
[
"Rose",
"Carolyn",
""
]
]
| TITLE: RepoST: Scalable Repository-Level Coding Environment Construction with
Sandbox Testing
ABSTRACT: We present RepoST, a scalable method to construct environments that provide
execution feedback for repository-level code generation for both training and
evaluation. Unlike existing works that aim to build entire repositories for
execution, which is challenging for both humans and LLMs, we provide execution
feedback with sandbox testing, which isolates a given target function and its
dependencies to a separate script for testing. Sandbox testing reduces the
complexity of external dependencies and enables constructing environments at a
large scale. We use our method to construct RepoST-Train, a large-scale train
set with 7,415 functions from 832 repositories. Training with the execution
feedback provided by RepoST-Train leads to a performance gain of 5.5% Pass@1 on
HumanEval and 3.5% Pass@1 on RepoEval. We also build an evaluation dataset,
RepoST-Eval, and benchmark 12 code generation models.
| new_dataset | 0.950641 |
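Rows such as the one above pair a `label` (`new_dataset` or `no_new_dataset`) with a classifier confidence `prob` in roughly the 0.5–0.98 range. A minimal sketch of filtering for high-confidence `new_dataset` rows, assuming `ds` was loaded as in the earlier snippet (the 0.9 threshold is an arbitrary choice):

```python
# Keep only rows labeled as introducing a new dataset with high confidence.
high_conf = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)
print(len(high_conf), "high-confidence new-dataset papers")

# Show a few matching titles.
for r in high_conf.select(range(min(3, len(high_conf)))):
    print(r["id"], r["title"])
```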
2503.07360 | Yi-Lin Wei | Yi-Lin Wei, Mu Lin, Yuhao Lin, Jian-Jian Jiang, Xiao-Ming Wu, Ling-An
Zeng, Wei-Shi Zheng | AffordDexGrasp: Open-set Language-guided Dexterous Grasp with
Generalizable-Instructive Affordance | 8 pages, 4 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Language-guided robot dexterous generation enables robots to grasp and
manipulate objects based on human commands. However, previous data-driven
methods struggle to understand intention and execute grasping with unseen
categories in the open set. In this work, we explore a new task, Open-set
Language-guided Dexterous Grasp, and find that the main challenge is the huge
gap between high-level human language semantics and low-level robot actions. To
solve this problem, we propose an Affordance Dexterous Grasp (AffordDexGrasp)
framework, with the insight of bridging the gap with a new
generalizable-instructive affordance representation. This affordance can
generalize to unseen categories by leveraging the object's local structure and
category-agnostic semantic attributes, thereby effectively guiding dexterous
grasp generation. Built upon the affordance, our framework introduces
Affordance Flow Matching (AFM) for affordance generation with language as
input, and Grasp Flow Matching (GFM) for generating dexterous grasp with
affordance as input. To evaluate our framework, we build an open-set table-top
language-guided dexterous grasp dataset. Extensive experiments in both
simulation and the real world show that our framework surpasses all previous
methods in open-set generalization.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:17:07 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Wei",
"Yi-Lin",
""
],
[
"Lin",
"Mu",
""
],
[
"Lin",
"Yuhao",
""
],
[
"Jiang",
"Jian-Jian",
""
],
[
"Wu",
"Xiao-Ming",
""
],
[
"Zeng",
"Ling-An",
""
],
[
"Zheng",
"Wei-Shi",
""
]
]
| TITLE: AffordDexGrasp: Open-set Language-guided Dexterous Grasp with
Generalizable-Instructive Affordance
ABSTRACT: Language-guided robot dexterous generation enables robots to grasp and
manipulate objects based on human commands. However, previous data-driven
methods struggle to understand intention and execute grasping with unseen
categories in the open set. In this work, we explore a new task, Open-set
Language-guided Dexterous Grasp, and find that the main challenge is the huge
gap between high-level human language semantics and low-level robot actions. To
solve this problem, we propose an Affordance Dexterous Grasp (AffordDexGrasp)
framework, with the insight of bridging the gap with a new
generalizable-instructive affordance representation. This affordance can
generalize to unseen categories by leveraging the object's local structure and
category-agnostic semantic attributes, thereby effectively guiding dexterous
grasp generation. Built upon the affordance, our framework introduces
Affordance Flow Matching (AFM) for affordance generation with language as
input, and Grasp Flow Matching (GFM) for generating dexterous grasp with
affordance as input. To evaluate our framework, we build an open-set table-top
language-guided dexterous grasp dataset. Extensive experiments in both
simulation and the real world show that our framework surpasses all previous
methods in open-set generalization.
| no_new_dataset | 0.921922 |
2503.07367 | Kangan Qian | Kangan Qian and Jinyu Miao and Ziang Luo and Zheng Fu and Jinchen
Li and Yining Shi and Yunlong Wang and Kun Jiang and Mengmeng Yang and Diange
Yang | LEGO-Motion: Learning-Enhanced Grids with Occupancy Instance Modeling
for Class-Agnostic Motion Prediction | 8 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and reliable spatial and motion information plays a pivotal role in
autonomous driving systems. However, object-level perception models struggle
with handling open scenario categories and lack precise intrinsic geometry. On
the other hand, occupancy-based class-agnostic methods excel in representing
scenes but fail to ensure physics consistency and ignore the importance of
interactions between traffic participants, hindering the model's ability to
learn accurate and reliable motion. In this paper, we introduce a novel
occupancy-instance modeling framework for class-agnostic motion prediction
tasks, named LEGO-Motion, which incorporates instance features into Bird's Eye
View (BEV) space. Our model comprises (1) a BEV encoder, (2) an
Interaction-Augmented Instance Encoder, and (3) an Instance-Enhanced BEV
Encoder, improving both interaction relationships and physics consistency
within the model, thereby ensuring a more accurate and robust understanding of
the environment. Extensive experiments on the nuScenes dataset demonstrate that
our method achieves state-of-the-art performance, outperforming existing
approaches. Furthermore, the effectiveness of our framework is validated on the
advanced FMCW LiDAR benchmark, showcasing its practical applicability and
generalization capabilities. The code will be made publicly available to
facilitate further research.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:26:21 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Qian",
"Kangan",
""
],
[
"Miao",
"Jinyu",
""
],
[
"Luo",
"Ziang",
""
],
[
"Fu",
"Zheng",
""
],
[
"Li",
"and Jinchen",
""
],
[
"Shi",
"Yining",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Jiang",
"Kun",
""
],
[
"Yang",
"Mengmeng",
""
],
[
"Yang",
"Diange",
""
]
]
| TITLE: LEGO-Motion: Learning-Enhanced Grids with Occupancy Instance Modeling
for Class-Agnostic Motion Prediction
ABSTRACT: Accurate and reliable spatial and motion information plays a pivotal role in
autonomous driving systems. However, object-level perception models struggle
with handling open scenario categories and lack precise intrinsic geometry. On
the other hand, occupancy-based class-agnostic methods excel in representing
scenes but fail to ensure physics consistency and ignore the importance of
interactions between traffic participants, hindering the model's ability to
learn accurate and reliable motion. In this paper, we introduce a novel
occupancy-instance modeling framework for class-agnostic motion prediction
tasks, named LEGO-Motion, which incorporates instance features into Bird's Eye
View (BEV) space. Our model comprises (1) a BEV encoder, (2) an
Interaction-Augmented Instance Encoder, and (3) an Instance-Enhanced BEV
Encoder, improving both interaction relationships and physics consistency
within the model, thereby ensuring a more accurate and robust understanding of
the environment. Extensive experiments on the nuScenes dataset demonstrate that
our method achieves state-of-the-art performance, outperforming existing
approaches. Furthermore, the effectiveness of our framework is validated on the
advanced FMCW LiDAR benchmark, showcasing its practical applicability and
generalization capabilities. The code will be made publicly available to
facilitate further research.
| no_new_dataset | 0.946151 |
2503.07375 | Robert Hallyburton | R. Spencer Hallyburton, David Hunt, Yiwei He, Judy He, Miroslav Pajic | Probabilistic Segmentation for Robust Field of View Estimation | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Attacks on sensing and perception threaten the safe deployment of autonomous
vehicles (AVs). Security-aware sensor fusion helps mitigate threats but
requires accurate field of view (FOV) estimation, which has not been evaluated
in the context of autonomy. To address this gap, we adapt classical computer graphics algorithms
to develop the first autonomy-relevant FOV estimators and create the first
datasets with ground truth FOV labels. Unfortunately, we find that these
approaches are themselves highly vulnerable to attacks on sensing. To improve
robustness of FOV estimation against attacks, we propose a learning-based
segmentation model that captures FOV features, integrates Monte Carlo dropout
(MCD) for uncertainty quantification, and performs anomaly detection on
confidence maps. Comprehensive evaluations illustrate attack resistance and
strong generalization across environments. Architecture trade
studies demonstrate the model is feasible for real-time deployment in multiple
applications.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:30:56 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hallyburton",
"R. Spencer",
""
],
[
"Hunt",
"David",
""
],
[
"He",
"Yiwei",
""
],
[
"He",
"Judy",
""
],
[
"Pajic",
"Miroslav",
""
]
]
| TITLE: Probabilistic Segmentation for Robust Field of View Estimation
ABSTRACT: Attacks on sensing and perception threaten the safe deployment of autonomous
vehicles (AVs). Security-aware sensor fusion helps mitigate threats but
requires accurate field of view (FOV) estimation, which has not been evaluated
in the context of autonomy. To address this gap, we adapt classical computer graphics algorithms
to develop the first autonomy-relevant FOV estimators and create the first
datasets with ground truth FOV labels. Unfortunately, we find that these
approaches are themselves highly vulnerable to attacks on sensing. To improve
robustness of FOV estimation against attacks, we propose a learning-based
segmentation model that captures FOV features, integrates Monte Carlo dropout
(MCD) for uncertainty quantification, and performs anomaly detection on
confidence maps. Comprehensive evaluations illustrate attack resistance and
strong generalization across environments. Architecture trade
studies demonstrate the model is feasible for real-time deployment in multiple
applications.
| new_dataset | 0.935876 |
2503.07383 | Richard Braatz | Yunhong Che, Vivek N. Lam, Jinwook Rhyu, Joachim Schaeffer, Minsu Kim,
Martin Z. Bazant, William C. Chueh, Richard D. Braatz | Diagnostic-free onboard battery health assessment | 25 pages | null | null | null | eess.SY cs.LG cs.SY | http://creativecommons.org/licenses/by/4.0/ | Diverse usage patterns induce complex and variable aging behaviors in
lithium-ion batteries, complicating accurate health diagnosis and prognosis.
Separate diagnostic cycles are often used to untangle the battery's current
state of health from prior complex aging patterns. However, these same
diagnostic cycles alter the battery's degradation trajectory, are
time-intensive, and cannot be practically performed in onboard applications. In
this work, we leverage portions of operational measurements in combination with
an interpretable machine learning model to enable rapid, onboard battery health
diagnostics and prognostics without offline diagnostic testing and the
requirement of historical data. We integrate mechanistic constraints within an
encoder-decoder architecture to extract electrode states in a physically
interpretable latent space and enable improved reconstruction of the
degradation path. The health diagnosis model framework can be flexibly applied
across diverse applications of interest with slight fine-tuning. We demonstrate
the versatility of this model framework by applying it to three battery-cycling
datasets consisting of 422 cells under different operating conditions,
highlighting the utility of an interpretable diagnostic-free, onboard battery
diagnosis and prognosis model.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:32:27 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Che",
"Yunhong",
""
],
[
"Lam",
"Vivek N.",
""
],
[
"Rhyu",
"Jinwook",
""
],
[
"Schaeffer",
"Joachim",
""
],
[
"Kim",
"Minsu",
""
],
[
"Bazant",
"Martin Z.",
""
],
[
"Chueh",
"William C.",
""
],
[
"Braatz",
"Richard D.",
""
]
]
| TITLE: Diagnostic-free onboard battery health assessment
ABSTRACT: Diverse usage patterns induce complex and variable aging behaviors in
lithium-ion batteries, complicating accurate health diagnosis and prognosis.
Separate diagnostic cycles are often used to untangle the battery's current
state of health from prior complex aging patterns. However, these same
diagnostic cycles alter the battery's degradation trajectory, are
time-intensive, and cannot be practically performed in onboard applications. In
this work, we leverage portions of operational measurements in combination with
an interpretable machine learning model to enable rapid, onboard battery health
diagnostics and prognostics without offline diagnostic testing and the
requirement of historical data. We integrate mechanistic constraints within an
encoder-decoder architecture to extract electrode states in a physically
interpretable latent space and enable improved reconstruction of the
degradation path. The health diagnosis model framework can be flexibly applied
across diverse applications of interest with slight fine-tuning. We demonstrate
the versatility of this model framework by applying it to three battery-cycling
datasets consisting of 422 cells under different operating conditions,
highlighting the utility of an interpretable diagnostic-free, onboard battery
diagnosis and prognosis model.
| no_new_dataset | 0.944331 |
2503.07395 | Nadav Borenstein | Nadav Borenstein | Revisiting Noise in Natural Language Processing for Computational Social
Science | PhD thesis. Under the supervision of Prof. Isabelle Augenstein | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Computational Social Science (CSS) is an emerging field driven by the
unprecedented availability of human-generated content for researchers. This
field, however, presents a unique set of challenges due to the nature of the
theories and datasets it explores, including highly subjective tasks and
complex, unstructured textual corpora. Among these challenges, one of the less
well-studied topics is the pervasive presence of noise. This thesis aims to
address this gap in the literature by presenting a series of interconnected
case studies that examine different manifestations of noise in CSS. These
include character-level errors following the OCR processing of historical
records, archaic language, inconsistencies in annotations for subjective and
ambiguous tasks, and even noise and biases introduced by large language models
during content generation. This thesis challenges the conventional notion that
noise in CSS is inherently harmful or useless. Rather, it argues that certain
forms of noise can encode meaningful information that is invaluable for
advancing CSS research, such as the unique communication styles of individuals
or the culture-dependent nature of datasets and tasks. Further, this thesis
highlights the importance of nuance in dealing with noise and the
considerations CSS researchers must address when encountering it, demonstrating
that different types of noise require distinct strategies.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:42:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Borenstein",
"Nadav",
""
]
]
| TITLE: Revisiting Noise in Natural Language Processing for Computational Social
Science
ABSTRACT: Computational Social Science (CSS) is an emerging field driven by the
unprecedented availability of human-generated content for researchers. This
field, however, presents a unique set of challenges due to the nature of the
theories and datasets it explores, including highly subjective tasks and
complex, unstructured textual corpora. Among these challenges, one of the less
well-studied topics is the pervasive presence of noise. This thesis aims to
address this gap in the literature by presenting a series of interconnected
case studies that examine different manifestations of noise in CSS. These
include character-level errors following the OCR processing of historical
records, archaic language, inconsistencies in annotations for subjective and
ambiguous tasks, and even noise and biases introduced by large language models
during content generation. This thesis challenges the conventional notion that
noise in CSS is inherently harmful or useless. Rather, it argues that certain
forms of noise can encode meaningful information that is invaluable for
advancing CSS research, such as the unique communication styles of individuals
or the culture-dependent nature of datasets and tasks. Further, this thesis
highlights the importance of nuance in dealing with noise and the
considerations CSS researchers must address when encountering it, demonstrating
that different types of noise require distinct strategies.
| no_new_dataset | 0.949012 |
2503.07396 | Kexin Di | Kexin Di, Xiuxing Li, Yuyang Han, Ziyu Li, Qing Li, Xia Wu | Brain Inspired Adaptive Memory Dual-Net for Few-Shot Image
Classification | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Few-shot image classification has become a popular research topic for its
wide application in real-world scenarios; however, the problem of supervision
collapse induced by single image-level annotation remains a major challenge.
Existing methods aim to tackle this problem by locating and aligning relevant
local features. However, the high intra-class variability in real-world images
poses significant challenges in locating semantically relevant local regions
under few-shot settings. Drawing inspiration from the human's complementary
learning system, which excels at rapidly capturing and integrating semantic
features from limited examples, we propose the generalization-optimized Systems
Consolidation Adaptive Memory Dual-Network, SCAM-Net. This approach simulates
the systems consolidation of the complementary learning system with an adaptive
memory module, which successfully addresses the difficulty of identifying
meaningful features in few-shot scenarios. Specifically, we construct a
Hippocampus-Neocortex dual-network that consolidates structured representation
of each category; the structured representation is then stored and adaptively
regulated following the generalization optimization principle in a long-term
memory inside Neocortex. Extensive experiments on benchmark datasets show that
the proposed model has achieved state-of-the-art performance.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:42:51 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Di",
"Kexin",
""
],
[
"Li",
"Xiuxing",
""
],
[
"Han",
"Yuyang",
""
],
[
"Li",
"Ziyu",
""
],
[
"Li",
"Qing",
""
],
[
"Wu",
"Xia",
""
]
]
| TITLE: Brain Inspired Adaptive Memory Dual-Net for Few-Shot Image
Classification
ABSTRACT: Few-shot image classification has become a popular research topic for its
wide application in real-world scenarios; however, the problem of supervision
collapse induced by single image-level annotation remains a major challenge.
Existing methods aim to tackle this problem by locating and aligning relevant
local features. However, the high intra-class variability in real-world images
poses significant challenges in locating semantically relevant local regions
under few-shot settings. Drawing inspiration from the human's complementary
learning system, which excels at rapidly capturing and integrating semantic
features from limited examples, we propose the generalization-optimized Systems
Consolidation Adaptive Memory Dual-Network, SCAM-Net. This approach simulates
the systems consolidation of the complementary learning system with an adaptive
memory module, which successfully addresses the difficulty of identifying
meaningful features in few-shot scenarios. Specifically, we construct a
Hippocampus-Neocortex dual-network that consolidates structured representation
of each category; the structured representation is then stored and adaptively
regulated following the generalization optimization principle in a long-term
memory inside Neocortex. Extensive experiments on benchmark datasets show that
the proposed model has achieved state-of-the-art performance.
| no_new_dataset | 0.948965 |
2503.07399 | Wenqiang Zu | Wenqiang Zu, Shenghao Xie, Hao Chen, Yiming Liang, Lei Ma | Keeping Representation Similarity in Finetuning for Medical Image
Analysis | 12 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models pretrained on large-scale natural images have been widely
used to adapt to medical image analysis through finetuning. This is largely
attributed to pretrained representations capturing universal, robust, and
generalizable features, which can be reutilized by downstream tasks. However,
these representations are later found to gradually vanish during finetuning,
accompanied by a degradation of foundation model's original abilities, e.g.,
generalizability. In this paper, we argue that pretrained representations can
be well preserved while still effectively adapting to downstream tasks. We
study this by proposing a new finetuning method RepSim, which minimizes the
distance between pretrained and finetuned representations via constraining
a learnable orthogonal manifold based on similarity invariance. Compared to
standard finetuning methods, e.g., full finetuning, our method improves
representation similarity by over 30% while maintaining competitive accuracy,
and reduces sharpness by 42% across five medical image classification datasets.
The code will be released.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:44:37 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zu",
"Wenqiang",
""
],
[
"Xie",
"Shenghao",
""
],
[
"Chen",
"Hao",
""
],
[
"Liang",
"Yiming",
""
],
[
"Ma",
"Lei",
""
]
]
| TITLE: Keeping Representation Similarity in Finetuning for Medical Image
Analysis
ABSTRACT: Foundation models pretrained on large-scale natural images have been widely
used to adapt to medical image analysis through finetuning. This is largely
attributed to pretrained representations capturing universal, robust, and
generalizable features, which can be reutilized by downstream tasks. However,
these representations are later found to gradually vanish during finetuning,
accompanied by a degradation of foundation model's original abilities, e.g.,
generalizability. In this paper, we argue that pretrained representations can
be well preserved while still effectively adapting to downstream tasks. We
study this by proposing a new finetuning method RepSim, which minimizes the
distance between pretrained and finetuned representations via constraining
a learnable orthogonal manifold based on similarity invariance. Compared to
standard finetuning methods, e.g., full finetuning, our method improves
representation similarity by over 30% while maintaining competitive accuracy,
and reduces sharpness by 42% across five medical image classification datasets.
The code will be released.
| no_new_dataset | 0.945298 |
2503.07413 | Yan Tai | Yan Tai, Luhao Zhu, Zhiqiang Chen, Ynan Ding, Yiying Dong, Xiaohong
Liu, Guodong Guo | REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multimodal Large Language Models (MLLMs) demonstrate robust zero-shot
capabilities across diverse vision-language tasks after training on mega-scale
datasets. However, dense prediction tasks, such as semantic segmentation and
keypoint detection, pose significant challenges for MLLMs when represented
solely as text outputs. Simultaneously, current MLLMs utilizing latent
embeddings for visual task decoding generally demonstrate limited adaptability
to both multi-task learning and multi-granularity scenarios. In this work, we
present REF-VLM, an end-to-end framework for unified training of various visual
decoding tasks. To address complex visual decoding scenarios, we introduce the
Triplet-Based Referring Paradigm (TRP), which explicitly decouples three
critical dimensions in visual decoding tasks through a triplet structure:
concepts, decoding types, and targets. TRP employs symbolic delimiters to
enforce structured representation learning, enhancing the parsability and
interpretability of model outputs. Additionally, we construct the Visual-Task
Instruction Following Dataset (VT-Instruct), a large-scale multi-task dataset
containing over 100 million multimodal dialogue samples across 25 task types.
Beyond text inputs and outputs, VT-Instruct incorporates various visual prompts
such as point, box, scribble, and mask, and generates outputs composed of text
and visual units like box, keypoint, depth and mask. The combination of
different visual prompts and visual units generates a wide variety of task
types, expanding the applicability of REF-VLM significantly. Both qualitative
and quantitative experiments demonstrate that our REF-VLM outperforms other
MLLMs across a variety of standard benchmarks. The code, dataset, and demo
are available at https://github.com/MacavityT/REF-VLM.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 14:59:14 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Tai",
"Yan",
""
],
[
"Zhu",
"Luhao",
""
],
[
"Chen",
"Zhiqiang",
""
],
[
"Ding",
"Ynan",
""
],
[
"Dong",
"Yiying",
""
],
[
"Liu",
"Xiaohong",
""
],
[
"Guo",
"Guodong",
""
]
]
| TITLE: REF-VLM: Triplet-Based Referring Paradigm for Unified Visual Decoding
ABSTRACT: Multimodal Large Language Models (MLLMs) demonstrate robust zero-shot
capabilities across diverse vision-language tasks after training on mega-scale
datasets. However, dense prediction tasks, such as semantic segmentation and
keypoint detection, pose significant challenges for MLLMs when represented
solely as text outputs. Simultaneously, current MLLMs utilizing latent
embeddings for visual task decoding generally demonstrate limited adaptability
to both multi-task learning and multi-granularity scenarios. In this work, we
present REF-VLM, an end-to-end framework for unified training of various visual
decoding tasks. To address complex visual decoding scenarios, we introduce the
Triplet-Based Referring Paradigm (TRP), which explicitly decouples three
critical dimensions in visual decoding tasks through a triplet structure:
concepts, decoding types, and targets. TRP employs symbolic delimiters to
enforce structured representation learning, enhancing the parsability and
interpretability of model outputs. Additionally, we construct the Visual-Task
Instruction Following Dataset (VT-Instruct), a large-scale multi-task dataset
containing over 100 million multimodal dialogue samples across 25 task types.
Beyond text inputs and outputs, VT-Instruct incorporates various visual prompts
such as point, box, scribble, and mask, and generates outputs composed of text
and visual units like box, keypoint, depth and mask. The combination of
different visual prompts and visual units generates a wide variety of task
types, expanding the applicability of REF-VLM significantly. Both qualitative
and quantitative experiments demonstrate that our REF-VLM outperforms other
MLLMs across a variety of standard benchmarks. The code, dataset, and demo
are available at https://github.com/MacavityT/REF-VLM.
| new_dataset | 0.956957 |
2503.07419 | Lu Cao | Tijs Konijn, Imaan Bijl, Lu Cao and Fons Verbeek | Analysis of 3D Urticaceae Pollen Classification Using Deep Learning
Models | null | Proceedings of the 18th International Joint Conference on
Biomedical Engineering Systems and Technologies - BIOIMAGING, 2025 | 10.5220/0013102700003911 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Due to climate change, hay fever is becoming a pressing healthcare problem,
with a growing affected population, a prolonged period of effect, and more
severe symptoms. Precise pollen classification could help monitor the
trend of allergic pollen in the air throughout the year and guide preventive
strategies launched by municipalities. Most pollen classification works
use 2D microscopy images or 2D projections derived from 3D image datasets. In
this paper, we aim at using the whole stack of 3D images for classification and
evaluating the classification performance with different deep learning models.
The 3D image dataset used in this paper is from the Urticaceae family, particularly
the genera Urtica and Parietaria, which are morphologically similar yet differ
significantly in allergenic potential. The pre-trained ResNet3D model, using
optimal layer selection and extended epochs, achieved the best performance with
an F1-score of 98.3%.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:07:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Konijn",
"Tijs",
""
],
[
"Bijl",
"Imaan",
""
],
[
"Cao",
"Lu",
""
],
[
"Verbeek",
"Fons",
""
]
]
| TITLE: Analysis of 3D Urticaceae Pollen Classification Using Deep Learning
Models
ABSTRACT: Due to climate change, hay fever is becoming a pressing healthcare problem,
with a growing affected population, a prolonged period of effect, and more
severe symptoms. Precise pollen classification could help monitor the
trend of allergic pollen in the air throughout the year and guide preventive
strategies launched by municipalities. Most pollen classification works
use 2D microscopy images or 2D projections derived from 3D image datasets. In
this paper, we aim at using the whole stack of 3D images for classification and
evaluating the classification performance with different deep learning models.
The 3D image dataset used in this paper is from the Urticaceae family, particularly
the genera Urtica and Parietaria, which are morphologically similar yet differ
significantly in allergenic potential. The pre-trained ResNet3D model, using
optimal layer selection and extended epochs, achieved the best performance with
an F1-score of 98.3%.
| new_dataset | 0.669799 |
2503.07424 | Chichun Zhou | Zhangdi Liu, Ling An, Mengke Song, Zhuohang Yu, Shan Wang, Kezhen Qi,
Zhenyu Zhang and Chichun Zhou | Inorganic Catalyst Efficiency Prediction Based on EAPCR Model: A Deep
Learning Solution for Multi-Source Heterogeneous Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The design of inorganic catalysts and the prediction of their catalytic
efficiency are fundamental challenges in chemistry and materials science.
Traditional catalyst evaluation methods primarily rely on machine learning
techniques; however, these methods often struggle to process multi-source
heterogeneous data, limiting both predictive accuracy and generalization. To
address these limitations, this study introduces the
Embedding-Attention-Permutated CNN-Residual (EAPCR) deep learning model. EAPCR
constructs a feature association matrix using embedding and attention
mechanisms and enhances predictive performance through permutated CNN
architectures and residual connections. This approach enables the model to
accurately capture complex feature interactions across various catalytic
conditions, leading to precise efficiency predictions. EAPCR serves as a
powerful tool for computational researchers while also assisting domain experts
in optimizing catalyst design, effectively bridging the gap between data-driven
modeling and experimental applications. We evaluate EAPCR on datasets from TiO2
photocatalysis, thermal catalysis, and electrocatalysis, demonstrating its
superiority over traditional machine learning methods (e.g., linear regression,
random forest) as well as conventional deep learning models (e.g., ANN, NNs).
Across multiple evaluation metrics (MAE, MSE, R2, and RMSE), EAPCR consistently
outperforms existing approaches. These findings highlight the strong potential
of EAPCR in inorganic catalytic efficiency prediction. As a versatile deep
learning framework, EAPCR not only improves predictive accuracy but also
establishes a solid foundation for future large-scale model development in
inorganic catalysis.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:10:22 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Zhangdi",
""
],
[
"An",
"Ling",
""
],
[
"Song",
"Mengke",
""
],
[
"Yu",
"Zhuohang",
""
],
[
"Wang",
"Shan",
""
],
[
"Qi",
"Kezhen",
""
],
[
"Zhang",
"Zhenyu",
""
],
[
"Zhou",
"Chichun",
""
]
]
| TITLE: Inorganic Catalyst Efficiency Prediction Based on EAPCR Model: A Deep
Learning Solution for Multi-Source Heterogeneous Data
ABSTRACT: The design of inorganic catalysts and the prediction of their catalytic
efficiency are fundamental challenges in chemistry and materials science.
Traditional catalyst evaluation methods primarily rely on machine learning
techniques; however, these methods often struggle to process multi-source
heterogeneous data, limiting both predictive accuracy and generalization. To
address these limitations, this study introduces the
Embedding-Attention-Permutated CNN-Residual (EAPCR) deep learning model. EAPCR
constructs a feature association matrix using embedding and attention
mechanisms and enhances predictive performance through permutated CNN
architectures and residual connections. This approach enables the model to
accurately capture complex feature interactions across various catalytic
conditions, leading to precise efficiency predictions. EAPCR serves as a
powerful tool for computational researchers while also assisting domain experts
in optimizing catalyst design, effectively bridging the gap between data-driven
modeling and experimental applications. We evaluate EAPCR on datasets from TiO2
photocatalysis, thermal catalysis, and electrocatalysis, demonstrating its
superiority over traditional machine learning methods (e.g., linear regression,
random forest) as well as conventional deep learning models (e.g., ANN, NNs).
Across multiple evaluation metrics (MAE, MSE, R2, and RMSE), EAPCR consistently
outperforms existing approaches. These findings highlight the strong potential
of EAPCR in inorganic catalytic efficiency prediction. As a versatile deep
learning framework, EAPCR not only improves predictive accuracy but also
establishes a solid foundation for future large-scale model development in
inorganic catalysis.
| no_new_dataset | 0.947186 |
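The record above names EAPCR's building blocks (embedding, attention, a
permutated CNN, residual connections) without code; below is a schematic
PyTorch sketch of how those pieces can be composed for tabular
catalytic-condition inputs. All shapes, layer counts, and the fixed feature
permutation are assumptions for illustration, not the authors' implementation.

# Schematic sketch of an Embedding-Attention-Permutated CNN-Residual model.
# Hyperparameters and the fixed permutation are illustrative assumptions.
import torch
import torch.nn as nn

class EAPCRSketch(nn.Module):
    def __init__(self, n_features, vocab_size, d_model=64):
        super().__init__()
        # Embed each discretized catalytic-condition feature as a token.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.conv = nn.Sequential(  # CNN applied over the permuted feature axis
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
        )
        self.register_buffer("perm", torch.randperm(n_features))  # fixed shuffle
        self.head = nn.Linear(d_model, 1)  # predicted catalytic efficiency

    def forward(self, tokens):               # tokens: (batch, n_features) ints
        x = self.embed(tokens)               # (batch, n_features, d_model)
        a, _ = self.attn(x, x, x)            # feature-association via attention
        x = x + a                            # residual connection
        y = self.conv(x[:, self.perm].transpose(1, 2)).transpose(1, 2)
        x = x + y                            # residual connection
        return self.head(x.mean(dim=1)).squeeze(-1)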
2503.07462 | Elena Atroshchenko | P. Peralta-Braz, M. M. Alamdari, C. T. Chou, M. Hassan, E.
Atroshchenko | Simultaneous Energy Harvesting and Bearing Fault Detection using
Piezoelectric Cantilevers | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Bearings are critical components in industrial machinery, yet their
vulnerability to faults often leads to costly breakdowns. Conventional fault
detection methods depend on continuous, high-frequency vibration sensing,
digitising, and wireless transmission to the cloud, an approach that
significantly drains the limited energy reserves of battery-powered sensors,
accelerating their depletion and increasing maintenance costs. This work
proposes a fundamentally different approach: rather than using instantaneous
vibration data, we employ piezoelectric energy harvesters (PEHs) tuned to
specific frequencies and leverage the cumulative harvested energy over time as
the key diagnostic feature. By directly utilising the energy generated from the
machinery's vibrations, we eliminate the need for frequent analog-to-digital
conversions and data transmission, thereby reducing energy consumption at the
sensor node and extending its operational lifetime. To validate this approach,
we use a numerical PEH model and publicly available acceleration datasets,
examining various PEH designs with different natural frequencies. We also
consider the influence of the classification algorithm, the number of devices,
and the observation window duration. The results demonstrate that the harvested
energy reliably indicates bearing faults across a range of conditions and
severities. By converting vibration energy into both a power source and a
diagnostic feature, our solution offers a more sustainable, low-maintenance
strategy for fault detection in smart machinery.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:41:22 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Peralta-Braz",
"P.",
""
],
[
"Alamdari",
"M. M.",
""
],
[
"Chou",
"C. T.",
""
],
[
"Hassan",
"M.",
""
],
[
"Atroshchenko",
"E.",
""
]
]
| TITLE: Simultaneous Energy Harvesting and Bearing Fault Detection using
Piezoelectric Cantilevers
ABSTRACT: Bearings are critical components in industrial machinery, yet their
vulnerability to faults often leads to costly breakdowns. Conventional fault
detection methods depend on continuous, high-frequency vibration sensing,
digitising, and wireless transmission to the cloud, an approach that
significantly drains the limited energy reserves of battery-powered sensors,
accelerating their depletion and increasing maintenance costs. This work
proposes a fundamentally different approach: rather than using instantaneous
vibration data, we employ piezoelectric energy harvesters (PEHs) tuned to
specific frequencies and leverage the cumulative harvested energy over time as
the key diagnostic feature. By directly utilising the energy generated from the
machinery's vibrations, we eliminate the need for frequent analog-to-digital
conversions and data transmission, thereby reducing energy consumption at the
sensor node and extending its operational lifetime. To validate this approach,
we use a numerical PEH model and publicly available acceleration datasets,
examining various PEH designs with different natural frequencies. We also
consider the influence of the classification algorithm, the number of devices,
and the observation window duration. The results demonstrate that the harvested
energy reliably indicates bearing faults across a range of conditions and
severities. By converting vibration energy into both a power source and a
diagnostic feature, our solution offers a more sustainable, low-maintenance
strategy for fault detection in smart machinery.
| no_new_dataset | 0.952175 |
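To make the core idea above concrete (cumulative harvested energy as the
diagnostic feature), here is a hedged sketch: band-pass the acceleration
around an assumed harvester resonance, accumulate the band-limited signal
power per observation window as a proxy for harvested energy, and train a
classifier on that feature. The resonance frequency, sampling rate, and
random-forest classifier are assumptions; the paper instead drives a
numerical PEH model with the acceleration data.

# Sketch: cumulative band-limited vibration energy as a bearing-fault feature.
# The 2 kHz "resonance", bandwidth, and sampling rate are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

FS = 20_000          # sampling rate of the acceleration signal (Hz, assumed)
F0, BW = 2_000, 200  # assumed harvester resonance and half-bandwidth (Hz)

def harvested_energy_proxy(accel, win=FS):
    """Cumulative band-limited energy per observation window."""
    b, a = butter(4, [(F0 - BW) / (FS / 2), (F0 + BW) / (FS / 2)], "bandpass")
    band = filtfilt(b, a, accel)
    n = len(band) // win
    return np.array([np.sum(band[i*win:(i+1)*win] ** 2) for i in range(n)])

def fit_detector(records, labels):
    """records: list of acceleration arrays; labels: 0 = healthy, 1 = faulty."""
    feats = np.stack([harvested_energy_proxy(r)[:1] for r in records])
    return RandomForestClassifier(n_estimators=200).fit(feats, labels)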
2503.07464 | Jimmy Gammell | Jimmy Gammell, Anand Raghunathan, Abolfazl Hashemi, Kaushik Roy | Learning to Localize Leakage of Cryptographic Sensitive Variables | 52 pages, 30 figures. Our code can be found at
https://github.com/jimgammell/learning_to_localize_leakage | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | While cryptographic algorithms such as the ubiquitous Advanced Encryption
Standard (AES) are secure, *physical implementations* of these algorithms in
hardware inevitably 'leak' sensitive data such as cryptographic keys. A
particularly insidious form of leakage arises from the fact that hardware
consumes power and emits radiation in a manner that is statistically associated
with the data it processes and the instructions it executes. Supervised deep
learning has emerged as a state-of-the-art tool for carrying out *side-channel
attacks*, which exploit this leakage by learning to map power/radiation
measurements throughout encryption to the sensitive data operated on during
that encryption. In this work we develop a principled deep learning framework
for determining the relative leakage due to measurements recorded at different
points in time, in order to inform *defense* against such attacks. This
information is invaluable to cryptographic hardware designers for understanding
*why* their hardware leaks and how they can mitigate it (e.g. by indicating the
particular sections of code or electronic components which are responsible).
Our framework is based on an adversarial game between a family of classifiers
trained to estimate the conditional distributions of sensitive data given
subsets of measurements, and a budget-constrained noise distribution which
probabilistically erases individual measurements to maximize the loss of these
classifiers. We demonstrate our method's efficacy and ability to overcome
limitations of prior work through extensive experimental comparison with 8
baseline methods using 3 evaluation metrics and 6 publicly-available power/EM
trace datasets from AES, ECC and RSA implementations. We provide an open-source
PyTorch implementation of these experiments.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:42:30 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Gammell",
"Jimmy",
""
],
[
"Raghunathan",
"Anand",
""
],
[
"Hashemi",
"Abolfazl",
""
],
[
"Roy",
"Kaushik",
""
]
]
| TITLE: Learning to Localize Leakage of Cryptographic Sensitive Variables
ABSTRACT: While cryptographic algorithms such as the ubiquitous Advanced Encryption
Standard (AES) are secure, *physical implementations* of these algorithms in
hardware inevitably 'leak' sensitive data such as cryptographic keys. A
particularly insidious form of leakage arises from the fact that hardware
consumes power and emits radiation in a manner that is statistically associated
with the data it processes and the instructions it executes. Supervised deep
learning has emerged as a state-of-the-art tool for carrying out *side-channel
attacks*, which exploit this leakage by learning to map power/radiation
measurements throughout encryption to the sensitive data operated on during
that encryption. In this work we develop a principled deep learning framework
for determining the relative leakage due to measurements recorded at different
points in time, in order to inform *defense* against such attacks. This
information is invaluable to cryptographic hardware designers for understanding
*why* their hardware leaks and how they can mitigate it (e.g. by indicating the
particular sections of code or electronic components which are responsible).
Our framework is based on an adversarial game between a family of classifiers
trained to estimate the conditional distributions of sensitive data given
subsets of measurements, and a budget-constrained noise distribution which
probabilistically erases individual measurements to maximize the loss of these
classifiers. We demonstrate our method's efficacy and ability to overcome
limitations of prior work through extensive experimental comparison with 8
baseline methods using 3 evaluation metrics and 6 publicly-available power/EM
trace datasets from AES, ECC and RSA implementations. We provide an open-source
PyTorch implementation of these experiments.
| no_new_dataset | 0.943867 |
2503.07478 | Jiacheng Ruan | Jiacheng Ruan, Wenzhen Yuan, Xian Gao, Ye Guo, Daoxin Zhang, Zhe Xu,
Yao Hu, Ting Liu, Yuzhuo Fu | VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language
Reward Models | 12 pages, 4 figures. This work is in progress | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although large visual-language models (LVLMs) have demonstrated strong
performance in multimodal tasks, errors may occasionally arise due to biases
during the reasoning process. Recently, reward models (RMs) have become
increasingly pivotal in the reasoning process. Specifically, process RMs
evaluate each reasoning step, outcome RMs focus on the assessment of reasoning
results, and critique RMs perform error analysis on the entire reasoning
process, followed by corrections. However, existing benchmarks for
vision-language RMs (VLRMs) typically assess only a single aspect of their
capabilities (e.g., distinguishing between two answers), thus limiting the
all-round evaluation and restricting the development of RMs in the
visual-language domain. To address this gap, we propose a comprehensive and
challenging benchmark, dubbed VLRMBench, encompassing 12,634 questions.
VLRMBench is constructed based on three distinct types of datasets, covering
mathematical reasoning, hallucination understanding, and multi-image
understanding. We design 12 tasks across three major categories, focusing on
evaluating VLRMs in the aspects of process understanding, outcome judgment, and
critique generation. Extensive experiments are conducted on 21 open-source
models and 5 advanced closed-source models, highlighting the challenges posed
by VLRMBench. For instance, in the `Forecasting Future', a binary
classification task, the advanced GPT-4o achieves only a 76.0% accuracy.
Additionally, we perform comprehensive analytical studies, offering valuable
insights for the future development of VLRMs. We anticipate that VLRMBench will
serve as a pivotal benchmark in advancing VLRMs. Code and datasets will be
available at https://github.com/JCruan519/VLRMBench.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:52:57 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Ruan",
"Jiacheng",
""
],
[
"Yuan",
"Wenzhen",
""
],
[
"Gao",
"Xian",
""
],
[
"Guo",
"Ye",
""
],
[
"Zhang",
"Daoxin",
""
],
[
"Xu",
"Zhe",
""
],
[
"Hu",
"Yao",
""
],
[
"Liu",
"Ting",
""
],
[
"Fu",
"Yuzhuo",
""
]
]
| TITLE: VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language
Reward Models
ABSTRACT: Although large visual-language models (LVLMs) have demonstrated strong
performance in multimodal tasks, errors may occasionally arise due to biases
during the reasoning process. Recently, reward models (RMs) have become
increasingly pivotal in the reasoning process. Specifically, process RMs
evaluate each reasoning step, outcome RMs focus on the assessment of reasoning
results, and critique RMs perform error analysis on the entire reasoning
process, followed by corrections. However, existing benchmarks for
vision-language RMs (VLRMs) typically assess only a single aspect of their
capabilities (e.g., distinguishing between two answers), thus limiting the
all-round evaluation and restricting the development of RMs in the
visual-language domain. To address this gap, we propose a comprehensive and
challenging benchmark, dubbed VLRMBench, encompassing 12,634 questions.
VLRMBench is constructed based on three distinct types of datasets, covering
mathematical reasoning, hallucination understanding, and multi-image
understanding. We design 12 tasks across three major categories, focusing on
evaluating VLRMs in the aspects of process understanding, outcome judgment, and
critique generation. Extensive experiments are conducted on 21 open-source
models and 5 advanced closed-source models, highlighting the challenges posed
by VLRMBench. For instance, in the `Forecasting Future', a binary
classification task, the advanced GPT-4o achieves only a 76.0% accuracy.
Additionally, we perform comprehensive analytical studies, offering valuable
insights for the future development of VLRMs. We anticipate that VLRMBench will
serve as a pivotal benchmark in advancing VLRMs. Code and datasets will be
available at https://github.com/JCruan519/VLRMBench.
| no_new_dataset | 0.80456 |
2503.07482 | Zhenlong Liu | Zhenlong Liu, Wenyu Jiang, Feng Zhou, Hongxin Wei | Efficient Membership Inference Attacks by Bayesian Neural Network | 8 pages, under review | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Membership Inference Attacks (MIAs) aim to estimate whether a specific data
point was used in the training of a given model. Previous attacks often utilize
multiple reference models to approximate the conditional score distribution,
leading to significant computational overhead. While recent work leverages
quantile regression to estimate conditional thresholds, it fails to capture
epistemic uncertainty, resulting in bias in low-density regions. In this work,
we propose a novel approach - Bayesian Membership Inference Attack (BMIA),
which performs a conditional attack through Bayesian inference. In particular, we
transform a trained reference model into Bayesian neural networks by Laplace
approximation, enabling the direct estimation of the conditional score
distribution by probabilistic model parameters. Our method addresses both
epistemic and aleatoric uncertainty with only a reference model, enabling
efficient and powerful MIA. Extensive experiments on five datasets demonstrate
the effectiveness and efficiency of BMIA.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 15:58:43 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Liu",
"Zhenlong",
""
],
[
"Jiang",
"Wenyu",
""
],
[
"Zhou",
"Feng",
""
],
[
"Wei",
"Hongxin",
""
]
]
| TITLE: Efficient Membership Inference Attacks by Bayesian Neural Network
ABSTRACT: Membership Inference Attacks (MIAs) aim to estimate whether a specific data
point was used in the training of a given model. Previous attacks often utilize
multiple reference models to approximate the conditional score distribution,
leading to significant computational overhead. While recent work leverages
quantile regression to estimate conditional thresholds, it fails to capture
epistemic uncertainty, resulting in bias in low-density regions. In this work,
we propose a novel approach - Bayesian Membership Inference Attack (BMIA),
which performs a conditional attack through Bayesian inference. In particular, we
transform a trained reference model into Bayesian neural networks by Laplace
approximation, enabling the direct estimation of the conditional score
distribution by probabilistic model parameters. Our method addresses both
epistemic and aleatoric uncertainty with only a reference model, enabling
efficient and powerful MIA. Extensive experiments on five datasets demonstrate
the effectiveness and efficiency of BMIA.
| no_new_dataset | 0.951774 |
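A minimal sketch of the mechanism the record above describes: convert a
trained reference model into a Bayesian neural network with a last-layer
Laplace approximation (here via the third-party laplace-torch package) and
score membership candidates with the Bayesian predictive probability of their
true label. The per-example score below is a simplified stand-in for the
paper's conditional test, and the calls reflect laplace-torch's documented
usage rather than the authors' code.

# Sketch: last-layer Laplace approximation of a reference model, then a
# simple membership score from the Bayesian predictive distribution.
import torch
from laplace import Laplace  # pip install laplace-torch

def bayesian_membership_scores(model, train_loader, x, y):
    la = Laplace(model, likelihood="classification",
                 subset_of_weights="last_layer",
                 hessian_structure="kron")
    la.fit(train_loader)              # fit posterior on reference data
    la.optimize_prior_precision()     # tune prior via marginal likelihood
    probs = la(x)                     # Bayesian predictive probabilities (N, C)
    return probs.gather(1, y.unsqueeze(1)).squeeze(1)

# A candidate is flagged as a member when its score exceeds a threshold
# calibrated on known non-members (calibration omitted in this sketch).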
2503.07483 | Chih-Hsun Lin | I-Jung Hsu, Chih-Hsun Lin, Chia-Mu Yu, Sy-Yen Kuo, Chun-Ying Huang | Poisoning Attacks to Local Differential Privacy Protocols for Trajectory
Data | null | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory data, which tracks movements through geographic locations, is
crucial for improving real-world applications. However, collecting such
sensitive data raises considerable privacy concerns. Local differential privacy
(LDP) offers a solution by allowing individuals to locally perturb their
trajectory data before sharing it. Despite its privacy benefits, LDP protocols
are vulnerable to data poisoning attacks, where attackers inject fake data to
manipulate aggregated results. In this work, we make the first attempt to
analyze vulnerabilities in several representative LDP trajectory protocols. We
propose \textsc{TraP}, a heuristic algorithm for data \underline{P}oisoning
attacks using a prefix-suffix method to optimize fake \underline{Tra}jectory
selection, significantly reducing computational complexity. Our experimental
results demonstrate that our attack can substantially increase target pattern
occurrences in the perturbed trajectory dataset with few fake users. This study
underscores the urgent need for robust defenses and better protocol designs to
safeguard LDP trajectory data against malicious manipulation.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 02:31:45 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Hsu",
"I-Jung",
""
],
[
"Lin",
"Chih-Hsun",
""
],
[
"Yu",
"Chia-Mu",
""
],
[
"Kuo",
"Sy-Yen",
""
],
[
"Huang",
"Chun-Ying",
""
]
]
| TITLE: Poisoning Attacks to Local Differential Privacy Protocols for Trajectory
Data
ABSTRACT: Trajectory data, which tracks movements through geographic locations, is
crucial for improving real-world applications. However, collecting such
sensitive data raises considerable privacy concerns. Local differential privacy
(LDP) offers a solution by allowing individuals to locally perturb their
trajectory data before sharing it. Despite its privacy benefits, LDP protocols
are vulnerable to data poisoning attacks, where attackers inject fake data to
manipulate aggregated results. In this work, we make the first attempt to
analyze vulnerabilities in several representative LDP trajectory protocols. We
propose \textsc{TraP}, a heuristic algorithm for data \underline{P}oisoning
attacks using a prefix-suffix method to optimize fake \underline{Tra}jectory
selection, significantly reducing computational complexity. Our experimental
results demonstrate that our attack can substantially increase target pattern
occurrences in the perturbed trajectory dataset with few fake users. This study
underscores the urgent need for robust defenses and better protocol designs to
safeguard LDP trajectory data against malicious manipulation.
| no_new_dataset | 0.949153 |
2503.07485 | Zongzheng Zhang | Zongzheng Zhang, Xinrun Li, Sizhe Zou, Guoxuan Chi, Siqi Li, Xuchong
Qiu, Guoliang Wang, Guantian Zheng, Leichen Wang, Hang Zhao, Hao Zhao | Chameleon: Fast-slow Neuro-symbolic Lane Topology Extraction | ICRA 2025, Project Page: https://github.com/XR-Lee/neural-symbolic | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lane topology extraction involves detecting lanes and traffic elements and
determining their relationships, a key perception task for mapless autonomous
driving. This task requires complex reasoning, such as determining whether it
is possible to turn left into a specific lane. To address this challenge, we
introduce neuro-symbolic methods powered by vision-language foundation models
(VLMs). Existing approaches have notable limitations: (1) Dense visual
prompting with VLMs can achieve strong performance but is costly in terms of
both financial resources and carbon footprint, making it impractical for
robotics applications. (2) Neuro-symbolic reasoning methods for 3D scene
understanding fail to integrate visual inputs when synthesizing programs,
making them ineffective in handling complex corner cases. To this end, we
propose a fast-slow neuro-symbolic lane topology extraction algorithm, named
Chameleon, which alternates between a fast system that directly reasons over
detected instances using synthesized programs and a slow system that utilizes a
VLM with a chain-of-thought design to handle corner cases. Chameleon leverages
the strengths of both approaches, providing an affordable solution while
maintaining high performance. We evaluate the method on the OpenLane-V2
dataset, showing consistent improvements across various baseline detectors. Our
code, data, and models are publicly available at
https://github.com/XR-Lee/neural-symbolic
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:02:35 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Zongzheng",
""
],
[
"Li",
"Xinrun",
""
],
[
"Zou",
"Sizhe",
""
],
[
"Chi",
"Guoxuan",
""
],
[
"Li",
"Siqi",
""
],
[
"Qiu",
"Xuchong",
""
],
[
"Wang",
"Guoliang",
""
],
[
"Zheng",
"Guantian",
""
],
[
"Wang",
"Leichen",
""
],
[
"Zhao",
"Hang",
""
],
[
"Zhao",
"Hao",
""
]
]
| TITLE: Chameleon: Fast-slow Neuro-symbolic Lane Topology Extraction
ABSTRACT: Lane topology extraction involves detecting lanes and traffic elements and
determining their relationships, a key perception task for mapless autonomous
driving. This task requires complex reasoning, such as determining whether it
is possible to turn left into a specific lane. To address this challenge, we
introduce neuro-symbolic methods powered by vision-language foundation models
(VLMs). Existing approaches have notable limitations: (1) Dense visual
prompting with VLMs can achieve strong performance but is costly in terms of
both financial resources and carbon footprint, making it impractical for
robotics applications. (2) Neuro-symbolic reasoning methods for 3D scene
understanding fail to integrate visual inputs when synthesizing programs,
making them ineffective in handling complex corner cases. To this end, we
propose a fast-slow neuro-symbolic lane topology extraction algorithm, named
Chameleon, which alternates between a fast system that directly reasons over
detected instances using synthesized programs and a slow system that utilizes a
VLM with a chain-of-thought design to handle corner cases. Chameleon leverages
the strengths of both approaches, providing an affordable solution while
maintaining high performance. We evaluate the method on the OpenLane-V2
dataset, showing consistent improvements across various baseline detectors. Our
code, data, and models are publicly available at
https://github.com/XR-Lee/neural-symbolic
| no_new_dataset | 0.945197 |
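The fast-slow alternation described above reduces, at the control-flow level,
to trying a synthesized symbolic program first and escalating to a
chain-of-thought VLM only on low confidence. The sketch below captures that
routing; all three helper functions are hypothetical stubs, and the 0.8
threshold is an assumption, not a value from the paper.

# Fast-slow routing sketch for a lane-topology query. The three helpers are
# hypothetical stubs standing in for the real fast/slow components.
CONFIDENCE_THRESHOLD = 0.8  # assumed routing threshold

def synthesize_program(question):
    # Stub for the fast system's program synthesis over detected instances.
    return lambda detections: ("unknown", 0.0)

def run_program(program, detections):
    return program(detections)  # returns (answer, confidence)

def query_vlm_cot(question, detections):
    # Stub for the slow system: a chain-of-thought VLM call.
    return "VLM fallback answer"

def lane_topology_answer(question, detections):
    answer, confidence = run_program(synthesize_program(question), detections)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer          # fast path handles the common case
    return query_vlm_cot(question, detections)  # slow path for corner cases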
2503.07504 | Brady Moon | Seungjae Baek, Brady Moon, Seungchan Kim, Muqing Cao, Cherie Ho,
Sebastian Scherer, Jeong hwan Jeon | PIPE Planner: Pathwise Information Gain with Map Predictions for Indoor
Robot Exploration | 8 pages, 8 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous exploration in unknown environments requires estimating the
information gain of an action to guide planning decisions. While prior
approaches often compute information gain at discrete waypoints, pathwise
integration offers a more comprehensive estimation but is often computationally
challenging or infeasible and prone to overestimation. In this work, we propose
the Pathwise Information Gain with Map Prediction for Exploration (PIPE)
planner, which integrates cumulative sensor coverage along planned trajectories
while leveraging map prediction to mitigate overestimation. To enable efficient
pathwise coverage computation, we introduce a method to efficiently calculate
the expected observation mask along the planned path, significantly reducing
computational overhead. We validate PIPE on real-world floorplan datasets,
demonstrating its superior performance over state-of-the-art baselines. Our
results highlight the benefits of integrating predictive mapping with pathwise
information gain for efficient and informed exploration.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:27:00 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Baek",
"Seungjae",
""
],
[
"Moon",
"Brady",
""
],
[
"Kim",
"Seungchan",
""
],
[
"Cao",
"Muqing",
""
],
[
"Ho",
"Cherie",
""
],
[
"Scherer",
"Sebastian",
""
],
[
"Jeon",
"Jeong hwan",
""
]
]
| TITLE: PIPE Planner: Pathwise Information Gain with Map Predictions for Indoor
Robot Exploration
ABSTRACT: Autonomous exploration in unknown environments requires estimating the
information gain of an action to guide planning decisions. While prior
approaches often compute information gain at discrete waypoints, pathwise
integration offers a more comprehensive estimation but is often computationally
challenging or infeasible and prone to overestimation. In this work, we propose
the Pathwise Information Gain with Map Prediction for Exploration (PIPE)
planner, which integrates cumulative sensor coverage along planned trajectories
while leveraging map prediction to mitigate overestimation. To enable efficient
pathwise coverage computation, we introduce a method to efficiently calculate
the expected observation mask along the planned path, significantly reducing
computational overhead. We validate PIPE on real-world floorplan datasets,
demonstrating its superior performance over state-of-the-art baselines. Our
results highlight the benefits of integrating predictive mapping with pathwise
information gain for efficient and informed exploration.
| no_new_dataset | 0.95096 |
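A small sketch of one way to accumulate an expected observation mask along a
planned path, as discussed above: each waypoint contributes a disk-shaped
sensor footprint on an occupancy grid, and covered cells are weighted by a
map-prediction confidence so that unlikely observations do not inflate the
pathwise gain. The disk footprint and confidence weighting are simplifying
assumptions, not the paper's exact computation.

# Sketch: pathwise coverage mask on a 2D grid with a disk sensor footprint.
import numpy as np

def pathwise_coverage(grid_shape, path, sensor_radius, pred_confidence):
    """path: list of (row, col) waypoints; pred_confidence: grid in [0, 1]."""
    rows, cols = np.ogrid[:grid_shape[0], :grid_shape[1]]
    covered = np.zeros(grid_shape, dtype=bool)
    for r, c in path:
        covered |= (rows - r) ** 2 + (cols - c) ** 2 <= sensor_radius ** 2
    # Expected newly observed area, discounted by map-prediction confidence.
    return float(np.sum(covered * pred_confidence))

# Example: 100x100 grid, straight path, uniform prediction confidence.
conf = np.full((100, 100), 0.9)
gain = pathwise_coverage((100, 100), [(50, c) for c in range(0, 100, 5)], 8, conf)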
2503.07506 | Soumya Banerjee | Soumya Banerjee and Vinay Kumar Verma | ADROIT: A Self-Supervised Framework for Learning Robust Representations
for Active Learning | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Active learning aims to select optimal samples for labeling, minimizing
annotation costs. This paper introduces a unified representation learning
framework tailored for active learning with task awareness. It integrates
diverse sources, comprising reconstruction, adversarial, self-supervised,
knowledge-distillation, and classification losses into a unified VAE-based
ADROIT approach. The proposed approach comprises three key components - a
unified representation generator (VAE), a state discriminator, and a (proxy)
task-learner or classifier. ADROIT learns a latent code using both labeled and
unlabeled data, incorporating task-awareness by leveraging labeled data with
the proxy classifier. Unlike previous approaches, the proxy classifier
additionally employs a self-supervised loss on unlabeled data and utilizes
knowledge distillation to align with the target task-learner. The state
discriminator distinguishes between labeled and unlabeled data, facilitating
the selection of informative unlabeled samples. The dynamic interaction between
VAE and the state discriminator creates a competitive environment, with the VAE
attempting to deceive the discriminator, while the state discriminator learns
to differentiate between labeled and unlabeled inputs. Extensive evaluations on
diverse datasets and ablation analysis affirm the effectiveness of the proposed
model.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:28:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Banerjee",
"Soumya",
""
],
[
"Verma",
"Vinay Kumar",
""
]
]
| TITLE: ADROIT: A Self-Supervised Framework for Learning Robust Representations
for Active Learning
ABSTRACT: Active learning aims to select optimal samples for labeling, minimizing
annotation costs. This paper introduces a unified representation learning
framework tailored for active learning with task awareness. It integrates
diverse sources, comprising reconstruction, adversarial, self-supervised,
knowledge-distillation, and classification losses into a unified VAE-based
ADROIT approach. The proposed approach comprises three key components - a
unified representation generator (VAE), a state discriminator, and a (proxy)
task-learner or classifier. ADROIT learns a latent code using both labeled and
unlabeled data, incorporating task-awareness by leveraging labeled data with
the proxy classifier. Unlike previous approaches, the proxy classifier
additionally employs a self-supervised loss on unlabeled data and utilizes
knowledge distillation to align with the target task-learner. The state
discriminator distinguishes between labeled and unlabeled data, facilitating
the selection of informative unlabeled samples. The dynamic interaction between
VAE and the state discriminator creates a competitive environment, with the VAE
attempting to deceive the discriminator, while the state discriminator learns
to differentiate between labeled and unlabeled inputs. Extensive evaluations on
diverse datasets and ablation analysis affirm the effectiveness of the proposed
model.
| no_new_dataset | 0.94428 |
2503.07511 | Chengmeng Li | Chengmeng Li, Junjie Wen, Yan Peng, Yaxin Peng, Feifei Feng, Yichen
Zhu | PointVLA: Injecting the 3D World into Vision-Language-Action Models | null | null | null | null | cs.RO cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Vision-Language-Action (VLA) models excel at robotic tasks by leveraging
large-scale 2D vision-language pretraining, but their reliance on RGB images
limits spatial reasoning critical for real-world interaction. Retraining these
models with 3D data is computationally prohibitive, while discarding existing
2D datasets wastes valuable resources. To bridge this gap, we propose PointVLA,
a framework that enhances pre-trained VLAs with point cloud inputs without
requiring retraining. Our method freezes the vanilla action expert and injects
3D features via a lightweight modular block. To identify the most effective way
of integrating point cloud representations, we conduct a skip-block analysis to
pinpoint less useful blocks in the vanilla action expert, ensuring that 3D
features are injected only into these blocks--minimizing disruption to
pre-trained representations.
Extensive experiments demonstrate that PointVLA outperforms state-of-the-art
2D imitation learning methods, such as OpenVLA, Diffusion Policy and DexVLA,
across both simulated and real-world robotic tasks. Specifically, we highlight
several key advantages of PointVLA enabled by point cloud integration: (1)
Few-shot multi-tasking, where PointVLA successfully performs four different
tasks using only 20 demonstrations each; (2) Real-vs-photo discrimination,
where PointVLA distinguishes real objects from their images, leveraging 3D
world knowledge to improve safety and reliability; (3) Height adaptability:
unlike conventional 2D imitation learning methods, PointVLA enables robots to
adapt to objects at varying table heights unseen in the training data.
Furthermore, PointVLA achieves strong performance in long-horizon tasks, such
as picking and packing objects from a moving conveyor belt, showcasing its
ability to generalize across complex, dynamic environments.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:32:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Chengmeng",
""
],
[
"Wen",
"Junjie",
""
],
[
"Peng",
"Yan",
""
],
[
"Peng",
"Yaxin",
""
],
[
"Feng",
"Feifei",
""
],
[
"Zhu",
"Yichen",
""
]
]
| TITLE: PointVLA: Injecting the 3D World into Vision-Language-Action Models
ABSTRACT: Vision-Language-Action (VLA) models excel at robotic tasks by leveraging
large-scale 2D vision-language pretraining, but their reliance on RGB images
limits spatial reasoning critical for real-world interaction. Retraining these
models with 3D data is computationally prohibitive, while discarding existing
2D datasets wastes valuable resources. To bridge this gap, we propose PointVLA,
a framework that enhances pre-trained VLAs with point cloud inputs without
requiring retraining. Our method freezes the vanilla action expert and injects
3D features via a lightweight modular block. To identify the most effective way
of integrating point cloud representations, we conduct a skip-block analysis to
pinpoint less useful blocks in the vanilla action expert, ensuring that 3D
features are injected only into these blocks--minimizing disruption to
pre-trained representations.
Extensive experiments demonstrate that PointVLA outperforms state-of-the-art
2D imitation learning methods, such as OpenVLA, Diffusion Policy and DexVLA,
across both simulated and real-world robotic tasks. Specifically, we highlight
several key advantages of PointVLA enabled by point cloud integration: (1)
Few-shot multi-tasking, where PointVLA successfully performs four different
tasks using only 20 demonstrations each; (2) Real-vs-photo discrimination,
where PointVLA distinguishes real objects from their images, leveraging 3D
world knowledge to improve safety and reliability; (3) Height adaptability:
unlike conventional 2D imitation learning methods, PointVLA enables robots to
adapt to objects at varying table heights unseen in the training data.
Furthermore, PointVLA achieves strong performance in long-horizon tasks, such
as picking and packing objects from a moving conveyor belt, showcasing its
ability to generalize across complex, dynamic environments.
| no_new_dataset | 0.946843 |
2503.07516 | Weize Li | Weize Li, Yunhao Du, Qixiang Yin, Zhicheng Zhao, Fei Su, Daqi Liu | CPAny: Couple With Any Encoder to Refer Multi-Object Tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring Multi-Object Tracking (RMOT) aims to localize target trajectories
specified by natural language expressions in videos. Existing RMOT methods
mainly follow two paradigms, namely, one-stage strategies and two-stage ones.
The former jointly trains tracking with referring but suffers from substantial
computational overhead. Although the latter improves computational efficiency,
its CLIP-inspired dual-tower architecture restricts compatibility with other
visual/text backbones and is not future-proof. To overcome these limitations,
we propose CPAny, a novel encoder-decoder framework for two-stage RMOT, which
introduces two core components: (1) a Contextual Visual Semantic Abstractor
(CVSA) performs context-aware aggregation on visual backbone features and
projects them into a unified semantic space; (2) a Parallel Semantic Summarizer
(PSS) decodes the visual and linguistic features at the semantic level in
parallel and generates referring scores. By replacing the inherent feature
alignment of encoders with a self-constructed unified semantic space, CPAny
achieves flexible compatibility with arbitrary emerging visual / text encoders.
Meanwhile, CPAny aggregates contextual information by encoding only once and
processes multiple expressions in parallel, significantly reducing
computational redundancy. Extensive experiments on the Refer-KITTI and
Refer-KITTI-V2 datasets show that CPAny outperforms SOTA methods across diverse
encoder combinations, with a notable 7.77\% HOTA improvement on
Refer-KITTI-V2. Code will be available soon.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:38:42 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Weize",
""
],
[
"Du",
"Yunhao",
""
],
[
"Yin",
"Qixiang",
""
],
[
"Zhao",
"Zhicheng",
""
],
[
"Su",
"Fei",
""
],
[
"Liu",
"Daqi",
""
]
]
| TITLE: CPAny: Couple With Any Encoder to Refer Multi-Object Tracking
ABSTRACT: Referring Multi-Object Tracking (RMOT) aims to localize target trajectories
specified by natural language expressions in videos. Existing RMOT methods
mainly follow two paradigms, namely, one-stage strategies and two-stage ones.
The former jointly trains tracking with referring but suffers from substantial
computational overhead. Although the latter improves computational efficiency,
its CLIP-inspired dual-tower architecture restricts compatibility with other
visual/text backbones and is not future-proof. To overcome these limitations,
we propose CPAny, a novel encoder-decoder framework for two-stage RMOT, which
introduces two core components: (1) a Contextual Visual Semantic Abstractor
(CVSA) performs context-aware aggregation on visual backbone features and
projects them into a unified semantic space; (2) a Parallel Semantic Summarizer
(PSS) decodes the visual and linguistic features at the semantic level in
parallel and generates referring scores. By replacing the inherent feature
alignment of encoders with a self-constructed unified semantic space, CPAny
achieves flexible compatibility with arbitrary emerging visual / text encoders.
Meanwhile, CPAny aggregates contextual information by encoding only once and
processes multiple expressions in parallel, significantly reducing
computational redundancy. Extensive experiments on the Refer-KITTI and
Refer-KITTI-V2 datasets show that CPAny outperforms SOTA methods across diverse
encoder combinations, with a notable 7.77\% HOTA improvement on
Refer-KITTI-V2. Code will be available soon.
| no_new_dataset | 0.938407 |
2503.07517 | Takeru Inoue | Takeru Inoue, Ryusuke Miyamoto | FastInstShadow: A Simple Query-Based Model for Instance Shadow Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Instance shadow detection is the task of detecting pairs of shadows and
objects, where existing methods first detect shadows and objects independently,
then associate them. This paper introduces FastInstShadow, a method that
enhances detection accuracy through a query-based architecture featuring an
association transformer decoder with two dual-path transformer decoders to
assess relationships between shadows and objects during detection. Experimental
results using the SOBA dataset showed that the proposed method outperforms all
existing methods across all criteria. This method makes real-time processing
feasible for moderate-resolution images with better accuracy than SSISv2, the
most accurate existing method. Our code is available at
https://github.com/wlotkr/FastInstShadow.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 16:39:01 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Inoue",
"Takeru",
""
],
[
"Miyamoto",
"Ryusuke",
""
]
]
| TITLE: FastInstShadow: A Simple Query-Based Model for Instance Shadow Detection
ABSTRACT: Instance shadow detection is the task of detecting pairs of shadows and
objects, where existing methods first detect shadows and objects independently,
then associate them. This paper introduces FastInstShadow, a method that
enhances detection accuracy through a query-based architecture featuring an
association transformer decoder with two dual-path transformer decoders to
assess relationships between shadows and objects during detection. Experimental
results using the SOBA dataset showed that the proposed method outperforms all
existing methods across all criteria. This method makes real-time processing
feasible for moderate-resolution images with better accuracy than SSISv2, the
most accurate existing method. Our code is available at
https://github.com/wlotkr/FastInstShadow.
| no_new_dataset | 0.947186 |
2503.07550 | Haoran Li | Haoran Li, Junfeng Hu | KSOD: Knowledge Supplement for LLMs On Demand | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have demonstrated remarkable capabilities in
various tasks, yet still produce errors in domain-specific tasks. To further
improve their performance, we propose KSOD (Knowledge Supplement for LLMs On
Demand), a novel framework that empowers LLMs to improve their capabilities
with knowledge-based supervised fine-tuning (SFT). KSOD analyzes the causes of
errors from the perspective of knowledge deficiency by identifying potential
missing knowledge in LLM that may lead to the errors. Subsequently, KSOD tunes
a knowledge module on knowledge dataset and verifies whether the LLM lacks the
identified knowledge based on it. If the knowledge is verified, KSOD
supplements the LLM with the identified knowledge using the knowledge module.
Tuning LLMs on specific knowledge instead of specific task decouples task and
knowledge and our experiments on two domain-specific benchmarks and four
general benchmarks empirically demonstrate that KSOD enhances the performance
of LLMs on tasks requiring the supplemented knowledge while preserving their
performance on other tasks. Our findings shed light on the potential of
improving the capabilities of LLMs with knowledge-based SFT.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:17:41 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Li",
"Haoran",
""
],
[
"Hu",
"Junfeng",
""
]
]
| TITLE: KSOD: Knowledge Supplement for LLMs On Demand
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable capabilities in
various tasks, yet still produce errors in domain-specific tasks. To further
improve their performance, we propose KSOD (Knowledge Supplement for LLMs On
Demand), a novel framework that empowers LLMs to improve their capabilities
with knowledge-based supervised fine-tuning (SFT). KSOD analyzes the causes of
errors from the perspective of knowledge deficiency by identifying potential
missing knowledge in the LLM that may lead to the errors. Subsequently, KSOD
tunes a knowledge module on a knowledge dataset and verifies, based on it,
whether the LLM lacks the identified knowledge. If the knowledge deficit is
confirmed, KSOD supplements the LLM with the identified knowledge using the
knowledge module. Tuning LLMs on specific knowledge rather than on specific
tasks decouples task and knowledge, and our experiments on two domain-specific
benchmarks and four
general benchmarks empirically demonstrate that KSOD enhances the performance
of LLMs on tasks requiring the supplemented knowledge while preserving their
performance on other tasks. Our findings shed light on the potential of
improving the capabilities of LLMs with knowledge-based SFT.
| no_new_dataset | 0.941975 |
2503.07561 | Thibaut Loiseau | Thibaut Loiseau, Guillaume Bourmaud, Vincent Lepetit | Alligat0R: Pre-Training Through Co-Visibility Segmentation for Relative
Camera Pose Regression | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-training techniques have greatly advanced computer vision, with CroCo's
cross-view completion approach yielding impressive results in tasks like 3D
reconstruction and pose regression. However, this method requires substantial
overlap between training pairs, limiting its effectiveness. We introduce
Alligat0R, a novel pre-training approach that reformulates cross-view learning
as a co-visibility segmentation task. Our method predicts whether each pixel in
one image is co-visible in the second image, occluded, or outside the field of
view (FOV), enabling the use of image pairs with any degree of overlap and
providing interpretable predictions. To support this, we present Cub3, a
large-scale dataset with 2.5 million image pairs and dense co-visibility
annotations derived from the nuScenes dataset. This dataset includes diverse
scenarios with varying degrees of overlap. The experiments show that Alligat0R
significantly outperforms CroCo in relative pose regression, especially in
scenarios with limited overlap. Alligat0R and Cub3 will be made publicly
available.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:29:48 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Loiseau",
"Thibaut",
""
],
[
"Bourmaud",
"Guillaume",
""
],
[
"Lepetit",
"Vincent",
""
]
]
| TITLE: Alligat0R: Pre-Training Through Co-Visibility Segmentation for Relative
Camera Pose Regression
ABSTRACT: Pre-training techniques have greatly advanced computer vision, with CroCo's
cross-view completion approach yielding impressive results in tasks like 3D
reconstruction and pose regression. However, this method requires substantial
overlap between training pairs, limiting its effectiveness. We introduce
Alligat0R, a novel pre-training approach that reformulates cross-view learning
as a co-visibility segmentation task. Our method predicts whether each pixel in
one image is co-visible in the second image, occluded, or outside the field of
view (FOV), enabling the use of image pairs with any degree of overlap and
providing interpretable predictions. To support this, we present Cub3, a
large-scale dataset with 2.5 million image pairs and dense co-visibility
annotations derived from the nuScenes dataset. This dataset includes diverse
scenarios with varying degrees of overlap. The experiments show that Alligat0R
significantly outperforms CroCo in relative pose regression, especially in
scenarios with limited overlap. Alligat0R and Cub3 will be made publicly
available.
| new_dataset | 0.960768 |
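To illustrate how per-pixel co-visibility labels of the kind described above
can be derived, the following numpy sketch back-projects the pixels of one
image with their depth, transforms them into the second camera, and classifies
each pixel as co-visible, occluded (by a depth test), or out of FOV. The
pinhole model and the 5% occlusion tolerance are assumptions; this is not the
authors' annotation pipeline for Cub3.

# Sketch: per-pixel co-visibility labels (0 = co-visible, 1 = occluded,
# 2 = out of FOV) from depth maps and a relative pose (R, t).
import numpy as np

def covisibility_labels(depth1, depth2, K, R, t, tol=0.05):
    h, w = depth1.shape
    v, u = np.mgrid[:h, :w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts1 = np.linalg.inv(K) @ pix * depth1.reshape(-1)   # camera-1 3D points
    pts2 = R @ pts1 + t[:, None]                          # into camera-2 frame
    z2 = pts2[2]
    proj = K @ pts2
    u2, v2 = proj[0] / z2, proj[1] / z2
    labels = np.full(h * w, 2, dtype=np.uint8)            # default: out of FOV
    in_fov = (z2 > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    ui, vi = u2[in_fov].astype(int), v2[in_fov].astype(int)
    observed = depth2[vi, ui]                             # depth seen by camera 2
    occluded = z2[in_fov] > observed * (1 + tol)          # something in front
    labels[np.flatnonzero(in_fov)] = np.where(occluded, 1, 0)
    return labels.reshape(h, w)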
2503.07563 | Canyi Chen | Canyi Chen, Nan Qiao, Liping Zhu | Efficient Distributed Learning over Decentralized Networks with
Convoluted Support Vector Machine | null | null | null | null | stat.ML cs.DC cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the problem of efficiently classifying high-dimensional
data over decentralized networks. Penalized support vector machines (SVMs) are
widely used for high-dimensional classification tasks. However, the double
nonsmoothness of the objective function poses significant challenges in
developing efficient decentralized learning methods. Many existing procedures
suffer from slow, sublinear convergence rates. To overcome this limitation, we
consider a convolution-based smoothing technique for the nonsmooth hinge loss
function. The resulting loss function remains convex and smooth. We then
develop an efficient generalized alternating direction method of multipliers
(ADMM) algorithm for solving penalized SVM over decentralized networks. Our
theoretical contributions are twofold. First, we establish that our generalized
ADMM algorithm achieves provable linear convergence with a simple
implementation. Second, after a sufficient number of ADMM iterations, the final
sparse estimator attains near-optimal statistical convergence and accurately
recovers the true support of the underlying parameters. Extensive numerical
experiments on both simulated and real-world datasets validate our theoretical
findings.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:31:26 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Canyi",
""
],
[
"Qiao",
"Nan",
""
],
[
"Zhu",
"Liping",
""
]
]
| TITLE: Efficient Distributed Learning over Decentralized Networks with
Convoluted Support Vector Machine
ABSTRACT: This paper addresses the problem of efficiently classifying high-dimensional
data over decentralized networks. Penalized support vector machines (SVMs) are
widely used for high-dimensional classification tasks. However, the double
nonsmoothness of the objective function poses significant challenges in
developing efficient decentralized learning methods. Many existing procedures
suffer from slow, sublinear convergence rates. To overcome this limitation, we
consider a convolution-based smoothing technique for the nonsmooth hinge loss
function. The resulting loss function remains convex and smooth. We then
develop an efficient generalized alternating direction method of multipliers
(ADMM) algorithm for solving penalized SVM over decentralized networks. Our
theoretical contributions are twofold. First, we establish that our generalized
ADMM algorithm achieves provable linear convergence with a simple
implementation. Second, after a sufficient number of ADMM iterations, the final
sparse estimator attains near-optimal statistical convergence and accurately
recovers the true support of the underlying parameters. Extensive numerical
experiments on both simulated and real-world datasets validate our theoretical
findings.
| no_new_dataset | 0.945751 |
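To make the convolution-smoothing step above concrete: convolving the hinge
loss max(0, 1-u) with a Gaussian kernel of bandwidth h has the closed form
a*Phi(a/h) + h*phi(a/h) with a = 1-u, since E[max(0, a + hZ)] takes that value
for Z ~ N(0,1). The sketch below implements this smoothed loss and its (now
smooth) gradient; the Gaussian kernel is one choice consistent with the
abstract, not necessarily the kernel used in the paper.

# Gaussian-convolution-smoothed hinge loss: a smooth, convex surrogate for
# max(0, 1 - u), using E[max(0, a + hZ)] = a*Phi(a/h) + h*phi(a/h).
import numpy as np
from scipy.stats import norm

def smoothed_hinge(u, h=0.1):
    """u = y * (x @ w); h is the smoothing bandwidth (assumed Gaussian kernel)."""
    a = 1.0 - u
    return a * norm.cdf(a / h) + h * norm.pdf(a / h)

def smoothed_hinge_grad(u, h=0.1):
    """d/du of the smoothed loss; smooth everywhere, unlike the raw hinge."""
    return -norm.cdf((1.0 - u) / h)

# As h -> 0 the smoothed loss recovers the hinge loss pointwise.
u = np.linspace(-1, 3, 5)
print(smoothed_hinge(u, h=1e-3), np.maximum(0.0, 1.0 - u))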
2503.07578 | Tianyu Chen | Tianyu Chen, Yasi Zhang, Zhendong Wang, Ying Nian Wu, Oscar Leong,
Mingyuan Zhou | Denoising Score Distillation: From Noisy Diffusion Pretraining to
One-Step High-Quality Generation | First Author and Second Author contributed equally to this work. The
last two authors equally advised this work | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models have achieved remarkable success in generating
high-resolution, realistic images across diverse natural distributions.
However, their performance heavily relies on high-quality training data, making
it challenging to learn meaningful distributions from corrupted samples. This
limitation restricts their applicability in scientific domains where clean data
is scarce or costly to obtain. In this work, we introduce denoising score
distillation (DSD), a surprisingly effective and novel approach for training
high-quality generative models from low-quality data. DSD first pretrains a
diffusion model exclusively on noisy, corrupted samples and then distills it
into a one-step generator capable of producing refined, clean outputs. While
score distillation is traditionally viewed as a method to accelerate diffusion
models, we show that it can also significantly enhance sample quality,
particularly when starting from a degraded teacher model. Across varying noise
levels and datasets, DSD consistently improves generative performance; we
summarize our empirical evidence in Fig. 1. Furthermore, we provide theoretical
insights showing that, in a linear model setting, DSD identifies the eigenspace
of the clean data distribution's covariance matrix, implicitly regularizing the
generator. This perspective reframes score distillation as not only a tool for
efficiency but also a mechanism for improving generative models, particularly
in low-quality data settings.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:44:46 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Chen",
"Tianyu",
""
],
[
"Zhang",
"Yasi",
""
],
[
"Wang",
"Zhendong",
""
],
[
"Wu",
"Ying Nian",
""
],
[
"Leong",
"Oscar",
""
],
[
"Zhou",
"Mingyuan",
""
]
]
| TITLE: Denoising Score Distillation: From Noisy Diffusion Pretraining to
One-Step High-Quality Generation
ABSTRACT: Diffusion models have achieved remarkable success in generating
high-resolution, realistic images across diverse natural distributions.
However, their performance heavily relies on high-quality training data, making
it challenging to learn meaningful distributions from corrupted samples. This
limitation restricts their applicability in scientific domains where clean data
is scarce or costly to obtain. In this work, we introduce denoising score
distillation (DSD), a surprisingly effective and novel approach for training
high-quality generative models from low-quality data. DSD first pretrains a
diffusion model exclusively on noisy, corrupted samples and then distills it
into a one-step generator capable of producing refined, clean outputs. While
score distillation is traditionally viewed as a method to accelerate diffusion
models, we show that it can also significantly enhance sample quality,
particularly when starting from a degraded teacher model. Across varying noise
levels and datasets, DSD consistently improves generative performance; we
summarize our empirical evidence in Fig. 1. Furthermore, we provide theoretical
insights showing that, in a linear model setting, DSD identifies the eigenspace
of the clean data distribution's covariance matrix, implicitly regularizing the
generator. This perspective reframes score distillation as not only a tool for
efficiency but also a mechanism for improving generative models, particularly
in low-quality data settings.
| no_new_dataset | 0.947575 |
2503.07584 | Audun Myers | Audun Myers, Max Vargas, Sinan G. Aksoy, Cliff Joslyn, Benjamin
Wilson, Tom Grimes | Talking to GDELT Through Knowledge Graphs | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | In this work we study various Retrieval Augmented Generation (RAG)
approaches to gain an understanding of the strengths and weaknesses of each
approach in a question-answering analysis. To gain this understanding we use a
case-study subset of the Global Database of Events, Language, and Tone (GDELT)
dataset as well as a corpus of raw text scraped from the online news articles.
To retrieve information from the text corpus we implement a traditional vector
store RAG as well as state-of-the-art large language model (LLM) based
approaches for automatically constructing KGs and retrieving the relevant
subgraphs. In addition to these corpus approaches, we develop a novel
ontology-based framework for constructing knowledge graphs (KGs) from GDELT
directly which leverages the underlying schema of GDELT to create structured
representations of global events. For retrieving relevant information from the
ontology-based KGs we implement both direct graph queries and state-of-the-art
graph retrieval approaches. We compare the performance of each method in a
question-answering task. We find that while our ontology-based KGs are valuable
for question-answering, automated extraction of the relevant subgraphs is
challenging. Conversely, LLM-generated KGs, while capturing event summaries,
often lack consistency and interpretability. Our findings suggest benefits of a
synergistic approach between ontology and LLM-based KG construction, with
proposed avenues toward that end.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:48:10 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Myers",
"Audun",
""
],
[
"Vargas",
"Max",
""
],
[
"Aksoy",
"Sinan G.",
""
],
[
"Joslyn",
"Cliff",
""
],
[
"Wilson",
"Benjamin",
""
],
[
"Grimes",
"Tom",
""
]
]
| TITLE: Talking to GDELT Through Knowledge Graphs
ABSTRACT: In this work we study various Retrieval Augmented Generation (RAG)
approaches to gain an understanding of the strengths and weaknesses of each
approach in a question-answering analysis. To gain this understanding we use a
case-study subset of the Global Database of Events, Language, and Tone (GDELT)
dataset as well as a corpus of raw text scraped from the online news articles.
To retrieve information from the text corpus we implement a traditional vector
store RAG as well as state-of-the-art large language model (LLM) based
approaches for automatically constructing KGs and retrieving the relevant
subgraphs. In addition to these corpus approaches, we develop a novel
ontology-based framework for constructing knowledge graphs (KGs) from GDELT
directly which leverages the underlying schema of GDELT to create structured
representations of global events. For retrieving relevant information from the
ontology-based KGs we implement both direct graph queries and state-of-the-art
graph retrieval approaches. We compare the performance of each method in a
question-answering task. We find that while our ontology-based KGs are valuable
for question-answering, automated extraction of the relevant subgraphs is
challenging. Conversely, LLM-generated KGs, while capturing event summaries,
often lack consistency and interpretability. Our findings suggest benefits of a
synergistic approach between ontology and LLM-based KG construction, with
proposed avenues toward that end.
| no_new_dataset | 0.942876 |
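A minimal sketch of the "traditional vector store RAG" baseline the record
above mentions, using sentence-transformers embeddings and a FAISS
inner-product index over news-article chunks. The embedding model name and
the retrieval depth are assumptions; the ontology-based KG construction is
not shown here.

# Minimal vector-store RAG retrieval over a news-text corpus.
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def build_index(chunks):
    emb = encoder.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])  # cosine via normalized dot product
    index.add(emb)
    return index

def retrieve(index, chunks, question, k=5):
    q = encoder.encode([question], normalize_embeddings=True)
    _, ids = index.search(q, k)
    return [chunks[i] for i in ids[0]]

# The retrieved chunks would then be concatenated into the LLM prompt for
# question answering.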
2503.07587 | Arturo Deza | Dunant Cusipuma, David Ortega, Victor Flores-Benites, Arturo Deza | Robusto-1 Dataset: Comparing Humans and VLMs on real out-of-distribution
Autonomous Driving VQA from Peru | A pre-print. 26 pages. Link to Code + Data:
https://huggingface.co/datasets/Artificio/robusto-1 | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | As multimodal foundational models start being deployed experimentally in
Self-Driving cars, a reasonable question we ask ourselves is how similarly to
humans these systems respond in certain driving situations -- especially
those that are out-of-distribution? To study this, we create the Robusto-1
dataset that uses dashcam video data from Peru, a country with some of the
worst (most aggressive) drivers in the world, a high traffic index, and a high
ratio of bizarre to non-bizarre street objects likely never seen in training.
In particular, to preliminarily test at a cognitive level how well Foundational
Visual Language Models (VLMs) compare to humans in driving, we move away from
bounding boxes, segmentation maps, occupancy maps or trajectory estimation to
multi-modal Visual Question Answering (VQA) comparing both humans and machines
through a popular method in systems neuroscience known as Representational
Similarity Analysis (RSA). Depending on the type of questions we ask and the
answers these systems give, we show in which cases VLMs and humans converge or
diverge, allowing us to probe their cognitive alignment. We find
that the degree of alignment varies significantly depending on the type of
questions asked to each type of system (Humans vs VLMs), highlighting a gap in
their alignment.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:50:04 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Cusipuma",
"Dunant",
""
],
[
"Ortega",
"David",
""
],
[
"Flores-Benites",
"Victor",
""
],
[
"Deza",
"Arturo",
""
]
]
| TITLE: Robusto-1 Dataset: Comparing Humans and VLMs on real out-of-distribution
Autonomous Driving VQA from Peru
ABSTRACT: As multimodal foundational models start being deployed experimentally in
Self-Driving cars, a reasonable question to ask is: how similarly to humans do
these systems respond in certain driving situations -- especially those that
are out-of-distribution? To study this, we create the Robusto-1
dataset that uses dashcam video data from Peru, a country with some of the
worst (most aggressive) drivers in the world, a high traffic index, and a high
ratio of bizarre to non-bizarre street objects likely never seen in training.
In particular, to preliminarily test at a cognitive level how well Foundational
Visual Language Models (VLMs) compare to humans in driving, we move away from
bounding boxes, segmentation maps, occupancy maps or trajectory estimation to
multi-modal Visual Question Answering (VQA) comparing both humans and machines
through a popular method in systems neuroscience known as Representational
Similarity Analysis (RSA). Depending on the type of questions we ask and the
answers these systems give, we show in which cases VLMs and humans converge or
diverge, allowing us to probe their cognitive alignment. We find
that the degree of alignment varies significantly depending on the type of
questions asked to each type of system (Humans vs VLMs), highlighting a gap in
their alignment.
| new_dataset | 0.965283 |
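Representational Similarity Analysis (RSA), the comparison method this abstract borrows from systems neuroscience, works by building a representational dissimilarity matrix (RDM) per system and correlating the two. A minimal sketch, assuming random placeholder vectors in place of the human and VLM answer representations:

```python
# RSA sketch (illustrative): correlate the dissimilarity structure of two
# systems' responses. Random vectors stand in for human/VLM answer embeddings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
human_reps = rng.normal(size=(20, 64))  # 20 questions x feature dim (placeholder)
vlm_reps = rng.normal(size=(20, 64))

# Condensed RDMs: pairwise dissimilarity between questions within each system.
rdm_human = pdist(human_reps, metric="correlation")
rdm_vlm = pdist(vlm_reps, metric="correlation")

# RSA score: rank correlation between the two dissimilarity structures.
rho, p = spearmanr(rdm_human, rdm_vlm)
print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3f}")
```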
2503.07597 | Guanlin Wu | Yuhong Zhang, Guanlin Wu, Ling-Hao Chen, Zhuokai Zhao, Jing Lin,
Xiaoke Jiang, Jiamin Wu, Zhuoheng Li, Hao Frank Yang, Haoqian Wang, Lei Zhang | HumanMM: Global Human Motion Recovery from Multi-shot Videos | CVPR 2025; Project page: https://zhangyuhong01.github.io/HumanMM/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel framework designed to reconstruct
long-sequence 3D human motion in the world coordinates from in-the-wild videos
with multiple shot transitions. Such long-sequence in-the-wild motions are
highly valuable to applications such as motion generation and motion
understanding, but are highly challenging to recover due to abrupt shot
transitions, partial occlusions, and dynamic backgrounds present in such
videos. Existing methods primarily focus on single-shot videos, where
continuity is maintained within a single camera view, or simplify multi-shot
alignment in camera space only. In this work, we tackle the challenges by
integrating an enhanced camera pose estimation with Human Motion Recovery (HMR)
by incorporating a shot transition detector and a robust alignment module for
accurate pose and orientation continuity across shots. By leveraging a custom
motion integrator, we effectively mitigate the problem of foot sliding and
ensure temporal consistency in human pose. Extensive evaluations on our created
multi-shot dataset from public 3D human datasets demonstrate the robustness of
our method in reconstructing realistic human motion in world coordinates.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:57:03 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Zhang",
"Yuhong",
""
],
[
"Wu",
"Guanlin",
""
],
[
"Chen",
"Ling-Hao",
""
],
[
"Zhao",
"Zhuokai",
""
],
[
"Lin",
"Jing",
""
],
[
"Jiang",
"Xiaoke",
""
],
[
"Wu",
"Jiamin",
""
],
[
"Li",
"Zhuoheng",
""
],
[
"Yang",
"Hao Frank",
""
],
[
"Wang",
"Haoqian",
""
],
[
"Zhang",
"Lei",
""
]
]
| TITLE: HumanMM: Global Human Motion Recovery from Multi-shot Videos
ABSTRACT: In this paper, we present a novel framework designed to reconstruct
long-sequence 3D human motion in the world coordinates from in-the-wild videos
with multiple shot transitions. Such long-sequence in-the-wild motions are
highly valuable to applications such as motion generation and motion
understanding, but are highly challenging to recover due to abrupt shot
transitions, partial occlusions, and dynamic backgrounds present in such
videos. Existing methods primarily focus on single-shot videos, where
continuity is maintained within a single camera view, or simplify multi-shot
alignment in camera space only. In this work, we tackle the challenges by
integrating an enhanced camera pose estimation with Human Motion Recovery (HMR)
by incorporating a shot transition detector and a robust alignment module for
accurate pose and orientation continuity across shots. By leveraging a custom
motion integrator, we effectively mitigate the problem of foot sliding and
ensure temporal consistency in human pose. Extensive evaluations on our created
multi-shot dataset from public 3D human datasets demonstrate the robustness of
our method in reconstructing realistic human motion in world coordinates.
| new_dataset | 0.956022 |
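The shot transition detector is not specified in the abstract; a common baseline is to flag a cut when the histogram distance between consecutive frames spikes. A sketch of that baseline, with a synthetic clip and an assumed threshold (not the paper's method):

```python
# Baseline shot-transition detector sketch (not the paper's method): flag a
# cut when the histogram distance between consecutive frames spikes.
import numpy as np

def frame_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255), density=True)
    return hist

def detect_cuts(frames: list[np.ndarray], threshold: float = 0.05) -> list[int]:
    """Return indices i where a cut is detected between frames i-1 and i."""
    cuts = []
    prev = frame_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame_histogram(frame)
        if np.abs(cur - prev).sum() > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic clip: 10 dark frames followed by 10 bright frames -> one cut at 10.
frames = [np.full((8, 8), 30, np.uint8)] * 10 + [np.full((8, 8), 220, np.uint8)] * 10
print(detect_cuts(frames))  # [10]
```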
2503.07603 | Sedrick Keh | Sedrick Keh, Jean Mercat, Samir Yitzhak Gadre, Kushal Arora, Igor
Vasiljevic, Benjamin Burchfiel, Shuran Song, Russ Tedrake, Thomas Kollar,
Ludwig Schmidt, Achal Dave | Should VLMs be Pre-trained with Image Data? | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pre-trained LLMs that are further trained with image data perform well on
vision-language tasks. While adding images during a second training phase
effectively unlocks this capability, it is unclear how much of a gain or loss
this two-step pipeline gives over VLMs which integrate images earlier into the
training process. To investigate this, we train models spanning various
datasets, scales, image-text ratios, and amounts of pre-training done before
introducing vision tokens. We then fine-tune these models and evaluate their
downstream performance on a suite of vision-language and text-only tasks. We
find that pre-training with a mixture of image and text data allows models to
perform better on vision-language tasks while maintaining strong performance on
text-only evaluations. On an average of 6 diverse tasks, we find that for a 1B
model, introducing visual tokens 80% of the way through pre-training results in
a 2% average improvement over introducing visual tokens to a fully pre-trained
model.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:58:19 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Keh",
"Sedrick",
""
],
[
"Mercat",
"Jean",
""
],
[
"Gadre",
"Samir Yitzhak",
""
],
[
"Arora",
"Kushal",
""
],
[
"Vasiljevic",
"Igor",
""
],
[
"Burchfiel",
"Benjamin",
""
],
[
"Song",
"Shuran",
""
],
[
"Tedrake",
"Russ",
""
],
[
"Kollar",
"Thomas",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Dave",
"Achal",
""
]
]
| TITLE: Should VLMs be Pre-trained with Image Data?
ABSTRACT: Pre-trained LLMs that are further trained with image data perform well on
vision-language tasks. While adding images during a second training phase
effectively unlocks this capability, it is unclear how much of a gain or loss
this two-step pipeline gives over VLMs which integrate images earlier into the
training process. To investigate this, we train models spanning various
datasets, scales, image-text ratios, and amounts of pre-training done before
introducing vision tokens. We then fine-tune these models and evaluate their
downstream performance on a suite of vision-language and text-only tasks. We
find that pre-training with a mixture of image and text data allows models to
perform better on vision-language tasks while maintaining strong performance on
text-only evaluations. On an average of 6 diverse tasks, we find that for a 1B
model, introducing visual tokens 80% of the way through pre-training results in
a 2% average improvement over introducing visual tokens to a fully pre-trained
model.
| no_new_dataset | 0.947914 |
2503.07607 | Ying Xu | Ying Xu, Marius Pedersen, Kiran Raja | VoD: Learning Volume of Differences for Video-Based Deepfake Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The rapid development of deep learning and generative AI technologies has
profoundly transformed the digital content landscape, creating realistic
Deepfakes that pose substantial challenges to public trust and digital media
integrity. This paper introduces a novel Deepfake detection framework, Volume
of Differences (VoD), designed to enhance detection accuracy by exploiting
temporal and spatial inconsistencies between consecutive video frames. VoD
employs a progressive learning approach that captures differences across
multiple axes through the use of consecutive frame differences (CFD) and a
network with stepwise expansions. We evaluate our approach with intra-dataset
and cross-dataset testing scenarios on various well-known Deepfake datasets.
Our findings demonstrate that VoD excels with the data it has been trained on
and shows strong adaptability to novel, unseen data. Additionally,
comprehensive ablation studies examine various configurations of segment
length, sampling steps, and intervals, offering valuable insights for
optimizing the framework. The code for our VoD framework is available at
https://github.com/xuyingzhongguo/VoD.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:59:38 GMT"
}
]
| 2025-03-11T00:00:00 | [
[
"Xu",
"Ying",
""
],
[
"Pedersen",
"Marius",
""
],
[
"Raja",
"Kiran",
""
]
]
| TITLE: VoD: Learning Volume of Differences for Video-Based Deepfake Detection
ABSTRACT: The rapid development of deep learning and generative AI technologies has
profoundly transformed the digital content landscape, creating realistic
Deepfakes that pose substantial challenges to public trust and digital media
integrity. This paper introduces a novel Deepfake detection framework, Volume
of Differences (VoD), designed to enhance detection accuracy by exploiting
temporal and spatial inconsistencies between consecutive video frames. VoD
employs a progressive learning approach that captures differences across
multiple axes through the use of consecutive frame differences (CFD) and a
network with stepwise expansions. We evaluate our approach with intra-dataset
and cross-dataset testing scenarios on various well-known Deepfake datasets.
Our findings demonstrate that VoD excels with the data it has been trained on
and shows strong adaptability to novel, unseen data. Additionally,
comprehensive ablation studies examine various configurations of segment
length, sampling steps, and intervals, offering valuable insights for
optimizing the framework. The code for our VoD framework is available at
https://github.com/xuyingzhongguo/VoD.
| no_new_dataset | 0.951908 |
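The consecutive frame differences (CFD) that drive VoD amount to differencing a clip along its time axis to form a volume of differences. A minimal sketch, with placeholder shapes:

```python
# CFD sketch: build a "volume of differences" from a clip by differencing
# consecutive frames along the time axis (illustrative shapes, not the paper's).
import numpy as np

def volume_of_differences(clip: np.ndarray) -> np.ndarray:
    """clip: (T, H, W, C) float array -> (T-1, H, W, C) difference volume."""
    return np.diff(clip.astype(np.float32), axis=0)

clip = np.random.rand(16, 64, 64, 3).astype(np.float32)  # 16-frame toy clip
vod = volume_of_differences(clip)
print(clip.shape, "->", vod.shape)  # (16, 64, 64, 3) -> (15, 64, 64, 3)
```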
2101.11003 | Steven Golovkine | Steven Golovkine | FDApy: a Python package for functional data | 18 pages, 11 figures | null | 10.21105/joss.07526 | null | cs.MS cs.LG stat.CO stat.ML | http://creativecommons.org/licenses/by/4.0/ | We introduce FDApy, an open-source Python package for the analysis of
functional data. The package provides tools for the representation of
(multivariate) functional data defined on different dimensional domains and for
functional data that is irregularly sampled. Additionally, dimension reduction
techniques are implemented for multivariate and/or multidimensional functional
data that are regularly or irregularly sampled. A toolbox for generating
functional datasets is also provided. The documentation includes installation
and usage instructions, examples on simulated and real datasets and a complete
description of the API. FDApy is released under the MIT license. The code and
documentation are available at https://github.com/StevenGolovkine/FDApy.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2021 10:07:33 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Aug 2024 08:43:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Golovkine",
"Steven",
""
]
]
| TITLE: FDApy: a Python package for functional data
ABSTRACT: We introduce FDApy, an open-source Python package for the analysis of
functional data. The package provides tools for the representation of
(multivariate) functional data defined on different dimensional domains and for
functional data that is irregularly sampled. Additionally, dimension reduction
techniques are implemented for multivariate and/or multidimensional functional
data that are regularly or irregularly sampled. A toolbox for generating
functional datasets is also provided. The documentation includes installation
and usage instructions, examples on simulated and real datasets and a complete
description of the API. FDApy is released under the MIT license. The code and
documentation are available at https://github.com/StevenGolovkine/FDApy.
| no_new_dataset | 0.945801 |
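The underlying data model -- a collection of curves observed at irregular argument values -- can be sketched in plain numpy. This sketch deliberately does not use FDApy's actual API (see the linked repository for that); it only illustrates what "irregularly sampled functional data" means:

```python
# Conceptual sketch of irregularly sampled functional data (plain numpy;
# deliberately *not* FDApy's API -- consult the linked repo for the real one).
import numpy as np

rng = np.random.default_rng(1)

def sample_curve(n_points: int):
    """One functional observation: irregular grid + noisy evaluations."""
    t = np.sort(rng.uniform(0, 1, n_points))  # irregular sampling points
    y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=n_points)
    return t, y

# A "dataset" is a list of (argvals, values) pairs of varying lengths.
curves = [sample_curve(int(rng.integers(5, 15))) for _ in range(3)]
for i, (t, y) in enumerate(curves):
    print(f"curve {i}: {len(t)} irregular observations")
```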
2204.08027 | Xuejiao Tang | Xuejiao Tang and Wenbin Zhang | Attention Mechanism based Cognition-level Scene Understanding | Published in Information | null | null | null | cs.CV cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Given a question-image input, the Visual Commonsense Reasoning (VCR) model
can predict an answer with the corresponding rationale, which requires
inference ability from the real world. The VCR task, which calls for exploiting
the multi-source information as well as learning different levels of
understanding and extensive commonsense knowledge, is a cognition-level scene
understanding task. The VCR task has aroused researchers' interest due to its
wide range of applications, including visual question answering, automated
vehicle systems, and clinical decision support. Previous approaches to solving
the VCR task generally rely on pre-training or on memory-based models that
encode long-range dependencies. However, these approaches suffer from limited
generalizability and information loss over long sequences. In this
paper, we propose a parallel attention-based cognitive VCR network PAVCR, which
fuses visual-textual information efficiently and encodes semantic information
in parallel to enable the model to capture rich information for cognition-level
inference. Extensive experiments show that the proposed model yields
significant improvements over existing methods on the benchmark VCR dataset.
Moreover, the proposed model provides an intuitive interpretation of visual
commonsense reasoning.
| [
{
"version": "v1",
"created": "Sun, 17 Apr 2022 15:04:44 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 02:40:42 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 02:28:52 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Tang",
"Xuejiao",
""
],
[
"Zhang",
"Wenbin",
""
]
]
| TITLE: Attention Mechanism based Cognition-level Scene Understanding
ABSTRACT: Given a question-image input, the Visual Commonsense Reasoning (VCR) model
can predict an answer with the corresponding rationale, which requires
inference ability from the real world. The VCR task, which calls for exploiting
the multi-source information as well as learning different levels of
understanding and extensive commonsense knowledge, is a cognition-level scene
understanding task. The VCR task has aroused researchers' interest due to its
wide range of applications, including visual question answering, automated
vehicle systems, and clinical decision support. Previous approaches to solving
the VCR task generally rely on pre-training or on memory-based models that
encode long-range dependencies. However, these approaches suffer from limited
generalizability and information loss over long sequences. In this
paper, we propose a parallel attention-based cognitive VCR network PAVCR, which
fuses visual-textual information efficiently and encodes semantic information
in parallel to enable the model to capture rich information for cognition-level
inference. Extensive experiments show that the proposed model yields
significant improvements over existing methods on the benchmark VCR dataset.
Moreover, the proposed model provides an intuitive interpretation of visual
commonsense reasoning.
| no_new_dataset | 0.947721 |
2308.01196 | Jorge Paz-Ruza | Jorge Paz-Ruza, Amparo Alonso-Betanzos, Berta Guijarro-Berdi\~nas,
Brais Cancela, Carlos Eiras-Franco | Sustainable transparency in Recommender Systems: Bayesian Ranking of
Images for Explainability | null | null | 10.1016/j.inffus.2024.102497 | null | cs.IR cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recommender Systems have become crucial in the modern world, commonly guiding
users towards relevant content or products, and having a large influence over
the decisions of users and citizens. However, ensuring transparency and user
trust in these systems remains a challenge; personalized explanations have
emerged as a solution, offering justifications for recommendations. Among the
existing approaches for generating personalized explanations, using existing
visual content created by users is a promising option to maximize transparency
and user trust. State-of-the-art models that follow this approach, despite
leveraging highly optimized architectures, employ surrogate learning tasks that
do not efficiently model the objective of ranking images as explanations for a
given recommendation; this leads to a suboptimal training process with high
computational costs that may not be reduced without affecting model
performance. This work presents BRIE, a novel model where we leverage Bayesian
Pairwise Ranking to enhance the training process, allowing us to consistently
outperform state-of-the-art models in six real-world datasets while reducing
its model size by up to 64 times and its CO2 emissions by up to 75% in training
and inference.
| [
{
"version": "v1",
"created": "Thu, 27 Jul 2023 22:57:55 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Dec 2023 11:27:00 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 12:31:27 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Paz-Ruza",
"Jorge",
""
],
[
"Alonso-Betanzos",
"Amparo",
""
],
[
"Guijarro-Berdiñas",
"Berta",
""
],
[
"Cancela",
"Brais",
""
],
[
"Eiras-Franco",
"Carlos",
""
]
]
| TITLE: Sustainable transparency in Recommender Systems: Bayesian Ranking of
Images for Explainability
ABSTRACT: Recommender Systems have become crucial in the modern world, commonly guiding
users towards relevant content or products, and having a large influence over
the decisions of users and citizens. However, ensuring transparency and user
trust in these systems remains a challenge; personalized explanations have
emerged as a solution, offering justifications for recommendations. Among the
existing approaches for generating personalized explanations, using existing
visual content created by users is a promising option to maximize transparency
and user trust. State-of-the-art models that follow this approach, despite
leveraging highly optimized architectures, employ surrogate learning tasks that
do not efficiently model the objective of ranking images as explanations for a
given recommendation; this leads to a suboptimal training process with high
computational costs that may not be reduced without affecting model
performance. This work presents BRIE, a novel model where we leverage Bayesian
Pairwise Ranking to enhance the training process, allowing us to consistently
outperform state-of-the-art models in six real-world datasets while reducing
its model size by up to 64 times and its CO2 emissions by up to 75% in training
and inference.
| no_new_dataset | 0.947672 |
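Bayesian Pairwise Ranking, the training signal BRIE leverages, maximizes the log-sigmoid of the score margin between a relevant (user-posted) image and a sampled negative. A minimal loss sketch on toy scores (the scoring model itself is omitted):

```python
# BPR loss sketch: prefer the user-provided (positive) image over a sampled
# negative by maximizing log-sigmoid of the score difference (toy scores).
import torch
import torch.nn.functional as F

pos_scores = torch.tensor([2.1, 0.3, 1.5])   # scores of images users did post
neg_scores = torch.tensor([0.5, 0.9, -0.2])  # scores of sampled negatives

bpr_loss = -F.logsigmoid(pos_scores - neg_scores).mean()
print(float(bpr_loss))  # lower is better; backprop through real scores in training
```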
2309.04145 | Weijian Xie | Weijian Xie, Guanyi Chu, Quanhao Qian, Yihao Yu, Hai Li, Danpeng Chen,
Shangjin Zhai, Nan Wang, Hujun Bao, Guofeng Zhang | Depth Completion with Multiple Balanced Bases and Confidence for Dense
Monocular SLAM | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dense SLAM based on monocular cameras has immense application value in the
field of AR/VR, especially when it is performed on a mobile
device. In this paper, we propose a novel method that integrates a light-weight
depth completion network into a sparse SLAM system using a multi-basis depth
representation, so that dense mapping can be performed online even on a mobile
phone. Specifically, we present an optimized multi-basis depth completion
network, called BBC-Net, tailored to the characteristics of
traditional sparse SLAM systems. BBC-Net can predict multiple balanced bases
and a confidence map from a monocular image with sparse points generated by
off-the-shelf keypoint-based SLAM systems. The final depth is a linear
combination of predicted depth bases that can be optimized by tuning the
corresponding weights. To seamlessly incorporate the weights into traditional
SLAM optimization and ensure efficiency and robustness, we design a set of
depth weight factors, which makes our network a versatile plug-in module,
facilitating easy integration into various existing sparse SLAM systems and
significantly enhancing global depth consistency through bundle adjustment. To
verify the portability of our method, we integrate BBC-Net into two
representative SLAM systems. The experimental results on various datasets show
that the proposed method achieves better performance in monocular dense mapping
than the state-of-the-art methods. We provide an online demo running on a
mobile phone, which verifies the efficiency and mapping quality of the proposed
method in real-world scenarios.
| [
{
"version": "v1",
"created": "Fri, 8 Sep 2023 06:15:27 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Sep 2023 07:54:04 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 15:46:46 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xie",
"Weijian",
""
],
[
"Chu",
"Guanyi",
""
],
[
"Qian",
"Quanhao",
""
],
[
"Yu",
"Yihao",
""
],
[
"Li",
"Hai",
""
],
[
"Chen",
"Danpeng",
""
],
[
"Zhai",
"Shangjin",
""
],
[
"Wang",
"Nan",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhang",
"Guofeng",
""
]
]
| TITLE: Depth Completion with Multiple Balanced Bases and Confidence for Dense
Monocular SLAM
ABSTRACT: Dense SLAM based on monocular cameras has immense application value
in the field of AR/VR, especially when it is performed on a mobile
device. In this paper, we propose a novel method that integrates a light-weight
depth completion network into a sparse SLAM system using a multi-basis depth
representation, so that dense mapping can be performed online even on a mobile
phone. Specifically, we present an optimized multi-basis depth completion
network, called BBC-Net, tailored to the characteristics of
traditional sparse SLAM systems. BBC-Net can predict multiple balanced bases
and a confidence map from a monocular image with sparse points generated by
off-the-shelf keypoint-based SLAM systems. The final depth is a linear
combination of predicted depth bases that can be optimized by tuning the
corresponding weights. To seamlessly incorporate the weights into traditional
SLAM optimization and ensure efficiency and robustness, we design a set of
depth weight factors, which makes our network a versatile plug-in module,
facilitating easy integration into various existing sparse SLAM systems and
significantly enhancing global depth consistency through bundle adjustment. To
verify the portability of our method, we integrate BBC-Net into two
representative SLAM systems. The experimental results on various datasets show
that the proposed method achieves better performance in monocular dense mapping
than the state-of-the-art methods. We provide an online demo running on a
mobile phone, which verifies the efficiency and mapping quality of the proposed
method in real-world scenarios.
| no_new_dataset | 0.944842 |
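The final depth in BBC-Net is stated to be a linear combination of predicted depth bases with tunable weights. A minimal sketch of that combination, fitting the weights to sparse depth observations by least squares (shapes and the solver are illustrative assumptions, not the paper's optimization):

```python
# Multi-basis depth sketch: depth = sum_k w_k * basis_k, with weights fitted
# to sparse SLAM depth observations by least squares (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 48, 64, 4
bases = rng.rand(K, H, W)                       # predicted depth bases

w_true = np.array([0.5, 1.2, -0.3, 0.8])
dense_gt = np.tensordot(w_true, bases, axes=1)  # ground truth for the demo

# Sparse observations at random pixels (stand-in for SLAM keypoint depths).
idx = rng.choice(H * W, size=200, replace=False)
A = bases.reshape(K, -1)[:, idx].T              # (200, K) design matrix
b = dense_gt.reshape(-1)[idx]

w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # recover combination weights
dense_pred = np.tensordot(w_hat, bases, axes=1)
print("weight error:", np.abs(w_hat - w_true).max())
```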
2310.08944 | Carel van Niekerk | Carel van Niekerk, Christian Geishauser, Michael Heck, Shutong Feng,
Hsien-chin Lin, Nurul Lubis, Benjamin Ruppik and Renato Vukovic and Milica
Ga\v{s}i\'c | A Confidence-based Acquisition Model for Self-supervised Active Learning
and Label Correction | null | Transactions of the Association for Computational Linguistics 2025
version 13 | 10.1162/tacl_a_00734 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Supervised neural approaches are hindered by their dependence on large,
meticulously annotated datasets, a requirement that is particularly cumbersome
for sequential tasks. The quality of annotations tends to deteriorate with the
transition from expert-based to crowd-sourced labelling. To address these
challenges, we present CAMEL (Confidence-based Acquisition Model for Efficient
self-supervised active Learning), a pool-based active learning framework
tailored to sequential multi-output problems. CAMEL possesses two core
features: (1) it requires expert annotators to label only a fraction of a
chosen sequence, and (2) it facilitates self-supervision for the remainder of
the sequence. By deploying a label correction mechanism, CAMEL can also be
utilised for data cleaning. We evaluate CAMEL on two sequential tasks, with a
special emphasis on dialogue belief tracking, a task plagued by the constraints
of limited and noisy datasets. Our experiments demonstrate that CAMEL
significantly outperforms the baselines in terms of efficiency. Furthermore,
the data corrections suggested by our method contribute to an overall
improvement in the quality of the resulting datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Oct 2023 08:19:31 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Nov 2024 08:50:56 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 11:23:19 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"van Niekerk",
"Carel",
""
],
[
"Geishauser",
"Christian",
""
],
[
"Heck",
"Michael",
""
],
[
"Feng",
"Shutong",
""
],
[
"Lin",
"Hsien-chin",
""
],
[
"Lubis",
"Nurul",
""
],
[
"Ruppik",
"Benjamin",
""
],
[
"Vukovic",
"Renato",
""
],
[
"Gašić",
"Milica",
""
]
]
| TITLE: A Confidence-based Acquisition Model for Self-supervised Active Learning
and Label Correction
ABSTRACT: Supervised neural approaches are hindered by their dependence on large,
meticulously annotated datasets, a requirement that is particularly cumbersome
for sequential tasks. The quality of annotations tends to deteriorate with the
transition from expert-based to crowd-sourced labelling. To address these
challenges, we present CAMEL (Confidence-based Acquisition Model for Efficient
self-supervised active Learning), a pool-based active learning framework
tailored to sequential multi-output problems. CAMEL possesses two core
features: (1) it requires expert annotators to label only a fraction of a
chosen sequence, and (2) it facilitates self-supervision for the remainder of
the sequence. By deploying a label correction mechanism, CAMEL can also be
utilised for data cleaning. We evaluate CAMEL on two sequential tasks, with a
special emphasis on dialogue belief tracking, a task plagued by the constraints
of limited and noisy datasets. Our experiments demonstrate that CAMEL
significantly outperforms the baselines in terms of efficiency. Furthermore,
the data corrections suggested by our method contribute to an overall
improvement in the quality of the resulting datasets.
| no_new_dataset | 0.947817 |
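CAMEL's acquisition idea -- experts label only the low-confidence fraction of a sequence while the rest is self-supervised -- can be sketched as a simple confidence-ranked selection. The posteriors and the 30% budget below are placeholder assumptions:

```python
# Confidence-based acquisition sketch: send only the least-confident steps of
# a sequence to the expert; keep model labels elsewhere (budget is illustrative).
import numpy as np

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(5), size=12)  # model posteriors, 12 steps
confidence = probs.max(axis=1)                    # per-step confidence
model_labels = probs.argmax(axis=1)

budget = max(1, int(0.3 * len(confidence)))       # label 30% of the sequence
ask_expert = np.argsort(confidence)[:budget]      # lowest-confidence steps

labels = model_labels.copy()                       # self-supervised defaults
labels[ask_expert] = -1                            # -1 marks "query the expert"
print("query indices:", sorted(ask_expert.tolist()))
```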
2310.17332 | Christoph Bergmeir | Rakshitha Godahewa, Christoph Bergmeir, Zeynep Erkin Baz, Chengjun
Zhu, Zhangdi Song, Salvador Garc\'ia, Dario Benavides | On Forecast Stability | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Forecasts are typically not produced in a vacuum but in a business context,
where forecasts are generated on a regular basis and interact with each other.
For decisions, it may be important that forecasts do not change arbitrarily,
and are stable in some sense. However, this area has received only limited
attention in the forecasting literature. In this paper, we explore two types of
forecast stability that we call vertical stability and horizontal stability.
The existing works in the literature are only applicable to certain base models
and extending these frameworks to be compatible with any base model is not
straightforward. Furthermore, these frameworks can only stabilise the forecasts
vertically. To fill this gap, we propose a simple linear-interpolation-based
approach that can stabilise the forecasts provided by any base model both
vertically and horizontally. The approach can produce both accurate and
stable forecasts. Using N-BEATS, Pooled Regression and LightGBM as the base
models, in our evaluation on four publicly available datasets, the proposed
framework is able to achieve significantly higher stability and/or accuracy
compared to a set of benchmarks including a state-of-the-art forecast
stabilisation method across three error metrics and six stability metrics.
| [
{
"version": "v1",
"created": "Thu, 26 Oct 2023 11:55:30 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:58:06 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Godahewa",
"Rakshitha",
""
],
[
"Bergmeir",
"Christoph",
""
],
[
"Baz",
"Zeynep Erkin",
""
],
[
"Zhu",
"Chengjun",
""
],
[
"Song",
"Zhangdi",
""
],
[
"García",
"Salvador",
""
],
[
"Benavides",
"Dario",
""
]
]
| TITLE: On Forecast Stability
ABSTRACT: Forecasts are typically not produced in a vacuum but in a business context,
where forecasts are generated on a regular basis and interact with each other.
For decisions, it may be important that forecasts do not change arbitrarily,
and are stable in some sense. However, this area has received only limited
attention in the forecasting literature. In this paper, we explore two types of
forecast stability that we call vertical stability and horizontal stability.
The existing works in the literature are only applicable to certain base models
and extending these frameworks to be compatible with any base model is not
straightforward. Furthermore, these frameworks can only stabilise the forecasts
vertically. To fill this gap, we propose a simple linear-interpolation-based
approach that can stabilise the forecasts provided by any base model both
vertically and horizontally. The approach can produce both accurate and
stable forecasts. Using N-BEATS, Pooled Regression and LightGBM as the base
models, in our evaluation on four publicly available datasets, the proposed
framework is able to achieve significantly higher stability and/or accuracy
compared to a set of benchmarks including a state-of-the-art forecast
stabilisation method across three error metrics and six stability metrics.
| no_new_dataset | 0.947284 |
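The proposed linear-interpolation-based stabilisation can be sketched as shrinking each new forecast toward the previous origin's forecast for the same target period. The value of alpha below is arbitrary; it trades accuracy against stability:

```python
# Vertical-stability sketch: interpolate the new forecast toward the previous
# origin's forecast for the same target period (alpha chosen arbitrarily here).
import numpy as np

def stabilise(new_fc: np.ndarray, prev_fc: np.ndarray, alpha: float = 0.7):
    """alpha=1 keeps the new forecast; alpha=0 freezes the previous one."""
    return alpha * new_fc + (1.0 - alpha) * prev_fc

prev = np.array([100.0, 105.0, 110.0])  # forecasts from the earlier origin
new = np.array([120.0, 95.0, 112.0])    # raw forecasts from the new origin
print(stabilise(new, prev))              # [114.   98.  111.4]
```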
2311.10541 | Isa Inuwa-Dutse | Fatima Muhammad Adam, Abubakar Yakubu Zandam, Isa Inuwa-Dutse | Detection and Analysis of Offensive Online Content in Hausa Language | 21 pages, 4 figures, 7 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Hausa, a major Chadic language spoken by over 100 million people mostly in
West Africa is considered a low-resource language from a computational
linguistic perspective. This classification indicates a scarcity of linguistic
resources and tools necessary for handling various natural language processing
(NLP) tasks, including the detection of offensive content. To address this gap,
we conducted two sets of studies: (1) a user study (n=101) to explore
cyberbullying in Hausa, and (2) an empirical study that led to the creation of
the first dataset of offensive terms in the Hausa language. We developed
detection systems trained on this dataset and compared their performance
against relevant multilingual models, including Google Translate. Our detection
system successfully identified over 70% of offensive terms, whereas baseline models
frequently mistranslated such terms. We attribute this discrepancy to the
nuanced nature of the Hausa language and the reliance of baseline models on
direct or literal translation due to limited data to build purposive detection
systems. These findings highlight the importance of incorporating cultural
context and linguistic nuances when developing NLP models for low-resource
languages such as Hausa. A post hoc analysis further revealed that offensive
language is particularly prevalent in discussions related to religion and
politics. To foster a safer online environment, we recommend involving diverse
stakeholders with expertise in local contexts and demographics. Their insights
will be crucial in developing more accurate detection systems and targeted
moderation strategies that align with cultural sensitivities.
| [
{
"version": "v1",
"created": "Fri, 17 Nov 2023 14:08:44 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 01:18:37 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Adam",
"Fatima Muhammad",
""
],
[
"Zandam",
"Abubakar Yakubu",
""
],
[
"Inuwa-Dutse",
"Isa",
""
]
]
| TITLE: Detection and Analysis of Offensive Online Content in Hausa Language
ABSTRACT: Hausa, a major Chadic language spoken by over 100 million people mostly in
West Africa is considered a low-resource language from a computational
linguistic perspective. This classification indicates a scarcity of linguistic
resources and tools necessary for handling various natural language processing
(NLP) tasks, including the detection of offensive content. To address this gap,
we conducted two sets of studies: (1) a user study (n=101) to explore
cyberbullying in Hausa, and (2) an empirical study that led to the creation of
the first dataset of offensive terms in the Hausa language. We developed
detection systems trained on this dataset and compared their performance
against relevant multilingual models, including Google Translate. Our detection
system successfully identified over 70% of offensive terms, whereas baseline models
frequently mistranslated such terms. We attribute this discrepancy to the
nuanced nature of the Hausa language and the reliance of baseline models on
direct or literal translation due to limited data to build purposive detection
systems. These findings highlight the importance of incorporating cultural
context and linguistic nuances when developing NLP models for low-resource
languages such as Hausa. A post hoc analysis further revealed that offensive
language is particularly prevalent in discussions related to religion and
politics. To foster a safer online environment, we recommend involving diverse
stakeholders with expertise in local contexts and demographics. Their insights
will be crucial in developing more accurate detection systems and targeted
moderation strategies that align with cultural sensitivities.
| new_dataset | 0.962638 |
2401.05535 | Albert Dorador-Chalar | Albert Dorador | Theoretical and Empirical Advances in Forest Pruning | To be published in Proceedings of Machine Learning Research (PMLR) | null | null | null | stat.ML cs.AI cs.LG math.OC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Regression forests have long delivered state-of-the-art accuracy, often
outperforming regression trees and even neural networks, but they suffer from
limited interpretability as ensemble methods. In this work, we revisit forest
pruning, an approach that aims to have the best of both worlds: the accuracy of
regression forests and the interpretability of regression trees. This pursuit,
whose foundation lies at the core of random forest theory, has seen vast
success in empirical studies. In this paper, we contribute theoretical results
that support and qualify those empirical findings; namely, we prove the
asymptotic advantage of a Lasso-pruned forest over its unpruned counterpart
under weak assumptions, as well as high-probability finite-sample
generalization bounds for regression forests pruned according to the main
methods, which we then validate by way of simulation. Then, we test the
accuracy of pruned regression forests against their unpruned counterparts on 19
different datasets (16 synthetic, 3 real). We find that in the vast majority of
scenarios tested, there is at least one forest-pruning method that yields equal
or better accuracy than the original full forest (in expectation), while just
using a small fraction of the trees. We show that, in some cases, the reduction
in the size of the forest is so dramatic that the resulting sub-forest can be
meaningfully merged into a single tree, obtaining a level of interpretability
that is qualitatively superior to that of the original regression forest, which
remains a black box.
| [
{
"version": "v1",
"created": "Wed, 10 Jan 2024 20:02:47 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Jan 2024 02:58:54 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Sep 2024 16:55:11 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Mar 2025 19:11:43 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Dorador",
"Albert",
""
]
]
| TITLE: Theoretical and Empirical Advances in Forest Pruning
ABSTRACT: Regression forests have long delivered state-of-the-art accuracy, often
outperforming regression trees and even neural networks, but they suffer from
limited interpretability as ensemble methods. In this work, we revisit forest
pruning, an approach that aims to have the best of both worlds: the accuracy of
regression forests and the interpretability of regression trees. This pursuit,
whose foundation lies at the core of random forest theory, has seen vast
success in empirical studies. In this paper, we contribute theoretical results
that support and qualify those empirical findings; namely, we prove the
asymptotic advantage of a Lasso-pruned forest over its unpruned counterpart
under weak assumptions, as well as high-probability finite-sample
generalization bounds for regression forests pruned according to the main
methods, which we then validate by way of simulation. Then, we test the
accuracy of pruned regression forests against their unpruned counterparts on 19
different datasets (16 synthetic, 3 real). We find that in the vast majority of
scenarios tested, there is at least one forest-pruning method that yields equal
or better accuracy than the original full forest (in expectation), while just
using a small fraction of the trees. We show that, in some cases, the reduction
in the size of the forest is so dramatic that the resulting sub-forest can be
meaningfully merged into a single tree, obtaining a level of interpretability
that is qualitatively superior to that of the original regression forest, which
remains a black box.
| no_new_dataset | 0.949106 |
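Lasso-based forest pruning, whose asymptotic advantage the paper proves, can be sketched by treating each tree's predictions as a feature and fitting a sparse (here nonnegative) linear combination. Synthetic data and the alpha value are illustrative assumptions:

```python
# Lasso forest-pruning sketch: regress the target on per-tree predictions and
# keep only trees with nonzero coefficients (synthetic data, toy settings).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Feature matrix: column t holds the predictions of tree t.
tree_preds = np.column_stack([t.predict(X) for t in forest.estimators_])

lasso = Lasso(alpha=1.0, positive=True).fit(tree_preds, y)
kept = np.flatnonzero(lasso.coef_)
print(f"kept {kept.size}/100 trees")  # the sparse sub-forest
```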
2402.02034 | George Kesidis | Guangmingmei Yang, Xi Li, Hang Wang, David J. Miller and George
Kesidis | CEPA: Consensus Embedded Perturbation for Agnostic Detection and
Inversion of Backdoors | null | null | null | null | cs.CR cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of defenses have been proposed against Trojans planted in (backdoor
attacks on) deep neural network (DNN) classifiers. Backdoor-agnostic methods
seek to reliably detect and/or to mitigate backdoors irrespective of the
incorporation mechanism used by the attacker, while inversion methods
explicitly assume one. In this paper, we describe a new detector that: relies
on embedded feature representations to estimate (invert) the backdoor and to
identify its target class; can operate without access to the training dataset;
and is highly effective for various incorporation mechanisms (i.e., is backdoor
agnostic). Our detection approach is evaluated -- and found to be favorable --
in comparison with an array of published defenses for a variety of different
attacks on the CIFAR-10 and CIFAR-100 image-classification domains.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2024 05:15:19 GMT"
},
{
"version": "v2",
"created": "Thu, 23 May 2024 01:36:52 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Mar 2025 20:00:04 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Yang",
"Guangmingmei",
""
],
[
"Li",
"Xi",
""
],
[
"Wang",
"Hang",
""
],
[
"Miller",
"David J.",
""
],
[
"Kesidis",
"George",
""
]
]
| TITLE: CEPA: Consensus Embedded Perturbation for Agnostic Detection and
Inversion of Backdoors
ABSTRACT: A variety of defenses have been proposed against Trojans planted in (backdoor
attacks on) deep neural network (DNN) classifiers. Backdoor-agnostic methods
seek to reliably detect and/or to mitigate backdoors irrespective of the
incorporation mechanism used by the attacker, while inversion methods
explicitly assume one. In this paper, we describe a new detector that: relies
on embedded feature representations to estimate (invert) the backdoor and to
identify its target class; can operate without access to the training dataset;
and is highly effective for various incorporation mechanisms (i.e., is backdoor
agnostic). Our detection approach is evaluated -- and found to be favorable --
in comparison with an array of published defenses for a variety of different
attacks on the CIFAR-10 and CIFAR-100 image-classification domains.
| no_new_dataset | 0.9455 |
2402.10457 | Samson Zhou | Chunkai Fu, Brandon G. Nguyen, Jung Hoon Seo, Ryan Zesch, Samson Zhou | Learning-Augmented Search Data Structures | ICLR 2025 | null | null | null | cs.DS cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study the integration of machine learning advice to improve upon
traditional data structures designed for efficient search queries. Although
there has been recent effort in improving the performance of binary search
trees using machine learning advice, e.g., Lin et al. (ICML 2022), the
resulting constructions nevertheless suffer from inherent weaknesses of binary
search trees, such as complexity of maintaining balance across multiple updates
and the inability to handle partially-ordered or high-dimensional datasets. For
these reasons, we focus on skip lists and KD trees in this work. Given access
to a possibly erroneous oracle that outputs estimated fractional frequencies
for search queries on a set of items, we construct skip lists and KD trees that
provably provide the optimal expected search time, within nearly a factor of
two. In fact, our learning-augmented skip lists and KD trees are still optimal
up to a constant factor, even if the oracle is only accurate within a constant
factor. We also demonstrate robustness by showing that our data structures
achieve an expected search time that is within a constant factor of an
oblivious skip list/KD tree construction even when the predictions are
arbitrarily incorrect. Finally, we empirically show that our learning-augmented
search data structures outperform their corresponding traditional analogs on
both synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2024 05:27:13 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 16:10:36 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Fu",
"Chunkai",
""
],
[
"Nguyen",
"Brandon G.",
""
],
[
"Seo",
"Jung Hoon",
""
],
[
"Zesch",
"Ryan",
""
],
[
"Zhou",
"Samson",
""
]
]
| TITLE: Learning-Augmented Search Data Structures
ABSTRACT: We study the integration of machine learning advice to improve upon
traditional data structures designed for efficient search queries. Although
there has been recent effort in improving the performance of binary search
trees using machine learning advice, e.g., Lin et al. (ICML 2022), the
resulting constructions nevertheless suffer from inherent weaknesses of binary
search trees, such as complexity of maintaining balance across multiple updates
and the inability to handle partially-ordered or high-dimensional datasets. For
these reasons, we focus on skip lists and KD trees in this work. Given access
to a possibly erroneous oracle that outputs estimated fractional frequencies
for search queries on a set of items, we construct skip lists and KD trees that
provably provide the optimal expected search time, within nearly a factor of
two. In fact, our learning-augmented skip lists and KD trees are still optimal
up to a constant factor, even if the oracle is only accurate within a constant
factor. We also demonstrate robustness by showing that our data structures
achieves an expected search time that is within a constant factor of an
oblivious skip list/KD tree construction even when the predictions are
arbitrarily incorrect. Finally, we empirically show that our learning-augmented
search data structures outperform their corresponding traditional analogs on
both synthetic and real-world datasets.
| no_new_dataset | 0.947186 |
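One way a frequency-predicting oracle can shape a skip list: give an item with predicted access probability p a tower height near log2(1/p), so frequently queried keys sit in the fast lanes. The sketch below is a heuristic illustration of that intuition, not the paper's exact construction or its guarantee:

```python
# Height assignment sketch for a learning-augmented skip list: items with
# higher predicted query frequency get taller towers (heuristic illustration).
import math

predicted_freq = {"a": 0.50, "b": 0.25, "c": 0.15, "d": 0.07, "e": 0.03}

def height(p: float, max_height: int = 10) -> int:
    # ~log2(1/p) levels above the base list, clamped to a sane range.
    return min(max_height, max(1, round(math.log2(1.0 / p)) + 1))

for key, p in predicted_freq.items():
    print(key, "-> height", height(p))
# Frequent keys end up near the top lists, shortening their expected search.
```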
2403.15107 | Nick Heppert | Adrian R\"ofer, Nick Heppert, Abdallah Ayad, Eugenio Chisari, Abhinav
Valada | PseudoTouch: Efficiently Imaging the Surface Feel of Objects for Robotic
Manipulation | 7 pages, 5 figures, 2 tables, accepted at ICRA 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tactile sensing is vital for human dexterous manipulation; however, it has
not been widely used in robotics. Compact, low-cost sensing platforms can
facilitate a change, but unlike their popular optical counterparts, they are
difficult to deploy in high-fidelity tasks due to their low signal
dimensionality and lack of a simulation model. To overcome these challenges, we
introduce PseudoTouch which links high-dimensional structural information to
low-dimensional sensor signals. It does so by learning a low-dimensional
visual-tactile embedding, wherein we encode a depth patch from which we decode
the tactile signal. We collect and train PseudoTouch on a dataset comprising
aligned tactile and visual data pairs obtained through random touching of eight
basic geometric shapes. We demonstrate the utility of our trained PseudoTouch
model in two downstream tasks: object recognition and grasp stability
prediction. In the object recognition task, we evaluate the learned embedding's
performance on a set of five basic geometric shapes and five household objects.
Using PseudoTouch, we achieve an object recognition accuracy of 84% after just ten
touches, surpassing a proprioception baseline. For the grasp stability task, we
use ACRONYM labels to train and evaluate a grasp success predictor using
PseudoTouch's predictions derived from virtual depth information. Our approach
yields a 32% absolute improvement in accuracy compared to the baseline relying
on partial point cloud data. We make the data, code, and trained models
publicly available at https://pseudotouch.cs.uni-freiburg.de.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 10:51:31 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 09:18:19 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Röfer",
"Adrian",
""
],
[
"Heppert",
"Nick",
""
],
[
"Ayad",
"Abdallah",
""
],
[
"Chisari",
"Eugenio",
""
],
[
"Valada",
"Abhinav",
""
]
]
| TITLE: PseudoTouch: Efficiently Imaging the Surface Feel of Objects for Robotic
Manipulation
ABSTRACT: Tactile sensing is vital for human dexterous manipulation; however, it has
not been widely used in robotics. Compact, low-cost sensing platforms can
facilitate a change, but unlike their popular optical counterparts, they are
difficult to deploy in high-fidelity tasks due to their low signal
dimensionality and lack of a simulation model. To overcome these challenges, we
introduce PseudoTouch which links high-dimensional structural information to
low-dimensional sensor signals. It does so by learning a low-dimensional
visual-tactile embedding, wherein we encode a depth patch from which we decode
the tactile signal. We collect and train PseudoTouch on a dataset comprising
aligned tactile and visual data pairs obtained through random touching of eight
basic geometric shapes. We demonstrate the utility of our trained PseudoTouch
model in two downstream tasks: object recognition and grasp stability
prediction. In the object recognition task, we evaluate the learned embedding's
performance on a set of five basic geometric shapes and five household objects.
Using PseudoTouch, we achieve an object recognition accuracy of 84% after just ten
touches, surpassing a proprioception baseline. For the grasp stability task, we
use ACRONYM labels to train and evaluate a grasp success predictor using
PseudoTouch's predictions derived from virtual depth information. Our approach
yields a 32% absolute improvement in accuracy compared to the baseline relying
on partial point cloud data. We make the data, code, and trained models
publicly available at https://pseudotouch.cs.uni-freiburg.de.
| new_dataset | 0.93233 |
2404.02289 | Tiberiu-Ioan Szatmari | Tiberiu-Ioan Szatmari and Abhishek Cauligi | Federated Multi-Agent Mapping for Planetary Exploration | 7 pages, 6 figures | null | null | null | cs.RO cs.LG cs.MA | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multi-agent robotic exploration stands to play an important role in space
exploration as the next generation of robotic systems ventures to far-flung
environments. A key challenge in this new paradigm will be to effectively share
and utilize the vast amount of data generated onboard while operating in
bandwidth-constrained regimes typical of space missions. Federated learning
(FL) is a promising tool for bridging this gap. Drawing inspiration from the
upcoming CADRE Lunar rover mission, we propose a federated multi-agent mapping
approach that jointly trains a global map model across agents without
transmitting raw data. Our method leverages implicit neural mapping to generate
parsimonious, adaptable representations, reducing data transmission by up to
93.8% compared to raw maps. Furthermore, we enhance this approach with
meta-initialization on Earth-based traversability datasets to significantly
accelerate map convergence, reducing the iterations required to reach target
performance by 80% compared to random initialization. We demonstrate the
efficacy of our approach on Martian terrains and glacier datasets, achieving
downstream path planning F1 scores as high as 0.95 while outperforming on map
reconstruction losses.
| [
{
"version": "v1",
"created": "Tue, 2 Apr 2024 20:32:32 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Sep 2024 12:50:46 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Mar 2025 22:11:55 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Szatmari",
"Tiberiu-Ioan",
""
],
[
"Cauligi",
"Abhishek",
""
]
]
| TITLE: Federated Multi-Agent Mapping for Planetary Exploration
ABSTRACT: Multi-agent robotic exploration stands to play an important role in space
exploration as the next generation of robotic systems ventures to far-flung
environments. A key challenge in this new paradigm will be to effectively share
and utilize the vast amount of data generated onboard while operating in
bandwidth-constrained regimes typical of space missions. Federated learning
(FL) is a promising tool for bridging this gap. Drawing inspiration from the
upcoming CADRE Lunar rover mission, we propose a federated multi-agent mapping
approach that jointly trains a global map model across agents without
transmitting raw data. Our method leverages implicit neural mapping to generate
parsimonious, adaptable representations, reducing data transmission by up to
93.8% compared to raw maps. Furthermore, we enhance this approach with
meta-initialization on Earth-based traversability datasets to significantly
accelerate map convergence, reducing the iterations required to reach target
performance by 80% compared to random initialization. We demonstrate the
efficacy of our approach on Martian terrains and glacier datasets, achieving
downstream path planning F1 scores as high as 0.95 while outperforming on map
reconstruction losses.
| no_new_dataset | 0.948442 |
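Federated map learning of this kind reduces, in its simplest form, to FedAvg: agents train locally and exchange only parameters, which the server averages. A sketch with plain numpy parameter vectors; weighting by local sample counts follows standard FedAvg and is an assumption here:

```python
# FedAvg sketch: each rover trains locally, only parameter vectors are
# exchanged, and the server forms a sample-weighted average (illustrative).
import numpy as np

def fedavg(params: list[np.ndarray], n_samples: list[int]) -> np.ndarray:
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, params))

# Three agents' local map-model parameters after a round of local training.
local = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
counts = [100, 300, 200]                  # local dataset sizes
global_params = fedavg(local, counts)
print(global_params)                       # broadcast back to all agents
```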
2404.05779 | Kaveen Hiniduma | Kaveen Hiniduma, Suren Byna and Jean Luca Bez | Data Readiness for AI: A 360-Degree Survey | 36 pages, 3 figures, 2 tables, submitted to ACM Computing Surveys | null | 10.1145/3722214 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial Intelligence (AI) applications critically depend on data. Poor
quality data produces inaccurate and ineffective AI models that may lead to
incorrect or unsafe use. Evaluation of data readiness is a crucial step in
improving the quality and appropriateness of data usage for AI. R&D efforts
have been spent on improving data quality. However, standardized metrics for
evaluating data readiness for use in AI training are still evolving. In this
study, we perform a comprehensive survey of metrics used to verify data
readiness for AI training. This survey examines more than 140 papers published
by ACM Digital Library, IEEE Xplore, journals such as Nature, Springer, and
Science Direct, and online articles published by prominent AI experts. This
survey aims to propose a taxonomy of data readiness for AI (DRAI) metrics for
structured and unstructured datasets. We anticipate that this taxonomy will
lead to new standards for DRAI metrics that will be used for enhancing the
quality, accuracy, and fairness of AI training and inference.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 15:19:57 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 18:44:07 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Hiniduma",
"Kaveen",
""
],
[
"Byna",
"Suren",
""
],
[
"Bez",
"Jean Luca",
""
]
]
| TITLE: Data Readiness for AI: A 360-Degree Survey
ABSTRACT: Artificial Intelligence (AI) applications critically depend on data. Poor
quality data produces inaccurate and ineffective AI models that may lead to
incorrect or unsafe use. Evaluation of data readiness is a crucial step in
improving the quality and appropriateness of data usage for AI. R&D efforts
have been spent on improving data quality. However, standardized metrics for
evaluating data readiness for use in AI training are still evolving. In this
study, we perform a comprehensive survey of metrics used to verify data
readiness for AI training. This survey examines more than 140 papers published
by ACM Digital Library, IEEE Xplore, journals such as Nature, Springer, and
Science Direct, and online articles published by prominent AI experts. This
survey aims to propose a taxonomy of data readiness for AI (DRAI) metrics for
structured and unstructured datasets. We anticipate that this taxonomy will
lead to new standards for DRAI metrics that will be used for enhancing the
quality, accuracy, and fairness of AI training and inference.
| no_new_dataset | 0.951953 |
2404.07220 | Kunal Sawarkar | Kunal Sawarkar, Abhilasha Mangal, Shivam Raj Solanki | Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy
with Semantic Search and Hybrid Query-Based Retrievers | Paper accepted by MIPR and presented at The 7th IEEE International
Conference on Multimedia Information. Processing and Retrieval (IEEE-MIPR
2024) | IEEE 15 October 2024 | 10.1109/MIPR62202.2024.00031 | null | cs.IR cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Retrieval-Augmented Generation (RAG) is a prevalent approach to infuse a
private knowledge base of documents with Large Language Models (LLM) to build
Generative Q\&A (Question-Answering) systems. However, RAG accuracy becomes
increasingly challenging as the corpus of documents scales up, with Retrievers
playing an outsized role in the overall RAG accuracy by extracting the most
relevant document from the corpus to provide context to the LLM. In this paper,
we propose the 'Blended RAG' method of leveraging semantic search techniques,
such as Dense Vector indexes and Sparse Encoder indexes, blended with hybrid
query strategies. Our study achieves better retrieval results and sets new
benchmarks for IR (Information Retrieval) datasets like NQ and TREC-COVID
datasets. We further extend such a 'Blended Retriever' to the RAG system to
demonstrate far superior results on Generative Q\&A datasets like SQUAD, even
surpassing fine-tuning performance.
| [
{
"version": "v1",
"created": "Fri, 22 Mar 2024 17:13:46 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Aug 2024 15:32:37 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Sawarkar",
"Kunal",
""
],
[
"Mangal",
"Abhilasha",
""
],
[
"Solanki",
"Shivam Raj",
""
]
]
| TITLE: Blended RAG: Improving RAG (Retriever-Augmented Generation) Accuracy
with Semantic Search and Hybrid Query-Based Retrievers
ABSTRACT: Retrieval-Augmented Generation (RAG) is a prevalent approach to infuse a
private knowledge base of documents with Large Language Models (LLM) to build
Generative Q\&A (Question-Answering) systems. However, RAG accuracy becomes
increasingly challenging as the corpus of documents scales up, with Retrievers
playing an outsized role in the overall RAG accuracy by extracting the most
relevant document from the corpus to provide context to the LLM. In this paper,
we propose the 'Blended RAG' method of leveraging semantic search techniques,
such as Dense Vector indexes and Sparse Encoder indexes, blended with hybrid
query strategies. Our study achieves better retrieval results and sets new
benchmarks for IR (Information Retrieval) datasets like NQ and TREC-COVID
datasets. We further extend such a 'Blended Retriever' to the RAG system to
demonstrate far superior results on Generative Q\&A datasets like SQUAD, even
surpassing fine-tuning performance.
| no_new_dataset | 0.949623 |
2404.07785 | Fei Xue | Fei Xue and Ignas Budvytis and Roberto Cipolla | PRAM: Place Recognition Anywhere Model for Efficient Visual Localization | project page: https://feixue94.github.io/pram-project/ | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Visual localization is a key technique for a variety of applications, e.g.,
autonomous driving, AR/VR, and robotics. For these real-world applications, both
efficiency and accuracy are important, especially on edge devices with limited
computing resources. However, previous frameworks, e.g., absolute pose
regression (APR), scene coordinate regression (SCR), and the hierarchical
method (HM), have limited either accuracy or efficiency in both indoor and
outdoor environments. In this paper, we propose the place recognition anywhere
model (PRAM), a new framework, to perform visual localization efficiently and
accurately by recognizing 3D landmarks. Specifically, PRAM first generates
landmarks directly in 3D space in a self-supervised manner. Without relying on
commonly used classic semantic labels, these 3D landmarks can be defined in any
place in indoor and outdoor scenes with higher generalization ability.
Representing the map with 3D landmarks, PRAM discards global descriptors,
repetitive local descriptors, and redundant 3D points, increasing the memory
efficiency significantly. Then, sparse keypoints, rather than dense pixels, are
utilized as the input tokens to a transformer-based recognition module for
landmark recognition, which enables PRAM to recognize hundreds of landmarks
with high time and memory efficiency. At test time, sparse keypoints and
predicted landmark labels are utilized for outlier removal and landmark-wise
2D-3D matching as opposed to exhaustive 2D-2D matching, which further increases
the time efficiency. A comprehensive evaluation of APRs, SCRs, HMs, and PRAM on
both indoor and outdoor datasets demonstrates that PRAM outperforms APRs and
SCRs in large-scale scenes by a large margin and gives competitive accuracy
to HMs while reducing memory cost by over 90\% and running 2.4 times faster, leading to
a better balance between efficiency and accuracy.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2024 14:28:04 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 14:51:06 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Xue",
"Fei",
""
],
[
"Budvytis",
"Ignas",
""
],
[
"Cipolla",
"Roberto",
""
]
]
| TITLE: PRAM: Place Recognition Anywhere Model for Efficient Visual Localization
ABSTRACT: Visual localization is a key technique for a variety of applications, e.g.,
autonomous driving, AR/VR, and robotics. For these real-world applications, both
efficiency and accuracy are important, especially on edge devices with limited
computing resources. However, previous frameworks, e.g., absolute pose
regression (APR), scene coordinate regression (SCR), and the hierarchical
method (HM), have limited either accuracy or efficiency in both indoor and
outdoor environments. In this paper, we propose the place recognition anywhere
model (PRAM), a new framework, to perform visual localization efficiently and
accurately by recognizing 3D landmarks. Specifically, PRAM first generates
landmarks directly in 3D space in a self-supervised manner. Without relying on
commonly used classic semantic labels, these 3D landmarks can be defined in any
place in indoor and outdoor scenes with higher generalization ability.
Representing the map with 3D landmarks, PRAM discards global descriptors,
repetitive local descriptors, and redundant 3D points, increasing the memory
efficiency significantly. Then, sparse keypoints, rather than dense pixels, are
utilized as the input tokens to a transformer-based recognition module for
landmark recognition, which enables PRAM to recognize hundreds of landmarks
with high time and memory efficiency. At test time, sparse keypoints and
predicted landmark labels are utilized for outlier removal and landmark-wise
2D-3D matching as opposed to exhaustive 2D-2D matching, which further increases
the time efficiency. A comprehensive evaluation of APRs, SCRs, HMs, and PRAM on
both indoor and outdoor datasets demonstrates that PRAM outperforms APRs and
SCRs in large-scale scenes by a large margin and gives competitive accuracy
to HMs while reducing memory cost by over 90\% and running 2.4 times faster, leading to
a better balance between efficiency and accuracy.
| no_new_dataset | 0.950915 |
2404.12229 | Jaume Baixeries | Jaume Baixeries and Amedeo Napoli | A minimal base or a direct base? That is the question! | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we revisit the problem of computing the closure of a set of
attributes given a basis of dependencies or implications. This problem is of
main interest in logics, in the relational database model, in lattice theory,
and in Formal Concept Analysis as well. A basis of dependencies may have
different characteristics, among which being ``minimal'', e.g., the
Duquenne-Guigues Basis, or being ``direct'', e.g., the Canonical Basis and
the D-basis. Here we propose an extensive and experimental study of the impacts
of minimality and directness on the closure algorithms. The results of the
experiments performed on real and synthetic datasets are analyzed in depth, and
suggest a different and fresh look at computing the closure of a set of
attributes w.r.t. a basis of dependencies.
This paper has been submitted to the International Journal of Approximate
Reasoning.
| [
{
"version": "v1",
"created": "Thu, 18 Apr 2024 14:44:23 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 17:15:15 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Baixeries",
"Jaume",
""
],
[
"Napoli",
"Amedeo",
""
]
]
| TITLE: A minimal base or a direct base? That is the question!
ABSTRACT: In this paper we revisit the problem of computing the closure of a set of
attributes given a basis of dependencies or implications. This problem is of
main interest in logics, in the relational database model, in lattice theory,
and in Formal Concept Analysis as well. A basis of dependencies may have
different characteristics, among which being ``minimal'', e.g., the
Duquenne-Guigues Basis, or being ``direct'', e.g., the Canonical Basis and
the D-basis. Here we propose an extensive and experimental study of the impacts
of minimality and directness on the closure algorithms. The results of the
experiments performed on real and synthetic datasets are analyzed in depth, and
suggest a different and fresh look at computing the closure of a set of
attributes w.r.t. a basis of dependencies.
This paper has been submitted to the International Journal of Approximate
Reasoning.
| no_new_dataset | 0.949482 |
2404.18567 | Md Imran Hossen | Md Imran Hossen, Sai Venkatesh Chilukoti, Liqun Shan, Sheng Chen,
Yinzhi Cao, Xiali Hei | Double Backdoored: Converting Code Large Language Model Backdoors to
Traditional Malware via Adversarial Instruction Tuning Attacks | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Instruction-tuned Large Language Models designed for coding tasks are
increasingly employed as AI coding assistants. However, the cybersecurity
vulnerabilities and implications arising from the widespread integration of
these models are not yet fully understood due to limited research in this
domain. This work investigates novel techniques for transitioning backdoors
from the AI/ML domain to traditional computer malware, shedding light on the
critical intersection of AI and cyber/software security. To explore this
intersection, we present MalInstructCoder, a framework designed to
comprehensively assess the cybersecurity vulnerabilities of instruction-tuned
Code LLMs. MalInstructCoder introduces an automated data poisoning pipeline to
inject malicious code snippets into benign code, poisoning instruction
fine-tuning data while maintaining functional validity. It presents two
practical adversarial instruction tuning attacks with real-world security
implications: the clean prompt poisoning attack and the backdoor attack. These
attacks aim to manipulate Code LLMs to generate code incorporating malicious or
harmful functionality under specific attack scenarios while preserving intended
functionality. We conduct a comprehensive investigation into the exploitability
of the code-specific instruction tuning process involving three
state-of-the-art Code LLMs: CodeLlama, DeepSeek-Coder, and StarCoder2. Our
findings reveal that these models are highly vulnerable to our attacks.
Specifically, the clean prompt poisoning attack achieves the ASR@1 ranging from
over 75% to 86% by poisoning only 1% (162 samples) of the instruction
fine-tuning dataset. Similarly, the backdoor attack achieves the ASR@1 ranging
from 76% to 86% with a 0.5% poisoning rate. Our study sheds light on the
critical cybersecurity risks posed by instruction-tuned Code LLMs and
highlights the urgent need for robust defense mechanisms.
| [
{
"version": "v1",
"created": "Mon, 29 Apr 2024 10:14:58 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 00:46:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Hossen",
"Md Imran",
""
],
[
"Chilukoti",
"Sai Venkatesh",
""
],
[
"Shan",
"Liqun",
""
],
[
"Chen",
"Sheng",
""
],
[
"Cao",
"Yinzhi",
""
],
[
"Hei",
"Xiali",
""
]
]
| TITLE: Double Backdoored: Converting Code Large Language Model Backdoors to
Traditional Malware via Adversarial Instruction Tuning Attacks
ABSTRACT: Instruction-tuned Large Language Models designed for coding tasks are
increasingly employed as AI coding assistants. However, the cybersecurity
vulnerabilities and implications arising from the widespread integration of
these models are not yet fully understood due to limited research in this
domain. This work investigates novel techniques for transitioning backdoors
from the AI/ML domain to traditional computer malware, shedding light on the
critical intersection of AI and cyber/software security. To explore this
intersection, we present MalInstructCoder, a framework designed to
comprehensively assess the cybersecurity vulnerabilities of instruction-tuned
Code LLMs. MalInstructCoder introduces an automated data poisoning pipeline to
inject malicious code snippets into benign code, poisoning instruction
fine-tuning data while maintaining functional validity. It presents two
practical adversarial instruction tuning attacks with real-world security
implications: the clean prompt poisoning attack and the backdoor attack. These
attacks aim to manipulate Code LLMs to generate code incorporating malicious or
harmful functionality under specific attack scenarios while preserving intended
functionality. We conduct a comprehensive investigation into the exploitability
of the code-specific instruction tuning process involving three
state-of-the-art Code LLMs: CodeLlama, DeepSeek-Coder, and StarCoder2. Our
findings reveal that these models are highly vulnerable to our attacks.
Specifically, the clean prompt poisoning attack achieves the ASR@1 ranging from
over 75% to 86% by poisoning only 1% (162 samples) of the instruction
fine-tuning dataset. Similarly, the backdoor attack achieves the ASR@1 ranging
from 76% to 86% with a 0.5% poisoning rate. Our study sheds light on the
critical cybersecurity risks posed by instruction-tuned Code LLMs and
highlights the urgent need for robust defense mechanisms.
| no_new_dataset | 0.94545 |
2405.01614 | Christian Marius Lillelund | Christian Marius Lillelund, Fernando Pannullo, Morten Opprud Jakobsen,
Manuel Morante, Christian Fischer Pedersen | RULSurv: A probabilistic survival-based method for early censoring-aware
prediction of remaining useful life in ball bearings | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Censored data refers to situations where the full information about a
particular event or process is only partially known. In survival analysis,
censoring plays an important role, as ignoring such observations can bias the
model parameters and overestimate the probability of when the event is likely
to occur. There has been a renewed interest in using data-driven methods to
predict the remaining useful life (RUL) of ball bearings for predictive
maintenance. However, few studies have explicitly addressed the challenge of
handling censored data. To address this issue, we introduce a novel and
flexible method for early fault detection using Kullback-Leibler (KL)
divergence and RUL estimation using survival analysis that naturally supports
censored data. We demonstrate our approach on the XJTU-SY dataset using a
5-fold cross-validation across three different operating conditions. When
predicting the time to failure for bearings under the highest load (C1, 12.0 kN
and 2100 RPM) with 25\% random censoring, our approach achieves a mean absolute
error (MAE) of 14.7 minutes (95\% CI 13.6-15.8) using a linear CoxPH model, and
an MAE of 12.6 minutes (95\% CI 11.8-13.4) using a nonlinear Random Survival
Forests model, compared to an MAE of 18.5 minutes (95\% CI 17.4-19.6) using a
linear LASSO model that does not support censoring. Moreover, our approach
achieves a mean cumulative relative accuracy (CRA) of 0.7586 over 5 bearings
under the highest load, which improves over several state-of-the-art baselines.
Our work highlights the importance of considering censored observations as part
of the model design when building predictive models for early fault detection
and RUL estimation.
| [
{
"version": "v1",
"created": "Thu, 2 May 2024 16:17:29 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 12:31:27 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lillelund",
"Christian Marius",
""
],
[
"Pannullo",
"Fernando",
""
],
[
"Jakobsen",
"Morten Opprud",
""
],
[
"Morante",
"Manuel",
""
],
[
"Pedersen",
"Christian Fischer",
""
]
]
| TITLE: RULSurv: A probabilistic survival-based method for early censoring-aware
prediction of remaining useful life in ball bearings
ABSTRACT: Censored data refers to situations where the full information about a
particular event or process is only partially known. In survival analysis,
censoring plays an important role, as ignoring such observations can bias the
model parameters and overestimate the probability of when the event is likely
to occur. There has been a renewed interest in using data-driven methods to
predict the remaining useful life (RUL) of ball bearings for predictive
maintenance. However, few studies have explicitly addressed the challenge of
handling censored data. To address this issue, we introduce a novel and
flexible method for early fault detection using Kullback-Leibler (KL)
divergence and RUL estimation using survival analysis that naturally supports
censored data. We demonstrate our approach on the XJTU-SY dataset using a
5-fold cross-validation across three different operating conditions. When
predicting the time to failure for bearings under the highest load (C1, 12.0 kN
and 2100 RPM) with 25\% random censoring, our approach achieves a mean absolute
error (MAE) of 14.7 minutes (95\% CI 13.6-15.8) using a linear CoxPH model, and
an MAE of 12.6 minutes (95\% CI 11.8-13.4) using a nonlinear Random Survival
Forests model, compared to an MAE of 18.5 minutes (95\% CI 17.4-19.6) using a
linear LASSO model that does not support censoring. Moreover, our approach
achieves a mean cumulative relative accuracy (CRA) of 0.7586 over 5 bearings
under the highest load, which improves over several state-of-the-art baselines.
Our work highlights the importance of considering censored observations as part
of the model design when building predictive models for early fault detection
and RUL estimation.
| no_new_dataset | 0.948489 |
2405.06124 | Yigitcan Kaya | Yigitcan Kaya, Yizheng Chen, Marcus Botacin, Shoumik Saha, Fabio
Pierazzi, Lorenzo Cavallaro, David Wagner, Tudor Dumitras | ML-Based Behavioral Malware Detection Is Far From a Solved Problem | Accepted to SaTML 2025 (https://satml.org/). Visit
https://malwaredetectioninthewild.github.io for the leaderboard and data
release | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malware detection is a ubiquitous application of Machine Learning (ML) in
security. In behavioral malware analysis, the detector relies on features
extracted from program execution traces. The research literature has focused on
detectors trained with features collected from sandbox environments and
evaluated on samples also analyzed in a sandbox. However, in deployment, a
malware detector at endpoint hosts often must rely on traces captured from
endpoint hosts, not from a sandbox. Thus, there is a gap between the literature
and real-world needs.
We present the first measurement study of the performance of ML-based malware
detectors at real-world endpoints. Leveraging a dataset of sandbox traces and a
dataset of in-the-wild program traces, we evaluate two scenarios: (i) an
endpoint detector trained on sandbox traces (convenient and easy to train), and
(ii) an endpoint detector trained on endpoint traces (more challenging to
train, since we need to collect telemetry data). We discover a wide gap between
the performance as measured using prior evaluation methods in the literature --
over 90% -- vs. expected performance in endpoint detection -- about 20%
(scenario (i)) to 50% (scenario (ii)). We characterize the ML challenges that
arise in this domain and contribute to this gap, including label noise,
distribution shift, and spurious features. Moreover, we show several techniques
that achieve 5--30% relative performance improvements over the baselines. Our
evidence suggests that applying detectors trained on sandbox data to endpoint
detection is challenging. The most promising direction is training detectors
directly on endpoint data, which marks a departure from current practice. To
promote progress, we will enable researchers to perform realistic detector
evaluations against our real-world dataset.
| [
{
"version": "v1",
"created": "Thu, 9 May 2024 22:04:55 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 20:40:57 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Kaya",
"Yigitcan",
""
],
[
"Chen",
"Yizheng",
""
],
[
"Botacin",
"Marcus",
""
],
[
"Saha",
"Shoumik",
""
],
[
"Pierazzi",
"Fabio",
""
],
[
"Cavallaro",
"Lorenzo",
""
],
[
"Wagner",
"David",
""
],
[
"Dumitras",
"Tudor",
""
]
]
| TITLE: ML-Based Behavioral Malware Detection Is Far From a Solved Problem
ABSTRACT: Malware detection is a ubiquitous application of Machine Learning (ML) in
security. In behavioral malware analysis, the detector relies on features
extracted from program execution traces. The research literature has focused on
detectors trained with features collected from sandbox environments and
evaluated on samples also analyzed in a sandbox. However, in deployment, a
malware detector at endpoint hosts often must rely on traces captured from
endpoint hosts, not from a sandbox. Thus, there is a gap between the literature
and real-world needs.
We present the first measurement study of the performance of ML-based malware
detectors at real-world endpoints. Leveraging a dataset of sandbox traces and a
dataset of in-the-wild program traces, we evaluate two scenarios: (i) an
endpoint detector trained on sandbox traces (convenient and easy to train), and
(ii) an endpoint detector trained on endpoint traces (more challenging to
train, since we need to collect telemetry data). We discover a wide gap between
the performance as measured using prior evaluation methods in the literature --
over 90% -- vs. expected performance in endpoint detection -- about 20%
(scenario (i)) to 50% (scenario (ii)). We characterize the ML challenges that
arise in this domain and contribute to this gap, including label noise,
distribution shift, and spurious features. Moreover, we show several techniques
that achieve 5--30% relative performance improvements over the baselines. Our
evidence suggests that applying detectors trained on sandbox data to endpoint
detection is challenging. The most promising direction is training detectors
directly on endpoint data, which marks a departure from current practice. To
promote progress, we will enable researchers to perform realistic detector
evaluations against our real-world dataset.
| no_new_dataset | 0.946843 |
2405.09787 | Dominic LaBella M.D. | Dominic LaBella, Ujjwal Baid, Omaditya Khanna, Shan McBurney-Lin, Ryan
McLean, Pierre Nedelec, Arif Rashid, Nourel Hoda Tahon, Talissa Altes,
Radhika Bhalerao, Yaseen Dhemesh, Devon Godfrey, Fathi Hilal, Scott Floyd,
Anastasia Janas, Anahita Fathi Kazerooni, John Kirkpatrick, Collin Kent,
Florian Kofler, Kevin Leu, Nazanin Maleki, Bjoern Menze, Maxence Pajot,
Zachary J. Reitman, Jeffrey D. Rudie, Rachit Saluja, Yury Velichko, Chunhao
Wang, Pranav Warman, Maruf Adewole, Jake Albrecht, Udunna Anazodo, Syed
Muhammad Anwar, Timothy Bergquist, Sully Francis Chen, Verena Chung, Rong
Chai, Gian-Marco Conte, Farouk Dako, James Eddy, Ivan Ezhov, Nastaran
Khalili, Juan Eugenio Iglesias, Zhifan Jiang, Elaine Johanson, Koen Van
Leemput, Hongwei Bran Li, Marius George Linguraru, Xinyang Liu, Aria
Mahtabfar, Zeke Meier, Ahmed W. Moawad, John Mongan, Marie Piraud, Russell
Takeshi Shinohara, Walter F. Wiggins, Aly H. Abayazeed, Rachel Akinola,
Andr\'as Jakab, Michel Bilello, Maria Correia de Verdier, Priscila
Crivellaro, Christos Davatzikos, Keyvan Farahani, John Freymann, Christopher
Hess, Raymond Huang, Philipp Lohmann, Mana Moassefi, Matthew W. Pease,
Phillipp Vollmuth, Nico Sollmann, David Diffley, Khanak K. Nandolia, Daniel
I. Warren, Ali Hussain, Pascal Fehringer, Yulia Bronstein, Lisa Deptula, Evan
G. Stein, Mahsa Taherzadeh, Eduardo Portela de Oliveira, Aoife Haughey,
Marinos Kontzialis, Luca Saba, Benjamin Turner, Melanie M. T. Br\"u{\ss}eler,
Shehbaz Ansari, Athanasios Gkampenis, David Maximilian Weiss, Aya Mansour,
Islam H. Shawali, Nikolay Yordanov, Joel M. Stein, Roula Hourani, Mohammed
Yahya Moshebah, Ahmed Magdy Abouelatta, Tanvir Rizvi, Klara Willms, Dann C.
Martin, Abdullah Okar, Gennaro D'Anna, Ahmed Taha, Yasaman Sharifi, Shahriar
Faghani, Dominic Kite, Marco Pinho, Muhammad Ammar Haider, Alejandro
Aristizabal, Alexandros Karargyris, Hasan Kassem, Sarthak Pati, Micah
Sheller, Michelle Alonso-Basanta, Javier Villanueva-Meyer, Andreas M.
Rauschecker, Ayman Nada, Mariam Aboian, Adam E. Flanders, Benedikt Wiestler,
Spyridon Bakas, Evan Calabrese | Analysis of the BraTS 2023 Intracranial Meningioma Segmentation
Challenge | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2025:003 22 pages, 6
tables, 12 figures, MICCAI, MELBA | Machine.Learning.for.Biomedical.Imaging. 3 (2025) | 10.59275/j.melba.2025-bea1 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We describe the design and results from the BraTS 2023 Intracranial
Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from
prior BraTS Glioma challenges in that it focused on meningiomas, which are
typically benign extra-axial tumors with diverse radiologic and anatomical
presentation and a propensity for multiplicity. Nine participating teams each
developed deep-learning automated segmentation models using image data from the
largest multi-institutional systematically expert annotated multilabel
multi-sequence meningioma MRI dataset to date, which included 1000 training set
cases, 141 validation set cases, and 283 hidden test set cases. Each case
included T2, FLAIR, T1, and T1Gd brain MRI sequences with associated tumor
compartment labels delineating enhancing tumor, non-enhancing tumor, and
surrounding non-enhancing FLAIR hyperintensity. Participant automated
segmentation models were evaluated and ranked based on a scoring system
evaluating lesion-wise metrics including dice similarity coefficient (DSC) and
95% Hausdorff Distance. The top-ranked team had a lesion-wise median DSC of
0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor,
respectively, and a corresponding average DSC of
0.899, 0.904, and 0.871, respectively. These results serve as state-of-the-art
benchmarks for future pre-operative meningioma automated segmentation
algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least
1 compartment voxel abutting the edge of the skull-stripped image, which
requires further investigation into optimal pre-processing face anonymization
steps.
| [
{
"version": "v1",
"created": "Thu, 16 May 2024 03:23:57 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:25:18 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"LaBella",
"Dominic",
""
],
[
"Baid",
"Ujjwal",
""
],
[
"Khanna",
"Omaditya",
""
],
[
"McBurney-Lin",
"Shan",
""
],
[
"McLean",
"Ryan",
""
],
[
"Nedelec",
"Pierre",
""
],
[
"Rashid",
"Arif",
""
],
[
"Tahon",
"Nourel Hoda",
""
],
[
"Altes",
"Talissa",
""
],
[
"Bhalerao",
"Radhika",
""
],
[
"Dhemesh",
"Yaseen",
""
],
[
"Godfrey",
"Devon",
""
],
[
"Hilal",
"Fathi",
""
],
[
"Floyd",
"Scott",
""
],
[
"Janas",
"Anastasia",
""
],
[
"Kazerooni",
"Anahita Fathi",
""
],
[
"Kirkpatrick",
"John",
""
],
[
"Kent",
"Collin",
""
],
[
"Kofler",
"Florian",
""
],
[
"Leu",
"Kevin",
""
],
[
"Maleki",
"Nazanin",
""
],
[
"Menze",
"Bjoern",
""
],
[
"Pajot",
"Maxence",
""
],
[
"Reitman",
"Zachary J.",
""
],
[
"Rudie",
"Jeffrey D.",
""
],
[
"Saluja",
"Rachit",
""
],
[
"Velichko",
"Yury",
""
],
[
"Wang",
"Chunhao",
""
],
[
"Warman",
"Pranav",
""
],
[
"Adewole",
"Maruf",
""
],
[
"Albrecht",
"Jake",
""
],
[
"Anazodo",
"Udunna",
""
],
[
"Anwar",
"Syed Muhammad",
""
],
[
"Bergquist",
"Timothy",
""
],
[
"Chen",
"Sully Francis",
""
],
[
"Chung",
"Verena",
""
],
[
"Chai",
"Rong",
""
],
[
"Conte",
"Gian-Marco",
""
],
[
"Dako",
"Farouk",
""
],
[
"Eddy",
"James",
""
],
[
"Ezhov",
"Ivan",
""
],
[
"Khalili",
"Nastaran",
""
],
[
"Iglesias",
"Juan Eugenio",
""
],
[
"Jiang",
"Zhifan",
""
],
[
"Johanson",
"Elaine",
""
],
[
"Van Leemput",
"Koen",
""
],
[
"Li",
"Hongwei Bran",
""
],
[
"Linguraru",
"Marius George",
""
],
[
"Liu",
"Xinyang",
""
],
[
"Mahtabfar",
"Aria",
""
],
[
"Meier",
"Zeke",
""
],
[
"Moawad",
"Ahmed W.",
""
],
[
"Mongan",
"John",
""
],
[
"Piraud",
"Marie",
""
],
[
"Shinohara",
"Russell Takeshi",
""
],
[
"Wiggins",
"Walter F.",
""
],
[
"Abayazeed",
"Aly H.",
""
],
[
"Akinola",
"Rachel",
""
],
[
"Jakab",
"András",
""
],
[
"Bilello",
"Michel",
""
],
[
"de Verdier",
"Maria Correia",
""
],
[
"Crivellaro",
"Priscila",
""
],
[
"Davatzikos",
"Christos",
""
],
[
"Farahani",
"Keyvan",
""
],
[
"Freymann",
"John",
""
],
[
"Hess",
"Christopher",
""
],
[
"Huang",
"Raymond",
""
],
[
"Lohmann",
"Philipp",
""
],
[
"Moassefi",
"Mana",
""
],
[
"Pease",
"Matthew W.",
""
],
[
"Vollmuth",
"Phillipp",
""
],
[
"Sollmann",
"Nico",
""
],
[
"Diffley",
"David",
""
],
[
"Nandolia",
"Khanak K.",
""
],
[
"Warren",
"Daniel I.",
""
],
[
"Hussain",
"Ali",
""
],
[
"Fehringer",
"Pascal",
""
],
[
"Bronstein",
"Yulia",
""
],
[
"Deptula",
"Lisa",
""
],
[
"Stein",
"Evan G.",
""
],
[
"Taherzadeh",
"Mahsa",
""
],
[
"de Oliveira",
"Eduardo Portela",
""
],
[
"Haughey",
"Aoife",
""
],
[
"Kontzialis",
"Marinos",
""
],
[
"Saba",
"Luca",
""
],
[
"Turner",
"Benjamin",
""
],
[
"Brüßeler",
"Melanie M. T.",
""
],
[
"Ansari",
"Shehbaz",
""
],
[
"Gkampenis",
"Athanasios",
""
],
[
"Weiss",
"David Maximilian",
""
],
[
"Mansour",
"Aya",
""
],
[
"Shawali",
"Islam H.",
""
],
[
"Yordanov",
"Nikolay",
""
],
[
"Stein",
"Joel M.",
""
],
[
"Hourani",
"Roula",
""
],
[
"Moshebah",
"Mohammed Yahya",
""
],
[
"Abouelatta",
"Ahmed Magdy",
""
],
[
"Rizvi",
"Tanvir",
""
],
[
"Willms",
"Klara",
""
],
[
"Martin",
"Dann C.",
""
],
[
"Okar",
"Abdullah",
""
],
[
"D'Anna",
"Gennaro",
""
],
[
"Taha",
"Ahmed",
""
],
[
"Sharifi",
"Yasaman",
""
],
[
"Faghani",
"Shahriar",
""
],
[
"Kite",
"Dominic",
""
],
[
"Pinho",
"Marco",
""
],
[
"Haider",
"Muhammad Ammar",
""
],
[
"Aristizabal",
"Alejandro",
""
],
[
"Karargyris",
"Alexandros",
""
],
[
"Kassem",
"Hasan",
""
],
[
"Pati",
"Sarthak",
""
],
[
"Sheller",
"Micah",
""
],
[
"Alonso-Basanta",
"Michelle",
""
],
[
"Villanueva-Meyer",
"Javier",
""
],
[
"Rauschecker",
"Andreas M.",
""
],
[
"Nada",
"Ayman",
""
],
[
"Aboian",
"Mariam",
""
],
[
"Flanders",
"Adam E.",
""
],
[
"Wiestler",
"Benedikt",
""
],
[
"Bakas",
"Spyridon",
""
],
[
"Calabrese",
"Evan",
""
]
]
| TITLE: Analysis of the BraTS 2023 Intracranial Meningioma Segmentation
Challenge
ABSTRACT: We describe the design and results from the BraTS 2023 Intracranial
Meningioma Segmentation Challenge. The BraTS Meningioma Challenge differed from
prior BraTS Glioma challenges in that it focused on meningiomas, which are
typically benign extra-axial tumors with diverse radiologic and anatomical
presentation and a propensity for multiplicity. Nine participating teams each
developed deep-learning automated segmentation models using image data from the
largest multi-institutional systematically expert annotated multilabel
multi-sequence meningioma MRI dataset to date, which included 1000 training set
cases, 141 validation set cases, and 283 hidden test set cases. Each case
included T2, FLAIR, T1, and T1Gd brain MRI sequences with associated tumor
compartment labels delineating enhancing tumor, non-enhancing tumor, and
surrounding non-enhancing FLAIR hyperintensity. Participant automated
segmentation models were evaluated and ranked based on a scoring system
evaluating lesion-wise metrics including dice similarity coefficient (DSC) and
95% Hausdorff Distance. The top-ranked team had a lesion-wise median DSC of
0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor,
respectively, and a corresponding average DSC of
0.899, 0.904, and 0.871, respectively. These results serve as state-of-the-art
benchmarks for future pre-operative meningioma automated segmentation
algorithms. Additionally, we found that 1286 of 1424 cases (90.3%) had at least
1 compartment voxel abutting the edge of the skull-stripped image, which
requires further investigation into optimal pre-processing face anonymization
steps.
| no_new_dataset | 0.933794 |
2405.20446 | Maya Anderson | Maya Anderson, Guy Amit, Abigail Goldsteen | Is My Data in Your Retrieval Database? Membership Inference Attacks
Against Retrieval Augmented Generation | 12 pages, 4 figures | Proceedings of the 11th International Conference on Information
Systems Security and Privacy - Volume 2: ICISSP 2025; ISBN 978-989-758-735-1;
ISSN 2184-4356, SciTePress, pages 474-485 | 10.5220/0013108300003899 | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Retrieval Augmented Generation (RAG) systems have shown great promise in
natural language processing. However, their reliance on data stored in a
retrieval database, which may contain proprietary or sensitive information,
introduces new privacy concerns. Specifically, an attacker may be able to infer
whether a certain text passage appears in the retrieval database by observing
the outputs of the RAG system, an attack known as a Membership Inference Attack
(MIA). Despite the significance of this threat, MIAs against RAG systems have
so far remained under-explored. This study addresses this gap by introducing an
efficient and easy-to-use method for conducting MIA against RAG systems. We
demonstrate the effectiveness of our attack using two benchmark datasets and
multiple generative models, showing that the membership of a document in the
retrieval database can be efficiently determined through the creation of an
appropriate prompt in both black-box and gray-box settings. Moreover, we
introduce an initial defense strategy based on adding instructions to the RAG
template, which shows high effectiveness for some datasets and models. Our
findings highlight the importance of implementing security countermeasures in
deployed RAG systems and developing more advanced defenses to protect the
privacy and security of retrieval databases.
| [
{
"version": "v1",
"created": "Thu, 30 May 2024 19:46:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2024 09:39:39 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Feb 2025 14:35:38 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Anderson",
"Maya",
""
],
[
"Amit",
"Guy",
""
],
[
"Goldsteen",
"Abigail",
""
]
]
| TITLE: Is My Data in Your Retrieval Database? Membership Inference Attacks
Against Retrieval Augmented Generation
ABSTRACT: Retrieval Augmented Generation (RAG) systems have shown great promise in
natural language processing. However, their reliance on data stored in a
retrieval database, which may contain proprietary or sensitive information,
introduces new privacy concerns. Specifically, an attacker may be able to infer
whether a certain text passage appears in the retrieval database by observing
the outputs of the RAG system, an attack known as a Membership Inference Attack
(MIA). Despite the significance of this threat, MIAs against RAG systems have
so far remained under-explored. This study addresses this gap by introducing an
efficient and easy-to-use method for conducting MIA against RAG systems. We
demonstrate the effectiveness of our attack using two benchmark datasets and
multiple generative models, showing that the membership of a document in the
retrieval database can be efficiently determined through the creation of an
appropriate prompt in both black-box and gray-box settings. Moreover, we
introduce an initial defense strategy based on adding instructions to the RAG
template, which shows high effectiveness for some datasets and models. Our
findings highlight the importance of implementing security countermeasures in
deployed RAG systems and developing more advanced defenses to protect the
privacy and security of retrieval databases.
| no_new_dataset | 0.944228 |
2406.00965 | Xinglin Chen | Yishuai Cai, Xinglin Chen, Yunxin Mao, Minglong Li, Shaowu Yang,
Wenjing Yang, Ji Wang | HBTP: Heuristic Behavior Tree Planning with Large Language Model
Reasoning | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Behavior Trees (BTs) are increasingly becoming a popular control structure in
robotics due to their modularity, reactivity, and robustness. In terms of BT
generation methods, BT planning shows promise for generating reliable BTs.
However, the scalability of BT planning is often constrained by prolonged
planning times in complex scenarios, largely due to a lack of domain knowledge.
In contrast, pre-trained Large Language Models (LLMs) have demonstrated task
reasoning capabilities across various domains, though the correctness and
safety of their planning remain uncertain. This paper proposes integrating BT
planning with LLM reasoning, introducing Heuristic Behavior Tree Planning
(HBTP), a reliable and efficient framework for BT generation. The key idea in
HBTP is to leverage LLMs for task-specific reasoning to generate a heuristic
path, which BT planning can then follow to expand efficiently. We first
introduce the heuristic BT expansion process, along with two heuristic variants
designed for optimal planning and satisficing planning, respectively. Then, we
propose methods to address the inaccuracies of LLM reasoning, including action
space pruning and reflective feedback, to further enhance both reasoning
accuracy and planning efficiency. Experiments demonstrate the theoretical
bounds of HBTP, and results from four datasets confirm its practical
effectiveness in everyday service robot applications.
| [
{
"version": "v1",
"created": "Mon, 3 Jun 2024 03:38:56 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jun 2024 01:41:24 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Oct 2024 08:55:21 GMT"
},
{
"version": "v4",
"created": "Thu, 10 Oct 2024 02:36:53 GMT"
},
{
"version": "v5",
"created": "Fri, 7 Mar 2025 08:27:32 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Cai",
"Yishuai",
""
],
[
"Chen",
"Xinglin",
""
],
[
"Mao",
"Yunxin",
""
],
[
"Li",
"Minglong",
""
],
[
"Yang",
"Shaowu",
""
],
[
"Yang",
"Wenjing",
""
],
[
"Wang",
"Ji",
""
]
]
| TITLE: HBTP: Heuristic Behavior Tree Planning with Large Language Model
Reasoning
ABSTRACT: Behavior Trees (BTs) are increasingly becoming a popular control structure in
robotics due to their modularity, reactivity, and robustness. In terms of BT
generation methods, BT planning shows promise for generating reliable BTs.
However, the scalability of BT planning is often constrained by prolonged
planning times in complex scenarios, largely due to a lack of domain knowledge.
In contrast, pre-trained Large Language Models (LLMs) have demonstrated task
reasoning capabilities across various domains, though the correctness and
safety of their planning remain uncertain. This paper proposes integrating BT
planning with LLM reasoning, introducing Heuristic Behavior Tree Planning
(HBTP), a reliable and efficient framework for BT generation. The key idea in
HBTP is to leverage LLMs for task-specific reasoning to generate a heuristic
path, which BT planning can then follow to expand efficiently. We first
introduce the heuristic BT expansion process, along with two heuristic variants
designed for optimal planning and satisficing planning, respectively. Then, we
propose methods to address the inaccuracies of LLM reasoning, including action
space pruning and reflective feedback, to further enhance both reasoning
accuracy and planning efficiency. Experiments demonstrate the theoretical
bounds of HBTP, and results from four datasets confirm its practical
effectiveness in everyday service robot applications.
| no_new_dataset | 0.950041 |
2406.05053 | Adish Singla | Nachiket Kotalwar, Alkis Gotovos, Adish Singla | Hints-In-Browser: Benchmarking Language Models for Programming Feedback
Generation | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative AI and large language models hold great promise in enhancing
programming education by generating individualized feedback and hints for
learners. Recent works have primarily focused on improving the quality of
generated feedback to achieve human tutors' quality. While quality is an
important performance criterion, it is not the only criterion to optimize for
real-world educational deployments. In this paper, we benchmark language models
for programming feedback generation across several performance criteria,
including quality, cost, time, and data privacy. The key idea is to leverage
recent advances in the new paradigm of in-browser inference that allow running
these models directly in the browser, thereby providing direct benefits across
cost and data privacy. To boost the feedback quality of small models compatible
with in-browser inference engines, we develop a fine-tuning pipeline based on
GPT-4 generated synthetic data. We showcase the efficacy of fine-tuned
Llama3-8B and Phi3-3.8B 4-bit quantized models using WebLLM's in-browser
inference engine on three different Python programming datasets. We will
release the full implementation along with a web app and datasets to facilitate
further research on in-browser language models.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2024 16:22:51 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 12:46:14 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Kotalwar",
"Nachiket",
""
],
[
"Gotovos",
"Alkis",
""
],
[
"Singla",
"Adish",
""
]
]
| TITLE: Hints-In-Browser: Benchmarking Language Models for Programming Feedback
Generation
ABSTRACT: Generative AI and large language models hold great promise in enhancing
programming education by generating individualized feedback and hints for
learners. Recent works have primarily focused on improving the quality of
generated feedback to achieve human tutors' quality. While quality is an
important performance criterion, it is not the only criterion to optimize for
real-world educational deployments. In this paper, we benchmark language models
for programming feedback generation across several performance criteria,
including quality, cost, time, and data privacy. The key idea is to leverage
recent advances in the new paradigm of in-browser inference that allow running
these models directly in the browser, thereby providing direct benefits across
cost and data privacy. To boost the feedback quality of small models compatible
with in-browser inference engines, we develop a fine-tuning pipeline based on
GPT-4 generated synthetic data. We showcase the efficacy of fine-tuned
Llama3-8B and Phi3-3.8B 4-bit quantized models using WebLLM's in-browser
inference engine on three different Python programming datasets. We will
release the full implementation along with a web app and datasets to facilitate
further research on in-browser language models.
| no_new_dataset | 0.936518 |
2406.09367 | Zijia Zhao | Zijia Zhao, Haoyu Lu, Yuqi Huo, Yifan Du, Tongtian Yue, Longteng Guo,
Bingning Wang, Weipeng Chen, Jing Liu | Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video
MLLMs | ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Video understanding is a crucial next step for multimodal large language
models (MLLMs). Various benchmarks have been introduced to better evaluate
MLLMs. Nevertheless, current video benchmarks are still inefficient for
evaluating video models during iterative development due to the high cost of
constructing datasets and the difficulty in isolating specific skills. In this
paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark
construction framework through synthetic video generation. VideoNIAH decouples
video content from their query-responses by inserting unrelated visual
'needles' into original videos. The framework automates the generation of
query-response pairs using predefined rules, minimizing manual labor. The
queries focus on specific aspects of video understanding, enabling more
skill-specific evaluations. The separation between video content and the
queries also allows for increased video variety and evaluations across different
lengths. Utilizing VideoNIAH, we compile a video benchmark VNBench, which
includes tasks such as retrieval, ordering, and counting to evaluate three key
aspects of video understanding: temporal perception, chronological ordering,
and spatio-temporal coherence. We conduct a comprehensive evaluation of both
proprietary and open-source models, uncovering significant differences in their
video understanding capabilities across various tasks. Additionally, we perform
an in-depth analysis of the test results and model configurations. Based on
these findings, we provide some advice for improving video MLLM training,
offering valuable insights to guide future research and model development. The
code and data are available at https://github.com/joez17/VideoNIAH.
| [
{
"version": "v1",
"created": "Thu, 13 Jun 2024 17:50:05 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 14:12:49 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 09:40:34 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhao",
"Zijia",
""
],
[
"Lu",
"Haoyu",
""
],
[
"Huo",
"Yuqi",
""
],
[
"Du",
"Yifan",
""
],
[
"Yue",
"Tongtian",
""
],
[
"Guo",
"Longteng",
""
],
[
"Wang",
"Bingning",
""
],
[
"Chen",
"Weipeng",
""
],
[
"Liu",
"Jing",
""
]
]
| TITLE: Needle In A Video Haystack: A Scalable Synthetic Evaluator for Video
MLLMs
ABSTRACT: Video understanding is a crucial next step for multimodal large language
models (MLLMs). Various benchmarks have been introduced to better evaluate
MLLMs. Nevertheless, current video benchmarks are still inefficient for
evaluating video models during iterative development due to the high cost of
constructing datasets and the difficulty in isolating specific skills. In this
paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark
construction framework through synthetic video generation. VideoNIAH decouples
video content from their query-responses by inserting unrelated visual
'needles' into original videos. The framework automates the generation of
query-response pairs using predefined rules, minimizing manual labor. The
queries focus on specific aspects of video understanding, enabling more
skill-specific evaluations. The separation between video content and the
queries also allows for increased video variety and evaluations across different
lengths. Utilizing VideoNIAH, we compile a video benchmark VNBench, which
includes tasks such as retrieval, ordering, and counting to evaluate three key
aspects of video understanding: temporal perception, chronological ordering,
and spatio-temporal coherence. We conduct a comprehensive evaluation of both
proprietary and open-source models, uncovering significant differences in their
video understanding capabilities across various tasks. Additionally, we perform
an in-depth analysis of the test results and model configurations. Based on
these findings, we provide some advice for improving video MLLM training,
offering valuable insights to guide future research and model development. The
code and data are available at https://github.com/joez17/VideoNIAH.
| no_new_dataset | 0.935051 |
2406.09760 | Changyu Chen | Changyu Chen, Zichen Liu, Chao Du, Tianyu Pang, Qian Liu, Arunesh
Sinha, Pradeep Varakantham, Min Lin | Bootstrapping Language Models with DPO Implicit Rewards | Accepted in ICLR 2025 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human alignment in large language models (LLMs) is an active area of
research. A recent groundbreaking work, direct preference optimization (DPO),
has greatly simplified the process from past work in reinforcement learning
from human feedback (RLHF) by bypassing the reward learning stage in RLHF. DPO,
after training, provides an implicit reward model. In this work, we make a
novel observation that this implicit reward model can by itself be used in a
bootstrapping fashion to further align the LLM. Our approach is to use the
rewards from a current LLM to construct a preference dataset, which is then
used in subsequent DPO rounds. We incorporate two refinements to further
improve our approach: 1) length-regularized reward shaping to make the
preference dataset length-unbiased; 2) experience replay to enhance the quality
of the preference dataset. Our approach, named self-alignment with DPO ImpliCit
rEwards (DICE), shows great improvements in alignment. It achieves an increase
of more than 8$\%$ in length-controlled win rate on AlpacaEval 2 for all the
different base models that we tried, without relying on external feedback. Our
code is available at https://github.com/sail-sg/dice.
| [
{
"version": "v1",
"created": "Fri, 14 Jun 2024 06:57:18 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 15:26:03 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Chen",
"Changyu",
""
],
[
"Liu",
"Zichen",
""
],
[
"Du",
"Chao",
""
],
[
"Pang",
"Tianyu",
""
],
[
"Liu",
"Qian",
""
],
[
"Sinha",
"Arunesh",
""
],
[
"Varakantham",
"Pradeep",
""
],
[
"Lin",
"Min",
""
]
]
| TITLE: Bootstrapping Language Models with DPO Implicit Rewards
ABSTRACT: Human alignment in large language models (LLMs) is an active area of
research. A recent groundbreaking work, direct preference optimization (DPO),
has greatly simplified the process from past work in reinforcement learning
from human feedback (RLHF) by bypassing the reward learning stage in RLHF. DPO,
after training, provides an implicit reward model. In this work, we make a
novel observation that this implicit reward model can by itself be used in a
bootstrapping fashion to further align the LLM. Our approach is to use the
rewards from a current LLM to construct a preference dataset, which is then
used in subsequent DPO rounds. We incorporate two refinements to further
improve our approach: 1) length-regularized reward shaping to make the
preference dataset length-unbiased; 2) experience replay to enhance the quality
of the preference dataset. Our approach, named self-alignment with DPO ImpliCit
rEwards (DICE), shows great improvements in alignment. It achieves an increase
of more than 8$\%$ in length-controlled win rate on AlpacaEval 2 for all the
different base models that we tried, without relying on external feedback. Our
code is available at https://github.com/sail-sg/dice.
| no_new_dataset | 0.943086 |
2406.17975 | Matthieu Meeus | Matthieu Meeus, Igor Shilov, Shubham Jain, Manuel Faysse, Marek Rei,
Yves-Alexandre de Montjoye | SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How
to Fix It) | IEEE Conference on Secure and Trustworthy Machine Learning (SaTML
2025) | null | null | null | cs.CL cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Whether LLMs memorize their training data and what this means, from measuring
privacy leakage to detecting copyright violations, has become a rapidly growing
area of research. In the last few months, more than 10 new methods have been
proposed to perform Membership Inference Attacks (MIAs) against LLMs. Contrary
to traditional MIAs which rely on fixed-but randomized-records or models, these
methods are mostly trained and tested on datasets collected post-hoc. Sets of
members and non-members, used to evaluate the MIA, are constructed using
informed guesses after the release of a model. This lack of randomization
raises concerns of a distribution shift between members and non-members. In
this work, we first extensively review the literature on MIAs against LLMs and
show that, while most work focuses on sequence-level MIAs evaluated in post-hoc
setups, a range of target models, motivations and units of interest are
considered. We then quantify distribution shifts present in 6 datasets used in
the literature using a model-less bag-of-words classifier and show that all
datasets constructed post-hoc suffer from strong distribution shifts. These
shifts invalidate the claims of LLMs memorizing strongly in real-world
scenarios and, potentially, also the methodological contributions of the recent
papers based on these datasets. Yet, all hope might not be lost. We introduce
important considerations to properly evaluate MIAs against LLMs and discuss, in
turn, potential ways forwards: randomized test splits, injections of randomized
(unique) sequences, randomized fine-tuning, and several post-hoc control
methods. While each option comes with its advantages and limitations, we
believe they collectively provide solid grounds to guide MIA development and
study LLM memorization. We conclude with an overview of recommended approaches
to benchmark sequence-level and document-level MIAs against LLMs.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 23:12:07 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Oct 2024 17:49:13 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 16:30:07 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Meeus",
"Matthieu",
""
],
[
"Shilov",
"Igor",
""
],
[
"Jain",
"Shubham",
""
],
[
"Faysse",
"Manuel",
""
],
[
"Rei",
"Marek",
""
],
[
"de Montjoye",
"Yves-Alexandre",
""
]
]
| TITLE: SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How
to Fix It)
ABSTRACT: Whether LLMs memorize their training data and what this means, from measuring
privacy leakage to detecting copyright violations, has become a rapidly growing
area of research. In the last few months, more than 10 new methods have been
proposed to perform Membership Inference Attacks (MIAs) against LLMs. Contrary
to traditional MIAs which rely on fixed-but randomized-records or models, these
methods are mostly trained and tested on datasets collected post-hoc. Sets of
members and non-members, used to evaluate the MIA, are constructed using
informed guesses after the release of a model. This lack of randomization
raises concerns of a distribution shift between members and non-members. In
this work, we first extensively review the literature on MIAs against LLMs and
show that, while most work focuses on sequence-level MIAs evaluated in post-hoc
setups, a range of target models, motivations and units of interest are
considered. We then quantify distribution shifts present in 6 datasets used in
the literature using a model-less bag-of-words classifier and show that all
datasets constructed post-hoc suffer from strong distribution shifts. These
shifts invalidate the claims of LLMs memorizing strongly in real-world
scenarios and, potentially, also the methodological contributions of the recent
papers based on these datasets. Yet, all hope might not be lost. We introduce
important considerations to properly evaluate MIAs against LLMs and discuss, in
turn, potential ways forward: randomized test splits, injections of randomized
(unique) sequences, randomized fine-tuning, and several post-hoc control
methods. While each option comes with its advantages and limitations, we
believe they collectively provide solid grounds to guide MIA development and
study LLM memorization. We conclude with an overview of recommended approaches
to benchmark sequence-level and document-level MIAs against LLMs.
| no_new_dataset | 0.946695 |
2407.01194 | Amitoz Azad | Amitoz Azad and Yuan Fang | A Learned Generalized Geodesic Distance Function-Based Approach for Node
Feature Augmentation on Graphs | Accepted at KDD 2024 Research Track | null | 10.1145/3637528.3671858 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Geodesic distances on manifolds have numerous applications in image
processing, computer graphics and computer vision. In this work, we introduce
an approach called `LGGD' (Learned Generalized Geodesic Distances). This method
involves generating node features by learning a generalized geodesic distance
function through a training pipeline that incorporates training data, graph
topology and the node content features. The strength of this method lies in the
proven robustness of the generalized geodesic distances to noise and outliers.
Our contributions encompass improved performance in node classification tasks,
competitive results with state-of-the-art methods on real-world graph datasets,
the demonstration of the learnability of parameters within the generalized
geodesic equation on graphs, and the dynamic inclusion of new labels.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2024 11:39:15 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 07:47:19 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Azad",
"Amitoz",
""
],
[
"Fang",
"Yuan",
""
]
]
| TITLE: A Learned Generalized Geodesic Distance Function-Based Approach for Node
Feature Augmentation on Graphs
ABSTRACT: Geodesic distances on manifolds have numerous applications in image
processing, computer graphics and computer vision. In this work, we introduce
an approach called `LGGD' (Learned Generalized Geodesic Distances). This method
involves generating node features by learning a generalized geodesic distance
function through a training pipeline that incorporates training data, graph
topology and the node content features. The strength of this method lies in the
proven robustness of the generalized geodesic distances to noise and outliers.
Our contributions encompass improved performance in node classification tasks,
competitive results with state-of-the-art methods on real-world graph datasets,
the demonstration of the learnability of parameters within the generalized
geodesic equation on graphs, and the dynamic inclusion of new labels.
| no_new_dataset | 0.955319 |
2407.01888 | Xue-Yu Du | Xueyu Du, Lilian Zhang, Ruochen Liu, Maosong Wang, Wenqi Wu and Jun
Mao | PO-MSCKF: An Efficient Visual-Inertial Odometry by Reconstructing the
Multi-State Constrained Kalman Filter with the Pose-only Theory | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient Visual-Inertial Odometry (VIO) is crucial for payload-constrained
robots. Though modern optimization-based algorithms have achieved superior
accuracy, the MSCKF-based VIO algorithms are still in wide demand for their
efficient and consistent performance. As MSCKF is built upon the conventional
multi-view geometry, the measured residuals are not only related to the state
errors but also to the feature position errors. To apply EKF fusion, a
projection process is required to remove the feature position error from the
observation model, which can lead to model and accuracy degradation. To obtain
an efficient visual-inertial fusion model, while also preserving the model
consistency, we propose to reconstruct the MSCKF VIO with the novel Pose-Only
(PO) multi-view geometry description. In the newly constructed filter, we have
modeled PO reprojection residuals, which are solely related to the motion
states and thus overcome the requirements of space projection. Moreover, the
new filter does not require any feature position information, which removes the
computational cost and linearization errors brought in by the 3D reconstruction
procedure. We have conducted comprehensive experiments on multiple datasets,
where the proposed method has shown accuracy improvements and consistent
performance in challenging sequences.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2024 02:18:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Du",
"Xueyu",
""
],
[
"Zhang",
"Lilian",
""
],
[
"Liu",
"Ruochen",
""
],
[
"Wang",
"Maosong",
""
],
[
"Wu",
"Wenqi",
""
],
[
"Mao",
"Jun",
""
]
]
| TITLE: PO-MSCKF: An Efficient Visual-Inertial Odometry by Reconstructing the
Multi-State Constrained Kalman Filter with the Pose-only Theory
ABSTRACT: Efficient Visual-Inertial Odometry (VIO) is crucial for payload-constrained
robots. Though modern optimization-based algorithms have achieved superior
accuracy, the MSCKF-based VIO algorithms are still widely demanded for their
efficient and consistent performance. As MSCKF is built upon the conventional
multi-view geometry, the measured residuals are not only related to the state
errors but also related to the feature position errors. To apply EKF fusion, a
projection process is required to remove the feature position error from the
observation model, which can lead to model and accuracy degradation. To obtain
an efficient visual-inertial fusion model, while also preserving the model
consistency, we propose to reconstruct the MSCKF VIO with the novel Pose-Only
(PO) multi-view geometry description. In the newly constructed filter, we have
modeled PO reprojection residuals, which are solely related to the motion
states and thus overcome the requirements of space projection. Moreover, the
new filter does not require any feature position information, which removes the
computational cost and linearization errors brought in by the 3D reconstruction
procedure. We have conducted comprehensive experiments on multiple datasets,
where the proposed method has shown accuracy improvements and consistent
performance in challenging sequences.
| no_new_dataset | 0.947137 |
2407.02235 | Cheng-Yi Li | Cheng-Yi Li, Kao-Jung Chang, Cheng-Fu Yang, Hsin-Yu Wu, Wenting Chen,
Hritik Bansal, Ling Chen, Yi-Ping Yang, Yu-Chun Chen, Shih-Pin Chen,
Jiing-Feng Lirng, Kai-Wei Chang, Shih-Hwa Chiou | Towards a Holistic Framework for Multimodal Large Language Models in
Three-dimensional Brain CT Report Generation | 6 figures, 5 supplementary figures, 8 supplementary tables | Nature Communications 16, 2258 (2025) | 10.1038/s41467-025-57426-0 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Multi-modal large language models (MLLMs) have been given free rein to
explore exciting medical applications with a primary focus on radiology report
generation. Nevertheless, the preliminary success in 2D radiology captioning is
incompetent to reflect the real-world diagnostic challenge in the volumetric 3D
anatomy. To mitigate three crucial limitations in the existing literature,
namely (1) data complexity, (2) model capacity, and (3) evaluation metric
fidelity, we collected a 3D-BrainCT dataset of 18,885 text-scan pairs and
applied clinical visual instruction tuning (CVIT) to train BrainGPT
models to generate radiology-adherent 3D brain CT reports. Statistically, our
BrainGPT scored BLEU-1 = 44.35, BLEU-4 = 20.38, METEOR = 30.13, ROUGE-L = 47.6,
and CIDEr-R = 211.77 during internal testing and demonstrated an accuracy of
0.91 in captioning midline shifts on the external validation CQ500 dataset. By
further inspecting the captioned report, we reported that the traditional
metrics appeared to measure only the surface text similarity and failed to
gauge the information density of the diagnostic purpose. To close this gap, we
proposed a novel Feature-Oriented Radiology Task Evaluation (FORTE) to estimate
the report's clinical relevance (lesion feature and landmarks). Notably, the
BrainGPT model scored an average FORTE F1-score of 0.71 (degree=0.661;
landmark=0.706; feature=0.693; impression=0.779). To demonstrate that BrainGPT
models possess objective readiness to generate human-like radiology reports, we
conducted a Turing test that enrolled 11 physician evaluators, and around 74%
of the BrainGPT-generated captions were indistinguishable from those written by
humans. Our work embodies a holistic framework that showcased the first-hand
experience of curating a 3D brain CT dataset, fine-tuning anatomy-sensible
language models, and proposing robust radiology evaluation metrics.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2024 12:58:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Li",
"Cheng-Yi",
""
],
[
"Chang",
"Kao-Jung",
""
],
[
"Yang",
"Cheng-Fu",
""
],
[
"Wu",
"Hsin-Yu",
""
],
[
"Chen",
"Wenting",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Chen",
"Ling",
""
],
[
"Yang",
"Yi-Ping",
""
],
[
"Chen",
"Yu-Chun",
""
],
[
"Chen",
"Shih-Pin",
""
],
[
"Lirng",
"Jiing-Feng",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Chiou",
"Shih-Hwa",
""
]
]
| TITLE: Towards a Holistic Framework for Multimodal Large Language Models in
Three-dimensional Brain CT Report Generation
ABSTRACT: Multi-modal large language models (MLLMs) have been given free rein to
explore exciting medical applications with a primary focus on radiology report
generation. Nevertheless, the preliminary success in 2D radiology captioning is
incompetent to reflect the real-world diagnostic challenge in the volumetric 3D
anatomy. To mitigate three crucial limitations in the existing literature,
namely (1) data complexity, (2) model capacity, and (3) evaluation metric
fidelity, we collected a 3D-BrainCT dataset of 18,885 text-scan pairs and
applied clinical visual instruction tuning (CVIT) to train BrainGPT
models to generate radiology-adherent 3D brain CT reports. Statistically, our
BrainGPT scored BLEU-1 = 44.35, BLEU-4 = 20.38, METEOR = 30.13, ROUGE-L = 47.6,
and CIDEr-R = 211.77 during internal testing and demonstrated an accuracy of
0.91 in captioning midline shifts on the external validation CQ500 dataset. By
further inspecting the captioned report, we reported that the traditional
metrics appeared to measure only the surface text similarity and failed to
gauge the information density of the diagnostic purpose. To close this gap, we
proposed a novel Feature-Oriented Radiology Task Evaluation (FORTE) to estimate
the report's clinical relevance (lesion feature and landmarks). Notably, the
BrainGPT model scored an average FORTE F1-score of 0.71 (degree=0.661;
landmark=0.706; feature=0.693; impression=0.779). To demonstrate that BrainGPT
models possess objective readiness to generate human-like radiology reports, we
conducted a Turing test that enrolled 11 physician evaluators, and around 74%
of the BrainGPT-generated captions were indistinguishable from those written by
humans. Our work embodies a holistic framework that showcased the first-hand
experience of curating a 3D brain CT dataset, fine-tuning anatomy-sensible
language models, and proposing robust radiology evaluation metrics.
| no_new_dataset | 0.927822 |
2407.08693 | William Chen | Michał Zawalski and William Chen and Karl Pertsch and Oier Mees and
Chelsea Finn and Sergey Levine | Robotic Control via Embodied Chain-of-Thought Reasoning | Project Website: https://embodied-cot.github.io. Updated funding
information | null | null | null | cs.RO cs.LG | http://creativecommons.org/licenses/by/4.0/ | A key limitation of learned robot control policies is their inability to
generalize outside their training data. Recent works on vision-language-action
models (VLAs) have shown that the use of large, internet pre-trained
vision-language models as the backbone of learned robot policies can
substantially improve their robustness and generalization ability. Yet, one of
the most exciting capabilities of large vision-language models in other domains
is their ability to reason iteratively through complex problems. Can that same
capability be brought into robotics to allow policies to improve performance by
reasoning about a given task before acting? Naive use of "chain-of-thought"
(CoT) style prompting is significantly less effective with standard VLAs
because of the relatively simple training examples that are available to them.
Additionally, purely semantic reasoning about sub-tasks, as is common in
regular CoT, is insufficient for robot policies that need to ground their
reasoning in sensory observations and the robot state. To this end, we
introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we
train VLAs to perform multiple steps of reasoning about plans, sub-tasks,
motions, and visually grounded features like object bounding boxes and end
effector positions, before predicting the robot action. We design a scalable
pipeline for generating synthetic training data for ECoT on large robot
datasets. We demonstrate that ECoT increases the absolute success rate of
OpenVLA, the current strongest open-source VLA policy, by 28% across
challenging generalization tasks, without any additional robot training data.
Additionally, ECoT makes it easier for humans to interpret a policy's failures
and correct its behavior using natural language.
| [
{
"version": "v1",
"created": "Thu, 11 Jul 2024 17:31:01 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Jul 2024 19:19:34 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Mar 2025 19:29:03 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zawalski",
"Michał",
""
],
[
"Chen",
"William",
""
],
[
"Pertsch",
"Karl",
""
],
[
"Mees",
"Oier",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Levine",
"Sergey",
""
]
]
| TITLE: Robotic Control via Embodied Chain-of-Thought Reasoning
ABSTRACT: A key limitation of learned robot control policies is their inability to
generalize outside their training data. Recent works on vision-language-action
models (VLAs) have shown that the use of large, internet pre-trained
vision-language models as the backbone of learned robot policies can
substantially improve their robustness and generalization ability. Yet, one of
the most exciting capabilities of large vision-language models in other domains
is their ability to reason iteratively through complex problems. Can that same
capability be brought into robotics to allow policies to improve performance by
reasoning about a given task before acting? Naive use of "chain-of-thought"
(CoT) style prompting is significantly less effective with standard VLAs
because of the relatively simple training examples that are available to them.
Additionally, purely semantic reasoning about sub-tasks, as is common in
regular CoT, is insufficient for robot policies that need to ground their
reasoning in sensory observations and the robot state. To this end, we
introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we
train VLAs to perform multiple steps of reasoning about plans, sub-tasks,
motions, and visually grounded features like object bounding boxes and end
effector positions, before predicting the robot action. We design a scalable
pipeline for generating synthetic training data for ECoT on large robot
datasets. We demonstrate that ECoT increases the absolute success rate of
OpenVLA, the current strongest open-source VLA policy, by 28% across
challenging generalization tasks, without any additional robot training data.
Additionally, ECoT makes it easier for humans to interpret a policy's failures
and correct its behavior using natural language.
| no_new_dataset | 0.945851 |
2407.12282 | Vint Lee | Vint Lee, Minh Nguyen, Leena Elzeiny, Chun Deng, Pieter Abbeel, John
Wawrzynek | Chip Placement with Diffusion Models | null | null | null | null | cs.LG cs.AI cs.AR | http://creativecommons.org/licenses/by/4.0/ | Macro placement is a vital step in digital circuit design that defines the
physical location of large collections of components, known as macros, on a 2D
chip. Because key performance metrics of the chip are determined by the
placement, optimizing it is crucial. Existing learning-based methods typically
fall short because of their reliance on reinforcement learning (RL), which is
slow and struggles to generalize, requiring online training on each new
circuit. Instead, we train a diffusion model capable of placing new circuits
zero-shot, using guided sampling in lieu of RL to optimize placement quality.
To enable such models to train at scale, we designed a capable yet efficient
architecture for the denoising model, and propose a novel algorithm to generate
large synthetic datasets for pre-training. To allow zero-shot transfer to real
circuits, we empirically study the design decisions of our dataset generation
algorithm, and identify several key factors enabling generalization. When
trained on our synthetic data, our models generate high-quality placements on
unseen, realistic circuits, achieving competitive performance on placement
benchmarks compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2024 03:02:24 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 05:47:20 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Lee",
"Vint",
""
],
[
"Nguyen",
"Minh",
""
],
[
"Elzeiny",
"Leena",
""
],
[
"Deng",
"Chun",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Wawrzynek",
"John",
""
]
]
| TITLE: Chip Placement with Diffusion Models
ABSTRACT: Macro placement is a vital step in digital circuit design that defines the
physical location of large collections of components, known as macros, on a 2D
chip. Because key performance metrics of the chip are determined by the
placement, optimizing it is crucial. Existing learning-based methods typically
fall short because of their reliance on reinforcement learning (RL), which is
slow and struggles to generalize, requiring online training on each new
circuit. Instead, we train a diffusion model capable of placing new circuits
zero-shot, using guided sampling in lieu of RL to optimize placement quality.
To enable such models to train at scale, we designed a capable yet efficient
architecture for the denoising model, and propose a novel algorithm to generate
large synthetic datasets for pre-training. To allow zero-shot transfer to real
circuits, we empirically study the design decisions of our dataset generation
algorithm, and identify several key factors enabling generalization. When
trained on our synthetic data, our models generate high-quality placements on
unseen, realistic circuits, achieving competitive performance on placement
benchmarks compared to state-of-the-art methods.
| no_new_dataset | 0.911967 |
2407.21604 | JongWoo Kim | JongWoo Kim, Bryan Wong, Huazhu Fu, Willmer Rafell Quiñones and
MunYong Yi | MicroMIL: Graph-based Contextual Multiple Instance Learning for Patient
Diagnosis Using Microscopy Images | The first two authors contributed equally to this work | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer diagnosis has greatly benefited from the integration of whole-slide
images (WSIs) with multiple instance learning (MIL), enabling high-resolution
analysis of tissue morphology. Graph-based MIL (GNN-MIL) approaches have
emerged as powerful solutions for capturing spatial and relational structures
in WSIs, thereby improving diagnostic accuracy. However, despite their
effectiveness, WSIs require significant computational and infrastructural
resources, limiting accessibility in resource-constrained settings. Microscopy
imaging provides a cost-effective alternative, but applying GNN-MIL to
microscopy imaging is challenging due to the absence of spatial coordinates and
the high redundancy in pathologist-acquired images. To address these issues, we
introduce MicroMIL, the first weakly-supervised MIL framework specifically
designed for microscopy imaging. MicroMIL leverages a representative image
extractor (RIE) that employs deep cluster embedding (DCE) and hard
Gumbel-Softmax to dynamically reduce redundancy and select representative
images. These selected images serve as graph nodes, with edges determined by
cosine similarity, eliminating the need for spatial coordinates while
preserving relational structure. Extensive experiments on a real-world colon
cancer dataset and the BreakHis dataset demonstrate that MicroMIL achieves
state-of-the-art performance, improving both diagnostic accuracy and robustness
to redundancy. The code is available at
https://anonymous.4open.science/r/MicroMIL-6C7C
| [
{
"version": "v1",
"created": "Wed, 31 Jul 2024 13:38:47 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 15:44:36 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Kim",
"JongWoo",
""
],
[
"Wong",
"Bryan",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Quiñones",
"Willmer Rafell",
""
],
[
"Yi",
"MunYong",
""
]
]
| TITLE: MicroMIL: Graph-based Contextual Multiple Instance Learning for Patient
Diagnosis Using Microscopy Images
ABSTRACT: Cancer diagnosis has greatly benefited from the integration of whole-slide
images (WSIs) with multiple instance learning (MIL), enabling high-resolution
analysis of tissue morphology. Graph-based MIL (GNN-MIL) approaches have
emerged as powerful solutions for capturing spatial and relational structures
in WSIs, thereby improving diagnostic accuracy. However, despite their
effectiveness, WSIs require significant computational and infrastructural
resources, limiting accessibility in resource-constrained settings. Microscopy
imaging provides a cost-effective alternative, but applying GNN-MIL to
microscopy imaging is challenging due to the absence of spatial coordinates and
the high redundancy in pathologist-acquired images. To address these issues, we
introduce MicroMIL, the first weakly-supervised MIL framework specifically
designed for microscopy imaging. MicroMIL leverages a representative image
extractor (RIE) that employs deep cluster embedding (DCE) and hard
Gumbel-Softmax to dynamically reduce redundancy and select representative
images. These selected images serve as graph nodes, with edges determined by
cosine similarity, eliminating the need for spatial coordinates while
preserving relational structure. Extensive experiments on a real-world colon
cancer dataset and the BreakHis dataset demonstrate that MicroMIL achieves
state-of-the-art performance, improving both diagnostic accuracy and robustness
to redundancy. The code is available at
https://anonymous.4open.science/r/MicroMIL-6C7C
| no_new_dataset | 0.928018 |
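Aside: the hard Gumbel-Softmax selection named in the MicroMIL record can be sketched in a few lines of PyTorch. The scores below are random stand-ins for the representative image extractor's learned scores, and the temperature 0.5 is an arbitrary assumption.

```python
# Hedged sketch of hard Gumbel-Softmax instance selection: a differentiable
# one-hot pick over candidate images. Scores and temperature are stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
scores = torch.randn(1, 10)                             # 10 candidate images
one_hot = F.gumbel_softmax(scores, tau=0.5, hard=True)  # straight-through pick
print(one_hot)                                          # exactly one 1.0 per row
print("selected index:", int(one_hot.argmax(dim=-1)))
```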
2408.01167 | Bryan Wong | Bryan Wong, Sungrae Hong, Mun Yong Yi | Rethinking Pre-Trained Feature Extractor Selection in Multiple Instance
Learning for Whole Slide Image Classification | Accepted to IEEE International Symposium on Biomedical Imaging (ISBI)
2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple instance learning (MIL) has become a preferred method for gigapixel
whole slide image (WSI) classification without requiring patch-level
annotations. Current MIL research primarily relies on embedding-based
approaches, which extract patch features using a pre-trained feature extractor
and aggregate them for slide-level prediction. Despite the critical role of
feature extraction, there is limited guidance on selecting optimal feature
extractors to maximize WSI performance. This study addresses this gap by
systematically evaluating MIL feature extractors across three dimensions:
pre-training dataset, backbone model, and pre-training method. Extensive
experiments were conducted on two public WSI datasets (TCGA-NSCLC and
Camelyon16) using four state-of-the-art (SOTA) MIL models. Our findings reveal
that: 1) selecting a robust self-supervised learning (SSL) method has a greater
impact on performance than relying solely on an in-domain pre-training dataset;
2) prioritizing Transformer-based backbones with deeper architectures over
CNN-based models; and 3) using larger, more diverse pre-training datasets
significantly enhances classification outcomes. We hope that these insights can
provide practical guidance for optimizing WSI classification and explain the
reasons behind the performance advantages of the current SOTA pathology
foundation models. Furthermore, this work may inform the development of more
effective pathology foundation models. Our code is publicly available at
https://github.com/bryanwong17/MIL-Feature-Extractor-Selection
| [
{
"version": "v1",
"created": "Fri, 2 Aug 2024 10:34:23 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Nov 2024 02:01:00 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jan 2025 02:09:15 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Jan 2025 06:30:53 GMT"
},
{
"version": "v5",
"created": "Fri, 7 Mar 2025 03:46:48 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wong",
"Bryan",
""
],
[
"Hong",
"Sungrae",
""
],
[
"Yi",
"Mun Yong",
""
]
]
| TITLE: Rethinking Pre-Trained Feature Extractor Selection in Multiple Instance
Learning for Whole Slide Image Classification
ABSTRACT: Multiple instance learning (MIL) has become a preferred method for gigapixel
whole slide image (WSI) classification without requiring patch-level
annotations. Current MIL research primarily relies on embedding-based
approaches, which extract patch features using a pre-trained feature extractor
and aggregate them for slide-level prediction. Despite the critical role of
feature extraction, there is limited guidance on selecting optimal feature
extractors to maximize WSI performance. This study addresses this gap by
systematically evaluating MIL feature extractors across three dimensions:
pre-training dataset, backbone model, and pre-training method. Extensive
experiments were conducted on two public WSI datasets (TCGA-NSCLC and
Camelyon16) using four state-of-the-art (SOTA) MIL models. Our findings reveal
that: 1) selecting a robust self-supervised learning (SSL) method has a greater
impact on performance than relying solely on an in-domain pre-training dataset;
2) prioritizing Transformer-based backbones with deeper architectures over
CNN-based models; and 3) using larger, more diverse pre-training datasets
significantly enhances classification outcomes. We hope that these insights can
provide practical guidance for optimizing WSI classification and explain the
reasons behind the performance advantages of the current SOTA pathology
foundation models. Furthermore, this work may inform the development of more
effective pathology foundation models. Our code is publicly available at
https://github.com/bryanwong17/MIL-Feature-Extractor-Selection
| no_new_dataset | 0.949669 |
2408.02361 | Renato Vukovic | Renato Vukovic, David Arps, Carel van Niekerk, Benjamin Matthias
Ruppik, Hsien-Chin Lin, Michael Heck, Milica Gašić | Dialogue Ontology Relation Extraction via Constrained Chain-of-Thought
Decoding | Accepted to appear at SIGDIAL 2024. 9 pages, 4 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | State-of-the-art task-oriented dialogue systems typically rely on
task-specific ontologies for fulfilling user queries. The majority of
task-oriented dialogue data, such as customer service recordings, comes without
ontology and annotation. Such ontologies are normally built manually, limiting
the application of specialised systems. Dialogue ontology construction is an
approach for automating that process and typically consists of two steps: term
extraction and relation extraction. In this work, we focus on relation
extraction in a transfer learning set-up. To improve the generalisation, we
propose an extension to the decoding mechanism of large language models. We
adapt Chain-of-Thought (CoT) decoding, recently developed for reasoning
problems, to generative relation extraction. Here, we generate multiple
branches in the decoding space and select the relations based on a confidence
threshold. By constraining the decoding to ontology terms and relations, we aim
to decrease the risk of hallucination. We conduct extensive experimentation on
two widely used datasets and find improvements in performance on target
ontology for source fine-tuned and one-shot prompted large language models.
| [
{
"version": "v1",
"created": "Mon, 5 Aug 2024 10:10:01 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:12:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Vukovic",
"Renato",
""
],
[
"Arps",
"David",
""
],
[
"van Niekerk",
"Carel",
""
],
[
"Ruppik",
"Benjamin Matthias",
""
],
[
"Lin",
"Hsien-Chin",
""
],
[
"Heck",
"Michael",
""
],
[
"Gašić",
"Milica",
""
]
]
| TITLE: Dialogue Ontology Relation Extraction via Constrained Chain-of-Thought
Decoding
ABSTRACT: State-of-the-art task-oriented dialogue systems typically rely on
task-specific ontologies for fulfilling user queries. The majority of
task-oriented dialogue data, such as customer service recordings, comes without
ontology and annotation. Such ontologies are normally built manually, limiting
the application of specialised systems. Dialogue ontology construction is an
approach for automating that process and typically consists of two steps: term
extraction and relation extraction. In this work, we focus on relation
extraction in a transfer learning set-up. To improve the generalisation, we
propose an extension to the decoding mechanism of large language models. We
adapt Chain-of-Thought (CoT) decoding, recently developed for reasoning
problems, to generative relation extraction. Here, we generate multiple
branches in the decoding space and select the relations based on a confidence
threshold. By constraining the decoding to ontology terms and relations, we aim
to decrease the risk of hallucination. We conduct extensive experimentation on
two widely used datasets and find improvements in performance on target
ontology for source fine-tuned and one-shot prompted large language models.
| no_new_dataset | 0.948442 |
2408.11963 | Santiago Calderón-Peña | Santiago Calderón-Peña, Hana Chockler, David A. Kelly | Real-Time Incremental Explanations for Object Detectors in Autonomous
Driving | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Object detectors are widely used in safety-critical real-time applications
such as autonomous driving. Explainability is especially important for
safety-critical applications, and due to the variety of object detectors and
their often proprietary nature, black-box explainability tools are needed.
However, existing black-box explainability tools for AI models rely on multiple
model calls, rendering them impractical for real-time use.
In this paper, we introduce IncX, an algorithm and a tool for real-time
black-box explainability for object detectors. The algorithm is based on linear
transformations of saliency maps, producing sufficient explanations. We
evaluate our implementation on four widely used video datasets of autonomous
driving and demonstrate that IncX's explanations are comparable in quality to
the state-of-the-art and are computed two orders of magnitude faster than the
state-of-the-art, making them usable in real time.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 19:31:39 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 17:38:59 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Calderón-Peña",
"Santiago",
""
],
[
"Chockler",
"Hana",
""
],
[
"Kelly",
"David A.",
""
]
]
| TITLE: Real-Time Incremental Explanations for Object Detectors in Autonomous
Driving
ABSTRACT: Object detectors are widely used in safety-critical real-time applications
such as autonomous driving. Explainability is especially important for
safety-critical applications, and due to the variety of object detectors and
their often proprietary nature, black-box explainability tools are needed.
However, existing black-box explainability tools for AI models rely on multiple
model calls, rendering them impractical for real-time use.
In this paper, we introduce IncX, an algorithm and a tool for real-time
black-box explainability for object detectors. The algorithm is based on linear
transformations of saliency maps, producing sufficient explanations. We
evaluate our implementation on four widely used video datasets of autonomous
driving and demonstrate that IncX's explanations are comparable in quality to
the state-of-the-art and are computed two orders of magnitude faster than the
state-of-the-art, making them usable in real time.
| no_new_dataset | 0.947088 |
2408.11992 | Eyal Hanania | Eyal Hanania, Adi Zehavi-Lenz, Ilya Volovik, Daphna Link-Sourani,
Israel Cohen, Moti Freiman | MBSS-T1: Model-Based Subject-Specific Self-Supervised Motion Correction
for Robust Cardiac T1 Mapping | Accepted and published in Medical Image Analysis | Medical Image Analysis, Volume 102, May 2025, 103495 Medical Image
Analysis, Volume 102, May 2025, 103495 | 10.1016/j.media.2025.103495 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cardiac T1 mapping is a valuable quantitative MRI technique for diagnosing
diffuse myocardial diseases. Traditional methods, relying on breath-hold
sequences and cardiac triggering based on an ECG signal, face challenges with
patient compliance, limiting their effectiveness. Image registration can enable
motion-robust cardiac T1 mapping, but inherent intensity differences between
time points pose a challenge. We present MBSS-T1, a subject-specific
self-supervised model for motion correction in cardiac T1 mapping. Physical
constraints, implemented through a loss function comparing synthesized and
motion-corrected images, enforce signal decay behavior, while anatomical
constraints, applied via a Dice loss, ensure realistic deformations. The unique
combination of these constraints results in motion-robust cardiac T1 mapping
along the longitudinal relaxation axis. In a 5-fold experiment on a public
dataset of 210 patients (STONE sequence) and an internal dataset of 19 patients
(MOLLI sequence), MBSS-T1 outperformed baseline deep-learning registration
methods. It achieved superior model fitting quality ($R^2$: 0.975 vs. 0.941,
0.946 for STONE; 0.987 vs. 0.982, 0.965 for MOLLI free-breathing; 0.994 vs.
0.993, 0.991 for MOLLI breath-hold), anatomical alignment (Dice: 0.89 vs. 0.84,
0.88 for STONE; 0.963 vs. 0.919, 0.851 for MOLLI free-breathing; 0.954 vs.
0.924, 0.871 for MOLLI breath-hold), and visual quality (4.33 vs. 3.38, 3.66
for STONE; 4.1 vs. 3.5, 3.28 for MOLLI free-breathing; 3.79 vs. 3.15, 2.84 for
MOLLI breath-hold). MBSS-T1 enables motion-robust T1 mapping for broader
patient populations, overcoming challenges such as suboptimal compliance, and
facilitates free-breathing cardiac T1 mapping without requiring large annotated
datasets. Our code is available at
https://github.com/TechnionComputationalMRILab/MBSS-T1.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 21:03:36 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Sep 2024 07:04:56 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Mar 2025 20:55:40 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Hanania",
"Eyal",
""
],
[
"Zehavi-Lenz",
"Adi",
""
],
[
"Volovik",
"Ilya",
""
],
[
"Link-Sourani",
"Daphna",
""
],
[
"Cohen",
"Israel",
""
],
[
"Freiman",
"Moti",
""
]
]
| TITLE: MBSS-T1: Model-Based Subject-Specific Self-Supervised Motion Correction
for Robust Cardiac T1 Mapping
ABSTRACT: Cardiac T1 mapping is a valuable quantitative MRI technique for diagnosing
diffuse myocardial diseases. Traditional methods, relying on breath-hold
sequences and cardiac triggering based on an ECG signal, face challenges with
patient compliance, limiting their effectiveness. Image registration can enable
motion-robust cardiac T1 mapping, but inherent intensity differences between
time points pose a challenge. We present MBSS-T1, a subject-specific
self-supervised model for motion correction in cardiac T1 mapping. Physical
constraints, implemented through a loss function comparing synthesized and
motion-corrected images, enforce signal decay behavior, while anatomical
constraints, applied via a Dice loss, ensure realistic deformations. The unique
combination of these constraints results in motion-robust cardiac T1 mapping
along the longitudinal relaxation axis. In a 5-fold experiment on a public
dataset of 210 patients (STONE sequence) and an internal dataset of 19 patients
(MOLLI sequence), MBSS-T1 outperformed baseline deep-learning registration
methods. It achieved superior model fitting quality ($R^2$: 0.975 vs. 0.941,
0.946 for STONE; 0.987 vs. 0.982, 0.965 for MOLLI free-breathing; 0.994 vs.
0.993, 0.991 for MOLLI breath-hold), anatomical alignment (Dice: 0.89 vs. 0.84,
0.88 for STONE; 0.963 vs. 0.919, 0.851 for MOLLI free-breathing; 0.954 vs.
0.924, 0.871 for MOLLI breath-hold), and visual quality (4.33 vs. 3.38, 3.66
for STONE; 4.1 vs. 3.5, 3.28 for MOLLI free-breathing; 3.79 vs. 3.15, 2.84 for
MOLLI breath-hold). MBSS-T1 enables motion-robust T1 mapping for broader
patient populations, overcoming challenges such as suboptimal compliance, and
facilitates free-breathing cardiac T1 mapping without requiring large annotated
datasets. Our code is available at
https://github.com/TechnionComputationalMRILab/MBSS-T1.
| no_new_dataset | 0.945651 |
2408.12481 | Manuele Rusci Mr. | Manuele Rusci, Francesco Paci, Marco Fariselli, Eric Flamand, Tinne
Tuytelaars | Self-Learning for Personalized Keyword Spotting on Ultra-Low-Power Audio
Sensors | Published on IEEE IoT Journal | null | 10.1109/JIOT.2024.3515143 | null | cs.SD cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a self-learning method to incrementally train (fine-tune)
a personalized Keyword Spotting (KWS) model after the deployment on ultra-low
power smart audio sensors. We address the fundamental problem of the absence of
labeled training data by assigning pseudo-labels to newly recorded audio
frames based on a similarity score with respect to a few user recordings. By
experimenting with multiple KWS models with a number of parameters up to 0.5M
on two public datasets, we show an accuracy improvement of up to +19.2% and
+16.0% vs. the initial models pretrained on a large set of generic keywords.
The labeling task is demonstrated on a sensor system composed of a low-power
microphone and an energy-efficient Microcontroller (MCU). By efficiently
exploiting the heterogeneous processing engines of the MCU, the always-on
labeling task runs in real-time with an average power cost of up to 8.2 mW. On
the same platform, we estimate an energy cost for on-device training 10x lower
than the labeling energy if sampling a new utterance every 6.1 s or 18.8 s with
a DS-CNN-S or a DS-CNN-M model. Our empirical result paves the way to
self-adaptive personalized KWS sensors at the extreme edge.
| [
{
"version": "v1",
"created": "Thu, 22 Aug 2024 15:17:02 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 14:46:22 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Rusci",
"Manuele",
""
],
[
"Paci",
"Francesco",
""
],
[
"Fariselli",
"Marco",
""
],
[
"Flamand",
"Eric",
""
],
[
"Tuytelaars",
"Tinne",
""
]
]
| TITLE: Self-Learning for Personalized Keyword Spotting on Ultra-Low-Power Audio
Sensors
ABSTRACT: This paper proposes a self-learning method to incrementally train (fine-tune)
a personalized Keyword Spotting (KWS) model after the deployment on ultra-low
power smart audio sensors. We address the fundamental problem of the absence of
labeled training data by assigning pseudo-labels to newly recorded audio
frames based on a similarity score with respect to a few user recordings. By
experimenting with multiple KWS models with a number of parameters up to 0.5M
on two public datasets, we show an accuracy improvement of up to +19.2% and
+16.0% vs. the initial models pretrained on a large set of generic keywords.
The labeling task is demonstrated on a sensor system composed of a low-power
microphone and an energy-efficient Microcontroller (MCU). By efficiently
exploiting the heterogeneous processing engines of the MCU, the always-on
labeling task runs in real-time with an average power cost of up to 8.2 mW. On
the same platform, we estimate an energy cost for on-device training 10x lower
than the labeling energy if sampling a new utterance every 6.1 s or 18.8 s with
a DS-CNN-S or a DS-CNN-M model. Our empirical result paves the way to
self-adaptive personalized KWS sensors at the extreme edge.
| no_new_dataset | 0.949295 |
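Aside: the pseudo-labeling step this record describes can be sketched generically: compare an embedding of each new audio frame against a handful of enrolled user recordings and assign the nearest keyword's label only when the similarity clears a threshold. This is a toy under stated assumptions (random vectors stand in for real audio embeddings; the 0.7 threshold is arbitrary), not the paper's on-device pipeline.

```python
# Hedged sketch of similarity-based pseudo-labeling for personalized KWS.
# Random vectors stand in for real audio embeddings from the deployed model.
import numpy as np

rng = np.random.default_rng(0)
enrolled = {"hey_robot": rng.normal(size=64),   # few enrolled user recordings
            "stop": rng.normal(size=64)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pseudo_label(frame_emb, threshold=0.7):
    """Return (keyword, score) if the best enrolled match clears the threshold."""
    keyword, ref = max(enrolled.items(), key=lambda kv: cosine(frame_emb, kv[1]))
    score = cosine(frame_emb, ref)
    return (keyword, score) if score >= threshold else (None, score)

# A frame close to the "stop" enrollment receives that pseudo-label.
frame = enrolled["stop"] + 0.1 * rng.normal(size=64)
print(pseudo_label(frame))
```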
2408.16504 | Nedyalko Prisadnikov | Nedyalko Prisadnikov, Wouter Van Gansbeke, Danda Pani Paudel, Luc Van
Gool | A Simple and Generalist Approach for Panoptic Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Panoptic segmentation is an important computer vision task, where the current
state-of-the-art solutions require specialized components to perform well. We
propose a simple generalist framework based on a deep encoder - shallow decoder
architecture with per-pixel prediction. Essentially fine-tuning a massively
pretrained image model with minimal additional components. Naively this method
does not yield good results. We show that this is due to imbalance during
training and propose a novel method for reducing it - centroid regression in
the space of spectral positional embeddings. Our method achieves panoptic
quality (PQ) of 55.1 on the challenging MS-COCO dataset, state-of-the-art
performance among generalist methods.
| [
{
"version": "v1",
"created": "Thu, 29 Aug 2024 13:02:12 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:26:50 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Prisadnikov",
"Nedyalko",
""
],
[
"Van Gansbeke",
"Wouter",
""
],
[
"Paudel",
"Danda Pani",
""
],
[
"Van Gool",
"Luc",
""
]
]
| TITLE: A Simple and Generalist Approach for Panoptic Segmentation
ABSTRACT: Panoptic segmentation is an important computer vision task, where the current
state-of-the-art solutions require specialized components to perform well. We
propose a simple generalist framework based on a deep-encoder, shallow-decoder
architecture with per-pixel prediction, essentially fine-tuning a massively
pretrained image model with minimal additional components. Naively, this method
does not yield good results. We show that this is due to imbalance during
training and propose a novel method for reducing it - centroid regression in
the space of spectral positional embeddings. Our method achieves panoptic
quality (PQ) of 55.1 on the challenging MS-COCO dataset, state-of-the-art
performance among generalist methods.
| no_new_dataset | 0.94868 |
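For reference, the panoptic quality (PQ) metric quoted in the record above is the standard definition of Kirillov et al. (CVPR 2019), computed over matched predicted/ground-truth segment pairs TP, where a match requires IoU > 0.5:

```latex
% Standard panoptic quality over matched segment pairs TP.
\mathrm{PQ} \;=\; \frac{\sum_{(p,\,g)\,\in\,TP} \mathrm{IoU}(p, g)}
                       {\lvert TP\rvert + \tfrac{1}{2}\lvert FP\rvert + \tfrac{1}{2}\lvert FN\rvert}
```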
2409.00926 | Zhuolin Tan | Zhuolin Tan, Chenqiang Gao, Anyong Qin, Ruixin Chen, Tiecheng Song,
Feng Yang, Deyu Meng | Towards Student Actions in Classroom Scenes: New Dataset and Baseline | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing student actions is an important and challenging task in educational
research. Existing efforts have been hampered by the lack of accessible
datasets to capture the nuanced action dynamics in classrooms. In this paper,
we present a new multi-label Student Action Video (SAV) dataset, specifically
designed for action detection in classroom settings. The SAV dataset consists
of 4,324 carefully trimmed video clips from 758 different classrooms, annotated
with 15 distinct student actions. Compared to existing action detection
datasets, the SAV dataset stands out by providing a wide range of real
classroom scenarios, high-quality video data, and unique challenges, including
subtle movement differences, dense object engagement, significant scale
differences, varied shooting angles, and visual occlusion. These complexities
introduce new opportunities and challenges to advance action detection methods.
To benchmark this, we propose a novel baseline method based on a visual
transformer, designed to enhance attention to key local details within small
and dense object regions. Our method demonstrates excellent performance with a
mean Average Precision (mAP) of 67.9% and 27.4% on the SAV and AVA datasets,
respectively. This paper not only provides the dataset but also calls for
further research into AI-driven educational tools that may transform teaching
methodologies and learning outcomes. The code and dataset are released at
https://github.com/Ritatanz/SAV.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2024 03:44:24 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 07:00:24 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Tan",
"Zhuolin",
""
],
[
"Gao",
"Chenqiang",
""
],
[
"Qin",
"Anyong",
""
],
[
"Chen",
"Ruixin",
""
],
[
"Song",
"Tiecheng",
""
],
[
"Yang",
"Feng",
""
],
[
"Meng",
"Deyu",
""
]
]
| TITLE: Towards Student Actions in Classroom Scenes: New Dataset and Baseline
ABSTRACT: Analyzing student actions is an important and challenging task in educational
research. Existing efforts have been hampered by the lack of accessible
datasets to capture the nuanced action dynamics in classrooms. In this paper,
we present a new multi-label Student Action Video (SAV) dataset, specifically
designed for action detection in classroom settings. The SAV dataset consists
of 4,324 carefully trimmed video clips from 758 different classrooms, annotated
with 15 distinct student actions. Compared to existing action detection
datasets, the SAV dataset stands out by providing a wide range of real
classroom scenarios, high-quality video data, and unique challenges, including
subtle movement differences, dense object engagement, significant scale
differences, varied shooting angles, and visual occlusion. These complexities
introduce new opportunities and challenges to advance action detection methods.
To benchmark this, we propose a novel baseline method based on a visual
transformer, designed to enhance attention to key local details within small
and dense object regions. Our method demonstrates excellent performance with a
mean Average Precision (mAP) of 67.9% and 27.4% on the SAV and AVA datasets,
respectively. This paper not only provides the dataset but also calls for
further research into AI-driven educational tools that may transform teaching
methodologies and learning outcomes. The code and dataset are released at
https://github.com/Ritatanz/SAV.
| new_dataset | 0.960878 |
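Aside: the mAP figures in the record above follow the usual recipe of averaging per-class average precision. A minimal multi-label sketch with scikit-learn is below; the toy labels and scores are invented, and real spatio-temporal action detection additionally matches predicted boxes or tubes to ground truth before scoring.

```python
# Toy mean Average Precision over classes with scikit-learn. Detection mAP
# would first match predictions to ground truth at an IoU threshold.
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([[1, 0, 1],          # 4 clips x 3 action classes (invented)
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.6],
                    [0.1, 0.8, 0.3],
                    [0.7, 0.6, 0.2],
                    [0.2, 0.1, 0.9]])

ap_per_class = [average_precision_score(y_true[:, c], y_score[:, c])
                for c in range(y_true.shape[1])]
print("AP per class:", [round(a, 3) for a in ap_per_class])
print("mAP:", round(float(np.mean(ap_per_class)), 3))
```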
2409.03463 | Lorenzo Bini | Lorenzo Bini, Marco Sorbi, Stephane Marchand-Maillet | Massive Activations in Graph Neural Networks: Decoding Attention for
Domain-Dependent Interpretability | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have become increasingly popular for effectively
modeling graph-structured data, and attention mechanisms have been pivotal in
enabling these models to capture complex patterns. In our study, we reveal a
critical yet underexplored consequence of integrating attention into
edge-featured GNNs: the emergence of Massive Activations (MAs) within attention
layers. By developing a novel method for detecting MAs on edge features, we
show that these extreme activations are not mere activation anomalies but
encode domain-relevant signals. Our post-hoc interpretability analysis
demonstrates that, in molecular graphs, MAs aggregate predominantly on common
bond types (e.g., single and double bonds) while sparing more informative ones
(e.g., triple bonds). Furthermore, our ablation studies confirm that MAs can
serve as natural attribution indicators, reallocating to less informative
edges. Our study assesses various edge-featured attention-based GNN models
using benchmark datasets, including ZINC, TOX21, and PROTEINS. Key
contributions include (1) establishing the direct link between attention
mechanisms and MA generation in edge-featured GNNs, (2) developing a robust
definition and detection method for MAs enabling reliable post-hoc
interpretability. Overall, our study reveals the complex interplay between
attention mechanisms, edge-featured GNN models, and MA emergence, providing
crucial insights for relating GNN internals to domain knowledge.
| [
{
"version": "v1",
"created": "Thu, 5 Sep 2024 12:19:07 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Sep 2024 09:13:41 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 15:17:02 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Bini",
"Lorenzo",
""
],
[
"Sorbi",
"Marco",
""
],
[
"Marchand-Maillet",
"Stephane",
""
]
]
| TITLE: Massive Activations in Graph Neural Networks: Decoding Attention for
Domain-Dependent Interpretability
ABSTRACT: Graph Neural Networks (GNNs) have become increasingly popular for effectively
modeling graph-structured data, and attention mechanisms have been pivotal in
enabling these models to capture complex patterns. In our study, we reveal a
critical yet underexplored consequence of integrating attention into
edge-featured GNNs: the emergence of Massive Activations (MAs) within attention
layers. By developing a novel method for detecting MAs on edge features, we
show that these extreme activations are not mere activation anomalies but
encode domain-relevant signals. Our post-hoc interpretability analysis
demonstrates that, in molecular graphs, MAs aggregate predominantly on common
bond types (e.g., single and double bonds) while sparing more informative ones
(e.g., triple bonds). Furthermore, our ablation studies confirm that MAs can
serve as natural attribution indicators, reallocating to less informative
edges. Our study assesses various edge-featured attention-based GNN models
using benchmark datasets, including ZINC, TOX21, and PROTEINS. Key
contributions include (1) establishing the direct link between attention
mechanisms and MA generation in edge-featured GNNs, (2) developing a robust
definition and detection method for MAs enabling reliable post-hoc
interpretability. Overall, our study reveals the complex interplay between
attention mechanisms, edge-featured GNN models, and MA emergence, providing
crucial insights for relating GNN internals to domain knowledge.
| no_new_dataset | 0.952353 |
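Aside: the paper's exact detection criterion for a Massive Activation is not reproduced in this record, but a simple magnitude-outlier flag conveys the flavor. The 100x-median rule below is an illustrative assumption only.

```python
# Hedged sketch of flagging "massive activations": entries whose magnitude
# dwarfs the layer's typical scale. Not the paper's exact criterion.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 64))      # stand-in for one attention layer's
acts[3, 7] = 500.0                     # edge-feature activations; plant one MA

median_mag = np.median(np.abs(acts))
mask = np.abs(acts) > 100 * median_mag
print(np.argwhere(mask))               # -> [[3 7]]: the planted outlier
```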
2409.08936 | Paloma Rabaey | Paloma Rabaey, Henri Arno, Stefan Heytens, Thomas Demeester | SynSUM -- Synthetic Benchmark with Structured and Unstructured Medical
Records | The dataset can be downloaded from https://github.com/prabaey/synsum.
Presented at the GenAI4Health workshop at AAAI 2025 | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present the SynSUM benchmark, a synthetic dataset linking unstructured
clinical notes to structured background variables. The dataset consists of
10,000 artificial patient records containing tabular variables (like symptoms,
diagnoses and underlying conditions) and related notes describing the fictional
patient encounter in the domain of respiratory diseases. The tabular portion of
the data is generated through a Bayesian network, where both the causal
structure between the variables and the conditional probabilities are proposed
by an expert based on domain knowledge. We then prompt a large language model
(GPT-4o) to generate a clinical note related to this patient encounter,
describing the patient symptoms and additional context. We conduct both an
expert evaluation study to assess the quality of the generated notes, as well
as running some simple predictor models on both the tabular and text portions
of the dataset, forming a baseline for further research. The SynSUM dataset is
primarily designed to facilitate research on clinical information extraction in
the presence of tabular background variables, which can be linked through
domain knowledge to concepts of interest to be extracted from the text - the
symptoms, in the case of SynSUM. Secondary uses include research on the
automation of clinical reasoning over both tabular data and text, causal effect
estimation in the presence of tabular and/or textual confounders, and
multi-modal synthetic data generation.
| [
{
"version": "v1",
"created": "Fri, 13 Sep 2024 15:55:15 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 17:09:02 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Rabaey",
"Paloma",
""
],
[
"Arno",
"Henri",
""
],
[
"Heytens",
"Stefan",
""
],
[
"Demeester",
"Thomas",
""
]
]
| TITLE: SynSUM -- Synthetic Benchmark with Structured and Unstructured Medical
Records
ABSTRACT: We present the SynSUM benchmark, a synthetic dataset linking unstructured
clinical notes to structured background variables. The dataset consists of
10,000 artificial patient records containing tabular variables (like symptoms,
diagnoses and underlying conditions) and related notes describing the fictional
patient encounter in the domain of respiratory diseases. The tabular portion of
the data is generated through a Bayesian network, where both the causal
structure between the variables and the conditional probabilities are proposed
by an expert based on domain knowledge. We then prompt a large language model
(GPT-4o) to generate a clinical note related to this patient encounter,
describing the patient symptoms and additional context. We conduct both an
expert evaluation study to assess the quality of the generated notes, as well
as running some simple predictor models on both the tabular and text portions
of the dataset, forming a baseline for further research. The SynSUM dataset is
primarily designed to facilitate research on clinical information extraction in
the presence of tabular background variables, which can be linked through
domain knowledge to concepts of interest to be extracted from the text - the
symptoms, in the case of SynSUM. Secondary uses include research on the
automation of clinical reasoning over both tabular data and text, causal effect
estimation in the presence of tabular and/or textual confounders, and
multi-modal synthetic data generation.
| new_dataset | 0.969871 |
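Aside: the two-stage generation this record describes (Bayesian-network tabular sampling, then prompting an LLM to verbalize the record) can be sketched with a toy two-variable network. The probabilities and prompt wording below are invented for illustration, and the actual GPT-4o call is omitted.

```python
# Toy SynSUM-style generation: sample tabular variables from a tiny
# hand-specified Bayesian network, then build an LLM prompt. Probabilities and
# wording are illustrative assumptions, not the paper's expert-elicited values.
import random

random.seed(0)

def sample_patient():
    flu = random.random() < 0.2            # P(flu) = 0.2 (made up)
    p_cough = 0.8 if flu else 0.1          # P(cough | flu) (made up)
    cough = random.random() < p_cough
    return {"diagnosis_flu": flu, "symptom_cough": cough}

record = sample_patient()
prompt = (
    "Write a short fictional clinical note for a respiratory consultation. "
    f"Facts: cough={record['symptom_cough']}, flu diagnosis={record['diagnosis_flu']}."
)
print(record)
print(prompt)  # in the real pipeline this string would be sent to the LLM
```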
2409.12051 | Jaehyung Jung | Jaehyung Jung, Simon Boche, Sebastián Barbas Laina, Stefan
Leutenegger | Uncertainty-Aware Visual-Inertial SLAM with Volumetric Occupancy Mapping | 7 pages, 4 figures, 5 tables, accepted in ICRA 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose visual-inertial simultaneous localization and mapping that tightly
couples sparse reprojection errors, inertial measurement unit pre-integrals,
and relative pose factors with dense volumetric occupancy mapping. Here, depth
predictions from a deep neural network are fused in a fully probabilistic
manner. Specifically, our method is rigorously uncertainty-aware: first, we use
depth and uncertainty predictions from a deep network not only from the robot's
stereo rig, but we further probabilistically fuse motion stereo that provides
depth information across a range of baselines, thereby drastically increasing
mapping accuracy. Next, predicted and fused depth uncertainty propagates not
only into occupancy probabilities but also into alignment factors between
generated dense submaps that enter the probabilistic nonlinear least squares
estimator. This submap representation offers globally consistent geometry at
scale. Our method is thoroughly evaluated in two benchmark datasets, resulting
in localization and mapping accuracy that exceeds the state of the art, while
simultaneously offering volumetric occupancy directly usable for downstream
robotic planning and control in real-time.
| [
{
"version": "v1",
"created": "Wed, 18 Sep 2024 15:24:03 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Sep 2024 12:08:18 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 16:41:17 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Jung",
"Jaehyung",
""
],
[
"Boche",
"Simon",
""
],
[
"Laina",
"Sebastián Barbas",
""
],
[
"Leutenegger",
"Stefan",
""
]
]
| TITLE: Uncertainty-Aware Visual-Inertial SLAM with Volumetric Occupancy Mapping
ABSTRACT: We propose visual-inertial simultaneous localization and mapping that tightly
couples sparse reprojection errors, inertial measurement unit pre-integrals,
and relative pose factors with dense volumetric occupancy mapping. Here, depth
predictions from a deep neural network are fused in a fully probabilistic
manner. Specifically, our method is rigorously uncertainty-aware: first, we use
depth and uncertainty predictions from a deep network not only from the robot's
stereo rig, but we further probabilistically fuse motion stereo that provides
depth information across a range of baselines, thereby drastically increasing
mapping accuracy. Next, predicted and fused depth uncertainty propagates not
only into occupancy probabilities but also into alignment factors between
generated dense submaps that enter the probabilistic nonlinear least squares
estimator. This submap representation offers globally consistent geometry at
scale. Our method is thoroughly evaluated in two benchmark datasets, resulting
in localization and mapping accuracy that exceeds the state of the art, while
simultaneously offering volumetric occupancy directly usable for downstream
robotic planning and control in real-time.
| no_new_dataset | 0.956391 |
2409.15505 | Angelos Mavrogiannis | Angelos Mavrogiannis, Dehao Yuan, Yiannis Aloimonos | Discovering Object Attributes by Prompting Large Language Models with
Perception-Action APIs | ICRA 2025 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a lot of interest in grounding natural language to physical
entities through visual context. While Vision Language Models (VLMs) can ground
linguistic instructions to visual sensory information, they struggle with
grounding non-visual attributes, like the weight of an object. Our key insight
is that non-visual attribute detection can be effectively achieved by active
perception guided by visual reasoning. To this end, we present a
perception-action API that consists of VLMs and Large Language Models (LLMs) as
backbones, together with a set of robot control functions. When prompted with
this API and a natural language query, an LLM generates a program to actively
identify attributes given an input image. Offline testing on the Odd-One-Out
dataset demonstrates that our framework outperforms vanilla VLMs in detecting
attributes like relative object location, size, and weight. Online testing in
realistic household scenes on AI2-THOR and a real robot demonstration on a DJI
RoboMaster EP robot highlight the efficacy of our approach.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 19:50:33 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 01:34:14 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Mavrogiannis",
"Angelos",
""
],
[
"Yuan",
"Dehao",
""
],
[
"Aloimonos",
"Yiannis",
""
]
]
| TITLE: Discovering Object Attributes by Prompting Large Language Models with
Perception-Action APIs
ABSTRACT: There has been a lot of interest in grounding natural language to physical
entities through visual context. While Vision Language Models (VLMs) can ground
linguistic instructions to visual sensory information, they struggle with
grounding non-visual attributes, like the weight of an object. Our key insight
is that non-visual attribute detection can be effectively achieved by active
perception guided by visual reasoning. To this end, we present a
perception-action API that consists of VLMs and Large Language Models (LLMs) as
backbones, together with a set of robot control functions. When prompted with
this API and a natural language query, an LLM generates a program to actively
identify attributes given an input image. Offline testing on the Odd-One-Out
dataset demonstrates that our framework outperforms vanilla VLMs in detecting
attributes like relative object location, size, and weight. Online testing in
realistic household scenes on AI2-THOR and a real robot demonstration on a DJI
RoboMaster EP robot highlight the efficacy of our approach.
| no_new_dataset | 0.944893 |
2409.17095 | Raphael Baena | Raphael Baena, Syrine Kalleli, Mathieu Aubry | General Detection-based Text Line Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We introduce a general detection-based approach to text line recognition, be
it printed (OCR) or handwritten (HTR), with Latin, Chinese, or ciphered
characters. Detection-based approaches have until now been largely discarded
for HTR because reading characters separately is often challenging, and
character-level annotation is difficult and expensive. We overcome these
challenges thanks to three main insights: (i) synthetic pre-training with
sufficiently diverse data enables learning reasonable character localization
for any script; (ii) modern transformer-based detectors can jointly detect a
large number of instances, and, if trained with an adequate masking strategy,
leverage consistency between the different detections; (iii) once a pre-trained
detection model with approximate character localization is available, it is
possible to fine-tune it with line-level annotation on real data, even with a
different alphabet. Our approach, dubbed DTLR, builds on a completely different
paradigm than state-of-the-art HTR methods, which rely on autoregressive
decoding, predicting character values one by one, while we treat a complete
line in parallel. Remarkably, we demonstrate good performance on a large range
of scripts, usually tackled with specialized approaches. In particular, we
improve state-of-the-art performances for Chinese script recognition on the
CASIA v2 dataset, and for cipher recognition on the Borg and Copiale datasets.
Our code and models are available at https://github.com/raphael-baena/DTLR.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 17:05:55 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 11:47:28 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Baena",
"Raphael",
""
],
[
"Kalleli",
"Syrine",
""
],
[
"Aubry",
"Mathieu",
""
]
]
| TITLE: General Detection-based Text Line Recognition
ABSTRACT: We introduce a general detection-based approach to text line recognition, be
it printed (OCR) or handwritten (HTR), with Latin, Chinese, or ciphered
characters. Detection-based approaches have until now been largely discarded
for HTR because reading characters separately is often challenging, and
character-level annotation is difficult and expensive. We overcome these
challenges thanks to three main insights: (i) synthetic pre-training with
sufficiently diverse data enables learning reasonable character localization
for any script; (ii) modern transformer-based detectors can jointly detect a
large number of instances, and, if trained with an adequate masking strategy,
leverage consistency between the different detections; (iii) once a pre-trained
detection model with approximate character localization is available, it is
possible to fine-tune it with line-level annotation on real data, even with a
different alphabet. Our approach, dubbed DTLR, builds on a completely different
paradigm than state-of-the-art HTR methods, which rely on autoregressive
decoding, predicting character values one by one, while we treat a complete
line in parallel. Remarkably, we demonstrate good performance on a large range
of scripts, usually tackled with specialized approaches. In particular, we
improve state-of-the-art performances for Chinese script recognition on the
CASIA v2 dataset, and for cipher recognition on the Borg and Copiale datasets.
Our code and models are available at https://github.com/raphael-baena/DTLR.
| no_new_dataset | 0.949248 |
2409.18862 | Sacha Huriot | Sacha Huriot and Hussein Sibai | Safe Decentralized Multi-Agent Control using Black-Box Predictors,
Conformal Decision Policies, and Control Barrier Functions | 6 pages, 1 figure, accepted for presentation at ICRA 2025 | null | null | null | eess.SY cs.MA cs.RO cs.SY | http://creativecommons.org/licenses/by-sa/4.0/ | We address the challenge of safe control in decentralized multi-agent robotic
settings, where agents use uncertain black-box models to predict other agents'
trajectories. We use the recently proposed conformal decision theory to adapt
the restrictiveness of control barrier functions-based safety constraints based
on observed prediction errors. We use these constraints to synthesize
controllers that balance between the objectives of safety and task
accomplishment, despite the prediction errors. We provide an upper bound on the
average over time of the value of a monotonic function of the difference
between the safety constraint based on the predicted trajectories and the
constraint based on the ground truth ones. We validate our theory through
experimental results showing the performance of our controllers when navigating
a robot in the multi-agent scenes in the Stanford Drone Dataset.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 15:57:52 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Oct 2024 20:23:47 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Nov 2024 19:00:11 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Mar 2025 16:42:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Huriot",
"Sacha",
""
],
[
"Sibai",
"Hussein",
""
]
]
| TITLE: Safe Decentralized Multi-Agent Control using Black-Box Predictors,
Conformal Decision Policies, and Control Barrier Functions
ABSTRACT: We address the challenge of safe control in decentralized multi-agent robotic
settings, where agents use uncertain black-box models to predict other agents'
trajectories. We use the recently proposed conformal decision theory to adapt
the restrictiveness of control barrier functions-based safety constraints based
on observed prediction errors. We use these constraints to synthesize
controllers that balance between the objectives of safety and task
accomplishment, despite the prediction errors. We provide an upper bound on the
average over time of the value of a monotonic function of the difference
between the safety constraint based on the predicted trajectories and the
constraint based on the ground truth ones. We validate our theory through
experimental results showing the performance of our controllers when navigating
a robot in the multi-agent scenes in the Stanford Drone Dataset.
| no_new_dataset | 0.945551 |
2410.04209 | Thieu Vo | Viet-Hoang Tran, Thieu N. Vo, An Nguyen The, Tho Tran Huu, Minh-Khoi
Nguyen-Nhat, Thanh Tran, Duy-Tung Pham, Tan Minh Nguyen | Equivariant Neural Functional Networks for Transformers | Accepted in ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper systematically explores neural functional networks (NFN) for
transformer architectures. NFN are specialized neural networks that treat the
weights, gradients, or sparsity patterns of a deep neural network (DNN) as
input data and have proven valuable for tasks such as learnable optimizers,
implicit data representations, and weight editing. While NFN have been
extensively developed for MLP and CNN, no prior work has addressed their design
for transformers, despite the importance of transformers in modern deep
learning. This paper aims to address this gap by providing a systematic study
of NFN for transformers. We first determine the maximal symmetric group of the
weights in a multi-head attention module as well as a necessary and sufficient
condition under which two sets of hyperparameters of the multi-head attention
module define the same function. We then define the weight space of transformer
architectures and its associated group action, which leads to the design
principles for NFN in transformers. Based on these, we introduce
Transformer-NFN, an NFN that is equivariant under this group action.
Additionally, we release a dataset of more than 125,000 Transformers model
checkpoints trained on two datasets with two different tasks, providing a
benchmark for evaluating Transformer-NFN and encouraging further research on
transformer training and performance.
| [
{
"version": "v1",
"created": "Sat, 5 Oct 2024 15:56:57 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 14:32:12 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Tran",
"Viet-Hoang",
""
],
[
"Vo",
"Thieu N.",
""
],
[
"The",
"An Nguyen",
""
],
[
"Huu",
"Tho Tran",
""
],
[
"Nguyen-Nhat",
"Minh-Khoi",
""
],
[
"Tran",
"Thanh",
""
],
[
"Pham",
"Duy-Tung",
""
],
[
"Nguyen",
"Tan Minh",
""
]
]
| TITLE: Equivariant Neural Functional Networks for Transformers
ABSTRACT: This paper systematically explores neural functional networks (NFN) for
transformer architectures. NFN are specialized neural networks that treat the
weights, gradients, or sparsity patterns of a deep neural network (DNN) as
input data and have proven valuable for tasks such as learnable optimizers,
implicit data representations, and weight editing. While NFN have been
extensively developed for MLP and CNN, no prior work has addressed their design
for transformers, despite the importance of transformers in modern deep
learning. This paper aims to address this gap by providing a systematic study
of NFN for transformers. We first determine the maximal symmetric group of the
weights in a multi-head attention module as well as a necessary and sufficient
condition under which two sets of hyperparameters of the multi-head attention
module define the same function. We then define the weight space of transformer
architectures and its associated group action, which leads to the design
principles for NFN in transformers. Based on these, we introduce
Transformer-NFN, an NFN that is equivariant under this group action.
Additionally, we release a dataset of more than 125,000 Transformers model
checkpoints trained on two datasets with two different tasks, providing a
benchmark for evaluating Transformer-NFN and encouraging further research on
transformer training and performance.
| new_dataset | 0.964954 |
2410.04263 | Manuel Madeira | Yiming Qin, Manuel Madeira, Dorina Thanou, Pascal Frossard | DeFoG: Discrete Flow Matching for Graph Generation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph generative models are essential across diverse scientific domains by
capturing complex distributions over relational data. Among them, graph
diffusion models achieve superior performance but face inefficient sampling and
limited flexibility due to the tight coupling between training and sampling
stages. We introduce DeFoG, a novel graph generative framework that
disentangles sampling from training, enabling a broader design space for more
effective and efficient model optimization. DeFoG employs a discrete
flow-matching formulation that respects the inherent symmetries of graphs. We
theoretically ground this disentangled formulation by explicitly relating the
training loss to the sampling algorithm and showing that DeFoG faithfully
replicates the ground truth graph distribution. Building on these foundations,
we thoroughly investigate DeFoG's design space and propose novel sampling
methods that significantly enhance performance and reduce the required number
of refinement steps. Extensive experiments demonstrate state-of-the-art
performance across synthetic, molecular, and digital pathology datasets,
covering both unconditional and conditional generation settings. It also
outperforms most diffusion-based models with just 5-10% of their sampling
steps.
| [
{
"version": "v1",
"created": "Sat, 5 Oct 2024 18:52:54 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 12:18:32 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Qin",
"Yiming",
""
],
[
"Madeira",
"Manuel",
""
],
[
"Thanou",
"Dorina",
""
],
[
"Frossard",
"Pascal",
""
]
]
| TITLE: DeFoG: Discrete Flow Matching for Graph Generation
ABSTRACT: Graph generative models are essential across diverse scientific domains by
capturing complex distributions over relational data. Among them, graph
diffusion models achieve superior performance but face inefficient sampling and
limited flexibility due to the tight coupling between training and sampling
stages. We introduce DeFoG, a novel graph generative framework that
disentangles sampling from training, enabling a broader design space for more
effective and efficient model optimization. DeFoG employs a discrete
flow-matching formulation that respects the inherent symmetries of graphs. We
theoretically ground this disentangled formulation by explicitly relating the
training loss to the sampling algorithm and showing that DeFoG faithfully
replicates the ground truth graph distribution. Building on these foundations,
we thoroughly investigate DeFoG's design space and propose novel sampling
methods that significantly enhance performance and reduce the required number
of refinement steps. Extensive experiments demonstrate state-of-the-art
performance across synthetic, molecular, and digital pathology datasets,
covering both unconditional and conditional generation settings. It also
outperforms most diffusion-based models with just 5-10% of their sampling
steps.
| no_new_dataset | 0.943034 |
2410.07191 | Ehsan Ahmadi | Ehsan Ahmadi, Ray Mercurius, Soheil Alizadeh, Kasra Rezaee, Amir
Rasouli | Curb Your Attention: Causal Attention Gating for Robust Trajectory
Prediction in Autonomous Driving | Accepted ICRA 2025 | null | null | null | cs.RO cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory prediction models in autonomous driving are vulnerable to
perturbations from non-causal agents whose actions should not affect the
ego-agent's behavior. Such perturbations can lead to incorrect predictions of
other agents' trajectories, potentially compromising the safety and efficiency
of the ego-vehicle's decision-making process. Motivated by this challenge, we
propose $\textit{Causal tRajecTory predICtion}$ $\textbf{(CRiTIC)}$, a novel
model that utilizes a $\textit{Causal Discovery Network}$ to identify
inter-agent causal relations over a window of past time steps. To incorporate
discovered causal relationships, we propose a novel $\textit{Causal Attention
Gating}$ mechanism to selectively filter information in the proposed
Transformer-based architecture. We conduct extensive experiments on two
autonomous driving benchmark datasets to evaluate the robustness of our model
against non-causal perturbations and its generalization capacity. Our results
indicate that the robustness of predictions can be improved by up to
$\textbf{54%}$ without a significant detriment to prediction accuracy. Lastly,
we demonstrate the superior domain generalizability of the proposed model,
which achieves up to $\textbf{29%}$ improvement in cross-domain performance.
These results underscore the potential of our model to enhance both robustness
and generalization capacity for trajectory prediction in diverse autonomous
driving domains. Further details can be found on our project page:
https://ehsan-ami.github.io/critic.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 20:01:20 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 23:13:01 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ahmadi",
"Ehsan",
""
],
[
"Mercurius",
"Ray",
""
],
[
"Alizadeh",
"Soheil",
""
],
[
"Rezaee",
"Kasra",
""
],
[
"Rasouli",
"Amir",
""
]
]
| TITLE: Curb Your Attention: Causal Attention Gating for Robust Trajectory
Prediction in Autonomous Driving
ABSTRACT: Trajectory prediction models in autonomous driving are vulnerable to
perturbations from non-causal agents whose actions should not affect the
ego-agent's behavior. Such perturbations can lead to incorrect predictions of
other agents' trajectories, potentially compromising the safety and efficiency
of the ego-vehicle's decision-making process. Motivated by this challenge, we
propose $\textit{Causal tRajecTory predICtion}$ $\textbf{(CRiTIC)}$, a novel
model that utilizes a $\textit{Causal Discovery Network}$ to identify
inter-agent causal relations over a window of past time steps. To incorporate
discovered causal relationships, we propose a novel $\textit{Causal Attention
Gating}$ mechanism to selectively filter information in the proposed
Transformer-based architecture. We conduct extensive experiments on two
autonomous driving benchmark datasets to evaluate the robustness of our model
against non-causal perturbations and its generalization capacity. Our results
indicate that the robustness of predictions can be improved by up to
$\textbf{54%}$ without a significant detriment to prediction accuracy. Lastly,
we demonstrate the superior domain generalizability of the proposed model,
which achieves up to $\textbf{29%}$ improvement in cross-domain performance.
These results underscore the potential of our model to enhance both robustness
and generalization capacity for trajectory prediction in diverse autonomous
driving domains. Further details can be found on our project page:
https://ehsan-ami.github.io/critic.
| no_new_dataset | 0.946101 |
2410.14677 | Anastasia Voznyuk | German Gritsai and Anastasia Voznyuk and Andrey Grabovoy and Yury
Chekhovich | Are AI Detectors Good Enough? A Survey on Quality of Datasets With
Machine-Generated Texts | Presented at Preventing and Detecting LLM Misinformation (PDLM) at
AAAI 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rapid development of autoregressive Large Language Models (LLMs) has
significantly improved the quality of generated texts, necessitating reliable
machine-generated text detectors. A huge number of detectors and collections
with AI fragments have emerged, and several detection methods even showed
recognition quality up to 99.9% according to the target metrics in such
collections. However, the quality of such detectors tends to drop dramatically
in the wild, posing a question: Are detectors actually highly trustworthy or do
their high benchmark scores come from the poor quality of evaluation datasets?
In this paper, we emphasise the need for robust and qualitative methods for
evaluating generated data to be secure against bias and low generalising
ability of future models. We present a systematic review of datasets from
competitions dedicated to AI-generated content detection and propose methods
for evaluating the quality of datasets containing AI-generated fragments. In
addition, we discuss the possibility of using high-quality generated data to
achieve two goals: improving the training of detection models and improving the
training datasets themselves. Our contribution aims to facilitate a better
understanding of the dynamics between human and machine text, which will
ultimately support the integrity of information in an increasingly automated
world. The code is available at
https://github.com/Advacheck-OU/ai-dataset-analysing.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 17:59:57 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jan 2025 10:00:02 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 10:17:34 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Gritsai",
"German",
""
],
[
"Voznyuk",
"Anastasia",
""
],
[
"Grabovoy",
"Andrey",
""
],
[
"Chekhovich",
"Yury",
""
]
]
| TITLE: Are AI Detectors Good Enough? A Survey on Quality of Datasets With
Machine-Generated Texts
ABSTRACT: The rapid development of autoregressive Large Language Models (LLMs) has
significantly improved the quality of generated texts, necessitating reliable
machine-generated text detectors. A huge number of detectors and collections
with AI fragments have emerged, and several detection methods even showed
recognition quality up to 99.9% according to the target metrics in such
collections. However, the quality of such detectors tends to drop dramatically
in the wild, posing a question: Are detectors actually highly trustworthy or do
their high benchmark scores come from the poor quality of evaluation datasets?
In this paper, we emphasise the need for robust and qualitative methods for
evaluating generated data to be secure against bias and low generalising
ability of future models. We present a systematic review of datasets from
competitions dedicated to AI-generated content detection and propose methods
for evaluating the quality of datasets containing AI-generated fragments. In
addition, we discuss the possibility of using high-quality generated data to
achieve two goals: improving the training of detection models and improving the
training datasets themselves. Our contribution aims to facilitate a better
understanding of the dynamics between human and machine text, which will
ultimately support the integrity of information in an increasingly automated
world. The code is available at
https://github.com/Advacheck-OU/ai-dataset-analysing.
| no_new_dataset | 0.942401 |
2410.20495 | Advik Raj Basani | Advik Raj Basani, Siddharth Chaitra Vivek, Advaith Krishna, Arnab K.
Paul | When Less is More: Achieving Faster Convergence in Distributed Edge
Machine Learning | 11 pages, 19 figures, 3 tables; code:
https://github.com/DaSH-Lab-CSIS/Hermes | null | 10.1109/HiPC62374.2024.00034 | null | cs.DC cs.LG cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed Machine Learning (DML) on resource-constrained edge devices holds
immense potential for real-world applications. However, achieving fast
convergence in DML in these heterogeneous environments remains a significant
challenge. Traditional frameworks like Bulk Synchronous Parallel and
Asynchronous Stochastic Parallel rely on frequent, small updates that incur
substantial communication overhead and hinder convergence speed. Furthermore,
these frameworks often employ static dataset sizes, neglecting the
heterogeneity of edge devices and potentially leading to straggler nodes that
slow down the entire training process. The straggler nodes, i.e., edge devices
that take significantly longer to process their assigned data chunk, hinder the
overall training speed. To address these limitations, this paper proposes
Hermes, a novel probabilistic framework for efficient DML on edge devices. This
framework leverages a dynamic threshold based on recent test loss behavior to
identify statistically significant improvements in the model's generalization
capability, hence transmitting updates only when major improvements are
detected, thereby significantly reducing communication overhead. Additionally,
Hermes employs dynamic dataset allocation to optimize resource utilization and
prevents performance degradation caused by straggler nodes. Our evaluations on
a real-world heterogeneous resource-constrained environment demonstrate that
Hermes achieves faster convergence compared to state-of-the-art methods,
resulting in a remarkable $13.22$x reduction in training time and a $62.1\%$
decrease in communication overhead.
| [
{
"version": "v1",
"created": "Sun, 27 Oct 2024 16:17:03 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Basani",
"Advik Raj",
""
],
[
"Vivek",
"Siddharth Chaitra",
""
],
[
"Krishna",
"Advaith",
""
],
[
"Paul",
"Arnab K.",
""
]
]
| TITLE: When Less is More: Achieving Faster Convergence in Distributed Edge
Machine Learning
ABSTRACT: Distributed Machine Learning (DML) on resource-constrained edge devices holds
immense potential for real-world applications. However, achieving fast
convergence in DML in these heterogeneous environments remains a significant
challenge. Traditional frameworks like Bulk Synchronous Parallel and
Asynchronous Stochastic Parallel rely on frequent, small updates that incur
substantial communication overhead and hinder convergence speed. Furthermore,
these frameworks often employ static dataset sizes, neglecting the
heterogeneity of edge devices and potentially leading to straggler nodes that
slow down the entire training process. The straggler nodes, i.e., edge devices
that take significantly longer to process their assigned data chunk, hinder the
overall training speed. To address these limitations, this paper proposes
Hermes, a novel probabilistic framework for efficient DML on edge devices. This
framework leverages a dynamic threshold based on recent test loss behavior to
identify statistically significant improvements in the model's generalization
capability, hence transmitting updates only when major improvements are
detected, thereby significantly reducing communication overhead. Additionally,
Hermes employs dynamic dataset allocation to optimize resource utilization and
prevents performance degradation caused by straggler nodes. Our evaluations on
a real-world heterogeneous resource-constrained environment demonstrate that
Hermes achieves faster convergence compared to state-of-the-art methods,
resulting in a remarkable $13.22$x reduction in training time and a $62.1\%$
decrease in communication overhead.
| no_new_dataset | 0.949809 |
2411.01223 | Teresa Head-Gordon | Yingze Wang, Kunyang Sun, Jie Li, Xingyi Guan, Oufan Zhang, Dorian
Bagni, and Teresa Head-Gordon | A Workflow to Create a High-Quality Protein-Ligand Binding Dataset for
Training, Validation, and Prediction Tasks | null | null | null | null | physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Development of scoring functions (SFs) used to predict protein-ligand binding
energies requires high-quality 3D structures and binding assay data for
training and testing their parameters. In this work, we show that one of the
widely-used datasets, PDBbind, suffers from several common structural artifacts
of both proteins and ligands, which may compromise the accuracy, reliability,
and generalizability of the resulting SFs. Therefore, we have developed a
series of algorithms organized in a semi-automated workflow, HiQBind-WF, that
curates non-covalent protein-ligand datasets to fix these problems. We also
used this workflow to create an independent data set, HiQBind, by matching
binding free energies from various sources including BioLiP, Binding MOAD and
BindingDB with co-crystallized ligand-protein complexes from the PDB. The
resulting HiQBind workflow and dataset are designed to ensure reproducibility
and to minimize human intervention, while also being open-source to foster
transparency in the improvements made to this important resource for the
biology and drug discovery communities.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2024 12:06:00 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 17:22:48 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wang",
"Yingze",
""
],
[
"Sun",
"Kunyang",
""
],
[
"Li",
"Jie",
""
],
[
"Guan",
"Xingyi",
""
],
[
"Zhang",
"Oufan",
""
],
[
"Bagni",
"Dorian",
""
],
[
"Head-Gordon",
"Teresa",
""
]
]
| TITLE: A Workflow to Create a High-Quality Protein-Ligand Binding Dataset for
Training, Validation, and Prediction Tasks
ABSTRACT: Development of scoring functions (SFs) used to predict protein-ligand binding
energies requires high-quality 3D structures and binding assay data for
training and testing their parameters. In this work, we show that one of the
widely-used datasets, PDBbind, suffers from several common structural artifacts
of both proteins and ligands, which may compromise the accuracy, reliability,
and generalizability of the resulting SFs. Therefore, we have developed a
series of algorithms organized in a semi-automated workflow, HiQBind-WF, that
curates non-covalent protein-ligand datasets to fix these problems. We also
used this workflow to create an independent data set, HiQBind, by matching
binding free energies from various sources including BioLiP, Binding MOAD and
BindingDB with co-crystallized ligand-protein complexes from the PDB. The
resulting HiQBind workflow and dataset are designed to ensure reproducibility
and to minimize human intervention, while also being open-source to foster
transparency in the improvements made to this important resource for the
biology and drug discovery communities.
| new_dataset | 0.705633 |
2411.01952 | Mike Thelwall Prof | Mike Thelwall, Xiaorui Jiang, Peter A. Bath | Evaluating the quality of published medical research with ChatGPT | Information Processing & Management (2025) | Information Processing & Management, Volume 62, Issue 4, July
2025, 104123 | 10.1016/j.ipm.2025.104123 | null | cs.DL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Estimating the quality of published research is important for evaluations of
departments, researchers, and job candidates. Citation-based indicators
sometimes support these tasks, but do not work for new articles and have low or
moderate accuracy. Previous research has shown that ChatGPT can estimate the
quality of research articles, with its scores correlating positively with an
expert scores proxy in all fields, and often more strongly than citation-based
indicators, except for clinical medicine. ChatGPT scores may therefore replace
citation-based indicators for some applications. This article investigates the
clinical medicine anomaly with the largest dataset yet and a more detailed
analysis. The results showed that ChatGPT 4o-mini scores for articles submitted
to the UK's Research Excellence Framework (REF) 2021 Unit of Assessment (UoA) 1
Clinical Medicine correlated positively (r=0.134, n=9872) with departmental
mean REF scores, against a theoretical maximum correlation of r=0.226. ChatGPT
4o and 3.5 turbo also gave positive correlations. At the departmental level,
mean ChatGPT scores correlated more strongly with departmental mean REF scores
(r=0.395, n=31). For the 100 journals with the most articles in UoA 1, their
mean ChatGPT score correlated strongly with their REF score (r=0.495) but
negatively with their citation rate (r=-0.148). Journal and departmental
anomalies in these results point to ChatGPT being ineffective at assessing the
quality of research in prestigious medical journals or research directly
affecting human health, or both. Nevertheless, the results give evidence of
ChatGPT's ability to assess research quality overall for Clinical Medicine,
where it might replace citation-based indicators for new research.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 10:24:36 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 15:46:33 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Thelwall",
"Mike",
""
],
[
"Jiang",
"Xiaorui",
""
],
[
"Bath",
"Peter A.",
""
]
]
| TITLE: Evaluating the quality of published medical research with ChatGPT
ABSTRACT: Estimating the quality of published research is important for evaluations of
departments, researchers, and job candidates. Citation-based indicators
sometimes support these tasks, but do not work for new articles and have low or
moderate accuracy. Previous research has shown that ChatGPT can estimate the
quality of research articles, with its scores correlating positively with an
expert scores proxy in all fields, and often more strongly than citation-based
indicators, except for clinical medicine. ChatGPT scores may therefore replace
citation-based indicators for some applications. This article investigates the
clinical medicine anomaly with the largest dataset yet and a more detailed
analysis. The results showed that ChatGPT 4o-mini scores for articles submitted
to the UK's Research Excellence Framework (REF) 2021 Unit of Assessment (UoA) 1
Clinical Medicine correlated positively (r=0.134, n=9872) with departmental
mean REF scores, against a theoretical maximum correlation of r=0.226. ChatGPT
4o and 3.5 turbo also gave positive correlations. At the departmental level,
mean ChatGPT scores correlated more strongly with departmental mean REF scores
(r=0.395, n=31). For the 100 journals with the most articles in UoA 1, their
mean ChatGPT score correlated strongly with their REF score (r=0.495) but
negatively with their citation rate (r=-0.148). Journal and departmental
anomalies in these results point to ChatGPT being ineffective at assessing the
quality of research in prestigious medical journals or research directly
affecting human health, or both. Nevertheless, the results give evidence of
ChatGPT's ability to assess research quality overall for Clinical Medicine,
where it might replace citation-based indicators for new research.
| no_new_dataset | 0.935287 |
2411.02126 | Santiago Acevedo | Santiago Acevedo, Alex Rodriguez and Alessandro Laio | Unsupervised detection of semantic correlations in big data | null | null | null | null | cs.LG cs.AI physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world data, information is stored in extremely large feature vectors.
These variables are typically correlated due to complex interactions involving
many features simultaneously. Such correlations qualitatively correspond to
semantic roles and are naturally recognized by both the human brain and
artificial neural networks. This recognition enables, for instance, the
prediction of missing parts of an image or text based on their context. We
present a method to detect these correlations in high-dimensional data
represented as binary numbers. We estimate the binary intrinsic dimension of a
dataset, which quantifies the minimum number of independent coordinates needed
to describe the data, and is therefore a proxy of semantic complexity. The
proposed algorithm is largely insensitive to the so-called curse of
dimensionality, and can therefore be used in big data analysis. We test this
approach by identifying phase transitions in model magnetic systems, and we then
apply it to the detection of semantic correlations of images and text inside
deep neural networks.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 14:37:07 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 15:21:42 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Acevedo",
"Santiago",
""
],
[
"Rodriguez",
"Alex",
""
],
[
"Laio",
"Alessandro",
""
]
]
| TITLE: Unsupervised detection of semantic correlations in big data
ABSTRACT: In real-world data, information is stored in extremely large feature vectors.
These variables are typically correlated due to complex interactions involving
many features simultaneously. Such correlations qualitatively correspond to
semantic roles and are naturally recognized by both the human brain and
artificial neural networks. This recognition enables, for instance, the
prediction of missing parts of an image or text based on their context. We
present a method to detect these correlations in high-dimensional data
represented as binary numbers. We estimate the binary intrinsic dimension of a
dataset, which quantifies the minimum number of independent coordinates needed
to describe the data, and is therefore a proxy of semantic complexity. The
proposed algorithm is largely insensitive to the so-called curse of
dimensionality, and can therefore be used in big data analysis. We test this
approach by identifying phase transitions in model magnetic systems, and we then
apply it to the detection of semantic correlations of images and text inside
deep neural networks.
| no_new_dataset | 0.947624 |
2411.02482 | Eric Zhu | Eric Zhu, Mara Levy, Matthew Gwilliam, Abhinav Shrivastava | NeRF-Aug: Data Augmentation for Robotics with Neural Radiance Fields | null | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training a policy that can generalize to unknown objects is a long-standing
challenge within the field of robotics. The performance of a policy often drops
significantly in situations where an object in the scene was not seen during
training. To solve this problem, we present NeRF-Aug, a novel method that is
capable of teaching a policy to interact with objects that are not present in
the dataset. This approach differs from existing approaches by leveraging the
speed, photorealism, and 3D consistency of a neural radiance field for
augmentation. NeRF-Aug both creates more photorealistic data and runs 63%
faster than existing methods. We demonstrate the effectiveness of our method on
5 tasks with 9 novel objects that are not present in the expert demonstrations.
We achieve an average performance boost of 55.6% when comparing our method to
the next best method. You can see video results at https://nerf-aug.github.io.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 18:59:36 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 18:20:38 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Zhu",
"Eric",
""
],
[
"Levy",
"Mara",
""
],
[
"Gwilliam",
"Matthew",
""
],
[
"Shrivastava",
"Abhinav",
""
]
]
| TITLE: NeRF-Aug: Data Augmentation for Robotics with Neural Radiance Fields
ABSTRACT: Training a policy that can generalize to unknown objects is a long standing
challenge within the field of robotics. The performance of a policy often drops
significantly in situations where an object in the scene was not seen during
training. To solve this problem, we present NeRF-Aug, a novel method that is
capable of teaching a policy to interact with objects that are not present in
the dataset. This approach differs from existing approaches by leveraging the
speed, photorealism, and 3D consistency of a neural radiance field for
augmentation. NeRF-Aug both creates more photorealistic data and runs 63%
faster than existing methods. We demonstrate the effectiveness of our method on
5 tasks with 9 novel objects that are not present in the expert demonstrations.
We achieve an average performance boost of 55.6% when comparing our method to
the next best method. You can see video results at https://nerf-aug.github.io.
| no_new_dataset | 0.953794 |
2411.03315 | Erik Helmut | Erik Helmut, Luca Dziarski, Niklas Funk, Boris Belousov, Jan Peters | Learning Force Distribution Estimation for the GelSight Mini Optical
Tactile Sensor Based on Finite Element Analysis | null | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contact-rich manipulation remains a major challenge in robotics. Optical
tactile sensors like GelSight Mini offer a low-cost solution for contact
sensing by capturing soft-body deformations of the silicone gel. However,
accurately inferring shear and normal force distributions from these gel
deformations has yet to be fully addressed. In this work, we propose a machine
learning approach using a U-net architecture to predict force distributions
directly from the sensor's raw images. Our model, trained on force
distributions inferred from Finite Element Analysis (FEA), demonstrates
promising accuracy in predicting normal and shear force distributions for the
commercially available GelSight Mini sensor. It also shows potential for
generalization across indenters, sensors of the same type, and for enabling
real-time application. The codebase, dataset and models are open-sourced and
available at https://feats-ai.github.io .
| [
{
"version": "v1",
"created": "Tue, 8 Oct 2024 11:01:12 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 10:05:23 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Helmut",
"Erik",
""
],
[
"Dziarski",
"Luca",
""
],
[
"Funk",
"Niklas",
""
],
[
"Belousov",
"Boris",
""
],
[
"Peters",
"Jan",
""
]
]
| TITLE: Learning Force Distribution Estimation for the GelSight Mini Optical
Tactile Sensor Based on Finite Element Analysis
ABSTRACT: Contact-rich manipulation remains a major challenge in robotics. Optical
tactile sensors like GelSight Mini offer a low-cost solution for contact
sensing by capturing soft-body deformations of the silicone gel. However,
accurately inferring shear and normal force distributions from these gel
deformations has yet to be fully addressed. In this work, we propose a machine
learning approach using a U-net architecture to predict force distributions
directly from the sensor's raw images. Our model, trained on force
distributions inferred from Finite Element Analysis (FEA), demonstrates
promising accuracy in predicting normal and shear force distributions for the
commercially available GelSight Mini sensor. It also shows potential for
generalization across indenters, sensors of the same type, and for enabling
real-time application. The codebase, dataset and models are open-sourced and
available at https://feats-ai.github.io .
| no_new_dataset | 0.944587 |
2411.03554 | Yingzi Ma | Yingzi Ma, Jiongxiao Wang, Fei Wang, Siyuan Ma, Jiazhao Li, Jinsheng
Pan, Xiujun Li, Furong Huang, Lichao Sun, Bo Li, Yejin Choi, Muhao Chen,
Chaowei Xiao | Benchmarking Vision Language Model Unlearning via Fictitious Facial
Identity Dataset | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine unlearning has emerged as an effective strategy for forgetting
specific information in the training data. However, with the increasing
integration of visual data, privacy concerns in Vision Language Models (VLMs)
remain underexplored. To address this, we introduce Facial Identity Unlearning
Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly
evaluate the effectiveness of unlearning algorithms under the Right to be
Forgotten setting. Specifically, we formulate the VLM unlearning task via
constructing the Fictitious Facial Identity VQA dataset and apply a two-stage
evaluation pipeline that is designed to precisely control the sources of
information and their exposure levels. In terms of evaluation, since VLMs
support various ways of asking questions with the same semantic meaning,
we also provide robust evaluation metrics including membership inference
attacks and carefully designed adversarial privacy attacks to evaluate the
performance of algorithms. Through the evaluation of four baseline VLM
unlearning algorithms within FIUBench, we find that all methods remain limited
in their unlearning performance, with significant trade-offs between model
utility and forget quality. Furthermore, our findings also highlight the
importance of privacy attacks for robust evaluations. We hope FIUBench will
drive progress in developing more effective VLM unlearning algorithms.
| [
{
"version": "v1",
"created": "Tue, 5 Nov 2024 23:26:10 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Nov 2024 05:08:27 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 16:05:19 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ma",
"Yingzi",
""
],
[
"Wang",
"Jiongxiao",
""
],
[
"Wang",
"Fei",
""
],
[
"Ma",
"Siyuan",
""
],
[
"Li",
"Jiazhao",
""
],
[
"Pan",
"Jinsheng",
""
],
[
"Li",
"Xiujun",
""
],
[
"Huang",
"Furong",
""
],
[
"Sun",
"Lichao",
""
],
[
"Li",
"Bo",
""
],
[
"Choi",
"Yejin",
""
],
[
"Chen",
"Muhao",
""
],
[
"Xiao",
"Chaowei",
""
]
]
| TITLE: Benchmarking Vision Language Model Unlearning via Fictitious Facial
Identity Dataset
ABSTRACT: Machine unlearning has emerged as an effective strategy for forgetting
specific information in the training data. However, with the increasing
integration of visual data, privacy concerns in Vision Language Models (VLMs)
remain underexplored. To address this, we introduce Facial Identity Unlearning
Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly
evaluate the effectiveness of unlearning algorithms under the Right to be
Forgotten setting. Specifically, we formulate the VLM unlearning task via
constructing the Fictitious Facial Identity VQA dataset and apply a two-stage
evaluation pipeline that is designed to precisely control the sources of
information and their exposure levels. In terms of evaluation, since VLMs
support various ways of asking questions with the same semantic meaning,
we also provide robust evaluation metrics including membership inference
attacks and carefully designed adversarial privacy attacks to evaluate the
performance of algorithms. Through the evaluation of four baseline VLM
unlearning algorithms within FIUBench, we find that all methods remain limited
in their unlearning performance, with significant trade-offs between model
utility and forget quality. Furthermore, our findings also highlight the
importance of privacy attacks for robust evaluations. We hope FIUBench will
drive progress in developing more effective VLM unlearning algorithms.
| new_dataset | 0.885681 |
2411.10351 | Lin Ling | Lin Ling, Fazle Rabbi, Song Wang, Jinqiu Yang | Bias Unveiled: Investigating Social Bias in LLM-Generated Code | accepted for publication in the Association for the Advancement of
Artificial Intelligence (AAAI), 2025 | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have significantly advanced the field of
automated code generation. However, a notable research gap exists in evaluating
social biases that may be present in the code produced by LLMs. To solve this
issue, we propose a novel fairness framework, i.e., Solar, to assess and
mitigate the social biases of LLM-generated code. Specifically, Solar can
automatically generate test cases for quantitatively uncovering social biases
of the auto-generated code by LLMs. To quantify the severity of social biases
in generated code, we develop a dataset that covers a diverse set of social
problems. We applied Solar and the crafted dataset to four state-of-the-art
LLMs for code generation. Our evaluation reveals severe bias in the
LLM-generated code from all the subject LLMs. Furthermore, we explore several
prompting strategies for mitigating bias, including Chain-of-Thought (CoT)
prompting, combining positive role-playing with CoT prompting and dialogue with
Solar. Our experiments show that dialogue with Solar can effectively reduce
social bias in LLM-generated code by up to 90%. Lastly, we make the code and data
publicly available; both are highly extensible for evaluating new social problems.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 16:55:57 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Nov 2024 15:44:21 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Jan 2025 18:21:23 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Mar 2025 18:59:21 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Ling",
"Lin",
""
],
[
"Rabbi",
"Fazle",
""
],
[
"Wang",
"Song",
""
],
[
"Yang",
"Jinqiu",
""
]
]
| TITLE: Bias Unveiled: Investigating Social Bias in LLM-Generated Code
ABSTRACT: Large language models (LLMs) have significantly advanced the field of
automated code generation. However, a notable research gap exists in evaluating
social biases that may be present in the code produced by LLMs. To solve this
issue, we propose a novel fairness framework, i.e., Solar, to assess and
mitigate the social biases of LLM-generated code. Specifically, Solar can
automatically generate test cases for quantitatively uncovering social biases
of the auto-generated code by LLMs. To quantify the severity of social biases
in generated code, we develop a dataset that covers a diverse set of social
problems. We applied Solar and the crafted dataset to four state-of-the-art
LLMs for code generation. Our evaluation reveals severe bias in the
LLM-generated code from all the subject LLMs. Furthermore, we explore several
prompting strategies for mitigating bias, including Chain-of-Thought (CoT)
prompting, combining positive role-playing with CoT prompting and dialogue with
Solar. Our experiments show that dialogue with Solar can effectively reduce
social bias in LLM-generated code by up to 90%. Lastly, we make the code and data
publicly available; both are highly extensible for evaluating new social problems.
| new_dataset | 0.961965 |
2411.12877 | Jo\~ao Sedoc | Tingting Liu, Salvatore Giorgi, Ankit Aich, Allison Lahnala, Brenda
Curtis, Lyle Ungar, Jo\~ao Sedoc | The Illusion of Empathy: How AI Chatbots Shape Conversation Perception | null | null | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As AI chatbots increasingly incorporate empathy, understanding user-centered
perceptions of chatbot empathy and its impact on conversation quality remains
essential yet under-explored. This study examines how chatbot identity and
perceived empathy influence users' overall conversation experience. Analyzing
155 conversations from two datasets, we found that while GPT-based chatbots
were rated significantly higher in conversational quality, they were
consistently perceived as less empathetic than human conversational partners.
Empathy ratings from GPT-4o annotations aligned with user ratings, reinforcing
the perception of lower empathy in chatbots compared to humans. Our findings
underscore the critical role of perceived empathy in shaping conversation
quality, revealing that achieving high-quality human-AI interactions requires
more than simply embedding empathetic language; it necessitates addressing the
nuanced ways users interpret and experience empathy in conversations with
chatbots.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 21:47:08 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2025 19:54:22 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Feb 2025 19:56:10 GMT"
},
{
"version": "v4",
"created": "Thu, 6 Mar 2025 20:06:51 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Liu",
"Tingting",
""
],
[
"Giorgi",
"Salvatore",
""
],
[
"Aich",
"Ankit",
""
],
[
"Lahnala",
"Allison",
""
],
[
"Curtis",
"Brenda",
""
],
[
"Ungar",
"Lyle",
""
],
[
"Sedoc",
"João",
""
]
]
| TITLE: The Illusion of Empathy: How AI Chatbots Shape Conversation Perception
ABSTRACT: As AI chatbots increasingly incorporate empathy, understanding user-centered
perceptions of chatbot empathy and its impact on conversation quality remains
essential yet under-explored. This study examines how chatbot identity and
perceived empathy influence users' overall conversation experience. Analyzing
155 conversations from two datasets, we found that while GPT-based chatbots
were rated significantly higher in conversational quality, they were
consistently perceived as less empathetic than human conversational partners.
Empathy ratings from GPT-4o annotations aligned with user ratings, reinforcing
the perception of lower empathy in chatbots compared to humans. Our findings
underscore the critical role of perceived empathy in shaping conversation
quality, revealing that achieving high-quality human-AI interactions requires
more than simply embedding empathetic language; it necessitates addressing the
nuanced ways users interpret and experience empathy in conversations with
chatbots.
| no_new_dataset | 0.947186 |
2411.15811 | Pan Liao | Pan Liao, Feng Yang, Di Wu, Jinwen Yu, Wenhui Zhao, Bo Liu | FastTrackTr: Towards Fast Multi-Object Tracking with Transformers | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Transformer-based multi-object tracking (MOT) methods have captured the
attention of many researchers in recent years. However, these models often
suffer from slow inference speeds due to their structure or other issues. To
address this problem, we revisited the Joint Detection and Tracking (JDT)
method by looking back at past approaches. By integrating the original JDT
approach with some advanced theories, this paper employs an efficient method of
information transfer between frames on the DETR, constructing a fast and novel
JDT-type MOT framework: FastTrackTr. Thanks to the superiority of this
information transfer method, our approach not only reduces the number of
queries required during tracking but also avoids the excessive introduction of
network structures, ensuring model simplicity. Experimental results indicate
that our method has the potential to achieve real-time tracking and exhibits
competitive tracking accuracy across multiple datasets.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 12:34:02 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 11:47:52 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 03:39:49 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Liao",
"Pan",
""
],
[
"Yang",
"Feng",
""
],
[
"Wu",
"Di",
""
],
[
"Yu",
"Jinwen",
""
],
[
"Zhao",
"Wenhui",
""
],
[
"Liu",
"Bo",
""
]
]
| TITLE: FastTrackTr: Towards Fast Multi-Object Tracking with Transformers
ABSTRACT: Transformer-based multi-object tracking (MOT) methods have captured the
attention of many researchers in recent years. However, these models often
suffer from slow inference speeds due to their structure or other issues. To
address this problem, we revisited the Joint Detection and Tracking (JDT)
method by looking back at past approaches. By integrating the original JDT
approach with some advanced theories, this paper employs an efficient method of
information transfer between frames on the DETR, constructing a fast and novel
JDT-type MOT framework: FastTrackTr. Thanks to the superiority of this
information transfer method, our approach not only reduces the number of
queries required during tracking but also avoids the excessive introduction of
network structures, ensuring model simplicity. Experimental results indicate
that our method has the potential to achieve real-time tracking and exhibits
competitive tracking accuracy across multiple datasets.
| no_new_dataset | 0.942348 |
2411.17902 | Tyler Wilson | Tyler S. Wilson, Wil Thomason, Zachary Kingston, Lydia E. Kavraki,
Jonathan D. Gammell | Nearest-Neighbourless Asymptotically Optimal Motion Planning with Fully
Connected Informed Trees (FCIT*) | IEEE International Conference on Robotics and Automation (ICRA) 2025,
6 + 1 pages, 3 figures, 1 table. A video of FCIT* can be found at
https://www.youtube.com/watch?v=Lb_5Znpcleg . Information on the
implementation of FCIT* is available at
https://robotic-esp.com/code/fcitstar/ | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Improving the performance of motion planning algorithms for
high-degree-of-freedom robots usually requires reducing the cost or frequency
of computationally expensive operations. Traditionally, and especially for
asymptotically optimal sampling-based motion planners, the most expensive
operations are local motion validation and querying the nearest neighbours of a
configuration.
Recent advances have significantly reduced the cost of motion validation by
using single instruction/multiple data (SIMD) parallelism to improve solution
times for satisficing motion planning problems. These advances have not yet
been applied to asymptotically optimal motion planning.
This paper presents Fully Connected Informed Trees (FCIT*), the first fully
connected, informed, anytime almost-surely asymptotically optimal (ASAO)
algorithm. FCIT* exploits the radically reduced cost of edge evaluation via
SIMD parallelism to build and search fully connected graphs. This removes the
need for nearest-neighbours structures, which are a dominant cost for many
sampling-based motion planners, and allows it to find initial solutions faster
than state-of-the-art ASAO (VAMP, OMPL) and satisficing (OMPL) algorithms on
the MotionBenchMaker dataset while converging towards optimal plans in an
anytime manner.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 21:35:55 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 01:47:25 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Wilson",
"Tyler S.",
""
],
[
"Thomason",
"Wil",
""
],
[
"Kingston",
"Zachary",
""
],
[
"Kavraki",
"Lydia E.",
""
],
[
"Gammell",
"Jonathan D.",
""
]
]
| TITLE: Nearest-Neighbourless Asymptotically Optimal Motion Planning with Fully
Connected Informed Trees (FCIT*)
ABSTRACT: Improving the performance of motion planning algorithms for
high-degree-of-freedom robots usually requires reducing the cost or frequency
of computationally expensive operations. Traditionally, and especially for
asymptotically optimal sampling-based motion planners, the most expensive
operations are local motion validation and querying the nearest neighbours of a
configuration.
Recent advances have significantly reduced the cost of motion validation by
using single instruction/multiple data (SIMD) parallelism to improve solution
times for satisficing motion planning problems. These advances have not yet
been applied to asymptotically optimal motion planning.
This paper presents Fully Connected Informed Trees (FCIT*), the first fully
connected, informed, anytime almost-surely asymptotically optimal (ASAO)
algorithm. FCIT* exploits the radically reduced cost of edge evaluation via
SIMD parallelism to build and search fully connected graphs. This removes the
need for nearest-neighbours structures, which are a dominant cost for many
sampling-based motion planners, and allows it to find initial solutions faster
than state-of-the-art ASAO (VAMP, OMPL) and satisficing (OMPL) algorithms on
the MotionBenchMaker dataset while converging towards optimal plans in an
anytime manner.
| no_new_dataset | 0.949106 |
2411.17984 | Huiyang Hu | Huiyang Hu, Peijin Wang, Hanbo Bi, Boyuan Tong, Zhaozhi Wang, Wenhui
Diao, Hao Chang, Yingchao Feng, Ziqi Zhang, Yaowei Wang, Qixiang Ye, Kun Fu,
Xian Sun | RS-vHeat: Heat Conduction Guided Efficient Remote Sensing Foundation
Model | 19 pages, 8 figures and 10 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing foundation models largely break away from the traditional
paradigm of designing task-specific models, offering greater scalability across
multiple tasks. However, they face challenges such as low computational
efficiency and limited interpretability, especially when dealing with
large-scale remote sensing images. To overcome these challenges, we draw inspiration from
heat conduction, a physical process modeling local heat diffusion. Building on
this idea, we are the first to explore the potential of using the parallel
computing model of heat conduction to simulate the local region correlations in
high-resolution remote sensing images, and introduce RS-vHeat, an efficient
multi-modal remote sensing foundation model. Specifically, RS-vHeat 1) applies
the Heat Conduction Operator (HCO) with a complexity of $O(N^{1.5})$ and a
global receptive field, reducing computational overhead while capturing remote
sensing object structure information to guide heat diffusion; 2) learns the
frequency distribution representations of various scenes through a
self-supervised strategy based on frequency domain hierarchical masking and
multi-domain reconstruction; 3) significantly improves efficiency and
performance over state-of-the-art techniques across 4 tasks and 10 datasets.
Compared to attention-based remote sensing foundation models, we reduce memory
usage by 84\%, FLOPs by 24\%, and improve throughput by 2.7 times. The code
will be made publicly available.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 01:43:38 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:24:25 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Hu",
"Huiyang",
""
],
[
"Wang",
"Peijin",
""
],
[
"Bi",
"Hanbo",
""
],
[
"Tong",
"Boyuan",
""
],
[
"Wang",
"Zhaozhi",
""
],
[
"Diao",
"Wenhui",
""
],
[
"Chang",
"Hao",
""
],
[
"Feng",
"Yingchao",
""
],
[
"Zhang",
"Ziqi",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Fu",
"Kun",
""
],
[
"Sun",
"Xian",
""
]
]
| TITLE: RS-vHeat: Heat Conduction Guided Efficient Remote Sensing Foundation
Model
ABSTRACT: Remote sensing foundation models largely break away from the traditional
paradigm of designing task-specific models, offering greater scalability across
multiple tasks. However, they face challenges such as low computational
efficiency and limited interpretability, especially when dealing with
large-scale remote sensing images. To overcome these challenges, we draw inspiration from
heat conduction, a physical process modeling local heat diffusion. Building on
this idea, we are the first to explore the potential of using the parallel
computing model of heat conduction to simulate the local region correlations in
high-resolution remote sensing images, and introduce RS-vHeat, an efficient
multi-modal remote sensing foundation model. Specifically, RS-vHeat 1) applies
the Heat Conduction Operator (HCO) with a complexity of $O(N^{1.5})$ and a
global receptive field, reducing computational overhead while capturing remote
sensing object structure information to guide heat diffusion; 2) learns the
frequency distribution representations of various scenes through a
self-supervised strategy based on frequency domain hierarchical masking and
multi-domain reconstruction; 3) significantly improves efficiency and
performance over state-of-the-art techniques across 4 tasks and 10 datasets.
Compared to attention-based remote sensing foundation models, we reduce memory
usage by 84\%, FLOPs by 24\%, and improve throughput by 2.7 times. The code
will be made publicly available.
| no_new_dataset | 0.954009 |
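As a rough illustration of what a heat-conduction-style operator computes, the sketch below evolves a single-channel feature map under the heat equation u_t = k * laplacian(u), using its closed-form solution in the cosine basis. This is only a fixed, non-learned stand-in: RS-vHeat's HCO is multi-channel and its conduction behaviour is learned, and the k, t, and eigenvalue scaling below are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def heat_conduction_operator(x, k=1.0, t=1.0):
    """Evolve a 2D map under u_t = k * laplacian(u) for time t.
    In the DCT basis (Neumann boundaries) the heat equation decouples:
    each coefficient simply decays as exp(-k * lambda * t), which yields
    a global receptive field in one transform/inverse-transform pass."""
    h, w = x.shape
    # Laplacian eigenvalues for cosine modes (illustrative index-unit scaling).
    lam = (np.pi * np.arange(h)[:, None] / h) ** 2 + \
          (np.pi * np.arange(w)[None, :] / w) ** 2
    coeffs = dctn(x, norm="ortho")
    return idctn(coeffs * np.exp(-k * t * lam), norm="ortho")

# Usage: diffuse a random feature map; larger k*t means stronger smoothing.
feat = np.random.rand(64, 64)
smoothed = heat_conduction_operator(feat, k=2.0, t=0.5)
```

The abstract's $O(N^{1.5})$ figure refers to the paper's own operator; the point of this sketch is only the transform-decay-inverse structure, which avoids the quadratic all-pairs cost of dense self-attention while still coupling every position to every other.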
2412.06234 | Seungtae Nam | Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh,
Eunbyung Park | Generative Densification: Learning to Densify Gaussians for
High-Fidelity Generalizable 3D Reconstruction | Project page: https://stnamjef.github.io/GenerativeDensification/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalized feed-forward Gaussian models have achieved significant progress
in sparse-view 3D reconstruction by leveraging prior knowledge from large
multi-view datasets. However, these models often struggle to represent
high-frequency details due to the limited number of Gaussians. While the
densification strategy used in per-scene 3D Gaussian splatting (3D-GS)
optimization can be adapted to the feed-forward models, it may not be ideally
suited for generalized scenarios. In this paper, we propose Generative
Densification, an efficient and generalizable method to densify Gaussians
generated by feed-forward models. Unlike the 3D-GS densification strategy,
which iteratively splits and clones raw Gaussian parameters, our method
up-samples feature representations from the feed-forward models and generates
their corresponding fine Gaussians in a single forward pass, leveraging the
embedded prior knowledge for enhanced generalization. Experimental results on
both object-level and scene-level reconstruction tasks demonstrate that our
method outperforms state-of-the-art approaches with comparable or smaller model
sizes, achieving notable improvements in representing fine details.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 06:20:51 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Dec 2024 06:17:36 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 06:02:35 GMT"
}
]
| 2025-03-10T00:00:00 | [
[
"Nam",
"Seungtae",
""
],
[
"Sun",
"Xiangyu",
""
],
[
"Kang",
"Gyeongjin",
""
],
[
"Lee",
"Younggeun",
""
],
[
"Oh",
"Seungjun",
""
],
[
"Park",
"Eunbyung",
""
]
]
| TITLE: Generative Densification: Learning to Densify Gaussians for
High-Fidelity Generalizable 3D Reconstruction
ABSTRACT: Generalized feed-forward Gaussian models have achieved significant progress
in sparse-view 3D reconstruction by leveraging prior knowledge from large
multi-view datasets. However, these models often struggle to represent
high-frequency details due to the limited number of Gaussians. While the
densification strategy used in per-scene 3D Gaussian splatting (3D-GS)
optimization can be adapted to the feed-forward models, it may not be ideally
suited for generalized scenarios. In this paper, we propose Generative
Densification, an efficient and generalizable method to densify Gaussians
generated by feed-forward models. Unlike the 3D-GS densification strategy,
which iteratively splits and clones raw Gaussian parameters, our method
up-samples feature representations from the feed-forward models and generates
their corresponding fine Gaussians in a single forward pass, leveraging the
embedded prior knowledge for enhanced generalization. Experimental results on
both object-level and scene-level reconstruction tasks demonstrate that our
method outperforms state-of-the-art approaches with comparable or smaller model
sizes, achieving notable improvements in representing fine details.
| no_new_dataset | 0.950411 |
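To make the contrast with iterative split-and-clone concrete, here is a hypothetical PyTorch sketch of the densification interface the abstract describes: each coarse Gaussian's feature is up-sampled and decoded into K fine Gaussians in a single forward pass. The layer shapes, K, and the 14-dimensional Gaussian parameterisation are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GenerativeDensifier(nn.Module):
    """Toy densifier: up-samples per-Gaussian features and decodes K fine
    Gaussians per coarse Gaussian in one forward pass (no iterative
    split/clone loop as in per-scene 3D-GS optimization)."""
    def __init__(self, feat_dim=64, k_children=4, gauss_dim=14):
        super().__init__()
        # gauss_dim = 3 (mean) + 4 (rotation) + 3 (scale) + 1 (opacity) + 3 (colour)
        self.k, self.feat_dim = k_children, feat_dim
        self.expand = nn.Sequential(
            nn.Linear(feat_dim, k_children * feat_dim), nn.GELU())
        self.decode = nn.Linear(feat_dim, gauss_dim)

    def forward(self, feats):                           # feats: (N, feat_dim)
        children = self.expand(feats)                   # (N, K * feat_dim)
        children = children.reshape(-1, self.feat_dim)  # (N * K, feat_dim)
        return self.decode(children)                    # (N * K, gauss_dim)

# Usage: 1024 coarse Gaussians densified into 4096 fine ones in one pass.
fine = GenerativeDensifier()(torch.randn(1024, 64))
print(fine.shape)                                       # torch.Size([4096, 14])
```

Because the expansion operates on learned feature representations rather than raw Gaussian parameters, the prior knowledge embedded in the feed-forward backbone can shape where and how fine Gaussians are placed, which is the generalization argument the abstract makes against per-scene split/clone heuristics.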