id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.01099 | Bryan Bo Cao | Bryan Bo Cao, Lawrence O'Gorman, Michael Coss, Shubham Jain | Few-Class Arena: A Benchmark for Efficient Selection of Vision Models
and Dataset Difficulty Measurement | 10 pages, 32 pages including References and Appendix, 19 figures, 8
tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose Few-Class Arena (FCA), a unified benchmark focused on
testing efficient image classification models for few classes. A wide variety
of benchmark datasets with many classes (80-1000) have been created to assist
Computer Vision architectural evolution. An increasing number of vision models
are evaluated with these many-class datasets. However, real-world applications
often involve substantially fewer classes of interest (2-10). This gap between
many and few classes makes it difficult to predict the performance of few-class
applications using models trained on the available many-class datasets. To
date, little has been offered to evaluate models in this Few-Class Regime. We
conduct a systematic evaluation of the ResNet family trained on ImageNet
subsets from 2 to 1000 classes, and test a wide spectrum of Convolutional
Neural Networks and Transformer architectures over ten datasets by using our
newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset
difficulty and a more efficient selection of models, we incorporate a
difficulty measure as a function of class similarity. FCA offers a new tool for
efficient machine learning in the Few-Class Regime, with goals ranging from a
new efficient class similarity proposal, to lightweight model architecture
design, to a new scaling law. FCA is user-friendly and can be easily extended
to new models and datasets, facilitating future research work. Our benchmark is
available at https://github.com/bryanbocao/fca.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2024 01:31:47 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 05:33:33 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cao",
"Bryan Bo",
""
],
[
"O'Gorman",
"Lawrence",
""
],
[
"Coss",
"Michael",
""
],
[
"Jain",
"Shubham",
""
]
]
| TITLE: Few-Class Arena: A Benchmark for Efficient Selection of Vision Models
and Dataset Difficulty Measurement
ABSTRACT: We propose Few-Class Arena (FCA), a unified benchmark focused on
testing efficient image classification models for few classes. A wide variety
of benchmark datasets with many classes (80-1000) have been created to assist
Computer Vision architectural evolution. An increasing number of vision models
are evaluated with these many-class datasets. However, real-world applications
often involve substantially fewer classes of interest (2-10). This gap between
many and few classes makes it difficult to predict the performance of few-class
applications using models trained on the available many-class datasets. To
date, little has been offered to evaluate models in this Few-Class Regime. We
conduct a systematic evaluation of the ResNet family trained on ImageNet
subsets from 2 to 1000 classes, and test a wide spectrum of Convolutional
Neural Networks and Transformer architectures over ten datasets by using our
newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset
difficulty and a more efficient selection of models, we incorporate a
difficulty measure as a function of class similarity. FCA offers a new tool for
efficient machine learning in the Few-Class Regime, with goals ranging from a
new efficient class similarity proposal, to lightweight model architecture
design, to a new scaling law. FCA is user-friendly and can be easily extended
to new models and datasets, facilitating future research work. Our benchmark is
available at https://github.com/bryanbocao/fca.
| no_new_dataset | 0.950227 |
2411.02372 | Neel Dey | Neel Dey, Benjamin Billot, Hallee E. Wong, Clinton J. Wang, Mengwei
Ren, P. Ellen Grant, Adrian V. Dalca, Polina Golland | Learning General-Purpose Biomedical Volume Representations using
Randomized Synthesis | ICLR 2025: International Conference on Learning Representations. Code
and model weights available at https://github.com/neel-dey/anatomix.
Keywords: synthetic data, representation learning, medical image analysis,
image registration, image segmentation | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current volumetric biomedical foundation models struggle to generalize as
public 3D datasets are small and do not cover the broad diversity of medical
procedures, conditions, anatomical regions, and imaging protocols. We address
this by creating a representation learning method that instead anticipates
strong domain shifts at training time itself. We first propose a data engine
that synthesizes highly variable training samples that would enable
generalization to new biomedical contexts. To then train a single 3D network
for any voxel-level task, we develop a contrastive learning method that
pretrains the network to be stable against nuisance imaging variation simulated
by the data engine, a key inductive bias for generalization. This network's
features can be used as robust representations of input images for downstream
tasks and its weights provide a strong, dataset-agnostic initialization for
finetuning on new datasets. As a result, we set new standards across both
multimodality registration and few-shot segmentation, a first for any 3D
biomedical vision model, all without (pre-)training on any existing dataset of
real images.
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 18:40:46 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 17:34:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dey",
"Neel",
""
],
[
"Billot",
"Benjamin",
""
],
[
"Wong",
"Hallee E.",
""
],
[
"Wang",
"Clinton J.",
""
],
[
"Ren",
"Mengwei",
""
],
[
"Grant",
"P. Ellen",
""
],
[
"Dalca",
"Adrian V.",
""
],
[
"Golland",
"Polina",
""
]
]
| TITLE: Learning General-Purpose Biomedical Volume Representations using
Randomized Synthesis
ABSTRACT: Current volumetric biomedical foundation models struggle to generalize as
public 3D datasets are small and do not cover the broad diversity of medical
procedures, conditions, anatomical regions, and imaging protocols. We address
this by creating a representation learning method that instead anticipates
strong domain shifts at training time itself. We first propose a data engine
that synthesizes highly variable training samples that would enable
generalization to new biomedical contexts. To then train a single 3D network
for any voxel-level task, we develop a contrastive learning method that
pretrains the network to be stable against nuisance imaging variation simulated
by the data engine, a key inductive bias for generalization. This network's
features can be used as robust representations of input images for downstream
tasks and its weights provide a strong, dataset-agnostic initialization for
finetuning on new datasets. As a result, we set new standards across both
multimodality registration and few-shot segmentation, a first for any 3D
biomedical vision model, all without (pre-)training on any existing dataset of
real images.
| no_new_dataset | 0.9455 |
2411.06916 | Pascal Janetzky | Pascal Janetzky, Tobias Schlagenhauf, Stefan Feuerriegel | Slowing Down Forgetting in Continual Learning | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | A common challenge in continual learning (CL) is catastrophic forgetting,
where the performance on old tasks drops after new, additional tasks are
learned. In this paper, we propose a novel framework called ReCL to slow down
forgetting in CL. Our framework exploits an implicit bias of gradient-based
neural networks, which causes them to converge to margin maximization points. Such
convergence points allow us to reconstruct old data from previous tasks, which
we then combine with the current training data. Our framework is flexible and
can be applied on top of existing, state-of-the-art CL methods. We further
demonstrate the performance gain from our framework across a large series of
experiments, including two challenging CL scenarios (class incremental and
domain incremental learning), different datasets (MNIST, CIFAR10,
TinyImagenet), and different network architectures. Across all experiments, we
find large performance gains through ReCL. To the best of our knowledge, our
framework is the first to address catastrophic forgetting by leveraging models
in CL as their own memory buffers.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 12:19:28 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 10:22:24 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Janetzky",
"Pascal",
""
],
[
"Schlagenhauf",
"Tobias",
""
],
[
"Feuerriegel",
"Stefan",
""
]
]
| TITLE: Slowing Down Forgetting in Continual Learning
ABSTRACT: A common challenge in continual learning (CL) is catastrophic forgetting,
where the performance on old tasks drops after new, additional tasks are
learned. In this paper, we propose a novel framework called ReCL to slow down
forgetting in CL. Our framework exploits an implicit bias of gradient-based
neural networks, which causes them to converge to margin maximization points. Such
convergence points allow us to reconstruct old data from previous tasks, which
we then combine with the current training data. Our framework is flexible and
can be applied on top of existing, state-of-the-art CL methods. We further
demonstrate the performance gain from our framework across a large series of
experiments, including two challenging CL scenarios (class incremental and
domain incremental learning), different datasets (MNIST, CIFAR10,
TinyImagenet), and different network architectures. Across all experiments, we
find large performance gains through ReCL. To the best of our knowledge, our
framework is the first to address catastrophic forgetting by leveraging models
in CL as their own memory buffers.
| no_new_dataset | 0.949623 |
2411.07848 | Sonia Raychaudhuri | Sonia Raychaudhuri, Duy Ta, Katrina Ashton, Angel X. Chang, Jiuguang
Wang, Bernadette Bucher | Zero-shot Object-Centric Instruction Following: Integrating Foundation
Models with Traditional Navigation | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large scale scenes such as multifloor homes can be robustly and efficiently
mapped with a 3D graph of landmarks estimated jointly with robot poses in a
factor graph, a technique commonly used in commercial robots such as drones and
robot vacuums. In this work, we propose Language-Inferred Factor Graph for
Instruction Following (LIFGIF), a zero-shot method to ground natural language
instructions in such a map. LIFGIF also includes a policy for following natural
language navigation instructions in a novel environment while the map is
constructed, enabling robust navigation performance in the physical world. To
evaluate LIFGIF, we present a new dataset, Object-Centric VLN (OC-VLN), in
order to evaluate grounding of object-centric natural language navigation
instructions. We compare to two state-of-the-art zero-shot baselines from
related tasks, Object Goal Navigation and Vision Language Navigation, to
demonstrate that LIFGIF outperforms them across all our evaluation metrics on
OC-VLN. Finally, we successfully demonstrate the effectiveness of LIFGIF for
performing zero-shot object-centric instruction following in the real world on
a Boston Dynamics Spot robot.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 15:01:40 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 17:33:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Raychaudhuri",
"Sonia",
""
],
[
"Ta",
"Duy",
""
],
[
"Ashton",
"Katrina",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Wang",
"Jiuguang",
""
],
[
"Bucher",
"Bernadette",
""
]
]
| TITLE: Zero-shot Object-Centric Instruction Following: Integrating Foundation
Models with Traditional Navigation
ABSTRACT: Large scale scenes such as multifloor homes can be robustly and efficiently
mapped with a 3D graph of landmarks estimated jointly with robot poses in a
factor graph, a technique commonly used in commercial robots such as drones and
robot vacuums. In this work, we propose Language-Inferred Factor Graph for
Instruction Following (LIFGIF), a zero-shot method to ground natural language
instructions in such a map. LIFGIF also includes a policy for following natural
language navigation instructions in a novel environment while the map is
constructed, enabling robust navigation performance in the physical world. To
evaluate LIFGIF, we present a new dataset, Object-Centric VLN (OC-VLN), in
order to evaluate grounding of object-centric natural language navigation
instructions. We compare to two state-of-the-art zero-shot baselines from
related tasks, Object Goal Navigation and Vision Language Navigation, to
demonstrate that LIFGIF outperforms them across all our evaluation metrics on
OC-VLN. Finally, we successfully demonstrate the effectiveness of LIFGIF for
performing zero-shot object-centric instruction following in the real world on
a Boston Dynamics Spot robot.
| new_dataset | 0.95877 |
2411.08470 | Hatef Otroshi Shahreza | Hatef Otroshi Shahreza and Sébastien Marcel | HyperFace: Generating Synthetic Face Recognition Datasets by Exploring
Face Embedding Hypersphere | Accepted in ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Face recognition datasets are often collected by crawling the Internet
without individuals' consent, raising ethical and privacy concerns. Generating
synthetic datasets for training face recognition models has emerged as a
promising alternative. However, the generation of synthetic datasets remains
challenging as it entails adequate inter-class and intra-class variations.
While advances in generative models have made it easier to increase intra-class
variations in face datasets (such as pose, illumination, etc.), generating
sufficient inter-class variation is still a difficult task. In this paper, we
formulate the dataset generation as a packing problem on the embedding space
(represented on a hypersphere) of a face recognition model and propose a new
synthetic dataset generation approach, called HyperFace. We formalize our
packing problem as an optimization problem and solve it with a gradient
descent-based approach. Then, we use a conditional face generator model to
synthesize face images from the optimized embeddings. We use our generated
datasets to train face recognition models and evaluate the trained models on
several benchmarking real datasets. Our experimental results show that models
trained with HyperFace achieve state-of-the-art performance in training face
recognition using synthetic datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 09:42:12 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 11:52:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shahreza",
"Hatef Otroshi",
""
],
[
"Marcel",
"Sébastien",
""
]
]
| TITLE: HyperFace: Generating Synthetic Face Recognition Datasets by Exploring
Face Embedding Hypersphere
ABSTRACT: Face recognition datasets are often collected by crawling the Internet
without individuals' consent, raising ethical and privacy concerns. Generating
synthetic datasets for training face recognition models has emerged as a
promising alternative. However, the generation of synthetic datasets remains
challenging as it entails adequate inter-class and intra-class variations.
While advances in generative models have made it easier to increase intra-class
variations in face datasets (such as pose, illumination, etc.), generating
sufficient inter-class variation is still a difficult task. In this paper, we
formulate the dataset generation as a packing problem on the embedding space
(represented on a hypersphere) of a face recognition model and propose a new
synthetic dataset generation approach, called HyperFace. We formalize our
packing problem as an optimization problem and solve it with a gradient
descent-based approach. Then, we use a conditional face generator model to
synthesize face images from the optimized embeddings. We use our generated
datasets to train face recognition models and evaluate the trained models on
several benchmarking real datasets. Our experimental results show that models
trained with HyperFace achieve state-of-the-art performance in training face
recognition using synthetic datasets.
| no_new_dataset | 0.930142 |
2411.09484 | Fabio Bellavia | Fabio Bellavia, Zhenjun Zhao, Luca Morelli, Fabio Remondino | Image Matching Filtering and Refinement by Planes and Beyond | project page: https://github.com/fb82/MiHo | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces a modular, non-deep learning method for filtering and
refining sparse correspondences in image matching. Assuming that motion flow
within the scene can be approximated by local homography transformations,
matches are aggregated into overlapping clusters corresponding to virtual
planes using an iterative RANSAC-based approach, with non-conforming
correspondences discarded. Moreover, the underlying planar structural design
provides an explicit map between local patches associated with the matches,
enabling optional refinement of keypoint positions through cross-correlation
template matching after patch reprojection. Finally, to enhance robustness and
fault-tolerance against violations of the piece-wise planar approximation
assumption, a further strategy is designed for minimizing relative patch
distortion in the plane reprojection by introducing an intermediate homography
that projects both patches into a common plane. The proposed method is
extensively evaluated on standard datasets and image matching pipelines, and
compared with state-of-the-art approaches. Unlike other current comparisons,
the proposed benchmark also takes into account the more general, real, and
practical cases where camera intrinsics are unavailable. Experimental results
demonstrate that our proposed non-deep learning, geometry-based approach
achieves performance that is either superior to or on par with recent
state-of-the-art deep learning methods. Finally, this study suggests that there
is still development potential in current image matching solutions in the
considered research direction, which could in the future be incorporated into
novel deep image matching architectures.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 14:37:50 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2024 17:48:31 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 17:29:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bellavia",
"Fabio",
""
],
[
"Zhao",
"Zhenjun",
""
],
[
"Morelli",
"Luca",
""
],
[
"Remondino",
"Fabio",
""
]
]
| TITLE: Image Matching Filtering and Refinement by Planes and Beyond
ABSTRACT: This paper introduces a modular, non-deep learning method for filtering and
refining sparse correspondences in image matching. Assuming that motion flow
within the scene can be approximated by local homography transformations,
matches are aggregated into overlapping clusters corresponding to virtual
planes using an iterative RANSAC-based approach, with non-conforming
correspondences discarded. Moreover, the underlying planar structural design
provides an explicit map between local patches associated with the matches,
enabling optional refinement of keypoint positions through cross-correlation
template matching after patch reprojection. Finally, to enhance robustness and
fault-tolerance against violations of the piece-wise planar approximation
assumption, a further strategy is designed for minimizing relative patch
distortion in the plane reprojection by introducing an intermediate homography
that projects both patches into a common plane. The proposed method is
extensively evaluated on standard datasets and image matching pipelines, and
compared with state-of-the-art approaches. Unlike other current comparisons,
the proposed benchmark also takes into account the more general, real, and
practical cases where camera intrinsics are unavailable. Experimental results
demonstrate that our proposed non-deep learning, geometry-based approach
achieves performance that is either superior to or on par with recent
state-of-the-art deep learning methods. Finally, this study suggests that there
is still development potential in current image matching solutions in the
considered research direction, which could in the future be incorporated into
novel deep image matching architectures.
| no_new_dataset | 0.950319 |
2411.09851 | Ho Fung Tsoi | Ho Fung Tsoi, Dylan Rankin, Cecile Caillol, Miles Cranmer, Sridhara
Dasu, Javier Duarte, Philip Harris, Elliot Lipeles, Vladimir Loncar | SymbolFit: Automatic Parametric Modeling with Symbolic Regression | 50 pages, 35 figures. Under review. The API can be used
out-of-the-box and is available at https://github.com/hftsoi/symbolfit | null | null | null | hep-ex cs.LG physics.data-an | http://creativecommons.org/licenses/by/4.0/ | We introduce SymbolFit, a framework that automates parametric modeling by
using symbolic regression to perform a machine-search for functions that fit
the data while simultaneously providing uncertainty estimates in a single run.
Traditionally, constructing a parametric model to accurately describe binned
data has been a manual and iterative process, requiring an adequate functional
form to be determined before the fit can be performed. The main challenge
arises when the appropriate functional forms cannot be derived from first
principles, especially when there is no underlying true closed-form function
for the distribution. In this work, we develop a framework that automates and
streamlines the process by utilizing symbolic regression, a machine learning
technique that explores a vast space of candidate functions without requiring a
predefined functional form because the functional form itself is treated as a
trainable parameter, making the process far more efficient and effortless than
traditional regression methods. We demonstrate the framework in high-energy
physics experiments at the CERN Large Hadron Collider (LHC) using five real
proton-proton collision datasets from new physics searches, including
background modeling in resonance searches for high-mass dijet, trijet,
paired-dijet, diphoton, and dimuon events. We show that our framework can
flexibly and efficiently generate a wide range of candidate functions that fit
a nontrivial distribution well using a simple fit configuration that varies
only by random seed, and that the same fit configuration, which defines a vast
function space, can also be applied to distributions of different shapes,
whereas achieving a comparable result with traditional methods would have
required extensive manual effort.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 00:09:37 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2025 02:11:22 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 23:29:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tsoi",
"Ho Fung",
""
],
[
"Rankin",
"Dylan",
""
],
[
"Caillol",
"Cecile",
""
],
[
"Cranmer",
"Miles",
""
],
[
"Dasu",
"Sridhara",
""
],
[
"Duarte",
"Javier",
""
],
[
"Harris",
"Philip",
""
],
[
"Lipeles",
"Elliot",
""
],
[
"Loncar",
"Vladimir",
""
]
]
| TITLE: SymbolFit: Automatic Parametric Modeling with Symbolic Regression
ABSTRACT: We introduce SymbolFit, a framework that automates parametric modeling by
using symbolic regression to perform a machine-search for functions that fit
the data while simultaneously providing uncertainty estimates in a single run.
Traditionally, constructing a parametric model to accurately describe binned
data has been a manual and iterative process, requiring an adequate functional
form to be determined before the fit can be performed. The main challenge
arises when the appropriate functional forms cannot be derived from first
principles, especially when there is no underlying true closed-form function
for the distribution. In this work, we develop a framework that automates and
streamlines the process by utilizing symbolic regression, a machine learning
technique that explores a vast space of candidate functions without requiring a
predefined functional form because the functional form itself is treated as a
trainable parameter, making the process far more efficient and effortless than
traditional regression methods. We demonstrate the framework in high-energy
physics experiments at the CERN Large Hadron Collider (LHC) using five real
proton-proton collision datasets from new physics searches, including
background modeling in resonance searches for high-mass dijet, trijet,
paired-dijet, diphoton, and dimuon events. We show that our framework can
flexibly and efficiently generate a wide range of candidate functions that fit
a nontrivial distribution well using a simple fit configuration that varies
only by random seed, and that the same fit configuration, which defines a vast
function space, can also be applied to distributions of different shapes,
whereas achieving a comparable result with traditional methods would have
required extensive manual effort.
| no_new_dataset | 0.952264 |
2411.10027 | Yang Xiao | Yang Xiao and Rohan Kumar Das | XLSR-Mamba: A Dual-Column Bidirectional State Space Model for Spoofing
Attack Detection | Accepted by IEEE Signal Processing Letters | null | null | null | eess.AS cs.SD | http://creativecommons.org/licenses/by/4.0/ | Transformers and their variants have achieved great success in speech
processing. However, their multi-head self-attention mechanism is
computationally expensive. Therefore, a novel selective state space model,
Mamba, has been proposed as an alternative. Building on its success in
automatic speech recognition, we apply Mamba for spoofing attack detection.
Mamba is well-suited for this task as it can capture the artifacts in spoofed
speech signals by handling long-length sequences. However, Mamba's performance
may suffer when it is trained with limited labeled data. To mitigate this, we
propose combining a new structure of Mamba based on a dual-column architecture
with self-supervised learning, using the pre-trained wav2vec 2.0 model. The
experiments show that our proposed approach achieves competitive results and
faster inference on the ASVspoof 2021 LA and DF datasets, and on the more
challenging In-the-Wild dataset, it emerges as the strongest candidate for
spoofing attack detection. The code has been publicly released at
https://github.com/swagshaw/XLSR-Mamba.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 08:13:51 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 18:09:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xiao",
"Yang",
""
],
[
"Das",
"Rohan Kumar",
""
]
]
| TITLE: XLSR-Mamba: A Dual-Column Bidirectional State Space Model for Spoofing
Attack Detection
ABSTRACT: Transformers and their variants have achieved great success in speech
processing. However, their multi-head self-attention mechanism is
computationally expensive. Therefore, a novel selective state space model,
Mamba, has been proposed as an alternative. Building on its success in
automatic speech recognition, we apply Mamba for spoofing attack detection.
Mamba is well-suited for this task as it can capture the artifacts in spoofed
speech signals by handling long-length sequences. However, Mamba's performance
may suffer when it is trained with limited labeled data. To mitigate this, we
propose combining a new structure of Mamba based on a dual-column architecture
with self-supervised learning, using the pre-trained wav2vec 2.0 model. The
experiments show that our proposed approach achieves competitive results and
faster inference on the ASVspoof 2021 LA and DF datasets, and on the more
challenging In-the-Wild dataset, it emerges as the strongest candidate for
spoofing attack detection. The code has been publicly released at
https://github.com/swagshaw/XLSR-Mamba.
| no_new_dataset | 0.944228 |
2411.13983 | Hansung Kim | Hansung Kim, Edward L. Zhu, Chang Seok Lim, Francesco Borrelli | Learning Two-agent Motion Planning Strategies from Generalized Nash
Equilibrium for Model Predictive Control | Accepted Proceeding at 2025 Learning for Dynamics and Control
Conference (L4DC) | null | null | null | cs.MA cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an Implicit Game-Theoretic MPC (IGT-MPC), a decentralized
algorithm for two-agent motion planning that uses a learned value function that
predicts the game-theoretic interaction outcomes as the terminal cost-to-go
function in a model predictive control (MPC) framework, guiding agents to
implicitly account for interactions with other agents and maximize their
reward. This approach applies to competitive and cooperative multi-agent motion
planning problems which we formulate as constrained dynamic games. Given a
constrained dynamic game, we randomly sample initial conditions and solve for
the generalized Nash equilibrium (GNE) to generate a dataset of GNE solutions,
computing the reward outcome of each game-theoretic interaction from the GNE.
The data is used to train a simple neural network to predict the reward
outcome, which we use as the terminal cost-to-go function in an MPC scheme. We
showcase emerging competitive and coordinated behaviors using IGT-MPC in
scenarios such as two-vehicle head-to-head racing and unsignalized
intersection navigation. IGT-MPC offers a novel method integrating machine
learning and game-theoretic reasoning into model-based decentralized
multi-agent motion planning.
| [
{
"version": "v1",
"created": "Thu, 21 Nov 2024 09:47:15 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Nov 2024 02:42:55 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 23:56:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kim",
"Hansung",
""
],
[
"Zhu",
"Edward L.",
""
],
[
"Lim",
"Chang Seok",
""
],
[
"Borrelli",
"Francesco",
""
]
]
| TITLE: Learning Two-agent Motion Planning Strategies from Generalized Nash
Equilibrium for Model Predictive Control
ABSTRACT: We introduce an Implicit Game-Theoretic MPC (IGT-MPC), a decentralized
algorithm for two-agent motion planning that uses a learned value function that
predicts the game-theoretic interaction outcomes as the terminal cost-to-go
function in a model predictive control (MPC) framework, guiding agents to
implicitly account for interactions with other agents and maximize their
reward. This approach applies to competitive and cooperative multi-agent motion
planning problems which we formulate as constrained dynamic games. Given a
constrained dynamic game, we randomly sample initial conditions and solve for
the generalized Nash equilibrium (GNE) to generate a dataset of GNE solutions,
computing the reward outcome of each game-theoretic interaction from the GNE.
The data is used to train a simple neural network to predict the reward
outcome, which we use as the terminal cost-to-go function in an MPC scheme. We
showcase emerging competitive and coordinated behaviors using IGT-MPC in
scenarios such as two-vehicle head-to-head racing and unsignalized
intersection navigation. IGT-MPC offers a novel method integrating machine
learning and game-theoretic reasoning into model-based decentralized
multi-agent motion planning.
| no_new_dataset | 0.930679 |
2411.14896 | Anna Glazkova | Anna Glazkova and Olga Zakharova | Evaluating LLM Prompts for Data Augmentation in Multi-label
Classification of Ecological Texts | Ivannikov ISPRAS Open Conference (ISPRAS) 2024 | 2024 Ivannikov Ispras Open Conference (ISPRAS), Moscow, Russian
Federation, 2024, pp. 1-7 | 10.1109/ISPRAS64596.2024.10899128 | null | cs.CL cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) play a crucial role in natural language
processing (NLP) tasks, improving the understanding, generation, and
manipulation of human language across domains such as translating, summarizing,
and classifying text. Previous studies have demonstrated that instruction-based
LLMs can be effectively utilized for data augmentation to generate diverse and
realistic text samples. This study applied prompt-based data augmentation to
detect mentions of green practices in Russian social media. Detecting green
practices in social media aids in understanding their prevalence and helps
formulate recommendations for scaling eco-friendly actions to mitigate
environmental issues. We evaluated several prompts for augmenting texts in a
multi-label classification task, either by rewriting existing datasets using
LLMs, generating new data, or combining both approaches. Our results revealed
that all strategies improved classification performance compared to the models
fine-tuned only on the original dataset, outperforming baselines in most cases.
The best results were obtained with the prompt that paraphrased the original
text while clearly indicating the relevant categories.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 12:37:41 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Glazkova",
"Anna",
""
],
[
"Zakharova",
"Olga",
""
]
]
| TITLE: Evaluating LLM Prompts for Data Augmentation in Multi-label
Classification of Ecological Texts
ABSTRACT: Large language models (LLMs) play a crucial role in natural language
processing (NLP) tasks, improving the understanding, generation, and
manipulation of human language across domains such as translating, summarizing,
and classifying text. Previous studies have demonstrated that instruction-based
LLMs can be effectively utilized for data augmentation to generate diverse and
realistic text samples. This study applied prompt-based data augmentation to
detect mentions of green practices in Russian social media. Detecting green
practices in social media aids in understanding their prevalence and helps
formulate recommendations for scaling eco-friendly actions to mitigate
environmental issues. We evaluated several prompts for augmenting texts in a
multi-label classification task, either by rewriting existing datasets using
LLMs, generating new data, or combining both approaches. Our results revealed
that all strategies improved classification performance compared to the models
fine-tuned only on the original dataset, outperforming baselines in most cases.
The best results were obtained with the prompt that paraphrased the original
text while clearly indicating the relevant categories.
| no_new_dataset | 0.948298 |
2411.14917 | Aurel Appius | Aurel X. Appius, Emiland Garrabe, Francois Helenon, Mahdi Khoramshahi,
Mohamed Chetouani, Stephane Doncieux | Task-Aware Robotic Grasping by evaluating Quality Diversity Solutions
through Foundation Models | 6 pages, 6 figures, submitted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) 2025, Video:
https://youtu.be/TCLXm8kPWz4 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Task-aware robotic grasping is a challenging problem that requires the
integration of semantic understanding and geometric reasoning. This paper
proposes a novel framework that leverages Large Language Models (LLMs) and
Quality Diversity (QD) algorithms to enable zero-shot task-conditioned grasp
synthesis. The framework segments objects into meaningful subparts and labels
each subpart semantically, creating structured representations that can be used
to prompt an LLM. By coupling semantic and geometric representations of an
object's structure, the LLM's knowledge about tasks and which parts to grasp
can be applied in the physical world. The QD-generated grasp archive provides a
diverse set of grasps, allowing us to select the most suitable grasp based on
the task. We evaluated the proposed method on a subset of the YCB dataset with
a Franka Emika robot. A consolidated ground truth for task-specific grasp
regions is established through a survey. Our work achieves a weighted
intersection over union (IoU) of 73.6% in predicting task-conditioned grasp
regions in 65 task-object combinations. An end-to-end validation study on a
smaller subset further confirms the effectiveness of our approach, with 88% of
responses favoring the task-aware grasp over the control group. A binomial test
shows that participants significantly prefer the task-aware grasp.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 13:18:41 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 22:48:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Appius",
"Aurel X.",
""
],
[
"Garrabe",
"Emiland",
""
],
[
"Helenon",
"Francois",
""
],
[
"Khoramshahi",
"Mahdi",
""
],
[
"Chetouani",
"Mohamed",
""
],
[
"Doncieux",
"Stephane",
""
]
]
| TITLE: Task-Aware Robotic Grasping by evaluating Quality Diversity Solutions
through Foundation Models
ABSTRACT: Task-aware robotic grasping is a challenging problem that requires the
integration of semantic understanding and geometric reasoning. This paper
proposes a novel framework that leverages Large Language Models (LLMs) and
Quality Diversity (QD) algorithms to enable zero-shot task-conditioned grasp
synthesis. The framework segments objects into meaningful subparts and labels
each subpart semantically, creating structured representations that can be used
to prompt an LLM. By coupling semantic and geometric representations of an
object's structure, the LLM's knowledge about tasks and which parts to grasp
can be applied in the physical world. The QD-generated grasp archive provides a
diverse set of grasps, allowing us to select the most suitable grasp based on
the task. We evaluated the proposed method on a subset of the YCB dataset with
a Franka Emika robot. A consolidated ground truth for task-specific grasp
regions is established through a survey. Our work achieves a weighted
intersection over union (IoU) of 73.6% in predicting task-conditioned grasp
regions in 65 task-object combinations. An end-to-end validation study on a
smaller subset further confirms the effectiveness of our approach, with 88% of
responses favoring the task-aware grasp over the control group. A binomial test
shows that participants significantly prefer the task-aware grasp.
| no_new_dataset | 0.941815 |
2411.17637 | Raviraj Joshi | Suramya Jadhav, Abhay Shanbhag, Amogh Thakurdesai, Ridhima Sinare,
Raviraj Joshi | On Limitations of LLM as Annotator for Low Resource Languages | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-resource languages face significant challenges due to the lack of
sufficient linguistic data, resources, and tools for tasks such as supervised
learning, annotation, and classification. This shortage hinders the development
of accurate models and datasets, making it difficult to perform critical NLP
tasks like sentiment analysis or hate speech detection. To bridge this gap,
Large Language Models (LLMs) present an opportunity for potential annotators,
capable of generating datasets and resources for these underrepresented
languages. In this paper, we focus on Marathi, a low-resource language, and
evaluate the performance of both closed-source and open-source LLMs as
annotators, while also comparing these results with fine-tuned BERT models. We
assess models such as GPT-4o and Gemini 1.0 Pro, Gemma 2 (2B and 9B), and Llama
3.1 (8B and 405B) on classification tasks including sentiment analysis, news
classification, and hate speech detection. Our findings reveal that while LLMs
excel in annotation tasks for high-resource languages like English, they still
fall short when applied to Marathi. Even advanced models like GPT-4o and Llama
3.1 405B underperform compared to fine-tuned BERT-based baselines, with GPT-4o
and Llama 3.1 405B trailing fine-tuned BERT by accuracy margins of 10.2% and
14.1%, respectively. This highlights the limitations of LLMs as annotators for
low-resource languages.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 17:55:37 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 16:07:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jadhav",
"Suramya",
""
],
[
"Shanbhag",
"Abhay",
""
],
[
"Thakurdesai",
"Amogh",
""
],
[
"Sinare",
"Ridhima",
""
],
[
"Joshi",
"Raviraj",
""
]
]
| TITLE: On Limitations of LLM as Annotator for Low Resource Languages
ABSTRACT: Low-resource languages face significant challenges due to the lack of
sufficient linguistic data, resources, and tools for tasks such as supervised
learning, annotation, and classification. This shortage hinders the development
of accurate models and datasets, making it difficult to perform critical NLP
tasks like sentiment analysis or hate speech detection. To bridge this gap,
Large Language Models (LLMs) present an opportunity for potential annotators,
capable of generating datasets and resources for these underrepresented
languages. In this paper, we focus on Marathi, a low-resource language, and
evaluate the performance of both closed-source and open-source LLMs as
annotators, while also comparing these results with fine-tuned BERT models. We
assess models such as GPT-4o and Gemini 1.0 Pro, Gemma 2 (2B and 9B), and Llama
3.1 (8B and 405B) on classification tasks including sentiment analysis, news
classification, and hate speech detection. Our findings reveal that while LLMs
excel in annotation tasks for high-resource languages like English, they still
fall short when applied to Marathi. Even advanced models like GPT-4o and Llama
3.1 405B underperform compared to fine-tuned BERT-based baselines, with GPT-4o
and Llama 3.1 405B trailing fine-tuned BERT by accuracy margins of 10.2% and
14.1%, respectively. This highlights the limitations of LLMs as annotators for
low-resource languages.
| no_new_dataset | 0.950595 |
2411.18018 | Hao Ding | Hao Ding, Zhongpai Gao, Benjamin Planche, Tianyu Luan, Abhishek
Sharma, Meng Zheng, Ange Lou, Terrence Chen, Mathias Unberath, Ziyan Wu | Neural Finite-State Machines for Surgical Phase Recognition | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surgical phase recognition (SPR) is crucial for applications in workflow
optimization, performance evaluation, and real-time intervention guidance.
However, current deep learning models often struggle with fragmented
predictions, failing to capture the sequential nature of surgical workflows. We
propose the Neural Finite-State Machine (NFSM), a novel approach that enforces
temporal coherence by integrating classical state-transition priors with modern
neural networks. NFSM leverages learnable global state embeddings as unique
phase identifiers and dynamic transition tables to model phase-to-phase
progressions. Additionally, a future phase forecasting mechanism employs
repeated frame padding to anticipate upcoming transitions. Implemented as a
plug-and-play module, NFSM can be integrated into existing SPR pipelines
without changing their core architectures. We demonstrate state-of-the-art
performance across multiple benchmarks, including a significant improvement on
the BernBypass70 dataset - raising video-level accuracy by 0.9 points and
phase-level precision, recall, F1-score, and mAP by 3.8, 3.1, 3.3, and 4.1,
respectively. Ablation studies confirm each component's effectiveness and the
module's adaptability to various architectures. By unifying finite-state
principles with deep learning, NFSM offers a robust path toward consistent,
long-term surgical video analysis.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2024 03:21:57 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 04:05:24 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ding",
"Hao",
""
],
[
"Gao",
"Zhongpai",
""
],
[
"Planche",
"Benjamin",
""
],
[
"Luan",
"Tianyu",
""
],
[
"Sharma",
"Abhishek",
""
],
[
"Zheng",
"Meng",
""
],
[
"Lou",
"Ange",
""
],
[
"Chen",
"Terrence",
""
],
[
"Unberath",
"Mathias",
""
],
[
"Wu",
"Ziyan",
""
]
]
| TITLE: Neural Finite-State Machines for Surgical Phase Recognition
ABSTRACT: Surgical phase recognition (SPR) is crucial for applications in workflow
optimization, performance evaluation, and real-time intervention guidance.
However, current deep learning models often struggle with fragmented
predictions, failing to capture the sequential nature of surgical workflows. We
propose the Neural Finite-State Machine (NFSM), a novel approach that enforces
temporal coherence by integrating classical state-transition priors with modern
neural networks. NFSM leverages learnable global state embeddings as unique
phase identifiers and dynamic transition tables to model phase-to-phase
progressions. Additionally, a future phase forecasting mechanism employs
repeated frame padding to anticipate upcoming transitions. Implemented as a
plug-and-play module, NFSM can be integrated into existing SPR pipelines
without changing their core architectures. We demonstrate state-of-the-art
performance across multiple benchmarks, including a significant improvement on
the BernBypass70 dataset - raising video-level accuracy by 0.9 points and
phase-level precision, recall, F1-score, and mAP by 3.8, 3.1, 3.3, and 4.1,
respectively. Ablation studies confirm each component's effectiveness and the
module's adaptability to various architectures. By unifying finite-state
principles with deep learning, NFSM offers a robust path toward consistent,
long-term surgical video analysis.
| no_new_dataset | 0.941547 |
2411.18872 | Roozbeh Yousefzadeh | Roozbeh Yousefzadeh and Xuenan Cao and Azim Ospanov | A Lean Dataset for International Math Olympiad: Small Steps towards
Writing Math Proofs for Hard Problems | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Using AI to write formal proofs for mathematical problems is a challenging
task that has seen some advancements in recent years. Automated systems such as
Lean can verify the correctness of proofs written in formal language, yet
writing the proofs in formal language can be challenging for humans and
machines. The miniF2F benchmark has 20 IMO problems in its test set, yet formal
proofs are available only for 6 of these problems (3 of which are only written
by mathematicians). The model with the best accuracy can only prove 2 of these 20
IMO problems, from the 1950s and 60s, while its training set is a secret. In this
work, we write complete, original formal proofs for the remaining IMO problems
in Lean along with 3 extra problems from IMO 2022 and 2023. This effort expands
the availability of proofs currently in the public domain by creating 5,880
lines of Lean proof. The goal of the paper is to pave the way for developing AI
models that can automatically write the formal proofs for all the IMO problems
in miniF2F and beyond by providing an evaluation benchmark. In this pursuit, we
devise a method to decompose the proofs of these problems into their building
blocks, constructing a dataset of 1,329 lemmas with more than 40k lines of Lean
code. These lemmas are not trivial, yet they are approachable, providing the
opportunity to evaluate and diagnose the failures and successes of AI models.
We evaluate the ability of the SOTA LLMs on our dataset and analyze their
success and failure modes from different perspectives. Our dataset and code are
available at: https://github.com/roozbeh-yz/IMO-Steps.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 02:50:42 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 02:41:10 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yousefzadeh",
"Roozbeh",
""
],
[
"Cao",
"Xuenan",
""
],
[
"Ospanov",
"Azim",
""
]
]
| TITLE: A Lean Dataset for International Math Olympiad: Small Steps towards
Writing Math Proofs for Hard Problems
ABSTRACT: Using AI to write formal proofs for mathematical problems is a challenging
task that has seen some advancements in recent years. Automated systems such as
Lean can verify the correctness of proofs written in formal language, yet
writing the proofs in formal language can be challenging for humans and
machines. The miniF2F benchmark has 20 IMO problems in its test set, yet formal
proofs are available only for 6 of these problems (3 of which are only written
by mathematicians). The model with the best accuracy can only prove 2 of these 20
IMO problems, from the 1950s and 60s, while its training set is a secret. In this
work, we write complete, original formal proofs for the remaining IMO problems
in Lean along with 3 extra problems from IMO 2022 and 2023. This effort expands
the availability of proofs currently in the public domain by creating 5,880
lines of Lean proof. The goal of the paper is to pave the way for developing AI
models that can automatically write the formal proofs for all the IMO problems
in miniF2F and beyond by providing an evaluation benchmark. In this pursuit, we
devise a method to decompose the proofs of these problems into their building
blocks, constructing a dataset of 1,329 lemmas with more than 40k lines of Lean
code. These lemmas are not trivial, yet they are approachable, providing the
opportunity to evaluate and diagnose the failures and successes of AI models.
We evaluate the ability of the SOTA LLMs on our dataset and analyze their
success and failure modes from different perspectives. Our dataset and code are
available at: https://github.com/roozbeh-yz/IMO-Steps.
| new_dataset | 0.975367 |
2411.19289 | Rui Zhou | Rui Zhou, Jingbin Liu, Junbin Xie, Jianyu Zhang, Yingze Hu, Jiele Zhao | ADUGS-VINS: Generalized Visual-Inertial Odometry for Robust Navigation
in Highly Dynamic and Complex Environments | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual-inertial odometry (VIO) is widely used in various fields, such as
robots, drones, and autonomous vehicles. However, real-world scenes often
feature dynamic objects, compromising the accuracy of VIO. The diversity and
partial occlusion of these objects present a tough challenge for existing
dynamic VIO methods. To tackle this challenge, we introduce ADUGS-VINS, which
integrates an enhanced SORT algorithm along with a promptable foundation model
into VIO, thereby improving pose estimation accuracy in environments with
diverse dynamic objects and frequent occlusions. We evaluated our proposed
method using multiple public datasets representing various scenes, as well as
in a real-world scenario involving diverse dynamic objects. The experimental
results demonstrate that our proposed method performs impressively in multiple
scenarios, outperforming other state-of-the-art methods. This highlights its
remarkable generalization and adaptability in diverse dynamic environments,
showcasing its potential to handle various dynamic objects in practical
applications.
| [
{
"version": "v1",
"created": "Thu, 28 Nov 2024 17:41:33 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 11:12:24 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 07:18:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhou",
"Rui",
""
],
[
"Liu",
"Jingbin",
""
],
[
"Xie",
"Junbin",
""
],
[
"Zhang",
"Jianyu",
""
],
[
"Hu",
"Yingze",
""
],
[
"Zhao",
"Jiele",
""
]
]
| TITLE: ADUGS-VINS: Generalized Visual-Inertial Odometry for Robust Navigation
in Highly Dynamic and Complex Environments
ABSTRACT: Visual-inertial odometry (VIO) is widely used in various fields, such as
robots, drones, and autonomous vehicles. However, real-world scenes often
feature dynamic objects, compromising the accuracy of VIO. The diversity and
partial occlusion of these objects present a tough challenge for existing
dynamic VIO methods. To tackle this challenge, we introduce ADUGS-VINS, which
integrates an enhanced SORT algorithm along with a promptable foundation model
into VIO, thereby improving pose estimation accuracy in environments with
diverse dynamic objects and frequent occlusions. We evaluated our proposed
method using multiple public datasets representing various scenes, as well as
in a real-world scenario involving diverse dynamic objects. The experimental
results demonstrate that our proposed method performs impressively in multiple
scenarios, outperforming other state-of-the-art methods. This highlights its
remarkable generalization and adaptability in diverse dynamic environments,
showcasing its potential to handle various dynamic objects in practical
applications.
| no_new_dataset | 0.946843 |
2412.00537 | Mahalakshmi Sabanayagam | Mahalakshmi Sabanayagam and Lukas Gosch and Stephan Günnemann and
Debarghya Ghoshdastidar | Exact Certification of (Graph) Neural Networks Against Label Poisoning | Published as a spotlight presentation at ICLR 2025 | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | Machine learning models are highly vulnerable to label flipping, i.e., the
adversarial modification (poisoning) of training labels to compromise
performance. Thus, deriving robustness certificates is important to guarantee
that test predictions remain unaffected and to understand worst-case robustness
behavior. However, for Graph Neural Networks (GNNs), the problem of certifying
label flipping has so far been unsolved. We change this by introducing an exact
certification method, deriving both sample-wise and collective certificates.
Our method leverages the Neural Tangent Kernel (NTK) to capture the training
dynamics of wide networks enabling us to reformulate the bilevel optimization
problem representing label flipping into a Mixed-Integer Linear Program (MILP).
We apply our method to certify a broad range of GNN architectures in node
classification tasks. Thereby, concerning the worst-case robustness to label
flipping: $(i)$ we establish hierarchies of GNNs on different benchmark graphs;
$(ii)$ quantify the effect of architectural choices such as activations, depth
and skip-connections; and surprisingly, $(iii)$ uncover a novel phenomenon of
the robustness plateauing for intermediate perturbation budgets across all
investigated datasets and architectures. While we focus on GNNs, our
certificates are applicable to sufficiently wide NNs in general through their
NTK. Thus, our work presents the first exact certificate to a poisoning attack
ever derived for neural networks, which could be of independent interest. The
code is available at https://github.com/saper0/qpcert.
| [
{
"version": "v1",
"created": "Sat, 30 Nov 2024 17:05:12 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:26:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sabanayagam",
"Mahalakshmi",
""
],
[
"Gosch",
"Lukas",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Ghoshdastidar",
"Debarghya",
""
]
]
| TITLE: Exact Certification of (Graph) Neural Networks Against Label Poisoning
ABSTRACT: Machine learning models are highly vulnerable to label flipping, i.e., the
adversarial modification (poisoning) of training labels to compromise
performance. Thus, deriving robustness certificates is important to guarantee
that test predictions remain unaffected and to understand worst-case robustness
behavior. However, for Graph Neural Networks (GNNs), the problem of certifying
label flipping has so far been unsolved. We change this by introducing an exact
certification method, deriving both sample-wise and collective certificates.
Our method leverages the Neural Tangent Kernel (NTK) to capture the training
dynamics of wide networks enabling us to reformulate the bilevel optimization
problem representing label flipping into a Mixed-Integer Linear Program (MILP).
We apply our method to certify a broad range of GNN architectures in node
classification tasks. Thereby, concerning the worst-case robustness to label
flipping: $(i)$ we establish hierarchies of GNNs on different benchmark graphs;
$(ii)$ quantify the effect of architectural choices such as activations, depth
and skip-connections; and surprisingly, $(iii)$ uncover a novel phenomenon of
the robustness plateauing for intermediate perturbation budgets across all
investigated datasets and architectures. While we focus on GNNs, our
certificates are applicable to sufficiently wide NNs in general through their
NTK. Thus, our work presents the first exact certificate to a poisoning attack
ever derived for neural networks, which could be of independent interest. The
code is available at https://github.com/saper0/qpcert.
| no_new_dataset | 0.946051 |
2412.01021 | Andi Han | Andi Han, Wei Huang, Yuan Cao, Difan Zou | On the Feature Learning in Diffusion Models | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The predominant success of diffusion models in generative modeling has
spurred significant interest in understanding their theoretical foundations. In
this work, we propose a feature learning framework aimed at analyzing and
comparing the training dynamics of diffusion models with those of traditional
classification models. Our theoretical analysis demonstrates that diffusion
models, due to the denoising objective, are encouraged to learn more balanced
and comprehensive representations of the data. In contrast, neural networks
with a similar architecture trained for classification tend to prioritize
learning specific patterns in the data, often focusing on easy-to-learn
components. To support these theoretical insights, we conduct several
experiments on both synthetic and real-world datasets, which empirically
validate our findings and highlight the distinct feature learning dynamics in
diffusion models compared to classification.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 00:41:25 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 02:13:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Han",
"Andi",
""
],
[
"Huang",
"Wei",
""
],
[
"Cao",
"Yuan",
""
],
[
"Zou",
"Difan",
""
]
]
| TITLE: On the Feature Learning in Diffusion Models
ABSTRACT: The predominant success of diffusion models in generative modeling has
spurred significant interest in understanding their theoretical foundations. In
this work, we propose a feature learning framework aimed at analyzing and
comparing the training dynamics of diffusion models with those of traditional
classification models. Our theoretical analysis demonstrates that diffusion
models, due to the denoising objective, are encouraged to learn more balanced
and comprehensive representations of the data. In contrast, neural networks
with a similar architecture trained for classification tend to prioritize
learning specific patterns in the data, often focusing on easy-to-learn
components. To support these theoretical insights, we conduct several
experiments on both synthetic and real-world datasets, which empirically
validate our findings and highlight the distinct feature learning dynamics in
diffusion models compared to classification.
| no_new_dataset | 0.953923 |
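A hedged sketch of the two objectives being contrasted above, on a shared toy backbone: a denoising (diffusion-style) loss versus a classification loss. All shapes, the one-step "noise schedule", and layer names are illustrative assumptions, not the paper's setup.

```python
# Contrast the training signals: reconstruct injected noise vs. predict labels.
import torch
import torch.nn.functional as F

backbone = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU())
denoise_head = torch.nn.Linear(32, 16)   # predicts the injected noise
class_head = torch.nn.Linear(32, 3)      # predicts one of 3 classes

x = torch.randn(8, 16)
y = torch.randint(0, 3, (8,))

# Diffusion-style objective: recover the noise added to the input.
noise = torch.randn_like(x)
x_noisy = 0.9 * x + 0.1 * noise          # toy single-step schedule
denoise_loss = F.mse_loss(denoise_head(backbone(x_noisy)), noise)

# Classification objective: predict labels from clean inputs.
class_loss = F.cross_entropy(class_head(backbone(x)), y)
print(float(denoise_loss), float(class_loss))
```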
2412.02799 | Jinyang Liu | Jinyang Liu, Pu Jiao, Kai Zhao, Xin Liang, Sheng Di, Franck Cappello | QPET: A Versatile and Portable Quantity-of-Interest-preservation
Framework for Error-Bounded Lossy Compression | null | null | null | null | cs.DB cs.CE cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Error-bounded lossy compression has been widely adopted in many scientific
domains because it can address the challenges in storing, transferring, and
analyzing unprecedented amounts of scientific data. Although error-bounded
lossy compression offers general data distortion control by enforcing strict
error bounds on raw data, it may fail to meet the quality requirements on the
results of downstream analysis, a.k.a. Quantities of Interest (QoIs), derived
from raw data. This may lead to uncertainties and even misinterpretations in
scientific discoveries, significantly limiting the use of lossy compression in
practice. In this paper, we propose QPET, a novel, versatile, and portable
framework for QoI-preserving error-bounded lossy compression, which overcomes
the challenges of modeling diverse QoIs by leveraging numerical strategies.
QPET features (1) high portability to multiple existing lossy compressors, (2)
versatile preservation of most differentiable univariate and multivariate QoIs,
and (3) significant compression improvements in QoI-preservation tasks.
Experiments with six real-world datasets demonstrate that integrating QPET into
state-of-the-art error-bounded lossy compressors can gain 2x to 10x compression
speedups over existing QoI-preserving error-bounded lossy compression solutions,
up to 1000% compression ratio improvements over general-purpose compressors, and
up to 133% compression ratio improvements over existing QoI-integrated scientific
compressors.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 20:01:23 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 00:49:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Jinyang",
""
],
[
"Jiao",
"Pu",
""
],
[
"Zhao",
"Kai",
""
],
[
"Liang",
"Xin",
""
],
[
"Di",
"Sheng",
""
],
[
"Cappello",
"Franck",
""
]
]
| TITLE: QPET: A Versatile and Portable Quantity-of-Interest-preservation
Framework for Error-Bounded Lossy Compression
ABSTRACT: Error-bounded lossy compression has been widely adopted in many scientific
domains because it can address the challenges in storing, transferring, and
analyzing unprecedented amounts of scientific data. Although error-bounded
lossy compression offers general data distortion control by enforcing strict
error bounds on raw data, it may fail to meet the quality requirements on the
results of downstream analysis, a.k.a. Quantities of Interest (QoIs), derived
from raw data. This may lead to uncertainties and even misinterpretations in
scientific discoveries, significantly limiting the use of lossy compression in
practice. In this paper, we propose QPET, a novel, versatile, and portable
framework for QoI-preserving error-bounded lossy compression, which overcomes
the challenges of modeling diverse QoIs by leveraging numerical strategies.
QPET features (1) high portability to multiple existing lossy compressors, (2)
versatile preservation of most differentiable univariate and multivariate QoIs,
and (3) significant compression improvements in QoI-preservation tasks.
Experiments with six real-world datasets demonstrate that integrating QPET into
state-of-the-art error-bounded lossy compressors can gain 2x to 10x compression
speedups over existing QoI-preserving error-bounded lossy compression solutions,
up to 1000% compression ratio improvements over general-purpose compressors, and
up to 133% compression ratio improvements over existing QoI-integrated scientific
compressors.
| no_new_dataset | 0.944638 |
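For flavour, an illustrative sketch of the basic setting above: pointwise error-bounded "compression" via uniform quantization, followed by a check on a derived quantity of interest. The quantizer, bound, and QoI here are placeholders; QPET's actual numerical strategies are far more involved.

```python
# Toy error-bounded quantizer with a QoI error check.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
eb = 1e-2                                        # absolute error bound on raw data

decoded = np.round(data / (2 * eb)) * (2 * eb)   # guarantees |data - decoded| <= eb
assert np.max(np.abs(data - decoded)) <= eb + 1e-12

qoi = lambda x: x ** 2                           # an example differentiable QoI
qoi_err = np.max(np.abs(qoi(data) - qoi(decoded)))
print(f"raw error <= {eb}, worst-case QoI error = {qoi_err:.4e}")
```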
2412.03173 | Saksham Sharma | Saksham Sharma, Akshit Raizada, Suresh Sundaram | IRisPath: Enhancing Costmap for Off-Road Navigation with Robust IR-RGB
Fusion for Improved Day and Night Traversability | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autonomous off-road navigation is required for applications in agriculture,
construction, search and rescue and defence. Traditional on-road autonomous
methods struggle with dynamic terrains, leading to poor vehicle control in
off-road conditions. Recent deep-learning models have used perception sensors
along with kinesthetic feedback for navigation on such terrains. However, this
approach has out-of-domain uncertainty. Factors such as changes in time of day and
weather impact the performance of the model. We propose a multi-modal fusion
network "IRisPath" capable of using Thermal and RGB images to provide
robustness against dynamic weather and light conditions. To aid further works
in this domain, we also open-source a day-night dataset with Thermal and RGB
images along with pseudo-labels for traversability. In order to co-register
inputs for the fusion model, we also develop a novel method for targetless extrinsic
calibration of Thermal, LiDAR and RGB cameras with translation accuracy of
+/-1.7 cm and rotation accuracy of +/-0.827 degrees.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2024 09:53:09 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 06:24:05 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sharma",
"Saksham",
""
],
[
"Raizada",
"Akshit",
""
],
[
"Sundaram",
"Suresh",
""
]
]
| TITLE: IRisPath: Enhancing Costmap for Off-Road Navigation with Robust IR-RGB
Fusion for Improved Day and Night Traversability
ABSTRACT: Autonomous off-road navigation is required for applications in agriculture,
construction, search and rescue and defence. Traditional on-road autonomous
methods struggle with dynamic terrains, leading to poor vehicle control in
off-road conditions. Recent deep-learning models have used perception sensors
along with kinesthetic feedback for navigation on such terrains. However, this
approach has out-of-domain uncertainty. Factors such as changes in time of day and
weather impact the performance of the model. We propose a multi-modal fusion
network "IRisPath" capable of using Thermal and RGB images to provide
robustness against dynamic weather and light conditions. To aid further works
in this domain, we also open-source a day-night dataset with Thermal and RGB
images along with pseudo-labels for traversability. In order to co-register
inputs for the fusion model, we also develop a novel method for targetless extrinsic
calibration of Thermal, LiDAR and RGB cameras with translation accuracy of
+/-1.7 cm and rotation accuracy of +/-0.827 degrees.
| new_dataset | 0.955277 |
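A minimal sketch of the kind of two-stream thermal + RGB fusion the abstract describes. The encoders, pooling, concatenation fusion, and all sizes are illustrative assumptions, not IRisPath's actual architecture.

```python
# Two parallel encoders (RGB, thermal) fused by channel concatenation.
import torch

rgb_encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(8),
)
ir_encoder = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(8),
)
head = torch.nn.Conv2d(32, 1, 1)          # per-cell traversability logit

rgb = torch.randn(2, 3, 64, 64)
thermal = torch.randn(2, 1, 64, 64)
fused = torch.cat([rgb_encoder(rgb), ir_encoder(thermal)], dim=1)
print(head(fused).shape)                   # torch.Size([2, 1, 8, 8])
```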
2412.04034 | John Cartlidge | Yunhua Pei, Jin Zheng, John Cartlidge | Dynamic Graph Representation with Contrastive Learning for Financial
Market Prediction: Integrating Temporal Evolution and Static Relations | 12 pages, 2 figures, author manuscript accepted for ICAART 2025
(International Conference on Agents and Artificial Intelligence) | 17th International Conference on Agents and Artificial
Intelligence (ICAART), Volume 2, Feb. 2025, pp. 298-309. (Best Paper Award) | 10.5220/0013154700003890 | null | cs.LG cs.NE q-fin.CP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal Graph Learning (TGL) is crucial for capturing the evolving nature of
stock markets. Traditional methods often ignore the interplay between dynamic
temporal changes and static relational structures between stocks. To address
this issue, we propose the Dynamic Graph Representation with Contrastive
Learning (DGRCL) framework, which integrates dynamic and static graph relations
to improve the accuracy of stock trend prediction. Our framework introduces two
key components: the Embedding Enhancement (EE) module and the Contrastive
Constrained Training (CCT) module. The EE module focuses on dynamically
capturing the temporal evolution of stock data, while the CCT module enforces
static constraints based on stock relations, refined within contrastive
learning. This dual-relation approach allows for a more comprehensive
understanding of stock market dynamics. Our experiments on two major U.S. stock
market datasets, NASDAQ and NYSE, demonstrate that DGRCL significantly
outperforms state-of-the-art TGL baselines. Ablation studies indicate the
importance of both modules. Overall, DGRCL not only enhances prediction ability
but also provides a robust framework for integrating temporal and relational
data in dynamic graphs. Code and data are available for public access.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 10:15:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pei",
"Yunhua",
""
],
[
"Zheng",
"Jin",
""
],
[
"Cartlidge",
"John",
""
]
]
| TITLE: Dynamic Graph Representation with Contrastive Learning for Financial
Market Prediction: Integrating Temporal Evolution and Static Relations
ABSTRACT: Temporal Graph Learning (TGL) is crucial for capturing the evolving nature of
stock markets. Traditional methods often ignore the interplay between dynamic
temporal changes and static relational structures between stocks. To address
this issue, we propose the Dynamic Graph Representation with Contrastive
Learning (DGRCL) framework, which integrates dynamic and static graph relations
to improve the accuracy of stock trend prediction. Our framework introduces two
key components: the Embedding Enhancement (EE) module and the Contrastive
Constrained Training (CCT) module. The EE module focuses on dynamically
capturing the temporal evolution of stock data, while the CCT module enforces
static constraints based on stock relations, refined within contrastive
learning. This dual-relation approach allows for a more comprehensive
understanding of stock market dynamics. Our experiments on two major U.S. stock
market datasets, NASDAQ and NYSE, demonstrate that DGRCL significantly
outperforms state-of-the-art TGL baselines. Ablation studies indicate the
importance of both modules. Overall, DGRCL not only enhances prediction ability
but also provides a robust framework for integrating temporal and relational
data in dynamic graphs. Code and data are available for public access.
| no_new_dataset | 0.943452 |
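An illustrative sketch of a contrastive constraint over stock embeddings of the flavour described above: stocks related in a static relation graph are treated as positives in an InfoNCE-style loss. The pairing rule, temperature, and dimensions are placeholders rather than DGRCL's exact formulation.

```python
# InfoNCE-style loss where each stock's positive is a statically related stock.
import torch
import torch.nn.functional as F

emb = F.normalize(torch.randn(6, 32), dim=-1)   # one embedding per stock
pos = torch.tensor([1, 0, 3, 2, 5, 4])          # each stock's related partner
tau = 0.1

logits = emb @ emb.T / tau
logits.fill_diagonal_(float("-inf"))            # exclude self-similarity
loss = F.cross_entropy(logits, pos)             # positives: related stocks
print(float(loss))
```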
2412.05707 | Youssef Shoeb | Youssef Shoeb, Nazir Nayal, Azarm Nowzad, Fatma G\"uney, Hanno
Gottschalk | Segment-Level Road Obstacle Detection Using Visual Foundation Model
Priors and Likelihood Ratios | 10 pages, 4 figures, and 1 table, to be published in VISAPP 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Detecting road obstacles is essential for autonomous vehicles to navigate
dynamic and complex traffic environments safely. Current road obstacle
detection methods typically assign a score to each pixel and apply a threshold
to generate final predictions. However, selecting an appropriate threshold is
challenging, and the per-pixel classification approach often leads to
fragmented predictions with numerous false positives. In this work, we propose
a novel method that leverages segment-level features from visual foundation
models and likelihood ratios to predict road obstacles directly. By focusing on
segments rather than individual pixels, our approach enhances detection
accuracy, reduces false positives, and offers increased robustness to scene
variability. We benchmark our approach against existing methods on the
RoadObstacle and LostAndFound datasets, achieving state-of-the-art performance
without needing a predefined threshold.
| [
{
"version": "v1",
"created": "Sat, 7 Dec 2024 17:40:20 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Jan 2025 00:37:27 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 01:46:15 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shoeb",
"Youssef",
""
],
[
"Nayal",
"Nazir",
""
],
[
"Nowzad",
"Azarm",
""
],
[
"Güney",
"Fatma",
""
],
[
"Gottschalk",
"Hanno",
""
]
]
| TITLE: Segment-Level Road Obstacle Detection Using Visual Foundation Model
Priors and Likelihood Ratios
ABSTRACT: Detecting road obstacles is essential for autonomous vehicles to navigate
dynamic and complex traffic environments safely. Current road obstacle
detection methods typically assign a score to each pixel and apply a threshold
to generate final predictions. However, selecting an appropriate threshold is
challenging, and the per-pixel classification approach often leads to
fragmented predictions with numerous false positives. In this work, we propose
a novel method that leverages segment-level features from visual foundation
models and likelihood ratios to predict road obstacles directly. By focusing on
segments rather than individual pixels, our approach enhances detection
accuracy, reduces false positives, and offers increased robustness to scene
variability. We benchmark our approach against existing methods on the
RoadObstacle and LostAndFound datasets, achieving state-of-the-art performance
without needing a predefined threshold.
| no_new_dataset | 0.959762 |
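A hedged sketch of segment-level likelihood-ratio scoring, assuming per-segment feature vectors have already been pooled from a foundation model. The Gaussian density models, feature dimension, and synthetic data are illustrative stand-ins, not the paper's estimators.

```python
# Score each segment by the log-likelihood ratio of obstacle vs. road densities.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
road_feats = rng.normal(0.0, 1.0, size=(500, 8))     # "road" segment features
obst_feats = rng.normal(2.0, 1.0, size=(40, 8))      # "obstacle" segment features

p_road = multivariate_normal(road_feats.mean(0), np.cov(road_feats.T))
p_obst = multivariate_normal(obst_feats.mean(0), np.cov(obst_feats.T))

def obstacle_score(segment_feat):
    """Log-likelihood ratio: positive favours the obstacle hypothesis."""
    return p_obst.logpdf(segment_feat) - p_road.logpdf(segment_feat)

print(obstacle_score(obst_feats[0]), obstacle_score(road_feats[0]))
```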
2412.07236 | Jiquan Wang | Jiquan Wang, Sha Zhao, Zhiling Luo, Yangxuan Zhou, Haiteng Jiang,
Shijian Li, Tao Li, Gang Pan | CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding | Accepted by The Thirteenth International Conference on Learning
Representations (ICLR 2025) | null | null | null | eess.SP cs.AI cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electroencephalography (EEG) is a non-invasive technique to measure and
record brain electrical activity, widely used in various BCI and healthcare
applications. Early EEG decoding methods rely on supervised learning, limited
by specific tasks and datasets, hindering model performance and
generalizability. With the success of large language models, there is a growing
body of studies focusing on EEG foundation models. However, these studies still
leave challenges: Firstly, most existing EEG foundation models employ a full
EEG modeling strategy. It models the spatial and temporal dependencies between
all EEG patches together, but ignores that the spatial and temporal
dependencies are heterogeneous due to the unique structural characteristics of
EEG signals. Secondly, existing EEG foundation models have limited
generalizability on a wide range of downstream BCI tasks due to varying formats
of EEG data, making adaptation challenging. To address these challenges, we
propose a novel foundation model called CBraMod. Specifically, we devise a
criss-cross transformer as the backbone to thoroughly leverage the structural
characteristics of EEG signals, which can model spatial and temporal
dependencies separately through two parallel attention mechanisms. And we
utilize an asymmetric conditional positional encoding scheme which can encode
positional information of EEG patches and be easily adapted to the EEG with
diverse formats. CBraMod is pre-trained on a very large corpus of EEG through
patch-based masked EEG reconstruction. We evaluate CBraMod on up to 10
downstream BCI tasks (12 public datasets). CBraMod achieves the
state-of-the-art performance across the wide range of tasks, proving its strong
capability and generalizability. The source code is publicly available at
https://github.com/wjq-learning/CBraMod.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 06:56:36 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Feb 2025 04:05:43 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Feb 2025 12:48:15 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Mar 2025 03:13:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Jiquan",
""
],
[
"Zhao",
"Sha",
""
],
[
"Luo",
"Zhiling",
""
],
[
"Zhou",
"Yangxuan",
""
],
[
"Jiang",
"Haiteng",
""
],
[
"Li",
"Shijian",
""
],
[
"Li",
"Tao",
""
],
[
"Pan",
"Gang",
""
]
]
| TITLE: CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding
ABSTRACT: Electroencephalography (EEG) is a non-invasive technique to measure and
record brain electrical activity, widely used in various BCI and healthcare
applications. Early EEG decoding methods rely on supervised learning, limited
by specific tasks and datasets, hindering model performance and
generalizability. With the success of large language models, there is a growing
body of studies focusing on EEG foundation models. However, these studies still
leave challenges: Firstly, most existing EEG foundation models employ a full
EEG modeling strategy. It models the spatial and temporal dependencies between
all EEG patches together, but ignores that the spatial and temporal
dependencies are heterogeneous due to the unique structural characteristics of
EEG signals. Secondly, existing EEG foundation models have limited
generalizability on a wide range of downstream BCI tasks due to varying formats
of EEG data, making adaptation challenging. To address these challenges, we
propose a novel foundation model called CBraMod. Specifically, we devise a
criss-cross transformer as the backbone to thoroughly leverage the structural
characteristics of EEG signals, which can model spatial and temporal
dependencies separately through two parallel attention mechanisms. And we
utilize an asymmetric conditional positional encoding scheme which can encode
positional information of EEG patches and be easily adapted to the EEG with
diverse formats. CBraMod is pre-trained on a very large corpus of EEG through
patch-based masked EEG reconstruction. We evaluate CBraMod on up to 10
downstream BCI tasks (12 public datasets). CBraMod achieves the
state-of-the-art performance across the wide range of tasks, proving its strong
capability and generalizability. The source code is publicly available at
https://github.com/wjq-learning/CBraMod.
| no_new_dataset | 0.943243 |
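A minimal sketch of the "criss-cross" idea above: spatial and temporal dependencies of EEG patch embeddings are attended in two parallel branches rather than jointly. The layer sizes, head count, and additive merge are illustrative simplifications, not CBraMod's exact design.

```python
# Parallel spatial (across channels) and temporal (across patches) attention.
import torch

B, C, T, D = 2, 19, 10, 64          # batch, channels, time patches, embed dim
x = torch.randn(B, C, T, D)

spatial_attn = torch.nn.MultiheadAttention(D, 4, batch_first=True)
temporal_attn = torch.nn.MultiheadAttention(D, 4, batch_first=True)

# Spatial branch: attend across channels for each time patch.
xs = x.permute(0, 2, 1, 3).reshape(B * T, C, D)
s, _ = spatial_attn(xs, xs, xs)
s = s.reshape(B, T, C, D).permute(0, 2, 1, 3)

# Temporal branch: attend across time patches for each channel.
xt = x.reshape(B * C, T, D)
t, _ = temporal_attn(xt, xt, xt)
t = t.reshape(B, C, T, D)

out = s + t                          # simple parallel merge
print(out.shape)                     # torch.Size([2, 19, 10, 64])
```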
2412.07407 | Billy Joe Franks | Billy Joe Franks, Moshe Eliasof, Semih Cant\"urk, Guy Wolf,
Carola-Bibiane Sch\"onlieb, Sophie Fellenz, Marius Kloft | Towards Graph Foundation Models: A Study on the Generalization of
Positional and Structural Encodings | Published at TMLR (https://openreview.net/forum?id=mSoDRZXsqj) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in integrating positional and structural encodings (PSEs)
into graph neural networks (GNNs) have significantly enhanced their performance
across various graph learning tasks. However, the general applicability of
these encodings and their potential to serve as foundational representations
for graphs remain uncertain. This paper investigates the fine-tuning
efficiency, scalability with sample size, and generalization capability of
learnable PSEs across diverse graph datasets. Specifically, we evaluate their
potential as universal pre-trained models that can be easily adapted to new
tasks with minimal fine-tuning and limited data. Furthermore, we assess the
expressivity of the learned representations, particularly, when used to augment
downstream GNNs. We demonstrate through extensive benchmarking and empirical
analysis that PSEs generally enhance downstream models. However, some datasets
may require specific PSE-augmentations to achieve optimal performance.
Nevertheless, our findings highlight their significant potential to become
integral components of future graph foundation models. We provide new insights
into the strengths and limitations of PSEs, contributing to the broader
discourse on foundation models in graph learning.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 10:58:47 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 08:05:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Franks",
"Billy Joe",
""
],
[
"Eliasof",
"Moshe",
""
],
[
"Cantürk",
"Semih",
""
],
[
"Wolf",
"Guy",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
],
[
"Fellenz",
"Sophie",
""
],
[
"Kloft",
"Marius",
""
]
]
| TITLE: Towards Graph Foundation Models: A Study on the Generalization of
Positional and Structural Encodings
ABSTRACT: Recent advances in integrating positional and structural encodings (PSEs)
into graph neural networks (GNNs) have significantly enhanced their performance
across various graph learning tasks. However, the general applicability of
these encodings and their potential to serve as foundational representations
for graphs remain uncertain. This paper investigates the fine-tuning
efficiency, scalability with sample size, and generalization capability of
learnable PSEs across diverse graph datasets. Specifically, we evaluate their
potential as universal pre-trained models that can be easily adapted to new
tasks with minimal fine-tuning and limited data. Furthermore, we assess the
expressivity of the learned representations, particularly, when used to augment
downstream GNNs. We demonstrate through extensive benchmarking and empirical
analysis that PSEs generally enhance downstream models. However, some datasets
may require specific PSE-augmentations to achieve optimal performance.
Nevertheless, our findings highlight their significant potential to become
integral components of future graph foundation models. We provide new insights
into the strengths and limitations of PSEs, contributing to the broader
discourse on foundation models in graph learning.
| no_new_dataset | 0.949012 |
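For context on what a PSE is, a minimal sketch of one classical example: Laplacian eigenvector positional encodings for a small graph. The learnable PSEs studied above are more elaborate; the toy adjacency matrix and encoding dimension here are assumptions.

```python
# Laplacian positional encoding: low-frequency eigenvectors of L = D - A.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy adjacency matrix
deg = A.sum(1)
L = np.diag(deg) - A                        # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
k = 2                                       # encoding dimension
pe = eigvecs[:, 1 : k + 1]                  # skip the constant eigenvector
print(pe)                                   # one k-dim encoding row per node
```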
2412.07487 | Yik Lung Pang | Yik Lung Pang, Alessio Xompero, Changjae Oh, Andrea Cavallaro | Stereo Hand-Object Reconstruction for Human-to-Robot Handover | 8 pages, 9 figures, 1 table | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Jointly estimating hand and object shape facilitates the grasping task in
human-to-robot handovers. However, relying on hand-crafted prior knowledge
about the geometric structure of the object fails when generalising to unseen
objects, and depth sensors fail to detect transparent objects such as drinking
glasses. In this work, we propose a stereo-based method for hand-object
reconstruction that combines single-view reconstructions probabilistically to
form a coherent stereo reconstruction. We learn 3D shape priors from a large
synthetic hand-object dataset to ensure that our method is generalisable, and
use RGB inputs to better capture transparent objects. We show that our method
reduces the object Chamfer distance compared to existing RGB based hand-object
reconstruction methods on single view and stereo settings. We process the
reconstructed hand-object shape with a projection-based outlier removal step
and use the output to guide a human-to-robot handover pipeline with
wide-baseline stereo RGB cameras. Our hand-object reconstruction enables a
robot to successfully receive a diverse range of household objects from the
human.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 13:12:32 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:04:23 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pang",
"Yik Lung",
""
],
[
"Xompero",
"Alessio",
""
],
[
"Oh",
"Changjae",
""
],
[
"Cavallaro",
"Andrea",
""
]
]
| TITLE: Stereo Hand-Object Reconstruction for Human-to-Robot Handover
ABSTRACT: Jointly estimating hand and object shape facilitates the grasping task in
human-to-robot handovers. However, relying on hand-crafted prior knowledge
about the geometric structure of the object fails when generalising to unseen
objects, and depth sensors fail to detect transparent objects such as drinking
glasses. In this work, we propose a stereo-based method for hand-object
reconstruction that combines single-view reconstructions probabilistically to
form a coherent stereo reconstruction. We learn 3D shape priors from a large
synthetic hand-object dataset to ensure that our method is generalisable, and
use RGB inputs to better capture transparent objects. We show that our method
reduces the object Chamfer distance compared to existing RGB based hand-object
reconstruction methods on single view and stereo settings. We process the
reconstructed hand-object shape with a projection-based outlier removal step
and use the output to guide a human-to-robot handover pipeline with
wide-baseline stereo RGB cameras. Our hand-object reconstruction enables a
robot to successfully receive a diverse range of household objects from the
human.
| no_new_dataset | 0.947866 |
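An illustrative sketch of probabilistically combining two single-view estimates, assuming each view yields a per-point 3-D mean with an isotropic variance. The paper's stereo fusion is richer; this only shows the basic Gaussian product rule.

```python
# Precision-weighted fusion of two Gaussian point estimates.
import numpy as np

mu_left = np.array([0.10, 0.02, 0.55])   # view-1 point estimate (metres)
mu_right = np.array([0.12, 0.01, 0.60])  # view-2 point estimate
var_left, var_right = 0.02**2, 0.03**2   # per-view isotropic variances

w_l = 1.0 / var_left                     # precision weights
w_r = 1.0 / var_right
mu_fused = (w_l * mu_left + w_r * mu_right) / (w_l + w_r)
var_fused = 1.0 / (w_l + w_r)
print(mu_fused, var_fused)
```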
2412.09945 | Xinhao Zhong | Xinhao Zhong, Bin Chen, Hao Fang, Xulin Gu, Shu-Tao Xia, En-Hui Yang | Going Beyond Feature Similarity: Effective Dataset distillation based on
Class-aware Conditional Mutual Information | Accepted to ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dataset distillation (DD) aims to minimize the time and memory consumption
needed for training deep neural networks on large datasets, by creating a
smaller synthetic dataset that has similar performance to that of the full real
dataset. However, current dataset distillation methods often result in
synthetic datasets that are excessively difficult for networks to learn from,
due to the compression of a substantial amount of information from the original
data through metrics measuring feature similarity, e.g., distribution matching
(DM). In this work, we introduce conditional mutual information (CMI) to assess
the class-aware complexity of a dataset and propose a novel method by
minimizing CMI. Specifically, we minimize the distillation loss while
constraining the class-aware complexity of the synthetic dataset by minimizing
its empirical CMI from the feature space of pre-trained networks,
simultaneously. Conducting a thorough set of experiments, we show that our
method can serve as a general regularization method to existing DD methods and
improve the performance and training efficiency.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 08:10:47 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Feb 2025 13:50:09 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 13:24:41 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhong",
"Xinhao",
""
],
[
"Chen",
"Bin",
""
],
[
"Fang",
"Hao",
""
],
[
"Gu",
"Xulin",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Yang",
"En-Hui",
""
]
]
| TITLE: Going Beyond Feature Similarity: Effective Dataset distillation based on
Class-aware Conditional Mutual Information
ABSTRACT: Dataset distillation (DD) aims to minimize the time and memory consumption
needed for training deep neural networks on large datasets, by creating a
smaller synthetic dataset that has similar performance to that of the full real
dataset. However, current dataset distillation methods often result in
synthetic datasets that are excessively difficult for networks to learn from,
due to the compression of a substantial amount of information from the original
data through metrics measuring feature similarity, e.g., distribution matching
(DM). In this work, we introduce conditional mutual information (CMI) to assess
the class-aware complexity of a dataset and propose a novel method by
minimizing CMI. Specifically, we minimize the distillation loss while
constraining the class-aware complexity of the synthetic dataset by minimizing
its empirical CMI from the feature space of pre-trained networks,
simultaneously. Conducting a thorough set of experiments, we show that our
method can serve as a general regularization method to existing DD methods and
improve the performance and training efficiency.
| no_new_dataset | 0.946597 |
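A hedged sketch of an empirical class-conditional mutual-information proxy of the flavour discussed above: per class, MI between inputs and softmax outputs is estimated as the entropy of the mean prediction minus the mean prediction entropy, then averaged over classes. This illustrates the quantity's shape, not the paper's exact estimator.

```python
# Class-conditional MI proxy: mean over classes of H(E[p]) - E[H(p)].
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def class_conditional_mi(probs, labels, num_classes):
    per_class = []
    for c in range(num_classes):
        p = probs[labels == c]                       # predictions on class c
        mi = entropy(p.mean(0)) - entropy(p).mean()  # H(E[p]) - E[H(p)]
        per_class.append(mi)
    return float(np.mean(per_class))

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
labels = rng.integers(0, 5, size=100)
print(class_conditional_mi(probs, labels, 5))
```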
2412.12164 | Lingzhi Shen | Lingzhi Shen, Yunfei Long, Xiaohao Cai, Imran Razzak, Guanming Chen,
Kang Liu, and Shoaib Jameel | GAMED: Knowledge Adaptive Multi-Experts Decoupling for Multimodal Fake
News Detection | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Multimodal fake news detection often involves modelling heterogeneous data
sources, such as vision and language. Existing detection methods typically rely
on fusion effectiveness and cross-modal consistency to model the content,
making it difficult to understand how each modality affects prediction accuracy.
Additionally, these methods are primarily based on static feature modelling,
making it difficult to adapt to the dynamic changes and relationships between
different data modalities. This paper develops a significantly novel approach,
GAMED, for multimodal modelling, which focuses on generating distinctive and
discriminative features through modal decoupling to enhance cross-modal
synergies, thereby optimizing overall performance in the detection process.
GAMED leverages multiple parallel expert networks to refine features and
pre-embed semantic knowledge to improve the experts' ability in information
selection and viewpoint sharing. Subsequently, the feature distribution of each
modality is adaptively adjusted based on the respective experts' opinions.
GAMED also introduces a novel classification technique to dynamically manage
contributions from different modalities, while improving the explainability of
decisions. Experimental results on the Fakeddit and Yang datasets demonstrate
that GAMED performs better than recently developed state-of-the-art models. The
source code can be accessed at https://github.com/slz0925/GAMED.
| [
{
"version": "v1",
"created": "Wed, 11 Dec 2024 19:12:22 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 15:12:38 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Shen",
"Lingzhi",
""
],
[
"Long",
"Yunfei",
""
],
[
"Cai",
"Xiaohao",
""
],
[
"Razzak",
"Imran",
""
],
[
"Chen",
"Guanming",
""
],
[
"Liu",
"Kang",
""
],
[
"Jameel",
"Shoaib",
""
]
]
| TITLE: GAMED: Knowledge Adaptive Multi-Experts Decoupling for Multimodal Fake
News Detection
ABSTRACT: Multimodal fake news detection often involves modelling heterogeneous data
sources, such as vision and language. Existing detection methods typically rely
on fusion effectiveness and cross-modal consistency to model the content,
making it difficult to understand how each modality affects prediction accuracy.
Additionally, these methods are primarily based on static feature modelling,
making it difficult to adapt to the dynamic changes and relationships between
different data modalities. This paper develops a significantly novel approach,
GAMED, for multimodal modelling, which focuses on generating distinctive and
discriminative features through modal decoupling to enhance cross-modal
synergies, thereby optimizing overall performance in the detection process.
GAMED leverages multiple parallel expert networks to refine features and
pre-embed semantic knowledge to improve the experts' ability in information
selection and viewpoint sharing. Subsequently, the feature distribution of each
modality is adaptively adjusted based on the respective experts' opinions.
GAMED also introduces a novel classification technique to dynamically manage
contributions from different modalities, while improving the explainability of
decisions. Experimental results on the Fakeddit and Yang datasets demonstrate
that GAMED performs better than recently developed state-of-the-art models. The
source code can be accessed at https://github.com/slz0925/GAMED.
| no_new_dataset | 0.94428 |
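A minimal sketch of the decoupled multi-expert idea described above: parallel experts refine a modality's features and a gate dynamically weights their contributions. The expert count, dimensions, and softmax gating form are illustrative assumptions, not GAMED's architecture.

```python
# Gated mixture of parallel experts over one modality's features.
import torch

D, n_experts = 64, 3
experts = torch.nn.ModuleList(
    [torch.nn.Linear(D, D) for _ in range(n_experts)]
)
gate = torch.nn.Linear(D, n_experts)

feat = torch.randn(4, D)                               # one modality's features
weights = torch.softmax(gate(feat), dim=-1)            # per-sample expert weights
outs = torch.stack([e(feat) for e in experts], dim=1)  # (B, E, D)
fused = (weights.unsqueeze(-1) * outs).sum(1)          # weighted expert mixture
print(fused.shape)                                     # torch.Size([4, 64])
```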
2412.12540 | Austin Cheng | Austin Cheng, Alston Lo, Kin Long Kelvin Lee, Santiago Miret, Al\'an
Aspuru-Guzik | Stiefel Flow Matching for Moment-Constrained Structure Elucidation | ICLR 2025 | null | null | null | cs.LG physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular structure elucidation is a fundamental step in understanding
chemical phenomena, with applications in identifying molecules in natural
products, lab syntheses, forensic samples, and the interstellar medium. We
consider the task of predicting a molecule's all-atom 3D structure given only
its molecular formula and moments of inertia, motivated by the ability of
rotational spectroscopy to measure these moments. While existing generative
models can conditionally sample 3D structures with approximately correct
moments, this soft conditioning fails to leverage the many digits of precision
afforded by experimental rotational spectroscopy. To address this, we first
show that the space of $n$-atom point clouds with a fixed set of moments of
inertia is embedded in the Stiefel manifold $\mathrm{St}(n, 4)$. We then
propose Stiefel Flow Matching as a generative model for elucidating 3D
structure under exact moment constraints. Additionally, we learn simpler and
shorter flows by finding approximate solutions for equivariant optimal
transport on the Stiefel manifold. Empirically, enforcing exact moment
constraints allows Stiefel Flow Matching to achieve higher success rates and
faster sampling than Euclidean diffusion models, even on high-dimensional
manifolds corresponding to large molecules in the GEOM dataset.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 05:07:10 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 05:26:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cheng",
"Austin",
""
],
[
"Lo",
"Alston",
""
],
[
"Lee",
"Kin Long Kelvin",
""
],
[
"Miret",
"Santiago",
""
],
[
"Aspuru-Guzik",
"Alán",
""
]
]
| TITLE: Stiefel Flow Matching for Moment-Constrained Structure Elucidation
ABSTRACT: Molecular structure elucidation is a fundamental step in understanding
chemical phenomena, with applications in identifying molecules in natural
products, lab syntheses, forensic samples, and the interstellar medium. We
consider the task of predicting a molecule's all-atom 3D structure given only
its molecular formula and moments of inertia, motivated by the ability of
rotational spectroscopy to measure these moments. While existing generative
models can conditionally sample 3D structures with approximately correct
moments, this soft conditioning fails to leverage the many digits of precision
afforded by experimental rotational spectroscopy. To address this, we first
show that the space of $n$-atom point clouds with a fixed set of moments of
inertia is embedded in the Stiefel manifold $\mathrm{St}(n, 4)$. We then
propose Stiefel Flow Matching as a generative model for elucidating 3D
structure under exact moment constraints. Additionally, we learn simpler and
shorter flows by finding approximate solutions for equivariant optimal
transport on the Stiefel manifold. Empirically, enforcing exact moment
constraints allows Stiefel Flow Matching to achieve higher success rates and
faster sampling than Euclidean diffusion models, even on high-dimensional
manifolds corresponding to large molecules in the GEOM dataset.
| no_new_dataset | 0.954351 |
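A minimal sketch of the manifold constraint above: projecting an arbitrary n x 4 matrix onto the Stiefel manifold St(n, 4) via the polar factor of its thin SVD. Flows on the manifold need this kind of projection step; the full flow matching model is of course much more than this, and the 12-atom size is an assumption.

```python
# Nearest Stiefel point: polar factor Q = U V^T from the thin SVD.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(12, 4))            # e.g. a 12-atom candidate point cloud

U, _, Vt = np.linalg.svd(M, full_matrices=False)
Q = U @ Vt                              # closest matrix with Q.T @ Q = I_4
print(np.allclose(Q.T @ Q, np.eye(4)))  # True
```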
2412.15598 | Zheng Chen | Zheng Chen, Yasuko Matsubara, Yasushi Sakurai, Jimeng Sun | Long-Term EEG Partitioning for Seizure Onset Detection | Accepted at AAAI 2025 | null | null | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep learning models have recently shown great success in classifying
epileptic patients using EEG recordings. Unfortunately, classification-based
methods lack a sound mechanism to detect the onset of seizure events. In this
work, we propose a two-stage framework, SODor, that explicitly models seizure
onset through a novel task formulation of subsequence clustering. Given an EEG
sequence, the framework first learns a set of second-level embeddings with
label supervision. It then employs model-based clustering to explicitly capture
long-term temporal dependencies in EEG sequences and identify meaningful
subsequences. Epochs within a subsequence share a common cluster assignment
(normal or seizure), with cluster or state transitions representing successful
onset detections. Extensive experiments on three datasets demonstrate that our
method can correct misclassifications, achieving 5\%-11\% classification
improvements over other baselines and accurately detecting seizure onsets.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2024 06:42:58 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:39:17 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Zheng",
""
],
[
"Matsubara",
"Yasuko",
""
],
[
"Sakurai",
"Yasushi",
""
],
[
"Sun",
"Jimeng",
""
]
]
| TITLE: Long-Term EEG Partitioning for Seizure Onset Detection
ABSTRACT: Deep learning models have recently shown great success in classifying
epileptic patients using EEG recordings. Unfortunately, classification-based
methods lack a sound mechanism to detect the onset of seizure events. In this
work, we propose a two-stage framework, SODor, that explicitly models seizure
onset through a novel task formulation of subsequence clustering. Given an EEG
sequence, the framework first learns a set of second-level embeddings with
label supervision. It then employs model-based clustering to explicitly capture
long-term temporal dependencies in EEG sequences and identify meaningful
subsequences. Epochs within a subsequence share a common cluster assignment
(normal or seizure), with cluster or state transitions representing successful
onset detections. Extensive experiments on three datasets demonstrate that our
method can correct misclassifications, achieving 5\%-11\% classification
improvements over other baselines and accurately detecting seizure onsets.
| no_new_dataset | 0.949389 |
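A sketch of the second stage's flavour: model-based clustering of epoch embeddings followed by reading onsets off cluster transitions. SODor's clustering models long-term temporal dependencies explicitly; this toy version clusters epochs independently, and the embeddings are synthetic placeholders.

```python
# Cluster epoch embeddings, then detect transitions into the seizure state.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(80, 4))    # embeddings, normal epochs
seizure = rng.normal(3.0, 1.0, size=(20, 4))   # embeddings, seizure epochs
seq = np.vstack([normal, seizure])             # one recording, in order

states = GaussianMixture(n_components=2, random_state=0).fit_predict(seq)
seizure_state = states[-1]                     # assume recording ends in seizure
onsets = np.where((states[1:] == seizure_state) &
                  (states[:-1] != seizure_state))[0] + 1
print(onsets)                                  # detected onset epoch indices
```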
2412.16667 | Nicolas E. Diaz Ferreyra PhD | Nicol\'as E. D\'iaz Ferreyra, Sirine Khelifi, Nalin Arachchilage and
Riccardo Scandariato | The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on
Privacy as Code | Accepted at the 18th International Conference on Cooperative and
Human Aspects of Software Engineering (CHASE '25) | null | null | null | cs.SE cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | Privacy and security are central to the design of information systems endowed
with sound data protection and cyber resilience capabilities. Still, developers
often struggle to incorporate these properties into software projects as they
either lack proper cybersecurity training or do not consider them a priority.
Prior work has tried to support privacy and security engineering activities
through threat modeling methods for scrutinizing flaws in system architectures.
Moreover, several techniques for the automatic identification of
vulnerabilities and the generation of secure code implementations have also
been proposed in the current literature. Conversely, such as-code approaches
seem under-investigated in the privacy domain, with little work elaborating on
(i) the automatic detection of privacy properties in source code or (ii) the
generation of privacy-friendly code. In this work, we seek to characterize the
current research landscape of Privacy as Code (PaC) methods and tools by
conducting a rapid literature review. Our results suggest that PaC research is
in its infancy, especially regarding the performance evaluation and usability
assessment of the existing approaches. Based on these findings, we outline and
discuss prospective research directions concerning empirical studies with
software practitioners, the curation of benchmark datasets, and the role of
generative AI technologies.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 15:30:17 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 17:05:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ferreyra",
"Nicolás E. Díaz",
""
],
[
"Khelifi",
"Sirine",
""
],
[
"Arachchilage",
"Nalin",
""
],
[
"Scandariato",
"Riccardo",
""
]
]
| TITLE: The Good, the Bad, and the (Un)Usable: A Rapid Literature Review on
Privacy as Code
ABSTRACT: Privacy and security are central to the design of information systems endowed
with sound data protection and cyber resilience capabilities. Still, developers
often struggle to incorporate these properties into software projects as they
either lack proper cybersecurity training or do not consider them a priority.
Prior work has tried to support privacy and security engineering activities
through threat modeling methods for scrutinizing flaws in system architectures.
Moreover, several techniques for the automatic identification of
vulnerabilities and the generation of secure code implementations have also
been proposed in the current literature. Conversely, such as-code approaches
seem under-investigated in the privacy domain, with little work elaborating on
(i) the automatic detection of privacy properties in source code or (ii) the
generation of privacy-friendly code. In this work, we seek to characterize the
current research landscape of Privacy as Code (PaC) methods and tools by
conducting a rapid literature review. Our results suggest that PaC research is
in its infancy, especially regarding the performance evaluation and usability
assessment of the existing approaches. Based on these findings, we outline and
discuss prospective research directions concerning empirical studies with
software practitioners, the curation of benchmark datasets, and the role of
generative AI technologies.
| no_new_dataset | 0.941654 |
2412.17242 | Yule Liu | Yule Liu, Zhiyuan Zhong, Yifan Liao, Zhen Sun, Jingyi Zheng, Jiaheng
Wei, Qingyuan Gong, Fenghua Tong, Yang Chen, Yang Zhang, Xinlei He | On the Generalization and Adaptation Ability of Machine-Generated Text
Detectors in Academic Writing | null | null | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rising popularity of large language models (LLMs) has raised concerns
about machine-generated text (MGT), particularly in academic settings, where
issues like plagiarism and misinformation are prevalent. As a result,
developing a highly generalizable and adaptable MGT detection system has become
an urgent priority. Given that LLMs are most commonly misused in academic
writing, this work investigates the generalization and adaptation capabilities
of MGT detectors in three key aspects specific to academic writing: First, we
construct MGT-Academic, a large-scale dataset comprising over 336M tokens and
749K samples. MGT-Academic focuses on academic writing, featuring human-written
texts (HWTs) and MGTs across STEM, Humanities, and Social Sciences, paired with
an extensible code framework for efficient benchmarking. Second, we benchmark
the performance of various detectors for binary classification and attribution
tasks in both in-domain and cross-domain settings. This benchmark reveals the
often-overlooked challenges of attribution tasks. Third, we introduce a novel
attribution task where models have to adapt to new classes over time without
(or with very limited) access to prior training data in both few-shot and
many-shot scenarios. We implement eight different adapting techniques to
improve the performance and highlight the inherent complexity of the task. Our
findings provide insights into the generalization and adaptation ability of MGT
detectors across diverse scenarios and lay the foundation for building robust,
adaptive detection systems. The code framework is available at
https://github.com/Y-L-LIU/MGTBench-2.0.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 03:30:34 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 08:13:52 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 03:08:43 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Yule",
""
],
[
"Zhong",
"Zhiyuan",
""
],
[
"Liao",
"Yifan",
""
],
[
"Sun",
"Zhen",
""
],
[
"Zheng",
"Jingyi",
""
],
[
"Wei",
"Jiaheng",
""
],
[
"Gong",
"Qingyuan",
""
],
[
"Tong",
"Fenghua",
""
],
[
"Chen",
"Yang",
""
],
[
"Zhang",
"Yang",
""
],
[
"He",
"Xinlei",
""
]
]
| TITLE: On the Generalization and Adaptation Ability of Machine-Generated Text
Detectors in Academic Writing
ABSTRACT: The rising popularity of large language models (LLMs) has raised concerns
about machine-generated text (MGT), particularly in academic settings, where
issues like plagiarism and misinformation are prevalent. As a result,
developing a highly generalizable and adaptable MGT detection system has become
an urgent priority. Given that LLMs are most commonly misused in academic
writing, this work investigates the generalization and adaptation capabilities
of MGT detectors in three key aspects specific to academic writing: First, we
construct MGT-Academic, a large-scale dataset comprising over 336M tokens and
749K samples. MGT-Academic focuses on academic writing, featuring human-written
texts (HWTs) and MGTs across STEM, Humanities, and Social Sciences, paired with
an extensible code framework for efficient benchmarking. Second, we benchmark
the performance of various detectors for binary classification and attribution
tasks in both in-domain and cross-domain settings. This benchmark reveals the
often-overlooked challenges of attribution tasks. Third, we introduce a novel
attribution task where models have to adapt to new classes over time without
(or with very limited) access to prior training data in both few-shot and
many-shot scenarios. We implement eight different adapting techniques to
improve the performance and highlight the inherent complexity of the task. Our
findings provide insights into the generalization and adaptation ability of MGT
detectors across diverse scenarios and lay the foundation for building robust,
adaptive detection systems. The code framework is available at
https://github.com/Y-L-LIU/MGTBench-2.0.
| new_dataset | 0.96225 |
2412.18407 | Siavash Ameli | Siavash Ameli, Siyuan Zhuang, Ion Stoica, Michael W. Mahoney | A Statistical Framework for Ranking LLM-Based Chatbots | null | The Thirteenth International Conference on Learning
Representations (2025) | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have transformed natural language processing,
with frameworks like Chatbot Arena providing pioneering platforms for
evaluating these models. By facilitating millions of pairwise comparisons based
on human judgments, Chatbot Arena has become a cornerstone in LLM evaluation,
offering rich datasets for ranking models in open-ended conversational tasks.
Building upon this foundation, we propose a statistical framework that
incorporates key advancements to address specific challenges in pairwise
comparison analysis. First, we introduce a factored tie model that enhances the
ability to handle ties -- an integral aspect of human-judged comparisons --
significantly improving the model's fit to observed data. Second, we extend the
framework to model covariance between competitors, enabling deeper insights
into performance relationships and facilitating intuitive groupings into
performance tiers. Third, we resolve optimization challenges arising from
parameter non-uniqueness by introducing novel constraints, ensuring stable and
interpretable parameter estimation. Through rigorous evaluation and extensive
experimentation, our framework demonstrates substantial improvements over
existing methods in modeling pairwise comparison data. To support
reproducibility and practical adoption, we release leaderbot, an open-source
Python package implementing our models and analyses.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2024 12:54:19 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ameli",
"Siavash",
""
],
[
"Zhuang",
"Siyuan",
""
],
[
"Stoica",
"Ion",
""
],
[
"Mahoney",
"Michael W.",
""
]
]
| TITLE: A Statistical Framework for Ranking LLM-Based Chatbots
ABSTRACT: Large language models (LLMs) have transformed natural language processing,
with frameworks like Chatbot Arena providing pioneering platforms for
evaluating these models. By facilitating millions of pairwise comparisons based
on human judgments, Chatbot Arena has become a cornerstone in LLM evaluation,
offering rich datasets for ranking models in open-ended conversational tasks.
Building upon this foundation, we propose a statistical framework that
incorporates key advancements to address specific challenges in pairwise
comparison analysis. First, we introduce a factored tie model that enhances the
ability to handle ties -- an integral aspect of human-judged comparisons --
significantly improving the model's fit to observed data. Second, we extend the
framework to model covariance between competitors, enabling deeper insights
into performance relationships and facilitating intuitive groupings into
performance tiers. Third, we resolve optimization challenges arising from
parameter non-uniqueness by introducing novel constraints, ensuring stable and
interpretable parameter estimation. Through rigorous evaluation and extensive
experimentation, our framework demonstrates substantial improvements over
existing methods in modeling pairwise comparison data. To support
reproducibility and practical adoption, we release leaderbot, an open-source
Python package implementing our models and analyses.
| no_new_dataset | 0.938576 |
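A hedged sketch of ranking from pairwise outcomes with an explicit tie probability, using the classical Davidson model as a stand-in for the paper's factored tie model. The games list below is a synthetic placeholder.

```python
# Davidson tie model: P(tie) proportional to nu * sqrt(p_i * p_j).
import numpy as np
from scipy.optimize import minimize

# (i, j, outcome): 0 = i wins, 1 = j wins, 2 = tie
games = [(0, 1, 0), (0, 1, 2), (1, 2, 0), (0, 2, 0), (1, 2, 2), (2, 0, 1)]
n = 3

def nll(theta):
    scores, log_nu = theta[:n], theta[n]
    p, nu = np.exp(scores), np.exp(log_nu)
    total = 0.0
    for i, j, out in games:
        tie = nu * np.sqrt(p[i] * p[j])
        z = p[i] + p[j] + tie
        probs = (p[i] / z, p[j] / z, tie / z)
        total -= np.log(probs[out])
    return total

res = minimize(nll, np.zeros(n + 1), method="L-BFGS-B")
print("strengths:", np.exp(res.x[:n]), "tie param:", np.exp(res.x[n]))
```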
2412.19495 | Ioannis Bilionis | Ioannis Bilionis, Ricardo C. Berrios, Luis Fernandez-Luque, Carlos
Castillo | Disparate Model Performance and Stability in Machine Learning Clinical
Support for Diabetes and Heart Diseases | This paper will be presented in American Medical Informatics
Association (AMIA) Informatics Summit Conference 2025 (Pittsburgh, PA). 10
pages, 2 figures, 5 tables | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Machine Learning (ML) algorithms are vital for supporting clinical
decision-making in biomedical informatics. However, their predictive
performance can vary across demographic groups, often due to the
underrepresentation of historically marginalized populations in training
datasets. The investigation reveals widespread sex- and age-related inequities
in chronic disease datasets and their derived ML models. Thus, a novel
analytical framework is introduced, combining systematic arbitrariness with
traditional metrics like accuracy and data complexity. The analysis of data
from over 25,000 individuals with chronic diseases revealed mild sex-related
disparities, favoring predictive accuracy for males, and significant
age-related differences, with better accuracy for younger patients. Notably,
older patients showed inconsistent predictive accuracy across seven datasets,
linked to higher data complexity and lower model performance. This highlights
that representativeness in training data alone does not guarantee equitable
outcomes, and model arbitrariness must be addressed before deploying models in
clinical settings.
| [
{
"version": "v1",
"created": "Fri, 27 Dec 2024 07:31:14 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 16:05:29 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bilionis",
"Ioannis",
""
],
[
"Berrios",
"Ricardo C.",
""
],
[
"Fernandez-Luque",
"Luis",
""
],
[
"Castillo",
"Carlos",
""
]
]
| TITLE: Disparate Model Performance and Stability in Machine Learning Clinical
Support for Diabetes and Heart Diseases
ABSTRACT: Machine Learning (ML) algorithms are vital for supporting clinical
decision-making in biomedical informatics. However, their predictive
performance can vary across demographic groups, often due to the
underrepresentation of historically marginalized populations in training
datasets. The investigation reveals widespread sex- and age-related inequities
in chronic disease datasets and their derived ML models. Thus, a novel
analytical framework is introduced, combining systematic arbitrariness with
traditional metrics like accuracy and data complexity. The analysis of data
from over 25,000 individuals with chronic diseases revealed mild sex-related
disparities, favoring predictive accuracy for males, and significant
age-related differences, with better accuracy for younger patients. Notably,
older patients showed inconsistent predictive accuracy across seven datasets,
linked to higher data complexity and lower model performance. This highlights
that representativeness in training data alone does not guarantee equitable
outcomes, and model arbitrariness must be addressed before deploying models in
clinical settings.
| no_new_dataset | 0.948346 |
2501.01791 | Nikolaos Stathoulopoulos | Nikolaos Stathoulopoulos, Christoforos Kanellakis and George
Nikolakopoulos | Balancing Accuracy and Efficiency for Large-Scale SLAM: A Minimal Subset
Approach for Scalable Loop Closures | 8 pages, 7 Figures, 2 Tables. Submitted | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Typical LiDAR SLAM architectures feature a front-end for odometry estimation
and a back-end for refining and optimizing the trajectory and map, commonly
through loop closures. However, loop closure detection in large-scale missions
presents significant computational challenges due to the need to identify,
verify, and process numerous candidate pairs for pose graph optimization.
Keyframe sampling bridges the front-end and back-end by selecting frames for
storing and processing during global optimization. This article proposes an
online keyframe sampling approach that constructs the pose graph using the most
impactful keyframes for loop closure. We introduce the Minimal Subset Approach
(MSA), which optimizes two key objectives: redundancy minimization and
information preservation, implemented within a sliding window framework. By
operating in the feature space rather than 3-D space, MSA efficiently reduces
redundant keyframes while retaining essential information. In sum, evaluations
on diverse public datasets show that the proposed approach outperforms naive
methods in reducing false positive rates in place recognition, while delivering
superior ATE and RPE in metric localization, without the need for manual
parameter tuning. Additionally, MSA demonstrates efficiency and scalability by
reducing memory usage and computational overhead during loop closure detection
and pose graph optimization.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2025 12:48:01 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 14:17:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Stathoulopoulos",
"Nikolaos",
""
],
[
"Kanellakis",
"Christoforos",
""
],
[
"Nikolakopoulos",
"George",
""
]
]
| TITLE: Balancing Accuracy and Efficiency for Large-Scale SLAM: A Minimal Subset
Approach for Scalable Loop Closures
ABSTRACT: Typical LiDAR SLAM architectures feature a front-end for odometry estimation
and a back-end for refining and optimizing the trajectory and map, commonly
through loop closures. However, loop closure detection in large-scale missions
presents significant computational challenges due to the need to identify,
verify, and process numerous candidate pairs for pose graph optimization.
Keyframe sampling bridges the front-end and back-end by selecting frames for
storing and processing during global optimization. This article proposes an
online keyframe sampling approach that constructs the pose graph using the most
impactful keyframes for loop closure. We introduce the Minimal Subset Approach
(MSA), which optimizes two key objectives: redundancy minimization and
information preservation, implemented within a sliding window framework. By
operating in the feature space rather than 3-D space, MSA efficiently reduces
redundant keyframes while retaining essential information. In sum, evaluations
on diverse public datasets show that the proposed approach outperforms naive
methods in reducing false positive rates in place recognition, while delivering
superior ATE and RPE in metric localization, without the need for manual
parameter tuning. Additionally, MSA demonstrates efficiency and scalability by
reducing memory usage and computational overhead during loop closure detection
and pose graph optimization.
| no_new_dataset | 0.948632 |
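The MSA record above frames keyframe selection as a trade-off between redundancy minimization and information preservation, computed in feature space over a sliding window. A minimal Python sketch of that trade-off follows; the greedy rule, the cosine-similarity proxy, and all names are our own illustration, not the authors' implementation.

```python
import numpy as np

def select_keyframes(feats, k, lam=0.5):
    """Greedy minimal-subset selection over a sliding window of frame
    descriptors `feats` (n x d): trade off information preservation
    (similarity to the rest of the window) against redundancy
    (similarity to frames already kept). Illustrative only."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T                      # cosine similarities
    selected, candidates = [], list(range(len(feats)))
    while len(selected) < k and candidates:
        best, best_score = None, -np.inf
        for c in candidates:
            info = sim[c, candidates].mean()   # how well c represents the window
            red = sim[c, selected].max() if selected else 0.0
            score = info - lam * red           # preserve info, penalize redundancy
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
        candidates.remove(best)
    return sorted(selected)

window = np.random.randn(30, 128)              # 30 frames, 128-D descriptors
print(select_keyframes(window, k=5))
```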
2501.03836 | Runci Bai | Runci Bai, Guibao Xu and Yanze Shi | SCC-YOLO: An Improved Object Detector for Assisting in Brain Tumor
Diagnosis | null | null | null | null | eess.IV cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain tumors can lead to neurological dysfunction, cognitive and
psychological changes, increased intracranial pressure, and seizures, posing
significant risks to health. The You Only Look Once (YOLO) series has shown
superior accuracy in medical imaging object detection. This paper presents a
novel SCC-YOLO architecture that integrates the SCConv module into YOLOv9. The
SCConv module optimizes convolutional efficiency by reducing spatial and
channel redundancy, enhancing image feature learning. We examine the effects of
different attention mechanisms with YOLOv9 for brain tumor detection using the
Br35H dataset and our custom dataset (Brain_Tumor_Dataset). Results indicate
that SCC-YOLO improved mAP50 by 0.3% on the Br35H dataset and by 0.5% on our
custom dataset compared to YOLOv9. SCC-YOLO achieves state-of-the-art
performance in brain tumor detection.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2025 14:45:39 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jan 2025 14:10:16 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 06:41:56 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Bai",
"Runci",
""
],
[
"Xu",
"Guibao",
""
],
[
"Shi",
"Yanze",
""
]
]
| TITLE: SCC-YOLO: An Improved Object Detector for Assisting in Brain Tumor
Diagnosis
ABSTRACT: Brain tumors can lead to neurological dysfunction, cognitive and
psychological changes, increased intracranial pressure, and seizures, posing
significant risks to health. The You Only Look Once (YOLO) series has shown
superior accuracy in medical imaging object detection. This paper presents a
novel SCC-YOLO architecture that integrates the SCConv module into YOLOv9. The
SCConv module optimizes convolutional efficiency by reducing spatial and
channel redundancy, enhancing image feature learning. We examine the effects of
different attention mechanisms with YOLOv9 for brain tumor detection using the
Br35H dataset and our custom dataset (Brain_Tumor_Dataset). Results indicate
that SCC-YOLO improved mAP50 by 0.3% on the Br35H dataset and by 0.5% on our
custom dataset compared to YOLOv9. SCC-YOLO achieves state-of-the-art
performance in brain tumor detection.
| new_dataset | 0.96157 |
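The SCC-YOLO record credits SCConv with cutting spatial and channel redundancy inside YOLOv9. The block below is only a rough stand-in for that idea, a squeeze-style channel gate ahead of a 3x3 convolution; it is not the SCConv module itself.

```python
import torch
import torch.nn as nn

class ChannelGatedConv(nn.Module):
    """Illustrative stand-in for a redundancy-reducing conv block:
    a squeeze-style gate down-weights uninformative channels before
    the 3x3 convolution. This is NOT the actual SCConv design."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # squeeze spatial dims
            nn.Conv2d(c_in, c_in, 1),
            nn.Sigmoid(),                      # per-channel importance in (0,1)
        )
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)

    def forward(self, x):
        return self.conv(x * self.gate(x))     # suppress redundant channels

x = torch.randn(2, 64, 40, 40)
print(ChannelGatedConv(64, 128)(x).shape)      # torch.Size([2, 128, 40, 40])
```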
2501.04690 | Md Nadim | Md Nadim, Mohammad Hassan, Ashis Kumar Mandal, Chanchal K. Roy, Banani
Roy, Kevin A. Schneider | Comparative Analysis of Quantum and Classical Support Vector Classifiers
for Software Bug Prediction: An Exploratory Study | Accepted for publication in the Springer Journal: Quantum Machine
Intelligence (https://link.springer.com/journal/42484) | null | 10.1007/s42484-025-00236-w | null | cs.SE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Purpose: Quantum computing promises to transform problem-solving across
various domains with rapid and practical solutions. Within Software Evolution
and Maintenance, Quantum Machine Learning (QML) remains mostly an underexplored
domain, particularly in addressing challenges such as detecting buggy software
commits from code repositories. Methods: In this study, we investigate the
practical application of Quantum Support Vector Classifiers (QSVC) for
detecting buggy software commits across 14 open-source software projects with
diverse dataset sizes encompassing 30,924 data instances. We compare the QML
algorithm PQSVC (Pegasos QSVC) and QSVC against the classical Support Vector
Classifier (SVC). Our technique addresses large datasets in QSVC algorithms by
dividing them into smaller subsets. We propose and evaluate an aggregation
method to combine predictions from these models to detect the entire test
dataset. We also introduce an incremental testing methodology to overcome the
difficulties of quantum feature mapping during the testing approach. Results:
The study shows the effectiveness of QSVC and PQSVC in detecting buggy software
commits. The aggregation technique successfully combines predictions from
smaller data subsets, enhancing the overall detection accuracy for the entire
test dataset. The incremental testing methodology effectively manages the
challenges associated with quantum feature mapping during the testing process.
Conclusion: We contribute to the advancement of QML algorithms in defect
prediction, unveiling the potential for further research in this domain. The
specific scenario of the Short-Term Activity Frame (STAF) highlights the early
detection of buggy software commits during the initial developmental phases of
software systems, particularly when dataset sizes remain insufficient to train
machine learning models.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2025 18:53:50 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Nadim",
"Md",
""
],
[
"Hassan",
"Mohammad",
""
],
[
"Mandal",
"Ashis Kumar",
""
],
[
"Roy",
"Chanchal K.",
""
],
[
"Roy",
"Banani",
""
],
[
"Schneider",
"Kevin A.",
""
]
]
| TITLE: Comparative Analysis of Quantum and Classical Support Vector Classifiers
for Software Bug Prediction: An Exploratory Study
ABSTRACT: Purpose: Quantum computing promises to transform problem-solving across
various domains with rapid and practical solutions. Within Software Evolution
and Maintenance, Quantum Machine Learning (QML) remains mostly an underexplored
domain, particularly in addressing challenges such as detecting buggy software
commits from code repositories. Methods: In this study, we investigate the
practical application of Quantum Support Vector Classifiers (QSVC) for
detecting buggy software commits across 14 open-source software projects with
diverse dataset sizes encompassing 30,924 data instances. We compare the QML
algorithm PQSVC (Pegasos QSVC) and QSVC against the classical Support Vector
Classifier (SVC). Our technique addresses large datasets in QSVC algorithms by
dividing them into smaller subsets. We propose and evaluate an aggregation
method to combine predictions from these models to detect the entire test
dataset. We also introduce an incremental testing methodology to overcome the
difficulties of quantum feature mapping during the testing approach. Results:
The study shows the effectiveness of QSVC and PQSVC in detecting buggy software
commits. The aggregation technique successfully combines predictions from
smaller data subsets, enhancing the overall detection accuracy for the entire
test dataset. The incremental testing methodology effectively manages the
challenges associated with quantum feature mapping during the testing process.
Conclusion: We contribute to the advancement of QML algorithms in defect
prediction, unveiling the potential for further research in this domain. The
specific scenario of the Short-Term Activity Frame (STAF) highlights the early
detection of buggy software commits during the initial developmental phases of
software systems, particularly when dataset sizes remain insufficient to train
machine learning models.
| no_new_dataset | 0.946151 |
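The QSVC record above handles large datasets by training one classifier per data subset and aggregating their predictions over the full test set. A classical sketch of that pipeline follows, with scikit-learn's SVC standing in for the quantum kernel and a majority-vote rule assumed, since the abstract does not specify the aggregation.

```python
import numpy as np
from sklearn.svm import SVC

def train_subset_models(X, y, n_subsets=4, seed=0):
    """Split a large training set into smaller subsets (as the paper does
    to fit QSVC size limits) and train one classifier per subset."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return [SVC(kernel="rbf").fit(X[part], y[part])
            for part in np.array_split(idx, n_subsets)]

def aggregate_predict(models, X_test):
    """Majority vote across subset models (our assumed aggregation rule)."""
    votes = np.stack([m.predict(X_test) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

X = np.random.randn(400, 8)
y = (X[:, 0] > 0).astype(int)                  # toy buggy/clean labels
models = train_subset_models(X, y)
print(aggregate_predict(models, X[:10]))
```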
2501.04974 | Benjamin Reichman | Benjamin Reichman, Xiaofan Yu, Lanxiang Hu, Jack Truxal, Atishay Jain,
Rushil Chandrupatla, Tajana \v{S}imuni\'c Rosing, Larry Heck | SensorQA: A Question Answering Benchmark for Daily-Life Monitoring | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the rapid growth in sensor data, effectively interpreting and
interfacing with these data in a human-understandable way has become crucial.
While existing research primarily focuses on learning classification models,
fewer studies have explored how end users can actively extract useful insights
from sensor data, often hindered by the lack of a proper dataset. To address
this gap, we introduce SensorQA, the first human-created question-answering
(QA) dataset for long-term time-series sensor data for daily life monitoring.
SensorQA is created by human workers and includes 5.6K diverse and practical
queries that reflect genuine human interests, paired with accurate answers
derived from sensor data. We further establish benchmarks for state-of-the-art
AI models on this dataset and evaluate their performance on typical edge
devices. Our results reveal a gap between current models and optimal QA
performance and efficiency, highlighting the need for new contributions. The
dataset and code are available at:
https://github.com/benjamin-reichman/SensorQA.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2025 05:06:44 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jan 2025 05:15:34 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 17:03:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Reichman",
"Benjamin",
""
],
[
"Yu",
"Xiaofan",
""
],
[
"Hu",
"Lanxiang",
""
],
[
"Truxal",
"Jack",
""
],
[
"Jain",
"Atishay",
""
],
[
"Chandrupatla",
"Rushil",
""
],
[
"Rosing",
"Tajana Šimunić",
""
],
[
"Heck",
"Larry",
""
]
]
| TITLE: SensorQA: A Question Answering Benchmark for Daily-Life Monitoring
ABSTRACT: With the rapid growth in sensor data, effectively interpreting and
interfacing with these data in a human-understandable way has become crucial.
While existing research primarily focuses on learning classification models,
fewer studies have explored how end users can actively extract useful insights
from sensor data, often hindered by the lack of a proper dataset. To address
this gap, we introduce SensorQA, the first human-created question-answering
(QA) dataset for long-term time-series sensor data for daily life monitoring.
SensorQA is created by human workers and includes 5.6K diverse and practical
queries that reflect genuine human interests, paired with accurate answers
derived from sensor data. We further establish benchmarks for state-of-the-art
AI models on this dataset and evaluate their performance on typical edge
devices. Our results reveal a gap between current models and optimal QA
performance and efficiency, highlighting the need for new contributions. The
dataset and code are available at:
https://github.com/benjamin-reichman/SensorQA.
| new_dataset | 0.964522 |
2501.07596 | Zheqi Lv | Zheqi Lv, Keming Ye, Zishu Wei, Qi Tian, Shengyu Zhang, Wenqiao Zhang,
Wenjie Wang, Kun Kuang, Tat-Seng Chua, Fei Wu | Optimize Incompatible Parameters through Compatibility-aware Knowledge
Integration | Published on AAAI'25(Oral): The Annual AAAI Conference on Artificial
Intelligence | null | null | null | cs.LG cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have become foundational to advancements in multiple
domains, including recommendation systems, natural language processing, and so
on. Despite their successes, these models often contain incompatible parameters
that can be underutilized or detrimental to model performance, particularly
when faced with specific, varying data distributions. Existing research excels
in removing such parameters or merging the outputs of multiple different
pretrained models. However, the former focuses on efficiency rather than
performance, while the latter requires several times more computing and storage
resources to support inference. In this paper, we set the goal to explicitly
improve these incompatible parameters by leveraging the complementary strengths
of different models, thereby directly enhancing the models without any
additional parameters. Specifically, we propose Compatibility-aware Knowledge
Integration (CKI), which consists of two components: Parameter Compatibility
Assessment, which evaluates the knowledge content of multiple models, and
Parameter Splicing, which integrates that knowledge into one model. The
integrated model can be used directly for inference or for further fine-tuning.
We conduct extensive experiments on various datasets for recommendation and
language tasks, and the results show that Compatibility-aware Knowledge
Integration can effectively optimize incompatible parameters under multiple
tasks and settings to break through the training limit of the original model
without increasing the inference cost.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 01:42:43 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 13:27:01 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lv",
"Zheqi",
""
],
[
"Ye",
"Keming",
""
],
[
"Wei",
"Zishu",
""
],
[
"Tian",
"Qi",
""
],
[
"Zhang",
"Shengyu",
""
],
[
"Zhang",
"Wenqiao",
""
],
[
"Wang",
"Wenjie",
""
],
[
"Kuang",
"Kun",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Wu",
"Fei",
""
]
]
| TITLE: Optimize Incompatible Parameters through Compatibility-aware Knowledge
Integration
ABSTRACT: Deep neural networks have become foundational to advancements in multiple
domains, including recommendation systems, natural language processing, and so
on. Despite their successes, these models often contain incompatible parameters
that can be underutilized or detrimental to model performance, particularly
when faced with specific, varying data distributions. Existing research excels
in removing such parameters or merging the outputs of multiple different
pretrained models. However, the former focuses on efficiency rather than
performance, while the latter requires several times more computing and storage
resources to support inference. In this paper, we set the goal to explicitly
improve these incompatible parameters by leveraging the complementary strengths
of different models, thereby directly enhancing the models without any
additional parameters. Specifically, we propose Compatibility-aware Knowledge
Integration (CKI), which consists of two components: Parameter Compatibility
Assessment, which evaluates the knowledge content of multiple models, and
Parameter Splicing, which integrates that knowledge into one model. The
integrated model can be used directly for inference or for further fine-tuning.
We conduct extensive experiments on various datasets for recommendation and
language tasks, and the results show that Compatibility-aware Knowledge
Integration can effectively optimize incompatible parameters under multiple
tasks and settings to break through the training limit of the original model
without increasing the inference cost.
| no_new_dataset | 0.944228 |
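The CKI record above assesses per-parameter knowledge content and splices parameters from several models into one without adding parameters. The sketch below uses parameter magnitude as a placeholder compatibility score; the paper's actual assessment rule is not given in the abstract.

```python
import torch

def splice_parameters(model_a, model_b, score_fn=torch.abs):
    """Sketch of compatibility-aware splicing: for each shared tensor,
    keep the element from whichever model scores higher under a
    per-parameter 'knowledge content' proxy. The proxy (|w|) is our
    placeholder, not the paper's assessment rule."""
    merged = {}
    sa, sb = model_a.state_dict(), model_b.state_dict()
    for name in sa:
        a, b = sa[name].float(), sb[name].float()
        mask = score_fn(a) >= score_fn(b)       # compatibility assessment
        merged[name] = torch.where(mask, a, b)  # parameter splicing
    return merged

net_a = torch.nn.Linear(4, 2)
net_b = torch.nn.Linear(4, 2)
net_a.load_state_dict(splice_parameters(net_a, net_b))  # same size, no new params
```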
2501.09555 | Tingxuan Chen | Tingxuan Chen, Kun Yuan, Vinkle Srivastav, Nassir Navab, Nicolas Padoy | Text-driven Adaptation of Foundation Models for Few-shot Surgical
Workflow Analysis | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Surgical workflow analysis is crucial for improving surgical
efficiency and safety. However, previous studies rely heavily on large-scale
annotated datasets, posing challenges in cost, scalability, and reliance on
expert annotations. To address this, we propose Surg-FTDA (Few-shot Text-driven
Adaptation), designed to handle various surgical workflow analysis tasks with
minimal paired image-label data.
Methods: Our approach has two key components. First, Few-shot selection-based
modality alignment selects a small subset of images and aligns their embeddings
with text embeddings from the downstream task, bridging the modality gap.
Second, Text-driven adaptation leverages only text data to train a decoder,
eliminating the need for paired image-text data. This decoder is then applied
to aligned image embeddings, enabling image-related tasks without explicit
image-text pairs.
Results: We evaluate our approach on generative tasks (image captioning) and
discriminative tasks (triplet recognition and phase recognition). Results show
that Surg-FTDA outperforms baselines and generalizes well across downstream
tasks.
Conclusion: We propose a text-driven adaptation approach that mitigates the
modality gap and handles multiple downstream tasks in surgical workflow
analysis, with minimal reliance on large annotated datasets. The code and
dataset will be released at https://github.com/CAMMA-public/Surg-FTDA
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 14:18:06 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2025 16:28:21 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 13:05:35 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Tingxuan",
""
],
[
"Yuan",
"Kun",
""
],
[
"Srivastav",
"Vinkle",
""
],
[
"Navab",
"Nassir",
""
],
[
"Padoy",
"Nicolas",
""
]
]
| TITLE: Text-driven Adaptation of Foundation Models for Few-shot Surgical
Workflow Analysis
ABSTRACT: Purpose: Surgical workflow analysis is crucial for improving surgical
efficiency and safety. However, previous studies rely heavily on large-scale
annotated datasets, posing challenges in cost, scalability, and reliance on
expert annotations. To address this, we propose Surg-FTDA (Few-shot Text-driven
Adaptation), designed to handle various surgical workflow analysis tasks with
minimal paired image-label data.
Methods: Our approach has two key components. First, Few-shot selection-based
modality alignment selects a small subset of images and aligns their embeddings
with text embeddings from the downstream task, bridging the modality gap.
Second, Text-driven adaptation leverages only text data to train a decoder,
eliminating the need for paired image-text data. This decoder is then applied
to aligned image embeddings, enabling image-related tasks without explicit
image-text pairs.
Results: We evaluate our approach on generative tasks (image captioning) and
discriminative tasks (triplet recognition and phase recognition). Results show
that Surg-FTDA outperforms baselines and generalizes well across downstream
tasks.
Conclusion: We propose a text-driven adaptation approach that mitigates the
modality gap and handles multiple downstream tasks in surgical workflow
analysis, with minimal reliance on large annotated datasets. The code and
dataset will be released at https://github.com/CAMMA-public/Surg-FTDA
| no_new_dataset | 0.950273 |
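The Surg-FTDA record above aligns a small set of image embeddings with text embeddings so a text-trained decoder can consume images. A minimal sketch of one way that alignment could be fit, assuming a simple least-squares linear map (the paper's selection and alignment details are richer):

```python
import numpy as np

def fit_alignment(img_emb, txt_emb):
    """Few-shot modality alignment sketch: fit a linear map W so that
    image_embedding @ W approximates the paired text embedding, using
    only a small labeled subset. Least squares is our assumption."""
    W, *_ = np.linalg.lstsq(img_emb, txt_emb, rcond=None)
    return W

few_img = np.random.randn(32, 512)             # 32 selected images
few_txt = np.random.randn(32, 512)             # their paired text embeddings
W = fit_alignment(few_img, few_txt)
aligned = np.random.randn(512) @ W             # new image, mapped to text space
print(aligned.shape)                           # (512,): feed the text-trained decoder
```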
2501.09695 | Zhihe Yang | Zhihe Yang, Xufang Luo, Dongqi Han, Yunjian Xu, Dongsheng Li | Mitigating Hallucinations in Large Vision-Language Models via DPO:
On-Policy Data Hold the Key | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hallucination remains a major challenge for Large Vision-Language Models
(LVLMs). Direct Preference Optimization (DPO) has gained increasing attention
as a simple solution to hallucination issues. It directly learns from
constructed preference pairs that reflect the severity of hallucinations in
responses to the same prompt and image. Nonetheless, different data
construction methods in existing works bring notable performance variations. We
identify a crucial factor here: outcomes are largely contingent on whether the
constructed data aligns on-policy w.r.t. the initial (reference) policy of DPO.
Theoretical analysis suggests that learning from off-policy data is impeded by
the presence of KL-divergence between the updated policy and the reference
policy. From the perspective of dataset distribution, we systematically
summarize the inherent flaws in existing algorithms that employ DPO to address
hallucination issues. To alleviate the problems, we propose On-Policy Alignment
(OPA)-DPO framework, which uniquely leverages expert feedback to correct
hallucinated responses and aligns both the original and expert-revised
responses in an on-policy manner. Notably, with only 4.8k data, OPA-DPO
achieves an additional reduction in the hallucination rate of LLaVA-1.5-7B:
13.26% on the AMBER benchmark and 5.39% on the Object-Hal benchmark, compared
to the previous SOTA algorithm trained with 16k samples. Our implementation is
available at https://github.com/zhyang2226/OPA-DPO.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2025 17:48:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 14:48:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Zhihe",
""
],
[
"Luo",
"Xufang",
""
],
[
"Han",
"Dongqi",
""
],
[
"Xu",
"Yunjian",
""
],
[
"Li",
"Dongsheng",
""
]
]
| TITLE: Mitigating Hallucinations in Large Vision-Language Models via DPO:
On-Policy Data Hold the Key
ABSTRACT: Hallucination remains a major challenge for Large Vision-Language Models
(LVLMs). Direct Preference Optimization (DPO) has gained increasing attention
as a simple solution to hallucination issues. It directly learns from
constructed preference pairs that reflect the severity of hallucinations in
responses to the same prompt and image. Nonetheless, different data
construction methods in existing works bring notable performance variations. We
identify a crucial factor here: outcomes are largely contingent on whether the
constructed data aligns on-policy w.r.t. the initial (reference) policy of DPO.
Theoretical analysis suggests that learning from off-policy data is impeded by
the presence of KL-divergence between the updated policy and the reference
policy. From the perspective of dataset distribution, we systematically
summarize the inherent flaws in existing algorithms that employ DPO to address
hallucination issues. To alleviate the problems, we propose On-Policy Alignment
(OPA)-DPO framework, which uniquely leverages expert feedback to correct
hallucinated responses and aligns both the original and expert-revised
responses in an on-policy manner. Notably, with only 4.8k data, OPA-DPO
achieves an additional reduction in the hallucination rate of LLaVA-1.5-7B:
13.26% on the AMBER benchmark and 5.39% on the Object-Hal benchmark, compared
to the previous SOTA algorithm trained with 16k samples. Our implementation is
available at https://github.com/zhyang2226/OPA-DPO.
| no_new_dataset | 0.949763 |
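The OPA-DPO record builds on Direct Preference Optimization; the standard DPO objective it starts from can be written in a few lines of PyTorch. The on-policy pair construction and expert revision are the paper's contribution and are not sketched here.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective over (winner, loser) response pairs;
    inputs are summed token log-probs per response under the current
    policy (pi_*) and the frozen reference policy (ref_*)."""
    margin = (pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# toy log-probabilities for a batch of 3 preference pairs
pw = torch.tensor([-12.0, -9.5, -11.0])
pl = torch.tensor([-13.0, -9.0, -14.0])
rw = torch.tensor([-12.5, -9.8, -11.2])
rl = torch.tensor([-12.8, -9.1, -13.5])
print(dpo_loss(pw, pl, rw, rl))
```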
2501.10860 | Dina Pisarevskaya | Dina Pisarevskaya and Arkaitz Zubiaga | Zero-shot and Few-shot Learning with Instruction-following LLMs for
Claim Matching in Automated Fact-checking | Published at the 31st International Conference on Computational
Linguistics (COLING 2025). Compared to the conference version of the paper,
the dataset link is added here & 2 minor typos fixed | Proceedings of the 31st International Conference on Computational
Linguistics, 2025, pages 9721-9736, Abu Dhabi, UAE | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The claim matching (CM) task can benefit an automated fact-checking pipeline
by putting together claims that can be resolved with the same fact-check. In
this work, we are the first to explore zero-shot and few-shot learning
approaches to the task. We consider CM as a binary classification task and
experiment with a set of instruction-following large language models
(GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, and
Llama-3-8B-Instruct), investigating prompt templates. We introduce a new CM
dataset, ClaimMatch, which will be released upon acceptance. We put LLMs to the
test in the CM task and find that it can be tackled by leveraging more mature
yet similar tasks such as natural language inference or paraphrase detection.
We also propose a pipeline for CM, which we evaluate on texts of different
lengths.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2025 19:57:54 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 22:23:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pisarevskaya",
"Dina",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
]
| TITLE: Zero-shot and Few-shot Learning with Instruction-following LLMs for
Claim Matching in Automated Fact-checking
ABSTRACT: The claim matching (CM) task can benefit an automated fact-checking pipeline
by putting together claims that can be resolved with the same fact-check. In
this work, we are the first to explore zero-shot and few-shot learning
approaches to the task. We consider CM as a binary classification task and
experiment with a set of instruction-following large language models
(GPT-3.5-turbo, Gemini-1.5-flash, Mistral-7B-Instruct, and
Llama-3-8B-Instruct), investigating prompt templates. We introduce a new CM
dataset, ClaimMatch, which will be released upon acceptance. We put LLMs to the
test in the CM task and find that it can be tackled by leveraging more mature
yet similar tasks such as natural language inference or paraphrase detection.
We also propose a pipeline for CM, which we evaluate on texts of different
lengths.
| new_dataset | 0.954393 |
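The claim-matching record above casts CM as binary classification with instruction-following LLMs and investigates prompt templates. One illustrative zero-shot template is sketched below; the wording is ours, not a template from the paper.

```python
def claim_match_prompt(claim_a: str, claim_b: str) -> str:
    """Illustrative zero-shot prompt for binary claim matching; the paper
    experiments with several templates, and this phrasing is assumed."""
    return (
        "You are a fact-checking assistant. Two claims match if they can "
        "be resolved by the same fact-check.\n"
        f"Claim 1: {claim_a}\n"
        f"Claim 2: {claim_b}\n"
        "Answer with exactly one word, Yes or No."
    )

print(claim_match_prompt(
    "Vaccine X causes condition Y.",
    "Getting vaccine X leads to Y.",
))
```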
2501.11515 | Zixuan Chen | Zixuan Chen, Yujin Wang, Xin Cai, Zhiyuan You, Zheming Lu, Fan Zhang,
Shi Guo, Tianfan Xue | UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion | Accepted by CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Capturing high dynamic range (HDR) scenes is one of the most important issues
in camera design. The majority of cameras use an exposure fusion technique,
which fuses images captured at different exposure levels to increase dynamic
range. However, this approach can only handle images with a limited exposure
difference, normally 3-4 stops. When applied to very high-dynamic scenes where
a large exposure difference is required, this approach often fails due to
incorrect alignment, inconsistent lighting between inputs, or tone mapping
artifacts. In this work, we propose UltraFusion, the first exposure fusion
technique that can merge inputs with up to 9 stops of exposure difference. The
key idea is that we model exposure fusion as a guided inpainting problem,
where the under-exposed image is used as guidance to fill in the highlight
information missing from the over-exposed region. Using the under-exposed
image as soft guidance, instead of a hard constraint, makes our model robust
to potential alignment issues or lighting variations. Moreover, utilizing the
image prior of the generative model, our model also produces natural tone
mapping, even for very high-dynamic-range scenes. Our approach outperforms
HDR-Transformer on the latest HDR benchmarks. Furthermore, to test its
performance in ultra-high-dynamic-range scenes, we capture a new real-world
exposure fusion benchmark, the UltraFusion Dataset, with exposure differences
of up to 9 stops, and experiments show that UltraFusion generates high-quality
fusion results under various scenarios. An online demo is provided at
https://openimaginglab.github.io/UltraFusion/.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 14:45:07 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 09:44:03 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Zixuan",
""
],
[
"Wang",
"Yujin",
""
],
[
"Cai",
"Xin",
""
],
[
"You",
"Zhiyuan",
""
],
[
"Lu",
"Zheming",
""
],
[
"Zhang",
"Fan",
""
],
[
"Guo",
"Shi",
""
],
[
"Xue",
"Tianfan",
""
]
]
| TITLE: UltraFusion: Ultra High Dynamic Imaging using Exposure Fusion
ABSTRACT: Capturing high dynamic range (HDR) scenes is one of the most important issues
in camera design. The majority of cameras use an exposure fusion technique,
which fuses images captured at different exposure levels to increase dynamic
range. However, this approach can only handle images with a limited exposure
difference, normally 3-4 stops. When applied to very high-dynamic scenes where
a large exposure difference is required, this approach often fails due to
incorrect alignment, inconsistent lighting between inputs, or tone mapping
artifacts. In this work, we propose UltraFusion, the first exposure fusion
technique that can merge inputs with up to 9 stops of exposure difference. The
key idea is that we model exposure fusion as a guided inpainting problem,
where the under-exposed image is used as guidance to fill in the highlight
information missing from the over-exposed region. Using the under-exposed
image as soft guidance, instead of a hard constraint, makes our model robust
to potential alignment issues or lighting variations. Moreover, utilizing the
image prior of the generative model, our model also produces natural tone
mapping, even for very high-dynamic-range scenes. Our approach outperforms
HDR-Transformer on the latest HDR benchmarks. Furthermore, to test its
performance in ultra-high-dynamic-range scenes, we capture a new real-world
exposure fusion benchmark, the UltraFusion Dataset, with exposure differences
of up to 9 stops, and experiments show that UltraFusion generates high-quality
fusion results under various scenarios. An online demo is provided at
https://openimaginglab.github.io/UltraFusion/.
| no_new_dataset | 0.951729 |
2501.11972 | Parul Kumari | Nachiket Kapure, Harsh Joshi, Parul Kumari, Rajeshwari Mistri, Manasi
Mali | "FRAME: Forward Recursive Adaptive Model Extraction-A Technique for
Advance Feature Selection" | Updated version with refinements before JMLR submission. Improved
clarity, expanded literature review, refined methodology, updated
experimental results, and enhanced conclusion. FRAME's scalability, deep
learning integration, and real-world applications are further highlighted | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The challenges in feature selection, particularly in balancing model
accuracy, interpretability, and computational efficiency, remain a critical
issue in advancing machine learning methodologies. To address these
complexities, this study introduces a novel hybrid approach, the Forward
Recursive Adaptive Model Extraction Technique (FRAME), which combines Forward
Selection and Recursive Feature Elimination (RFE) to enhance feature selection
across diverse datasets. By combining the exploratory capabilities of Forward
Selection with the refinement strengths of RFE, FRAME systematically identifies
optimal feature subsets, striking a harmonious trade-off between
experimentation and precision. A comprehensive evaluation of FRAME is conducted
against traditional methods such as SelectKBest and Lasso Regression, using
high-dimensional, noisy, and heterogeneous datasets. The results demonstrate
that FRAME consistently delivers superior predictive performance based on
downstream machine learning evaluation metrics. It efficiently performs
dimensionality reduction with strong model performance, thus being especially
useful for applications that need interpretable and accurate predictions, e.g.,
biomedical diagnostics.
This research emphasizes the need to evaluate feature selection techniques on
diverse datasets to test their robustness and generalizability. The results
indicate that FRAME has great potential for further development, especially by
incorporating deep learning frameworks for adaptive and real-time feature
selection in dynamic settings. By advancing feature selection methodologies,
FRAME offers a practical and effective solution to improve machine learning
applications across multiple domains.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 08:34:10 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 15:45:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Kapure",
"Nachiket",
""
],
[
"Joshi",
"Harsh",
""
],
[
"Kumari",
"Parul",
""
],
[
"Mistri",
"Rajeshwari",
""
],
[
"Mali",
"Manasi",
""
]
]
| TITLE: "FRAME: Forward Recursive Adaptive Model Extraction-A Technique for
Advance Feature Selection"
ABSTRACT: The challenges in feature selection, particularly in balancing model
accuracy, interpretability, and computational efficiency, remain a critical
issue in advancing machine learning methodologies. To address these
complexities, this study introduces a novel hybrid approach, the Forward
Recursive Adaptive Model Extraction Technique (FRAME), which combines Forward
Selection and Recursive Feature Elimination (RFE) to enhance feature selection
across diverse datasets. By combining the exploratory capabilities of Forward
Selection with the refinement strengths of RFE, FRAME systematically identifies
optimal feature subsets, striking a harmonious trade-off between
experimentation and precision. A comprehensive evaluation of FRAME is conducted
against traditional methods such as SelectKBest and Lasso Regression, using
high-dimensional, noisy, and heterogeneous datasets. The results demonstrate
that FRAME consistently delivers superior predictive performance based on
downstream machine learning evaluation metrics. It efficiently performs
dimensionality reduction with strong model performance, thus being especially
useful for applications that need interpretable and accurate predictions, e.g.,
biomedical diagnostics.
This research emphasizes the need to evaluate feature selection techniques on
diverse datasets to test their robustness and generalizability. The results
indicate that FRAME has great potential for further development, especially by
incorporating deep learning frameworks for adaptive and real-time feature
selection in dynamic settings. By advancing feature selection methodologies,
FRAME offers a practical and effective solution to improve machine learning
applications across multiple domains.
| no_new_dataset | 0.943348 |
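The FRAME record above combines Forward Selection with Recursive Feature Elimination. A sketch of how the two stages could compose follows, using scikit-learn's SequentialFeatureSelector and RFE; the pool sizes and estimator are our guesses, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Stage 1: forward selection explores a generous candidate pool.
X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)
est = LogisticRegression(max_iter=1000)
forward = SequentialFeatureSelector(est, n_features_to_select=12,
                                    direction="forward").fit(X, y)
pool = np.flatnonzero(forward.get_support())

# Stage 2: RFE recursively prunes the pool to the final subset.
rfe = RFE(est, n_features_to_select=6).fit(X[:, pool], y)
final = pool[rfe.get_support()]
print("selected features:", final)
```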
2501.12296 | JiaCheng Zuo | Jiacheng Zuo, Haibo Hu, Zikang Zhou, Yufei Cui, Ziquan Liu, Jianping
Wang, Nan Guan, Jin Wang, Chun Jason Xue | RALAD: Bridging the Real-to-Sim Domain Gap in Autonomous Driving with
Retrieval-Augmented Learning | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the pursuit of robust autonomous driving systems, models trained on
real-world datasets often struggle to adapt to new environments, particularly
when confronted with corner cases such as extreme weather conditions.
Collecting these corner cases in the real world is non-trivial, which
necessitates the use of simulators for validation. However,the high
computational cost and the domain gap in data distribution have hindered the
seamless transition between real and simulated driving scenarios. To tackle
this challenge, we propose Retrieval-Augmented Learning for Autonomous Driving
(RALAD), a novel framework designed to bridge the real-to-sim gap at a low
cost. RALAD features three primary designs, including (1) domain adaptation via
an enhanced Optimal Transport (OT) method that accounts for both individual and
grouped image distances, (2) a simple and unified framework that can be applied
to various models, and (3) efficient fine-tuning techniques that freeze the
computationally expensive layers while maintaining robustness. Experimental
results demonstrate that RALAD compensates for the performance degradation in
simulated environments while maintaining accuracy in real-world scenarios
across three different models. Taking Cross View as an example, the mIOU and
mAP metrics in real-world scenarios remain stable before and after RALAD
fine-tuning, while in simulated environments,the mIOU and mAP metrics are
improved by 10.30% and 12.29%, respectively. Moreover, the re-training cost of
our approach is reduced by approximately 88.1%. Our code is available at
https://github.com/JiachengZuo/RALAD.git.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2025 17:03:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:45:12 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zuo",
"Jiacheng",
""
],
[
"Hu",
"Haibo",
""
],
[
"Zhou",
"Zikang",
""
],
[
"Cui",
"Yufei",
""
],
[
"Liu",
"Ziquan",
""
],
[
"Wang",
"Jianping",
""
],
[
"Guan",
"Nan",
""
],
[
"Wang",
"Jin",
""
],
[
"Xue",
"Chun Jason",
""
]
]
| TITLE: RALAD: Bridging the Real-to-Sim Domain Gap in Autonomous Driving with
Retrieval-Augmented Learning
ABSTRACT: In the pursuit of robust autonomous driving systems, models trained on
real-world datasets often struggle to adapt to new environments, particularly
when confronted with corner cases such as extreme weather conditions.
Collecting these corner cases in the real world is non-trivial, which
necessitates the use of simulators for validation. However, the high
computational cost and the domain gap in data distribution have hindered the
seamless transition between real and simulated driving scenarios. To tackle
this challenge, we propose Retrieval-Augmented Learning for Autonomous Driving
(RALAD), a novel framework designed to bridge the real-to-sim gap at a low
cost. RALAD features three primary designs, including (1) domain adaptation via
an enhanced Optimal Transport (OT) method that accounts for both individual and
grouped image distances, (2) a simple and unified framework that can be applied
to various models, and (3) efficient fine-tuning techniques that freeze the
computationally expensive layers while maintaining robustness. Experimental
results demonstrate that RALAD compensates for the performance degradation in
simulated environments while maintaining accuracy in real-world scenarios
across three different models. Taking Cross View as an example, the mIOU and
mAP metrics in real-world scenarios remain stable before and after RALAD
fine-tuning, while in simulated environments, the mIOU and mAP metrics are
improved by 10.30% and 12.29%, respectively. Moreover, the re-training cost of
our approach is reduced by approximately 88.1%. Our code is available at
https://github.com/JiachengZuo/RALAD.git.
| no_new_dataset | 0.948822 |
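The RALAD record above adapts domains with an enhanced Optimal Transport method over image distances. A minimal entropic-OT (Sinkhorn) sketch between real and simulated feature sets follows; RALAD's grouped-distance enhancement is omitted, and the hand-rolled iteration below is ours.

```python
import numpy as np

def sinkhorn_plan(X_real, X_sim, reg=0.1, n_iter=200):
    """Entropic OT between real and simulated feature sets: a minimal
    version of the transport step RALAD builds on."""
    C = ((X_real[:, None, :] - X_sim[None, :, :]) ** 2).sum(-1)
    C = C / C.max()                             # normalize cost for stability
    K = np.exp(-C / reg)
    a = np.full(len(X_real), 1 / len(X_real))   # uniform source marginal
    b = np.full(len(X_sim), 1 / len(X_sim))     # uniform target marginal
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                     # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]          # transport plan

P = sinkhorn_plan(np.random.randn(16, 8), np.random.randn(20, 8))
print(P.sum())                                  # ~1.0: a valid coupling
```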
2501.12844 | Ruicheng Zhang | Ruicheng Zhang, Haowei Guo, Zeyu Zhang, Puxin Yan and Shen Zhao | GAMED-Snake: Gradient-aware Adaptive Momentum Evolution Deep Snake Model
for Multi-organ Segmentation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-organ segmentation is a critical yet challenging task due to complex
anatomical backgrounds, blurred boundaries, and diverse morphologies. This
study introduces the Gradient-aware Adaptive Momentum Evolution Deep Snake
(GAMED-Snake) model, which establishes a novel paradigm for contour-based
segmentation by integrating gradient-based learning with adaptive momentum
evolution mechanisms. The GAMED-Snake model incorporates three major
innovations: First, the Distance Energy Map Prior (DEMP) generates a
pixel-level force field that effectively attracts contour points towards the
true boundaries, even in scenarios with complex backgrounds and blurred edges.
Second, the Differential Convolution Inception Module (DCIM) precisely extracts
comprehensive energy gradients, significantly enhancing segmentation accuracy.
Third, the Adaptive Momentum Evolution Mechanism (AMEM) employs cross-attention
to establish dynamic features across different iterations of evolution,
enabling precise boundary alignment for diverse morphologies. Experimental
results on four challenging multi-organ segmentation datasets demonstrate that
GAMED-Snake improves the mDice metric by approximately 2% compared to
state-of-the-art methods. Code will be available at
https://github.com/SYSUzrc/GAMED-Snake.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2025 12:45:09 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:18:40 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Ruicheng",
""
],
[
"Guo",
"Haowei",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Yan",
"Puxin",
""
],
[
"Zhao",
"Shen",
""
]
]
| TITLE: GAMED-Snake: Gradient-aware Adaptive Momentum Evolution Deep Snake Model
for Multi-organ Segmentation
ABSTRACT: Multi-organ segmentation is a critical yet challenging task due to complex
anatomical backgrounds, blurred boundaries, and diverse morphologies. This
study introduces the Gradient-aware Adaptive Momentum Evolution Deep Snake
(GAMED-Snake) model, which establishes a novel paradigm for contour-based
segmentation by integrating gradient-based learning with adaptive momentum
evolution mechanisms. The GAMED-Snake model incorporates three major
innovations: First, the Distance Energy Map Prior (DEMP) generates a
pixel-level force field that effectively attracts contour points towards the
true boundaries, even in scenarios with complex backgrounds and blurred edges.
Second, the Differential Convolution Inception Module (DCIM) precisely extracts
comprehensive energy gradients, significantly enhancing segmentation accuracy.
Third, the Adaptive Momentum Evolution Mechanism (AMEM) employs cross-attention
to establish dynamic features across different iterations of evolution,
enabling precise boundary alignment for diverse morphologies. Experimental
results on four challenging multi-organ segmentation datasets demonstrate that
GAMED-Snake improves the mDice metric by approximately 2% compared to
state-of-the-art methods. Code will be available at
https://github.com/SYSUzrc/GAMED-Snake.
| no_new_dataset | 0.95418 |
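The GAMED-Snake record above evolves contour points along an energy gradient with an adaptive momentum mechanism. The sketch below simplifies AMEM's attention-based momentum to classic heavy-ball momentum over a 2-D force field; everything here is an illustration of the evolution step, not the paper's network.

```python
import numpy as np

def evolve_contour(points, force_field, steps=50, lr=1.0, momentum=0.9):
    """Momentum-based contour evolution sketch: each (row, col) contour
    point moves along a 2-D force field (the paper derives it from a
    distance energy map); the adaptive momentum is simplified here."""
    vel = np.zeros_like(points)
    for _ in range(steps):
        # nearest-pixel force lookup at each contour point
        ij = np.clip(points.round().astype(int), 0,
                     np.array(force_field.shape[:2]) - 1)
        f = force_field[ij[:, 0], ij[:, 1]]     # (n, 2) forces
        vel = momentum * vel + lr * f
        points = points + vel
    return points

H, W = 64, 64
field = np.random.randn(H, W, 2) * 0.1          # stand-in for the energy gradient
theta = np.linspace(0, 2 * np.pi, 40)
circle = np.stack([32 + 10 * np.cos(theta), 32 + 10 * np.sin(theta)], axis=1)
print(evolve_contour(circle, field).shape)      # (40, 2)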
2501.14406 | Fei Wu | Fei Wu, Jia Hu, Geyong Min, Shiqiang Wang | Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning
of Language Models | null | null | null | null | cs.DC cs.AI cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pre-trained Language Models (PLMs) have demonstrated their superiority and
versatility in modern Natural Language Processing (NLP), effectively adapting
to various downstream tasks through further fine-tuning. Federated
Parameter-Efficient Fine-Tuning (FedPEFT) has emerged as a promising solution
to address privacy and efficiency challenges in distributed training for PLMs
on resource-constrained local devices. However, our measurements reveal two key
limitations of FedPEFT: heterogeneous data across devices leads to significant
performance degradation, and a fixed parameter configuration results in
communication inefficiency. To overcome these limitations, we propose FedARA, a
novel Adaptive Rank Allocation framework for federated parameter-efficient
fine-tuning of language models. Specifically, FedARA employs truncated Singular
Value Decomposition (SVD) adaptation to enhance similar feature representation
across clients, significantly mitigating the adverse effects of data
heterogeneity. Subsequently, it utilizes dynamic rank allocation to
progressively identify critical ranks, effectively improving communication
efficiency. Lastly, it leverages rank-based module pruning to automatically
remove inactive modules, steadily reducing local computational cost and memory
usage in each federated learning round. Extensive experiments show that FedARA
consistently outperforms baselines by an average of 6.95% to 8.49% across
various datasets and models under heterogeneous data while significantly
improving communication efficiency by 2.40$\times$. Moreover, experiments on
various edge devices demonstrate substantial decreases in total training time
and energy consumption by up to 48.90% and 46.95%, respectively.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2025 11:19:07 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 17:30:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wu",
"Fei",
""
],
[
"Hu",
"Jia",
""
],
[
"Min",
"Geyong",
""
],
[
"Wang",
"Shiqiang",
""
]
]
| TITLE: Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning
of Language Models
ABSTRACT: Pre-trained Language Models (PLMs) have demonstrated their superiority and
versatility in modern Natural Language Processing (NLP), effectively adapting
to various downstream tasks through further fine-tuning. Federated
Parameter-Efficient Fine-Tuning (FedPEFT) has emerged as a promising solution
to address privacy and efficiency challenges in distributed training for PLMs
on resource-constrained local devices. However, our measurements reveal two key
limitations of FedPEFT: heterogeneous data across devices leads to significant
performance degradation, and a fixed parameter configuration results in
communication inefficiency. To overcome these limitations, we propose FedARA, a
novel Adaptive Rank Allocation framework for federated parameter-efficient
fine-tuning of language models. Specifically, FedARA employs truncated Singular
Value Decomposition (SVD) adaptation to enhance similar feature representation
across clients, significantly mitigating the adverse effects of data
heterogeneity. Subsequently, it utilizes dynamic rank allocation to
progressively identify critical ranks, effectively improving communication
efficiency. Lastly, it leverages rank-based module pruning to automatically
remove inactive modules, steadily reducing local computational cost and memory
usage in each federated learning round. Extensive experiments show that FedARA
consistently outperforms baselines by an average of 6.95% to 8.49% across
various datasets and models under heterogeneous data while significantly
improving communication efficiency by 2.40$\times$. Moreover, experiments on
various edge devices demonstrate substantial decreases in total training time
and energy consumption by up to 48.90% and 46.95%, respectively.
| no_new_dataset | 0.951504 |
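The FedARA record above adapts models with truncated SVD factors whose rank is re-allocated per round. A minimal sketch of the truncated-SVD factorization itself follows; the dynamic rank allocation and module pruning are omitted, and the shapes are illustrative.

```python
import torch

def truncated_svd_adapter(delta_w, rank):
    """Sketch of truncated-SVD adaptation: factor a weight update into
    rank-r pieces so each client trains and communicates only B and A.
    FedARA additionally re-allocates `rank` across rounds (omitted)."""
    U, S, Vh = torch.linalg.svd(delta_w, full_matrices=False)
    B = U[:, :rank] * S[:rank]                  # (out, r)
    A = Vh[:rank, :]                            # (r, in)
    return B, A

W_update = torch.randn(768, 768)
B, A = truncated_svd_adapter(W_update, rank=8)
approx = B @ A                                  # low-rank stand-in for the update
print(torch.linalg.matrix_rank(approx))         # tensor(8)
```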
2501.15394 | Lianqing Zheng | Lianqing Zheng, Jianan Liu, Runwei Guan, Long Yang, Shouyi Lu, Yuanzhe
Li, Xiaokai Bai, Jie Bai, Zhixiong Ma, Hui-Liang Shen, and Xichan Zhu | Doracamom: Joint 3D Detection and Occupancy Prediction with Multi-view
4D Radars and Cameras for Omnidirectional Perception | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D object detection and occupancy prediction are critical tasks in autonomous
driving, attracting significant attention. Despite the potential of recent
vision-based methods, they encounter challenges under adverse conditions. Thus,
integrating cameras with next-generation 4D imaging radar to achieve unified
multi-task perception is highly significant, though research in this domain
remains limited. In this paper, we propose Doracamom, the first framework that
fuses multi-view cameras and 4D radar for joint 3D object detection and
semantic occupancy prediction, enabling comprehensive environmental perception.
Specifically, we introduce a novel Coarse Voxel Queries Generator that
integrates geometric priors from 4D radar with semantic features from images to
initialize voxel queries, establishing a robust foundation for subsequent
Transformer-based refinement. To leverage temporal information, we design a
Dual-Branch Temporal Encoder that processes multi-modal temporal features in
parallel across BEV and voxel spaces, enabling comprehensive spatio-temporal
representation learning. Furthermore, we propose a Cross-Modal BEV-Voxel Fusion
module that adaptively fuses complementary features through attention
mechanisms while employing auxiliary tasks to enhance feature quality.
Extensive experiments on the OmniHD-Scenes, View-of-Delft (VoD), and TJ4DRadSet
datasets demonstrate that Doracamom achieves state-of-the-art performance in
both tasks, establishing new benchmarks for multi-modal 3D perception. Code and
models will be publicly available.
| [
{
"version": "v1",
"created": "Sun, 26 Jan 2025 04:24:07 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 07:30:55 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zheng",
"Lianqing",
""
],
[
"Liu",
"Jianan",
""
],
[
"Guan",
"Runwei",
""
],
[
"Yang",
"Long",
""
],
[
"Lu",
"Shouyi",
""
],
[
"Li",
"Yuanzhe",
""
],
[
"Bai",
"Xiaokai",
""
],
[
"Bai",
"Jie",
""
],
[
"Ma",
"Zhixiong",
""
],
[
"Shen",
"Hui-Liang",
""
],
[
"Zhu",
"Xichan",
""
]
]
| TITLE: Doracamom: Joint 3D Detection and Occupancy Prediction with Multi-view
4D Radars and Cameras for Omnidirectional Perception
ABSTRACT: 3D object detection and occupancy prediction are critical tasks in autonomous
driving, attracting significant attention. Despite the potential of recent
vision-based methods, they encounter challenges under adverse conditions. Thus,
integrating cameras with next-generation 4D imaging radar to achieve unified
multi-task perception is highly significant, though research in this domain
remains limited. In this paper, we propose Doracamom, the first framework that
fuses multi-view cameras and 4D radar for joint 3D object detection and
semantic occupancy prediction, enabling comprehensive environmental perception.
Specifically, we introduce a novel Coarse Voxel Queries Generator that
integrates geometric priors from 4D radar with semantic features from images to
initialize voxel queries, establishing a robust foundation for subsequent
Transformer-based refinement. To leverage temporal information, we design a
Dual-Branch Temporal Encoder that processes multi-modal temporal features in
parallel across BEV and voxel spaces, enabling comprehensive spatio-temporal
representation learning. Furthermore, we propose a Cross-Modal BEV-Voxel Fusion
module that adaptively fuses complementary features through attention
mechanisms while employing auxiliary tasks to enhance feature quality.
Extensive experiments on the OmniHD-Scenes, View-of-Delft (VoD), and TJ4DRadSet
datasets demonstrate that Doracamom achieves state-of-the-art performance in
both tasks, establishing new benchmarks for multi-modal 3D perception. Code and
models will be publicly available.
| no_new_dataset | 0.949342 |
2501.15739 | Chuan Tian | Chuan Tian, C. Megan Urry, Aritra Ghosh, Daisuke Nagai, Tonima T.
Ananna, Meredith C. Powell, Connor Auge, Aayush Mishra, David B. Sanders,
Nico Cappelluti, Kevin Schawinski | Automatic Machine Learning Framework to Study Morphological Parameters
of AGN Host Galaxies within $z < 1.4$ in the Hyper Supreme-Cam Wide Survey | Accepted for publication in The Astrophysical Journal. 31 Pages. 20
Figures | null | 10.3847/1538-4357/adaec0 | null | astro-ph.GA astro-ph.IM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | We present a composite machine learning framework to estimate posterior
probability distributions of bulge-to-total light ratio, half-light radius, and
flux for Active Galactic Nucleus (AGN) host galaxies within $z<1.4$ and $m<23$
in the Hyper Supreme-Cam Wide survey. We divide the data into five redshift
bins: low ($0<z<0.25$), mid ($0.25<z<0.5$), high ($0.5<z<0.9$), extra
($0.9<z<1.1$) and extreme ($1.1<z<1.4$), and train our models independently in
each bin. We use PSFGAN to decompose the AGN point source light from its host
galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN)
to estimate morphological parameters of the recovered host galaxy. We first
trained our models on simulated data, and then fine-tuned our algorithm via
transfer learning using labeled real data. To create training labels for
transfer learning, we used GALFIT to fit $\sim 20,000$ real HSC galaxies in
each redshift bin. We comprehensively verified that the predicted values from
our final models agree well with the GALFIT values for the vast majority of
cases. Our PSFGAN + GaMPEN framework runs at least three orders of magnitude
faster than traditional light-profile fitting methods, and can be easily
retrained for other morphological parameters or on other datasets with diverse
ranges of resolutions, seeing conditions, and signal-to-noise ratios, making it
an ideal tool for analyzing AGN host galaxies from large surveys coming soon
from the Rubin-LSST, Euclid, and Roman telescopes.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 03:04:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Tian",
"Chuan",
""
],
[
"Urry",
"C. Megan",
""
],
[
"Ghosh",
"Aritra",
""
],
[
"Nagai",
"Daisuke",
""
],
[
"Ananna",
"Tonima T.",
""
],
[
"Powell",
"Meredith C.",
""
],
[
"Auge",
"Connor",
""
],
[
"Mishra",
"Aayush",
""
],
[
"Sanders",
"David B.",
""
],
[
"Cappelluti",
"Nico",
""
],
[
"Schawinski",
"Kevin",
""
]
]
| TITLE: Automatic Machine Learning Framework to Study Morphological Parameters
of AGN Host Galaxies within $z < 1.4$ in the Hyper Supreme-Cam Wide Survey
ABSTRACT: We present a composite machine learning framework to estimate posterior
probability distributions of bulge-to-total light ratio, half-light radius, and
flux for Active Galactic Nucleus (AGN) host galaxies within $z<1.4$ and $m<23$
in the Hyper Supreme-Cam Wide survey. We divide the data into five redshift
bins: low ($0<z<0.25$), mid ($0.25<z<0.5$), high ($0.5<z<0.9$), extra
($0.9<z<1.1$) and extreme ($1.1<z<1.4$), and train our models independently in
each bin. We use PSFGAN to decompose the AGN point source light from its host
galaxy, and invoke the Galaxy Morphology Posterior Estimation Network (GaMPEN)
to estimate morphological parameters of the recovered host galaxy. We first
trained our models on simulated data, and then fine-tuned our algorithm via
transfer learning using labeled real data. To create training labels for
transfer learning, we used GALFIT to fit $\sim 20,000$ real HSC galaxies in
each redshift bin. We comprehensively verified that the predicted values from
our final models agree well with the GALFIT values for the vast majority of
cases. Our PSFGAN + GaMPEN framework runs at least three orders of magnitude
faster than traditional light-profile fitting methods, and can be easily
retrained for other morphological parameters or on other datasets with diverse
ranges of resolutions, seeing conditions, and signal-to-noise ratios, making it
an ideal tool for analyzing AGN host galaxies from large surveys coming soon
from the Rubin-LSST, Euclid, and Roman telescopes.
| no_new_dataset | 0.951818 |
2501.15878 | Adil Kaan Akan | Adil Kaan Akan, Yucel Yemez | Slot-Guided Adaptation of Pre-trained Diffusion Models for
Object-Centric Learning and Compositional Generation | Accepted to ICLR2025. Project page:
https://kaanakan.github.io/SlotAdapt/ | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present SlotAdapt, an object-centric learning method that combines slot
attention with pretrained diffusion models by introducing adapters for
slot-based conditioning. Our method preserves the generative power of
pretrained diffusion models, while avoiding their text-centric conditioning
bias. We also incorporate an additional guidance loss into our architecture to
align cross-attention from adapter layers with slot attention. This enhances
the alignment of our model with the objects in the input image without using
external supervision. Experimental results show that our method outperforms
state-of-the-art techniques in object discovery and image generation tasks
across multiple datasets, including those with real images. Furthermore, we
demonstrate through experiments that our method performs remarkably well on
complex real-world images for compositional generation, in contrast to other
slot-based generative methods in the literature. The project page can be found
at https://kaanakan.github.io/SlotAdapt/.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2025 09:03:34 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jan 2025 08:33:41 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 10:25:36 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Akan",
"Adil Kaan",
""
],
[
"Yemez",
"Yucel",
""
]
]
| TITLE: Slot-Guided Adaptation of Pre-trained Diffusion Models for
Object-Centric Learning and Compositional Generation
ABSTRACT: We present SlotAdapt, an object-centric learning method that combines slot
attention with pretrained diffusion models by introducing adapters for
slot-based conditioning. Our method preserves the generative power of
pretrained diffusion models, while avoiding their text-centric conditioning
bias. We also incorporate an additional guidance loss into our architecture to
align cross-attention from adapter layers with slot attention. This enhances
the alignment of our model with the objects in the input image without using
external supervision. Experimental results show that our method outperforms
state-of-the-art techniques in object discovery and image generation tasks
across multiple datasets, including those with real images. Furthermore, we
demonstrate through experiments that our method performs remarkably well on
complex real-world images for compositional generation, in contrast to other
slot-based generative methods in the literature. The project page can be found
at https://kaanakan.github.io/SlotAdapt/.
| no_new_dataset | 0.950134 |
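The SlotAdapt record above adds a guidance loss that aligns adapter-layer cross-attention with slot attention. A minimal sketch of such a loss follows, using MSE between normalized attention maps; the paper's exact distance and tensor layout are assumptions here.

```python
import torch
import torch.nn.functional as F

def attention_guidance_loss(cross_attn, slot_attn):
    """Guidance-loss sketch: push the adapter layers' cross-attention over
    slots toward the slot-attention masks. Both tensors are assumed to be
    (batch, n_slots, n_patches) nonnegative attention maps; MSE stands in
    for whatever distance the paper uses."""
    cross = F.normalize(cross_attn, p=1, dim=-1)   # rows sum to 1
    slots = F.normalize(slot_attn, p=1, dim=-1)
    return F.mse_loss(cross, slots)

ca = torch.rand(2, 7, 256)                      # 7 slots over 16x16 patches
sa = torch.rand(2, 7, 256)
print(attention_guidance_loss(ca, sa))
```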
2501.19393 | Niklas Muennighoff | Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li
Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel
Cand\`es, Tatsunori Hashimoto | s1: Simple test-time scaling | 46 pages (9 main), 10 figures, 15 tables | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Test-time scaling is a promising new approach to language modeling that uses
extra test-time compute to improve performance. Recently, OpenAI's o1 model
showed this capability but did not publicly share its methodology, leading to
many replication efforts. We seek the simplest approach to achieve test-time
scaling and strong reasoning performance. First, we curate a small dataset s1K
of 1,000 questions paired with reasoning traces relying on three criteria we
validate through ablations: difficulty, diversity, and quality. Second, we
develop budget forcing to control test-time compute by forcefully terminating
the model's thinking process or lengthening it by appending "Wait" multiple
times to the model's generation when it tries to end. This can lead the model
to double-check its answer, often fixing incorrect reasoning steps. After
supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and
equipping it with budget forcing, our model s1-32B exceeds o1-preview on
competition math questions by up to 27% (MATH and AIME24). Further, scaling
s1-32B with budget forcing allows extrapolating beyond its performance without
test-time intervention: from 50% to 57% on AIME24. Our model, data, and code
are open-source at https://github.com/simplescaling/s1
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 18:48:08 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2025 16:31:30 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Mar 2025 06:07:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Muennighoff",
"Niklas",
""
],
[
"Yang",
"Zitong",
""
],
[
"Shi",
"Weijia",
""
],
[
"Li",
"Xiang Lisa",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Liang",
"Percy",
""
],
[
"Candès",
"Emmanuel",
""
],
[
"Hashimoto",
"Tatsunori",
""
]
]
| TITLE: s1: Simple test-time scaling
ABSTRACT: Test-time scaling is a promising new approach to language modeling that uses
extra test-time compute to improve performance. Recently, OpenAI's o1 model
showed this capability but did not publicly share its methodology, leading to
many replication efforts. We seek the simplest approach to achieve test-time
scaling and strong reasoning performance. First, we curate a small dataset s1K
of 1,000 questions paired with reasoning traces relying on three criteria we
validate through ablations: difficulty, diversity, and quality. Second, we
develop budget forcing to control test-time compute by forcefully terminating
the model's thinking process or lengthening it by appending "Wait" multiple
times to the model's generation when it tries to end. This can lead the model
to double-check its answer, often fixing incorrect reasoning steps. After
supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and
equipping it with budget forcing, our model s1-32B exceeds o1-preview on
competition math questions by up to 27% (MATH and AIME24). Further, scaling
s1-32B with budget forcing allows extrapolating beyond its performance without
test-time intervention: from 50% to 57% on AIME24. Our model, data, and code
are open-source at https://github.com/simplescaling/s1
| new_dataset | 0.956309 |
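The budget-forcing idea in the abstract above is simple enough to sketch at decode time. The snippet below is an illustrative reconstruction, not the released s1 code: `generate_step`, the `</think>` end-of-thinking marker, and the token budgets are hypothetical placeholders for whatever decoding API and special tokens the real system uses.

```python
# Sketch of "budget forcing": lengthen the thinking phase by appending
# "Wait" when the model tries to stop early, and forcefully terminate
# it when the compute budget is exhausted.
def budget_forced_decode(generate_step, prompt_tokens,
                         min_think_tokens=512, max_think_tokens=4096,
                         end_of_thinking="</think>", wait_token="Wait"):
    trace = list(prompt_tokens)
    n_think = 0
    while True:
        tok = generate_step(trace)          # next token from the LM
        if tok == end_of_thinking:
            if n_think < min_think_tokens:
                # Model tried to stop early: append "Wait" so it
                # re-examines its answer instead of terminating.
                trace.append(wait_token)
                n_think += 1
                continue
            break                           # budget satisfied, let it stop
        trace.append(tok)
        n_think += 1
        if n_think >= max_think_tokens:
            # Budget exhausted: forcefully end the thinking phase.
            trace.append(end_of_thinking)
            break
    return trace
```

Raising `min_think_tokens` is what enables the extrapolation beyond the model's unforced performance that the abstract reports.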
2502.00734 | Yun Chu | Yun Chu, Qiuhao Wang, Enze Zhou, Ling Fu, Qian Liu, Gang Zheng | CycleGuardian: A Framework for Automatic RespiratorySound classification
Based on Improved Deep clustering and Contrastive Learning | null | Complex Intell. Syst. 11, 200 (2025) | 10.1007/s40747-025-01800-4 | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Auscultation plays a pivotal role in early respiratory and pulmonary disease
diagnosis. Despite the emergence of deep learning-based methods for automatic
respiratory sound classification post-Covid-19, limited datasets impede
performance enhancement. Distinguishing between normal and abnormal respiratory
sounds poses challenges due to the coexistence of normal respiratory components
and noise components in both types. Moreover, different abnormal respiratory
sounds exhibit similar anomalous features, hindering their differentiation.
Besides, existing state-of-the-art models suffer from excessive parameter size,
impeding deployment on resource-constrained mobile platforms. To address these
issues, we design a lightweight network CycleGuardian and propose a framework
based on an improved deep clustering and contrastive learning. We first
generate a hybrid spectrogram for feature diversity and group spectrograms
to facilitate intermittent abnormal sound capture. Then, CycleGuardian
integrates a deep clustering module with a similarity-constrained clustering
component to improve the ability to capture abnormal features and a contrastive
learning module with group mixing for enhanced abnormal feature discernment.
Multi-objective optimization enhances overall performance during training. In
experiments on the ICBHI2017 dataset, following the official split method
and without any pre-trained weights, our method achieves Sp: 82.06$\%$, Se:
44.47$\%$, and Score: 63.26$\%$ with a network model size of 38M. Compared
with the current model, our method leads by nearly 7$\%$, achieving the
current best performance. Additionally, we deploy the network on Android devices,
showcasing a comprehensive intelligent respiratory sound auscultation system.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2025 09:56:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chu",
"Yun",
""
],
[
"Wang",
"Qiuhao",
""
],
[
"Zhou",
"Enze",
""
],
[
"Fu",
"Ling",
""
],
[
"Liu",
"Qian",
""
],
[
"Zheng",
"Gang",
""
]
]
| TITLE: CycleGuardian: A Framework for Automatic RespiratorySound classification
Based on Improved Deep clustering and Contrastive Learning
ABSTRACT: Auscultation plays a pivotal role in early respiratory and pulmonary disease
diagnosis. Despite the emergence of deep learning-based methods for automatic
respiratory sound classification post-Covid-19, limited datasets impede
performance enhancement. Distinguishing between normal and abnormal respiratory
sounds poses challenges due to the coexistence of normal respiratory components
and noise components in both types. Moreover, different abnormal respiratory
sounds exhibit similar anomalous features, hindering their differentiation.
Besides, existing state-of-the-art models suffer from excessive parameter size,
impeding deployment on resource-constrained mobile platforms. To address these
issues, we design a lightweight network CycleGuardian and propose a framework
based on an improved deep clustering and contrastive learning. We first
generate a hybrid spectrogram for feature diversity and group spectrograms
to facilitate intermittent abnormal sound capture. Then, CycleGuardian
integrates a deep clustering module with a similarity-constrained clustering
component to improve the ability to capture abnormal features and a contrastive
learning module with group mixing for enhanced abnormal feature discernment.
Multi-objective optimization enhances overall performance during training. In
experiments on the ICBHI2017 dataset, following the official split method
and without any pre-trained weights, our method achieves Sp: 82.06$\%$, Se:
44.47$\%$, and Score: 63.26$\%$ with a network model size of 38M. Compared
with the current model, our method leads by nearly 7$\%$, achieving the
current best performance. Additionally, we deploy the network on Android devices,
showcasing a comprehensive intelligent respiratory sound auscultation system.
| no_new_dataset | 0.950088 |
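One plausible reading of the "hybrid spectrogram" and spectrogram-grouping front end described above can be sketched as follows. The specific combination (mel spectrogram plus MFCC) and the fixed group length are assumptions for illustration; the paper's exact recipe may differ.

```python
# Sketch of a hybrid time-frequency front end for respiratory audio:
# stack two complementary views, then chunk the time axis into groups
# so intermittent abnormal events fall into separate groups.
import numpy as np
import librosa

def hybrid_spectrogram_groups(wav, sr=16000, n_mels=64, n_mfcc=64,
                              group_len=64):
    mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=wav, sr=sr, n_mels=n_mels))
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=n_mfcc)
    hybrid = np.stack([mel, mfcc[:, :mel.shape[1]]], axis=0)  # (2, F, T)
    n_groups = hybrid.shape[2] // group_len
    return [hybrid[:, :, i * group_len:(i + 1) * group_len]
            for i in range(n_groups)]
```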
2502.01981 | Shubham Malhotra | Shubham Malhotra, Fnu Yashu, Muhammad Saqib, Dipkumar Mehta, Jagdish
Jangid and Sachin Dixit | Evaluating Fault Tolerance and Scalability in Distributed File Systems:
A Case Study of GFS, HDFS, and MinIO | 9 pages, 3 figures, 3 tables | null | null | null | cs.DC cs.ET cs.PF cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed File Systems (DFS) are essential for managing vast datasets
across multiple servers, offering benefits in scalability, fault tolerance, and
data accessibility. This paper presents a comprehensive evaluation of three
prominent DFSs - Google File System (GFS), Hadoop Distributed File System
(HDFS), and MinIO - focusing on their fault tolerance mechanisms and
scalability under varying data loads and client demands. Through detailed
analysis, we assess how these systems handle data redundancy, server
failures, and client access protocols to ensure reliability in dynamic,
large-scale environments. In addition, we assess the impact of system design
on performance, particularly in distributed cloud and computing
architectures. By comparing the
strengths and limitations of each DFS, the paper provides practical insights
for selecting the most appropriate system for different enterprise needs, from
high availability storage to big data analytics.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 03:52:45 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 20:52:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Malhotra",
"Shubham",
""
],
[
"Yashu",
"Fnu",
""
],
[
"Saqib",
"Muhammad",
""
],
[
"Mehta",
"Dipkumar",
""
],
[
"Jangid",
"Jagdish",
""
],
[
"Dixit",
"Sachin",
""
]
]
| TITLE: Evaluating Fault Tolerance and Scalability in Distributed File Systems:
A Case Study of GFS, HDFS, and MinIO
ABSTRACT: Distributed File Systems (DFS) are essential for managing vast datasets
across multiple servers, offering benefits in scalability, fault tolerance, and
data accessibility. This paper presents a comprehensive evaluation of three
prominent DFSs - Google File System (GFS), Hadoop Distributed File System
(HDFS), and MinIO - focusing on their fault tolerance mechanisms and
scalability under varying data loads and client demands. Through detailed
analysis, we assess how these systems handle data redundancy, server
failures, and client access protocols to ensure reliability in dynamic,
large-scale environments. In addition, we assess the impact of system design
on performance, particularly in distributed cloud and computing
architectures. By comparing the
strengths and limitations of each DFS, the paper provides practical insights
for selecting the most appropriate system for different enterprise needs, from
high availability storage to big data analytics.
| no_new_dataset | 0.944331 |
2502.02283 | Zhihao Guo | Zhihao Guo, Jingxuan Su, Shenglin Wang, Jinlong Fan, Jing Zhang,
Liangxiu Han, Peng Wang | GP-GS: Gaussian Processes for Enhanced Gaussian Splatting | 14 pages,11 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | 3D Gaussian Splatting has emerged as an efficient photorealistic novel view
synthesis method. However, its reliance on sparse Structure-from-Motion (SfM)
point clouds consistently compromises the scene reconstruction quality. To
address these limitations, this paper proposes a novel 3D reconstruction
framework Gaussian Processes Gaussian Splatting (GP-GS), where a multi-output
Gaussian Process model is developed to achieve adaptive and uncertainty-guided
densification of sparse SfM point clouds. Specifically, we propose a dynamic
sampling and filtering pipeline that adaptively expands the SfM point clouds by
leveraging GP-based predictions to infer new candidate points from the input 2D
pixels and depth maps. The pipeline utilizes uncertainty estimates to guide the
pruning of high-variance predictions, ensuring geometric consistency and
enabling the generation of dense point clouds. The densified point clouds
provide high-quality initial 3D Gaussians to enhance reconstruction
performance. Extensive experiments conducted on synthetic and real-world
datasets across various scales validate the effectiveness and practicality of
the proposed framework.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 12:50:16 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2025 16:09:26 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 00:25:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Zhihao",
""
],
[
"Su",
"Jingxuan",
""
],
[
"Wang",
"Shenglin",
""
],
[
"Fan",
"Jinlong",
""
],
[
"Zhang",
"Jing",
""
],
[
"Han",
"Liangxiu",
""
],
[
"Wang",
"Peng",
""
]
]
| TITLE: GP-GS: Gaussian Processes for Enhanced Gaussian Splatting
ABSTRACT: 3D Gaussian Splatting has emerged as an efficient photorealistic novel view
synthesis method. However, its reliance on sparse Structure-from-Motion (SfM)
point clouds consistently compromises the scene reconstruction quality. To
address these limitations, this paper proposes a novel 3D reconstruction
framework Gaussian Processes Gaussian Splatting (GP-GS), where a multi-output
Gaussian Process model is developed to achieve adaptive and uncertainty-guided
densification of sparse SfM point clouds. Specifically, we propose a dynamic
sampling and filtering pipeline that adaptively expands the SfM point clouds by
leveraging GP-based predictions to infer new candidate points from the input 2D
pixels and depth maps. The pipeline utilizes uncertainty estimates to guide the
pruning of high-variance predictions, ensuring geometric consistency and
enabling the generation of dense point clouds. The densified point clouds
provide high-quality initial 3D Gaussians to enhance reconstruction
performance. Extensive experiments conducted on synthetic and real-world
datasets across various scales validate the effectiveness and practicality of
the proposed framework.
| no_new_dataset | 0.950732 |
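The uncertainty-guided densification step described above can be sketched with an off-the-shelf Gaussian Process. The mapping (pixel coordinates to 3D positions), the kernel, and the variance threshold below are illustrative assumptions, not the GP-GS implementation.

```python
# Sketch of GP-based point-cloud densification with uncertainty pruning:
# fit a multi-output GP on observed SfM points, predict candidates, and
# keep only low-variance predictions for geometric consistency.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def densify(sparse_uv, sparse_xyz, candidate_uv, std_thresh=0.05):
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3),
        normalize_y=True)
    gp.fit(sparse_uv, sparse_xyz)              # observed SfM points
    pred, std = gp.predict(candidate_uv, return_std=True)
    keep = std.mean(axis=1) < std_thresh       # prune high-variance points
    return pred[keep]
```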
2502.05589 | Zhuoshi Pan | Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Xufang Luo, Hao Cheng,
Dongsheng Li, Yuqing Yang, Chin-Yew Lin, H. Vicky Zhao, Lili Qiu, Jianfeng
Gao | On Memory Construction and Retrieval for Personalized Conversational
Agents | 10 pages, 5 figures, conference | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | To deliver coherent and personalized experiences in long-term conversations,
existing approaches typically perform retrieval augmented response generation
by constructing memory banks from conversation history at either the
turn-level, session-level, or through summarization techniques. In this paper,
we present two key findings: (1) The granularity of memory unit matters:
turn-level, session-level, and summarization-based methods each exhibit
limitations in both memory retrieval accuracy and the semantic quality of the
retrieved content. (2) Prompt compression methods, such as LLMLingua-2, can
effectively serve as a denoising mechanism, enhancing memory retrieval accuracy
across different granularities. Building on these insights, we propose SeCom, a
method that constructs the memory bank at segment level by introducing a
conversation segmentation model that partitions long-term conversations into
topically coherent segments, while applying compression-based denoising on
memory units to enhance memory retrieval. Experimental results show that SeCom
exhibits a significant performance advantage over baselines on long-term
conversation benchmarks LOCOMO and Long-MT-Bench+. Additionally, the proposed
conversation segmentation method demonstrates superior performance on dialogue
segmentation datasets such as DialSeg711, TIAGE, and SuperDialSeg.
| [
{
"version": "v1",
"created": "Sat, 8 Feb 2025 14:28:36 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 04:15:47 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 16:49:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pan",
"Zhuoshi",
""
],
[
"Wu",
"Qianhui",
""
],
[
"Jiang",
"Huiqiang",
""
],
[
"Luo",
"Xufang",
""
],
[
"Cheng",
"Hao",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Yang",
"Yuqing",
""
],
[
"Lin",
"Chin-Yew",
""
],
[
"Zhao",
"H. Vicky",
""
],
[
"Qiu",
"Lili",
""
],
[
"Gao",
"Jianfeng",
""
]
]
| TITLE: On Memory Construction and Retrieval for Personalized Conversational
Agents
ABSTRACT: To deliver coherent and personalized experiences in long-term conversations,
existing approaches typically perform retrieval augmented response generation
by constructing memory banks from conversation history at either the
turn-level, session-level, or through summarization techniques. In this paper,
we present two key findings: (1) The granularity of memory unit matters:
turn-level, session-level, and summarization-based methods each exhibit
limitations in both memory retrieval accuracy and the semantic quality of the
retrieved content. (2) Prompt compression methods, such as LLMLingua-2, can
effectively serve as a denoising mechanism, enhancing memory retrieval accuracy
across different granularities. Building on these insights, we propose SeCom, a
method that constructs the memory bank at segment level by introducing a
conversation segmentation model that partitions long-term conversations into
topically coherent segments, while applying compression-based denoising on
memory units to enhance memory retrieval. Experimental results show that SeCom
exhibits a significant performance advantage over baselines on long-term
conversation benchmarks LOCOMO and Long-MT-Bench+. Additionally, the proposed
conversation segmentation method demonstrates superior performance on dialogue
segmentation datasets such as DialSeg711, TIAGE, and SuperDialSeg.
| no_new_dataset | 0.950641 |
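The segment-level memory construction and retrieval described above can be sketched compactly. The fixed turns-per-segment rule stands in for the paper's learned conversation segmentation model, and `embed` is a placeholder for any sentence encoder; both are assumptions.

```python
# Sketch of segment-level memory: group turns into segments, embed them,
# and retrieve the top-k segments by cosine similarity at query time.
import numpy as np

def build_memory(turns, embed, turns_per_segment=4):
    segments = [" ".join(turns[i:i + turns_per_segment])
                for i in range(0, len(turns), turns_per_segment)]
    vecs = np.stack([embed(s) for s in segments])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return segments, vecs

def retrieve(query, segments, vecs, embed, k=3):
    q = embed(query)
    q /= np.linalg.norm(q)
    scores = vecs @ q                      # cosine similarity
    return [segments[i] for i in np.argsort(-scores)[:k]]
```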
2502.06563 | Chengwen Qi | Chengwen Qi, Ren Ma, Bowen Li, He Du, Binyuan Hui, Jinwang Wu, Yuanjun
Laili, Conghui He | Large Language Models Meet Symbolic Provers for Logical Reasoning
Evaluation | Accepted by ICLR 2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First-order logic (FOL) reasoning, which involves sequential deduction, is
pivotal for intelligent systems and serves as a valuable task for evaluating
reasoning capabilities, particularly in chain-of-thought (CoT) contexts.
Existing benchmarks often rely on extensive human annotation or handcrafted
templates, making it difficult to achieve the necessary complexity,
scalability, and diversity for robust evaluation. To address these limitations,
we propose a novel framework called ProverGen that synergizes the generative
strengths of Large Language Models (LLMs) with the rigor and precision of
symbolic provers, enabling the creation of a scalable, diverse, and
high-quality FOL reasoning dataset, ProverQA. ProverQA is also distinguished by
its inclusion of accessible and logically coherent intermediate reasoning steps
for each problem. Our evaluation shows that state-of-the-art LLMs struggle to
solve ProverQA problems, even with CoT prompting, highlighting the dataset's
challenging nature. We also finetune Llama3.1-8B-Instruct on a separate
training set generated by our framework. The finetuned model demonstrates
consistent improvements on both in-distribution and out-of-distribution test
sets, suggesting the value of our proposed data generation framework. Code
available at: https://github.com/opendatalab/ProverGen
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 15:31:54 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 16:38:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Qi",
"Chengwen",
""
],
[
"Ma",
"Ren",
""
],
[
"Li",
"Bowen",
""
],
[
"Du",
"He",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Wu",
"Jinwang",
""
],
[
"Laili",
"Yuanjun",
""
],
[
"He",
"Conghui",
""
]
]
| TITLE: Large Language Models Meet Symbolic Provers for Logical Reasoning
Evaluation
ABSTRACT: First-order logic (FOL) reasoning, which involves sequential deduction, is
pivotal for intelligent systems and serves as a valuable task for evaluating
reasoning capabilities, particularly in chain-of-thought (CoT) contexts.
Existing benchmarks often rely on extensive human annotation or handcrafted
templates, making it difficult to achieve the necessary complexity,
scalability, and diversity for robust evaluation. To address these limitations,
we propose a novel framework called ProverGen that synergizes the generative
strengths of Large Language Models (LLMs) with the rigor and precision of
symbolic provers, enabling the creation of a scalable, diverse, and
high-quality FOL reasoning dataset, ProverQA. ProverQA is also distinguished by
its inclusion of accessible and logically coherent intermediate reasoning steps
for each problem. Our evaluation shows that state-of-the-art LLMs struggle to
solve ProverQA problems, even with CoT prompting, highlighting the dataset's
challenging nature. We also finetune Llama3.1-8B-Instruct on a separate
training set generated by our framework. The finetuned model demonstrates
consistent improvements on both in-distribution and out-of-distribution test
sets, suggesting the value of our proposed data generation framework. Code
available at: https://github.com/opendatalab/ProverGen
| new_dataset | 0.96641 |
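To make the symbolic-prover side of the pipeline above concrete, here is a toy forward-chaining prover. It is a propositional simplification of a full first-order prover, intended only to show how generated reasoning steps can be machine-checked; it is not the ProverGen code.

```python
# Toy forward-chaining prover over Horn clauses: derives new facts until
# the goal is reached, recording each auditable inference step.
def forward_chain(facts, rules, goal):
    """rules: list of (premises, conclusion); facts: set of atoms."""
    known = set(facts)
    steps = []
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                steps.append((premises, conclusion))  # recorded step
                changed = True
    return goal in known, steps

# Example: validate a two-step deduction.
ok, trace = forward_chain(
    facts={"rainy", "has_umbrella"},
    rules=[(("rainy",), "wet_ground"), (("wet_ground",), "slippery")],
    goal="slippery")
assert ok and len(trace) == 2
```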
2502.07176 | Lizhong Chen | Cale Coffman, Lizhong Chen | MatrixKAN: Parallelized Kolmogorov-Arnold Network | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Kolmogorov-Arnold Networks (KAN) are a new class of neural network
architecture representing a promising alternative to the Multilayer Perceptron
(MLP), demonstrating improved expressiveness and interpretability. However,
KANs suffer from slow training and inference speeds relative to MLPs due in
part to the recursive nature of the underlying B-spline calculations. This
issue is particularly apparent with respect to KANs utilizing high-degree
B-splines, as the number of required non-parallelizable recursions is
proportional to B-spline degree. We solve this issue by proposing MatrixKAN, a
novel optimization that parallelizes B-spline calculations with matrix
representation and operations, thus significantly improving effective
computation time for models utilizing high-degree B-splines. In this paper, we
demonstrate the superior scaling of MatrixKAN's computation time relative to
B-spline degree. Further, our experiments demonstrate speedups of approximately
40x relative to KAN, with significant additional speedup potential for larger
datasets or higher spline degrees.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2025 01:59:46 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 19:24:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Coffman",
"Cale",
""
],
[
"Chen",
"Lizhong",
""
]
]
| TITLE: MatrixKAN: Parallelized Kolmogorov-Arnold Network
ABSTRACT: Kolmogorov-Arnold Networks (KAN) are a new class of neural network
architecture representing a promising alternative to the Multilayer Perceptron
(MLP), demonstrating improved expressiveness and interpretability. However,
KANs suffer from slow training and inference speeds relative to MLPs due in
part to the recursive nature of the underlying B-spline calculations. This
issue is particularly apparent with respect to KANs utilizing high-degree
B-splines, as the number of required non-parallelizable recursions is
proportional to B-spline degree. We solve this issue by proposing MatrixKAN, a
novel optimization that parallelizes B-spline calculations with matrix
representation and operations, thus significantly improving effective
computation time for models utilizing high-degree B-splines. In this paper, we
demonstrate the superior scaling of MatrixKAN's computation time relative to
B-spline degree. Further, our experiments demonstrate speedups of approximately
40x relative to KAN, with significant additional speedup potential for larger
datasets or higher spline degrees.
| no_new_dataset | 0.95253 |
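The matrix trick MatrixKAN exploits is that, for a uniform B-spline of degree p, the basis values on a knot span are a fixed (p+1)x(p+1) matrix times the power basis, so a whole batch of inputs reduces to one matmul instead of p levels of Cox-de Boor recursion. The sketch below shows the well-known cubic (p=3) case; the paper generalizes to arbitrary degree.

```python
# Matrix form of uniform cubic B-spline basis evaluation: one matmul
# replaces the recursive Cox-de Boor computation.
import numpy as np

M3 = (1.0 / 6.0) * np.array([      # uniform cubic B-spline basis matrix
    [ 1,  4,  1, 0],
    [-3,  0,  3, 0],
    [ 3, -6,  3, 0],
    [-1,  3, -3, 1],
])

def cubic_bspline_basis(u):
    """u: (N,) local coordinates in [0, 1). Returns (N, 4) basis values."""
    U = np.stack([np.ones_like(u), u, u**2, u**3], axis=1)  # power basis
    return U @ M3                                           # one matmul

u = np.random.rand(1024)
B = cubic_bspline_basis(u)
assert np.allclose(B.sum(axis=1), 1.0)  # partition-of-unity sanity check
```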
2502.08079 | Peng-Fei Zhang | Peng-Fei Zhang, Guangdong Bai, Zi Huang | MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained
Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current adversarial attacks for evaluating the robustness of vision-language
pre-trained (VLP) models in multi-modal tasks suffer from limited
transferability, where attacks crafted for a specific model often struggle to
generalize effectively across different models, limiting their utility in
assessing robustness more broadly. This is mainly attributed to the
over-reliance on model-specific features and regions, particularly in the image
modality. In this paper, we propose an elegant yet highly effective method
termed Meticulous Adversarial Attack (MAA) to fully exploit model-independent
characteristics and vulnerabilities of individual samples, achieving enhanced
generalizability and reduced model dependence. MAA emphasizes fine-grained
optimization of adversarial images by developing a novel resizing and sliding
crop (RScrop) technique, incorporating a multi-granularity similarity
disruption (MGSD) strategy. Extensive experiments across diverse VLP models,
multiple benchmark datasets, and a variety of downstream tasks demonstrate that
MAA significantly enhances the effectiveness and transferability of adversarial
attacks. A large cohort of performance studies is conducted to generate
insights into the effectiveness of various model configurations, guiding future
advancements in this domain.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 02:53:27 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 02:16:39 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Mar 2025 01:35:58 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Peng-Fei",
""
],
[
"Bai",
"Guangdong",
""
],
[
"Huang",
"Zi",
""
]
]
| TITLE: MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained
Models
ABSTRACT: Current adversarial attacks for evaluating the robustness of vision-language
pre-trained (VLP) models in multi-modal tasks suffer from limited
transferability, where attacks crafted for a specific model often struggle to
generalize effectively across different models, limiting their utility in
assessing robustness more broadly. This is mainly attributed to the
over-reliance on model-specific features and regions, particularly in the image
modality. In this paper, we propose an elegant yet highly effective method
termed Meticulous Adversarial Attack (MAA) to fully exploit model-independent
characteristics and vulnerabilities of individual samples, achieving enhanced
generalizability and reduced model dependence. MAA emphasizes fine-grained
optimization of adversarial images by developing a novel resizing and sliding
crop (RScrop) technique, incorporating a multi-granularity similarity
disruption (MGSD) strategy. Extensive experiments across diverse VLP models,
multiple benchmark datasets, and a variety of downstream tasks demonstrate that
MAA significantly enhances the effectiveness and transferability of adversarial
attacks. A large cohort of performance studies is conducted to generate
insights into the effectiveness of various model configurations, guiding future
advancements in this domain.
| no_new_dataset | 0.9462 |
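The resizing-and-crop idea behind RScrop can be illustrated with a simple embedding-disruption loop. Everything below is an assumption-laden sketch: the encoder interface, the random-resize schedule, and the step sizes are placeholders, and the loop omits MAA's multi-granularity similarity disruption.

```python
# Illustrative transferable attack: maximize image-text dissimilarity
# under random resized views of the perturbed image (RScrop-like).
import torch
import torch.nn.functional as F

def rscrop_attack(image, text_emb, image_encoder, steps=40,
                  eps=8 / 255, alpha=1 / 255):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        size = [int(s * scale) for s in image.shape[-2:]]
        view = F.interpolate(image + delta, size=size, mode="bilinear",
                             align_corners=False)
        view = F.interpolate(view, size=image.shape[-2:], mode="bilinear",
                             align_corners=False)
        img_emb = image_encoder(view)
        # Ascending this loss drives the cross-modal similarity down.
        loss = -F.cosine_similarity(img_emb, text_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).detach()
```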
2502.08813 | Mohammed Daoudi | Fouad Boutaleb, Emery Pierson, Nicolas Doudeau, Cl\'emence Nineuil,
Ali Amad, Mohamed Daoudi | Measuring Anxiety Levels with Head Motion Patterns in Severe Depression
Population | 19th IEEE International Conference on Automatic Face and Gesture
Recognition (FG), 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Depression and anxiety are prevalent mental health disorders that frequently
cooccur, with anxiety significantly influencing both the manifestation and
treatment of depression. An accurate assessment of anxiety levels in
individuals with depression is crucial to develop effective and personalized
treatment plans. This study proposes a new noninvasive method for quantifying
anxiety severity by analyzing head movements -- specifically speed,
acceleration, and angular displacement -- during video-recorded interviews with
patients suffering from severe depression. Using data from a new CALYPSO
Depression Dataset, we extracted head motion characteristics and applied
regression analysis to predict clinically evaluated anxiety levels. Our results
demonstrate a high level of precision, achieving a mean absolute error (MAE) of
0.35 in predicting the severity of psychological anxiety based on head movement
patterns. This indicates that our approach can enhance the understanding of
anxiety's role in depression and assist psychiatrists in refining treatment
strategies for individuals.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 21:55:26 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 05:50:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Boutaleb",
"Fouad",
""
],
[
"Pierson",
"Emery",
""
],
[
"Doudeau",
"Nicolas",
""
],
[
"Nineuil",
"Clémence",
""
],
[
"Amad",
"Ali",
""
],
[
"Daoudi",
"Mohamed",
""
]
]
| TITLE: Measuring Anxiety Levels with Head Motion Patterns in Severe Depression
Population
ABSTRACT: Depression and anxiety are prevalent mental health disorders that frequently
cooccur, with anxiety significantly influencing both the manifestation and
treatment of depression. An accurate assessment of anxiety levels in
individuals with depression is crucial to develop effective and personalized
treatment plans. This study proposes a new noninvasive method for quantifying
anxiety severity by analyzing head movements -- specifically speed,
acceleration, and angular displacement -- during video-recorded interviews with
patients suffering from severe depression. Using data from a new CALYPSO
Depression Dataset, we extracted head motion characteristics and applied
regression analysis to predict clinically evaluated anxiety levels. Our results
demonstrate a high level of precision, achieving a mean absolute error (MAE) of
0.35 in predicting the severity of psychological anxiety based on head movement
patterns. This indicates that our approach can enhance the understanding of
anxiety's role in depression and assist psychiatrists in refining treatment
strategies for individuals.
| new_dataset | 0.960435 |
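The head-motion feature pipeline described above (speed, acceleration, angular displacement feeding a regression of anxiety severity) can be sketched as follows. The feature summary statistics and the ridge regressor are illustrative assumptions; the study's exact pipeline may differ.

```python
# Sketch: derive motion features from per-frame head poses, then
# regress clinician-rated anxiety and report MAE.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def head_motion_features(positions, yaw_pitch_roll, fps=30.0):
    """positions: (T, 3) head centers; yaw_pitch_roll: (T, 3) radians."""
    vel = np.diff(positions, axis=0) * fps
    speed = np.linalg.norm(vel, axis=1)
    accel = np.diff(speed) * fps
    ang_disp = np.abs(np.diff(yaw_pitch_roll, axis=0)).sum(axis=1)
    return np.array([speed.mean(), speed.std(),
                     accel.mean(), accel.std(),
                     ang_disp.mean(), ang_disp.std()])

def fit_and_score(X, y):
    # X: one feature vector per interview; y: rated anxiety levels.
    # Training-set MAE shown only to mirror the reported metric;
    # cross-validation would be used in practice.
    model = Ridge(alpha=1.0).fit(X, y)
    return mean_absolute_error(y, model.predict(X))
```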
2502.10310 | Omar Faruk | Md Pranto and Omar Faruk | Object Detection and Tracking | 10 pages, 5 figures | null | null | null | cs.CV cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | Efficient and accurate object detection is an important topic in the
development of computer vision systems. With the advent of deep learning
techniques, the accuracy of object detection has increased significantly. The
project aims to integrate a modern object detection technique to achieve
high accuracy with real-time performance. The reliance on other
computer vision algorithms in many object identification systems, which results
in poor and ineffective performance, is a significant obstacle. In this
research, we solve the end-to-end object detection problem entirely using deep
learning techniques. The network is trained using the most difficult publicly
available dataset, which is used for an annual item detection challenge.
Applications that need object detection can benefit from the system's quick
and precise detections.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 17:13:52 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pranto",
"Md",
""
],
[
"Faruk",
"Omar",
""
]
]
| TITLE: Object Detection and Tracking
ABSTRACT: Efficient and accurate object detection is an important topic in the
development of computer vision systems. With the advent of deep learning
techniques, the accuracy of object detection has increased significantly. The
project aims to integrate a modern object detection technique to achieve
high accuracy with real-time performance. The reliance on other
computer vision algorithms in many object identification systems, which results
in poor and ineffective performance, is a significant obstacle. In this
research, we solve the end-to-end object detection problem entirely using deep
learning techniques. The network is trained using the most difficult publicly
available dataset, which is used for an annual item detection challenge.
Applications that need object detection can benefit from the system's quick
and precise detections.
| no_new_dataset | 0.952353 |
2502.10982 | Yunfei Liu | Yunfei Liu, Lei Zhu, Lijian Lin, Ye Zhu, Ailing Zhang, Yu Li | TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction | Accepted by ICLR 2025, code and demos are available at
https://tinyurl.com/TEASER-project | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D facial reconstruction from a single in-the-wild image is a crucial task in
human-centered computer vision. While existing methods can recover
accurate facial shapes, there remains significant space for improvement in
fine-grained expression capture. Current approaches struggle with irregular
mouth shapes, exaggerated expressions, and asymmetrical facial movements. We
present TEASER (Token EnhAnced Spatial modeling for Expressions
Reconstruction), which addresses these challenges and enhances 3D facial
geometry performance. TEASER tackles two main limitations of existing methods:
insufficient photometric loss for self-reconstruction and inaccurate
localization of subtle expressions. We introduce a multi-scale tokenizer to
extract facial appearance information. Combined with a neural renderer, these
tokens provide precise geometric guidance for expression reconstruction.
Furthermore, TEASER incorporates a pose-dependent landmark loss to further
improve geometric performance. Our approach not only significantly enhances
expression reconstruction quality but also offers interpretable tokens suitable
for various downstream applications, such as photorealistic facial video
driving, expression transfer, and identity swapping. Quantitative and
qualitative experimental results across multiple datasets demonstrate that
TEASER achieves state-of-the-art performance in precise expression
reconstruction.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 04:00:06 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2025 03:43:41 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 07:31:57 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Yunfei",
""
],
[
"Zhu",
"Lei",
""
],
[
"Lin",
"Lijian",
""
],
[
"Zhu",
"Ye",
""
],
[
"Zhang",
"Ailing",
""
],
[
"Li",
"Yu",
""
]
]
| TITLE: TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction
ABSTRACT: 3D facial reconstruction from a single in-the-wild image is a crucial task in
human-centered computer vision. While existing methods can recover
accurate facial shapes, there remains significant space for improvement in
fine-grained expression capture. Current approaches struggle with irregular
mouth shapes, exaggerated expressions, and asymmetrical facial movements. We
present TEASER (Token EnhAnced Spatial modeling for Expressions
Reconstruction), which addresses these challenges and enhances 3D facial
geometry performance. TEASER tackles two main limitations of existing methods:
insufficient photometric loss for self-reconstruction and inaccurate
localization of subtle expressions. We introduce a multi-scale tokenizer to
extract facial appearance information. Combined with a neural renderer, these
tokens provide precise geometric guidance for expression reconstruction.
Furthermore, TEASER incorporates a pose-dependent landmark loss to further
improve geometric performance. Our approach not only significantly enhances
expression reconstruction quality but also offers interpretable tokens suitable
for various downstream applications, such as photorealistic facial video
driving, expression transfer, and identity swapping. Quantitative and
qualitative experimental results across multiple datasets demonstrate that
TEASER achieves state-of-the-art performance in precise expression
reconstruction.
| no_new_dataset | 0.949902 |
2502.11858 | Zeliang Zhang | Zeliang Zhang, Susan Liang, Daiki Shimada, Chenliang Xu | Rethinking Audio-Visual Adversarial Vulnerability from Temporal and
Modality Perspectives | Accepted by ICLR 2025 | null | null | null | cs.SD cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While audio-visual learning equips models with a richer understanding of the
real world by leveraging multiple sensory modalities, this integration also
introduces new vulnerabilities to adversarial attacks.
In this paper, we present a comprehensive study of the adversarial robustness
of audio-visual models, considering both temporal and modality-specific
vulnerabilities. We propose two powerful adversarial attacks: 1) a temporal
invariance attack that exploits the inherent temporal redundancy across
consecutive time segments and 2) a modality misalignment attack that introduces
incongruence between the audio and visual modalities. These attacks are
designed to thoroughly assess the robustness of audio-visual models against
diverse threats. Furthermore, to defend against such attacks, we introduce a
novel audio-visual adversarial training framework. This framework addresses key
challenges in vanilla adversarial training by incorporating efficient
adversarial perturbation crafting tailored to multi-modal data and an
adversarial curriculum strategy. Extensive experiments on the Kinetics-Sounds
dataset demonstrate that our proposed temporal and modality-based attacks
achieve state-of-the-art effectiveness in degrading model performance, while
our adversarial training defense largely improves both adversarial robustness
and adversarial training efficiency.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 14:50:34 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 15:04:12 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 14:14:07 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Zeliang",
""
],
[
"Liang",
"Susan",
""
],
[
"Shimada",
"Daiki",
""
],
[
"Xu",
"Chenliang",
""
]
]
| TITLE: Rethinking Audio-Visual Adversarial Vulnerability from Temporal and
Modality Perspectives
ABSTRACT: While audio-visual learning equips models with a richer understanding of the
real world by leveraging multiple sensory modalities, this integration also
introduces new vulnerabilities to adversarial attacks.
In this paper, we present a comprehensive study of the adversarial robustness
of audio-visual models, considering both temporal and modality-specific
vulnerabilities. We propose two powerful adversarial attacks: 1) a temporal
invariance attack that exploits the inherent temporal redundancy across
consecutive time segments and 2) a modality misalignment attack that introduces
incongruence between the audio and visual modalities. These attacks are
designed to thoroughly assess the robustness of audio-visual models against
diverse threats. Furthermore, to defend against such attacks, we introduce a
novel audio-visual adversarial training framework. This framework addresses key
challenges in vanilla adversarial training by incorporating efficient
adversarial perturbation crafting tailored to multi-modal data and an
adversarial curriculum strategy. Extensive experiments on the Kinetics-Sounds
dataset demonstrate that our proposed temporal and modality-based attacks
achieve state-of-the-art effectiveness in degrading model performance, while
our adversarial training defense largely improves both adversarial robustness
and adversarial training efficiency.
| no_new_dataset | 0.940735 |
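A minimal sketch of the temporal-invariance idea above: optimize a single perturbation shared across consecutive time segments, exploiting temporal redundancy. The audio-visual model's call signature, the frame-level tiling, and the budgets are assumptions rather than the paper's released code.

```python
# Sketch: one frame-level perturbation broadcast over all time steps,
# optimized with untargeted gradient ascent on the classification loss.
import torch
import torch.nn.functional as F

def temporal_invariance_attack(model, video, audio, label, steps=20,
                               eps=4 / 255, alpha=1 / 255):
    # video: (T, C, H, W); model(video_batch, audio) -> logits assumed.
    delta = torch.zeros_like(video[0], requires_grad=True)
    for _ in range(steps):
        adv_video = video + delta.unsqueeze(0)   # shared over time T
        loss = F.cross_entropy(model(adv_video, audio), label)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # untargeted ascent
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (video + delta.unsqueeze(0)).detach()
```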
2502.11965 | Jun Jiang | Jun Jiang, Wenjun Yu, Yunfan Li, Yuan Gao, Shugong Xu | A MIMO Wireless Channel Foundation Model via CIR-CSI Consistency | 6 pages, 2025 ICMLCN accepted | null | null | null | eess.SP cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of artificial intelligence, self-supervised learning has
demonstrated superior generalization capabilities by leveraging large-scale
unlabeled datasets for pretraining, which is especially critical for wireless
communication models to adapt to a variety of scenarios. This paper
innovatively treats Channel State Information (CSI) and Channel Impulse
Response (CIR) as naturally aligned multi-modal data and proposes the first
MIMO wireless channel foundation model, named CSI-CLIP. By effectively
capturing the joint representations of both CIR and CSI, CSI-CLIP exhibits
remarkable adaptability across scenarios and robust feature extraction
capabilities. Experimental results show that in the positioning task,
CSI-CLIP reduces the mean error distance by 22%; in the beam management task,
it increases accuracy by 1% compared to traditional supervised methods, and
it likewise improves performance in the channel identification task. These
improvements not only highlight the
potential and value of CSI-CLIP in integrating sensing and communication but
also demonstrate its significant advantages over existing techniques. Moreover,
viewing CSI and CIR as multi-modal pairs and applying contrastive learning to
wireless channel foundation models open up new research directions in the
domain of MIMO wireless communications.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 16:13:40 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 13:07:25 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Jiang",
"Jun",
""
],
[
"Yu",
"Wenjun",
""
],
[
"Li",
"Yunfan",
""
],
[
"Gao",
"Yuan",
""
],
[
"Xu",
"Shugong",
""
]
]
| TITLE: A MIMO Wireless Channel Foundation Model via CIR-CSI Consistency
ABSTRACT: In the field of artificial intelligence, self-supervised learning has
demonstrated superior generalization capabilities by leveraging large-scale
unlabeled datasets for pretraining, which is especially critical for wireless
communication models to adapt to a variety of scenarios. This paper
innovatively treats Channel State Information (CSI) and Channel Impulse
Response (CIR) as naturally aligned multi-modal data and proposes the first
MIMO wireless channel foundation model, named CSI-CLIP. By effectively
capturing the joint representations of both CIR and CSI, CSI-CLIP exhibits
remarkable adaptability across scenarios and robust feature extraction
capabilities. Experimental results show that in the positioning task,
CSI-CLIP reduces the mean error distance by 22%; in the beam management task,
it increases accuracy by 1% compared to traditional supervised methods, and
it likewise improves performance in the channel identification task. These
improvements not only highlight the
potential and value of CSI-CLIP in integrating sensing and communication but
also demonstrate its significant advantages over existing techniques. Moreover,
viewing CSI and CIR as multi-modal pairs and applying contrastive learning to
wireless channel foundation models open up new research directions in the
domain of MIMO wireless communications.
| no_new_dataset | 0.947186 |
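The name "CSI-CLIP" implies a CLIP-style symmetric contrastive objective over aligned (CIR, CSI) pairs, which is easy to sketch. Only the pairing idea comes from the abstract; the encoders and temperature are placeholders.

```python
# Sketch of a symmetric InfoNCE loss over aligned (CIR, CSI) batches:
# each CIR embedding should match its own CSI embedding and vice versa.
import torch
import torch.nn.functional as F

def clip_loss(cir_emb, csi_emb, temperature=0.07):
    cir = F.normalize(cir_emb, dim=-1)
    csi = F.normalize(csi_emb, dim=-1)
    logits = cir @ csi.t() / temperature            # (B, B) similarities
    targets = torch.arange(len(cir), device=cir.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```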
2502.12361 | Xiao Yu | Xiao Yu, Ruize Xu, Chengyuan Xue, Jinzhong Zhang, Xu Ma, Zhou Yu | ConFit v2: Improving Resume-Job Matching using Hypothetical Resume
Embedding and Runner-Up Hard-Negative Mining | arXiv admin note: text overlap with arXiv:2401.16349 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reliable resume-job matching system helps a company recommend suitable
candidates from a pool of resumes and helps a job seeker find relevant jobs
from a list of job posts. However, since job seekers apply only to a few jobs,
interaction labels in resume-job datasets are sparse. We introduce ConFit v2,
an improvement over ConFit to tackle this sparsity problem. We propose two
techniques to enhance the encoder's contrastive training process: augmenting
job data with hypothetical reference resume generated by a large language
model; and creating high-quality hard negatives from unlabeled resume/job pairs
using a novel hard-negative mining strategy. We evaluate ConFit v2 on two
real-world datasets and demonstrate that it outperforms ConFit and prior
methods (including BM25 and OpenAI text-embedding-003), achieving an average
absolute improvement of 13.8% in recall and 17.5% in nDCG across job-ranking
and resume-ranking tasks.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 22:56:42 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Feb 2025 19:18:31 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Mar 2025 22:19:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yu",
"Xiao",
""
],
[
"Xu",
"Ruize",
""
],
[
"Xue",
"Chengyuan",
""
],
[
"Zhang",
"Jinzhong",
""
],
[
"Ma",
"Xu",
""
],
[
"Yu",
"Zhou",
""
]
]
| TITLE: ConFit v2: Improving Resume-Job Matching using Hypothetical Resume
Embedding and Runner-Up Hard-Negative Mining
ABSTRACT: A reliable resume-job matching system helps a company recommend suitable
candidates from a pool of resumes and helps a job seeker find relevant jobs
from a list of job posts. However, since job seekers apply only to a few jobs,
interaction labels in resume-job datasets are sparse. We introduce ConFit v2,
an improvement over ConFit to tackle this sparsity problem. We propose two
techniques to enhance the encoder's contrastive training process: augmenting
job data with a hypothetical reference resume generated by a large language
model; and creating high-quality hard negatives from unlabeled resume/job pairs
using a novel hard-negative mining strategy. We evaluate ConFit v2 on two
real-world datasets and demonstrate that it outperforms ConFit and prior
methods (including BM25 and OpenAI text-embedding-003), achieving an average
absolute improvement of 13.8% in recall and 17.5% in nDCG across job-ranking
and resume-ranking tasks.
| no_new_dataset | 0.948202 |
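The "runner-up" hard-negative mining above has a compact core: for each resume, score all jobs with the current encoder and take the highest-scoring job that is not a labeled positive. The sketch below shows only that selection rule; any additional filters or thresholds from the paper are omitted.

```python
# Sketch of runner-up hard-negative mining from unlabeled resume/job pairs.
import numpy as np

def mine_hard_negatives(resume_vecs, job_vecs, positives):
    """positives[i] = set of job indices labeled relevant for resume i."""
    sims = resume_vecs @ job_vecs.T          # (R, J) similarity matrix
    negatives = []
    for i, row in enumerate(sims):
        order = np.argsort(-row)             # best-scoring jobs first
        runner_up = next(j for j in order if j not in positives[i])
        negatives.append(runner_up)          # hardest non-positive job
    return negatives
```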
2502.12949 | Behraj Khan | Behraj Khan, Behroz Mirza, Nouman Durrani, Tahir Syed | Efficient Learning Under Density Shift in Incremental Settings Using
Cram\'er-Rao-Based Regularization | It is the older version of our this paper arXiv:2502.15756. So this
is the duplicate older version mistakenly uploaded. There are mistakes in the
method part of this paper | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The continuous surge in data volume and velocity is often dealt with using
data orchestration and distributed processing approaches, abstracting away the
machine learning challenges that exist at the algorithmic level. With growing
interest in automating the learning loop, training with data that arrive in a
sequence rather than in the classical in-memory training data form will face a
machine learning challenge because of evolving feature distributions across
batches of training data biasing the cross-validation step
(\cite{sugiyama2012machine}). This work takes a distributed density estimation
angle to the problem where data are temporally distributed. It processes data
in batches and allows a neural network to treat a batch as training data. The
method accumulates knowledge about the data density via posterior probability
absorption using the Fisher Information Matrix, which contains information
about the local optimization gradients for the batch. This is then used as a
regularizer for the loss in the following batch, and therefore the density
estimate for the entire dataset constructively gets more robust to the non-iid
distribution shift. This requires only a pair of batches in memory at a
time, so the space cost is not a function of the size of the complete,
distributed dataset. We propose a novel regularization-based approach,
Covariate Shift Correction ($C^{2}A$), which leverages Fisher information and
Kullback-Leibler divergence to adapt to both natural and sequential covariate
shift caused by dataset fragmentation. $C^{2}A$ achieves up to $19\%$ higher
accuracy than state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2025 16:00:10 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:42:17 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Khan",
"Behraj",
""
],
[
"Mirza",
"Behroz",
""
],
[
"Durrani",
"Nouman",
""
],
[
"Syed",
"Tahir",
""
]
]
| TITLE: Efficient Learning Under Density Shift in Incremental Settings Using
Cram\'er-Rao-Based Regularization
ABSTRACT: The continuous surge in data volume and velocity is often dealt with using
data orchestration and distributed processing approaches, abstracting away the
machine learning challenges that exist at the algorithmic level. With growing
interest in automating the learning loop, training with data that arrive in a
sequence rather than in the classical in-memory training data form will face a
machine learning challenge because of evolving feature distributions across
batches of training data biasing the cross-validation step
(\cite{sugiyama2012machine}). This work takes a distributed density estimation
angle to the problem where data are temporally distributed. It processes data
in batches and allows a neural network to treat a batch as training data. The
method accumulates knowledge about the data density via posterior probability
absorption using the Fisher Information Matrix, which contains information
about the local optimization gradients for the batch. This is then used as a
regularizer for the loss in the following batch, and therefore the density
estimate for the entire dataset constructively gets more robust to the non-iid
distribution shift. This requires only a pair of batches in memory at a
time, so the space cost is not a function of the size of the complete,
distributed dataset. We propose a novel regularization-based approach,
Covariate Shift Correction ($C^{2}A$), which leverages Fisher information and
Kullback-Leibler divergence to adapt to both natural and sequential covariate
shift caused by dataset fragmentation. $C^{2}A$ achieves up to $19\%$ higher
accuracy than state-of-the-art methods.
| no_new_dataset | 0.950778 |
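The Fisher-information regularizer across sequential batches can be sketched in the style of elastic weight consolidation: when training on batch t, penalize deviation from batch t-1's optimum, weighted by a diagonal Fisher estimate. This is a minimal reading of the mechanism described above; the paper's KL-divergence term is omitted.

```python
# Sketch of a diagonal-Fisher penalty carried from the previous batch.
import torch

def diagonal_fisher(model, nll_loss):
    """Crude single-batch diagonal Fisher estimate from squared gradients."""
    grads = torch.autograd.grad(nll_loss, list(model.parameters()),
                                retain_graph=True)
    return [g.detach() ** 2 for g in grads]

def regularized_loss(model, batch_loss, prev_params, fisher, lam=1.0):
    """Current-batch loss plus the Fisher-weighted deviation penalty."""
    penalty = sum((f * (p - p0) ** 2).sum()
                  for p, p0, f in zip(model.parameters(), prev_params, fisher))
    return batch_loss + 0.5 * lam * penalty
```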
2502.13452 | Dongjae Lee | Hyeonjae Gil, Dongjae Lee, Giseop Kim, and Ayoung Kim | Ephemerality meets LiDAR-based Lifelong Mapping | 6+2 pages, 11 figures, accepted at ICRA 2025 | null | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Lifelong mapping is crucial for the long-term deployment of robots in dynamic
environments. In this paper, we present ELite, an ephemerality-aided
LiDAR-based lifelong mapping framework which can seamlessly align multiple
session data, remove dynamic objects, and update maps in an end-to-end fashion.
Map elements are typically classified as static or dynamic, but cases like
parked cars indicate the need for more detailed categories than binary. Central
to our approach is the probabilistic modeling of the world into two-stage
$\textit{ephemerality}$, which represents the transiency of points in the map
within two different time scales. By leveraging the spatiotemporal context
encoded in ephemeralities, ELite can accurately infer transient map elements,
maintain a reliable up-to-date static map, and improve robustness in aligning
the new data in a more fine-grained manner. Extensive real-world experiments on
long-term datasets demonstrate the robustness and effectiveness of our system.
The source code is publicly available for the robotics community:
https://github.com/dongjae0107/ELite.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 05:58:30 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 11:16:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Gil",
"Hyeonjae",
""
],
[
"Lee",
"Dongjae",
""
],
[
"Kim",
"Giseop",
""
],
[
"Kim",
"Ayoung",
""
]
]
| TITLE: Ephemerality meets LiDAR-based Lifelong Mapping
ABSTRACT: Lifelong mapping is crucial for the long-term deployment of robots in dynamic
environments. In this paper, we present ELite, an ephemerality-aided
LiDAR-based lifelong mapping framework which can seamlessly align multiple
session data, remove dynamic objects, and update maps in an end-to-end fashion.
Map elements are typically classified as static or dynamic, but cases like
parked cars indicate the need for more detailed categories than binary. Central
to our approach is the probabilistic modeling of the world into two-stage
$\textit{ephemerality}$, which represents the transiency of points in the map
within two different time scales. By leveraging the spatiotemporal context
encoded in ephemeralities, ELite can accurately infer transient map elements,
maintain a reliable up-to-date static map, and improve robustness in aligning
the new data in a more fine-grained manner. Extensive real-world experiments on
long-term datasets demonstrate the robustness and effectiveness of our system.
The source code is publicly available for the robotics community:
https://github.com/dongjae0107/ELite.
| no_new_dataset | 0.950041 |
2502.14616 | Jiangyuan Liu | Jiangyuan Liu, Hongxuan Ma, Yuxin Guo, Yuhao Zhao, Chi Zhang, Wei Sui,
Wei Zou | Monocular Depth Estimation and Segmentation for Transparent Object with
Iterative Semantic and Geometric Fusion | Accepted by ICRA(2025). The code is accessible through:
https://github.com/L-J-Yuan/MODEST | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Transparent object perception is indispensable for numerous robotic tasks.
However, accurately segmenting and estimating the depth of transparent objects
remain challenging due to complex optical properties. Existing methods
primarily delve into only one task using extra inputs or specialized sensors,
neglecting the valuable interactions among tasks and the subsequent refinement
process, leading to suboptimal and blurry predictions. To address these issues,
we propose a monocular framework, which is the first to excel in both
segmentation and depth estimation of transparent objects, with only a
single-image input. Specifically, we devise a novel semantic and geometric
fusion module, effectively integrating the multi-scale information between
tasks. In addition, drawing inspiration from human perception of objects, we
further incorporate an iterative strategy, which progressively refines initial
features for clearer results. Experiments on two challenging synthetic and
real-world datasets demonstrate that our model surpasses state-of-the-art
monocular, stereo, and multi-view methods by a large margin of about
38.8%-46.2% with only a single RGB input. Codes and models are publicly
available at https://github.com/L-J-Yuan/MODEST.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 14:57:01 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:37:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Jiangyuan",
""
],
[
"Ma",
"Hongxuan",
""
],
[
"Guo",
"Yuxin",
""
],
[
"Zhao",
"Yuhao",
""
],
[
"Zhang",
"Chi",
""
],
[
"Sui",
"Wei",
""
],
[
"Zou",
"Wei",
""
]
]
| TITLE: Monocular Depth Estimation and Segmentation for Transparent Object with
Iterative Semantic and Geometric Fusion
ABSTRACT: Transparent object perception is indispensable for numerous robotic tasks.
However, accurately segmenting and estimating the depth of transparent objects
remain challenging due to complex optical properties. Existing methods
primarily delve into only one task using extra inputs or specialized sensors,
neglecting the valuable interactions among tasks and the subsequent refinement
process, leading to suboptimal and blurry predictions. To address these issues,
we propose a monocular framework, which is the first to excel in both
segmentation and depth estimation of transparent objects, with only a
single-image input. Specifically, we devise a novel semantic and geometric
fusion module, effectively integrating the multi-scale information between
tasks. In addition, drawing inspiration from human perception of objects, we
further incorporate an iterative strategy, which progressively refines initial
features for clearer results. Experiments on two challenging synthetic and
real-world datasets demonstrate that our model surpasses state-of-the-art
monocular, stereo, and multi-view methods by a large margin of about
38.8%-46.2% with only a single RGB input. Codes and models are publicly
available at https://github.com/L-J-Yuan/MODEST.
| no_new_dataset | 0.947186 |
2502.14897 | Hamid Moradi-Kamali | Hamid Moradi-Kamali, Mohammad-Hossein Rajabi-Ghozlou, Mahdi Ghazavi,
Ali Soltani, Amirreza Sattarzadeh and Reza Entezari-Maleki | Market-Derived Financial Sentiment Analysis: Context-Aware Language
Models for Crypto Forecasting | 13 pages, 6 figures | null | null | null | cs.CE cs.CL cs.LG q-fin.ST | http://creativecommons.org/licenses/by/4.0/ | Financial Sentiment Analysis (FSA) traditionally relies on human-annotated
sentiment labels to infer investor sentiment and forecast market movements.
However, inferring the potential market impact of words based on their
human-perceived intentions is inherently challenging. We hypothesize that the
historical market reactions to words offer a more reliable indicator of their
potential impact on markets than subjective sentiment interpretations by human
annotators. To test this hypothesis, a market-derived labeling approach is
proposed to assign tweet labels based on ensuing short-term price trends,
enabling the language model to capture the relationship between textual signals
and market dynamics directly. A domain-specific language model was fine-tuned
on these labels, achieving up to an 11% improvement in short-term trend
prediction accuracy over traditional sentiment-based benchmarks. Moreover, by
incorporating market and temporal context through prompt-tuning, the proposed
context-aware language model demonstrated an accuracy of 89.6% on a curated
dataset of 227 impactful Bitcoin-related news events with significant market
impacts. Aggregating daily tweet predictions into trading signals, our method
outperformed traditional fusion models (which combine sentiment-based and
price-based predictions). It challenged the assumption that sentiment-based
signals are inferior to price-based predictions in forecasting market
movements. Backtesting these signals across three distinct market regimes
yielded robust Sharpe ratios of up to 5.07 in trending markets and 3.73 in
neutral markets. Our findings demonstrate that language models can serve as
effective short-term market predictors. This paradigm shift underscores the
untapped capabilities of language models in financial decision-making and opens
new avenues for market prediction applications.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 21:35:18 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 10:18:09 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Moradi-Kamali",
"Hamid",
""
],
[
"Rajabi-Ghozlou",
"Mohammad-Hossein",
""
],
[
"Ghazavi",
"Mahdi",
""
],
[
"Soltani",
"Ali",
""
],
[
"Sattarzadeh",
"Amirreza",
""
],
[
"Entezari-Maleki",
"Reza",
""
]
]
| TITLE: Market-Derived Financial Sentiment Analysis: Context-Aware Language
Models for Crypto Forecasting
ABSTRACT: Financial Sentiment Analysis (FSA) traditionally relies on human-annotated
sentiment labels to infer investor sentiment and forecast market movements.
However, inferring the potential market impact of words based on their
human-perceived intentions is inherently challenging. We hypothesize that the
historical market reactions to words offer a more reliable indicator of their
potential impact on markets than subjective sentiment interpretations by human
annotators. To test this hypothesis, a market-derived labeling approach is
proposed to assign tweet labels based on ensuing short-term price trends,
enabling the language model to capture the relationship between textual signals
and market dynamics directly. A domain-specific language model was fine-tuned
on these labels, achieving up to an 11% improvement in short-term trend
prediction accuracy over traditional sentiment-based benchmarks. Moreover, by
incorporating market and temporal context through prompt-tuning, the proposed
context-aware language model demonstrated an accuracy of 89.6% on a curated
dataset of 227 impactful Bitcoin-related news events with significant market
impacts. Aggregating daily tweet predictions into trading signals, our method
outperformed traditional fusion models (which combine sentiment-based and
price-based predictions). It challenged the assumption that sentiment-based
signals are inferior to price-based predictions in forecasting market
movements. Backtesting these signals across three distinct market regimes
yielded robust Sharpe ratios of up to 5.07 in trending markets and 3.73 in
neutral markets. Our findings demonstrate that language models can serve as
effective short-term market predictors. This paradigm shift underscores the
untapped capabilities of language models in financial decision-making and opens
new avenues for market prediction applications.
| no_new_dataset | 0.951684 |
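Note: the market-derived labeling scheme described above is easy to prototype. The sketch below assigns each tweet a label from the sign of the ensuing short-term return; the 4-hour horizon, 0.2% threshold, and data layout are illustrative assumptions, not the paper's exact recipe.

```python
import pandas as pd

def market_derived_labels(tweets: pd.DataFrame, prices: pd.Series,
                          horizon: str = "4h", threshold: float = 0.002) -> pd.Series:
    """Label each tweet by the sign of the price move over the next `horizon`.

    tweets: DataFrame with a DatetimeIndex (tweet timestamps).
    prices: Series of prices indexed by sorted timestamps (e.g., minute-level BTC/USD).
    Returns a Series of labels in {"up", "down", "neutral"}.
    """
    labels = []
    for ts in tweets.index:
        # Price at (or just before) the tweet, and after the horizon elapses.
        p0 = prices.asof(ts)
        p1 = prices.asof(ts + pd.Timedelta(horizon))
        ret = (p1 - p0) / p0
        if ret > threshold:
            labels.append("up")
        elif ret < -threshold:
            labels.append("down")
        else:
            labels.append("neutral")
    return pd.Series(labels, index=tweets.index, name="market_label")
```

Labels produced this way can replace human sentiment annotations when fine-tuning a tweet classifier, which is the substitution the abstract hypothesizes.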
2502.15393 | Hongchen Wei | Hongchen Wei, Zhihong Tan, Yaosi Hu, Chang Wen Chen, Zhenzhong Chen | LongCaptioning: Unlocking the Power of Long Video Caption Generation in
Large Multimodal Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Multimodal Models (LMMs) have demonstrated exceptional performance in
video captioning tasks, particularly for short videos. However, as the length
of the video increases, generating long, detailed captions becomes a
significant challenge. In this paper, we investigate the limitations of LMMs in
generating long captions for long videos. Our analysis reveals that open-source
LMMs struggle to consistently produce outputs exceeding 300 words, leading to
incomplete or overly concise descriptions of the visual content. This
limitation hinders the ability of LMMs to provide comprehensive and detailed
captions for long videos, ultimately missing important visual information.
Through controlled experiments, we find that the scarcity of paired examples
with long captions during training is the primary factor limiting the model's
output length. However, manually annotating long-caption examples for long-form
videos is time-consuming and expensive. To overcome the annotation bottleneck,
we propose the LongCaption-Agent, a framework that synthesizes long caption
data by hierarchical semantic aggregation. Using LongCaption-Agent, we
curated a new long-caption dataset,
LongCaption-10K. We also develop LongCaption-Bench, a benchmark designed to
comprehensively evaluate the quality of long captions generated by LMMs. By
incorporating LongCaption-10K into training, we enable LMMs to generate
captions exceeding 1,000 words for long-form videos, while maintaining high
output quality. In LongCaption-Bench, our model achieved State-of-The-Art
performance, even surpassing larger proprietary models like GPT4o.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 11:40:23 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 02:06:59 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wei",
"Hongchen",
""
],
[
"Tan",
"Zhihong",
""
],
[
"Hu",
"Yaosi",
""
],
[
"Chen",
"Chang Wen",
""
],
[
"Chen",
"Zhenzhong",
""
]
]
| TITLE: LongCaptioning: Unlocking the Power of Long Video Caption Generation in
Large Multimodal Models
ABSTRACT: Large Multimodal Models (LMMs) have demonstrated exceptional performance in
video captioning tasks, particularly for short videos. However, as the length
of the video increases, generating long, detailed captions becomes a
significant challenge. In this paper, we investigate the limitations of LMMs in
generating long captions for long videos. Our analysis reveals that open-source
LMMs struggle to consistently produce outputs exceeding 300 words, leading to
incomplete or overly concise descriptions of the visual content. This
limitation hinders the ability of LMMs to provide comprehensive and detailed
captions for long videos, ultimately missing important visual information.
Through controlled experiments, we find that the scarcity of paired examples
with long captions during training is the primary factor limiting the model's
output length. However, manually annotating long-caption examples for long-form
videos is time-consuming and expensive. To overcome the annotation bottleneck,
we propose the LongCaption-Agent, a framework that synthesizes long caption
data by hierarchical semantic aggregation. Using LongCaption-Agent, we
curated a new long-caption dataset,
LongCaption-10K. We also develop LongCaption-Bench, a benchmark designed to
comprehensively evaluate the quality of long captions generated by LMMs. By
incorporating LongCaption-10K into training, we enable LMMs to generate
captions exceeding 1,000 words for long-form videos, while maintaining high
output quality. In LongCaption-Bench, our model achieved State-of-The-Art
performance, even surpassing larger proprietary models like GPT4o.
| new_dataset | 0.957078 |
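Note: the hierarchical semantic aggregation used by LongCaption-Agent can be pictured as a bottom-up merge of clip-level captions. In this sketch, `caption_clip` and `merge_captions` are placeholder callables (e.g., a VLM call and an LLM call); they are assumptions, not the released LongCaption-Agent interface.

```python
from typing import Callable, List

def hierarchical_caption(clips: List[str],
                         caption_clip: Callable[[str], str],
                         merge_captions: Callable[[str, str], str]) -> str:
    """Caption each short clip, then merge captions pairwise until one remains."""
    if not clips:
        raise ValueError("need at least one clip")
    level = [caption_clip(c) for c in clips]          # leaf-level captions
    while len(level) > 1:
        merged = [merge_captions(level[i], level[i + 1])
                  for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:                       # carry an odd tail upward
            merged.append(level[-1])
        level = merged
    return level[0]                                   # one long, detailed caption

# Toy usage with trivial placeholder callables.
print(hierarchical_caption(["clip1", "clip2", "clip3"],
                           caption_clip=lambda c: f"caption({c})",
                           merge_captions=lambda a, b: f"[{a} + {b}]"))
```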
2502.15770 | Lun Wang | Lun Wang, Chuanqi Shi, Shaoshui Du, Yiyi Tao, Yixian Shen, Hang Zheng,
Yanxin Shen, Xinyu Qiu | Performance Review on LLM for solving leetcode problems | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive performance evaluation of Large Language
Models (LLMs) in solving programming challenges from Leetcode, a widely used
platform for algorithm practice and technical interviews. We began by crawling
the Leetcode website to collect a diverse set of problems encompassing various
difficulty levels and topics. Using this dataset, we generated solutions with
multiple LLMs, including GPT-4 and GPT-3.5-turbo (ChatGPT-turbo). The generated
solutions were systematically evaluated for correctness and efficiency. We
employed the pass@k metric to assess the success rates within a given number of
attempts and analyzed the runtime performance of the solutions. Our results
highlight the strengths and limitations of current LLMs [10] in code generation
and problem-solving tasks, providing insights into their potential applications
and areas for improvement in automated programming assistance.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 08:52:45 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 00:24:08 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wang",
"Lun",
""
],
[
"Shi",
"Chuanqi",
""
],
[
"Du",
"Shaoshui",
""
],
[
"Tao",
"Yiyi",
""
],
[
"Shen",
"Yixian",
""
],
[
"Zheng",
"Hang",
""
],
[
"Shen",
"Yanxin",
""
],
[
"Qiu",
"Xinyu",
""
]
]
| TITLE: Performance Review on LLM for solving leetcode problems
ABSTRACT: This paper presents a comprehensive performance evaluation of Large Language
Models (LLMs) in solving programming challenges from Leetcode, a widely used
platform for algorithm practice and technical interviews. We began by crawling
the Leetcode website to collect a diverse set of problems encompassing various
difficulty levels and topics. Using this dataset, we generated solutions with
multiple LLMs, including GPT-4 and GPT-3.5-turbo (ChatGPT-turbo). The generated
solutions were systematically evaluated for correctness and efficiency. We
employed the pass@k metric to assess the success rates within a given number of
attempts and analyzed the runtime performance of the solutions. Our results
highlight the strengths and limitations of current LLMs [10] in code generation
and problem-solving tasks, providing insights into their potential applications
and areas for improvement in automated programming assistance.
| no_new_dataset | 0.909023 |
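Note: pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021): with n generated solutions per problem, of which c pass all tests, pass@k = 1 - C(n-c, k) / C(n, k). A minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k draws
    (without replacement) from n samples, c of which are correct, passes."""
    if n - c < k:
        return 1.0  # cannot pick k samples that are all incorrect
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g., 10 generations per problem, 3 correct: report pass@1 and pass@5
print(pass_at_k(10, 3, 1))  # 0.3
print(pass_at_k(10, 3, 5))  # ~0.917
```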
2502.15850 | Govind Pimpale | Govind Pimpale, Axel H{\o}jmark, J\'er\'emy Scheurer, Marius Hobbhahn | Forecasting Frontier Language Model Agent Capabilities | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | As Language Models (LMs) increasingly operate as autonomous agents,
accurately forecasting their capabilities becomes crucial for societal
preparedness. We evaluate six forecasting methods that predict downstream
capabilities of LM agents. We use "one-step" approaches that predict benchmark
scores from input metrics like compute or model release date directly, or
"two-step" approaches that first predict an intermediate metric like the
principal component of cross-benchmark performance (PC-1) and human-evaluated
competitive Elo ratings. We evaluate our forecasting methods by backtesting
them on a dataset of 38 LMs from the OpenLLM 2 leaderboard. We then use the
validated two-step approach (Release Date$\to$Elo$\to$Benchmark) to predict LM
agent performance for frontier models on three benchmarks: SWE-Bench Verified
(software development), Cybench (cybersecurity assessment), and RE-Bench (ML
research engineering). Our forecast predicts that by the beginning of 2026,
non-specialized LM agents with low capability elicitation will reach a success
rate of 54% on SWE-Bench Verified, while state-of-the-art LM agents will reach
an 87% success rate. Our approach does not account for recent advances in
inference-compute scaling and might thus be too conservative.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 02:34:17 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 17:11:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Pimpale",
"Govind",
""
],
[
"Højmark",
"Axel",
""
],
[
"Scheurer",
"Jérémy",
""
],
[
"Hobbhahn",
"Marius",
""
]
]
| TITLE: Forecasting Frontier Language Model Agent Capabilities
ABSTRACT: As Language Models (LMs) increasingly operate as autonomous agents,
accurately forecasting their capabilities becomes crucial for societal
preparedness. We evaluate six forecasting methods that predict downstream
capabilities of LM agents. We use "one-step" approaches that predict benchmark
scores from input metrics like compute or model release date directly, or
"two-step" approaches that first predict an intermediate metric like the
principal component of cross-benchmark performance (PC-1) and human-evaluated
competitive Elo ratings. We evaluate our forecasting methods by backtesting
them on a dataset of 38 LMs from the OpenLLM 2 leaderboard. We then use the
validated two-step approach (Release Date$\to$Elo$\to$Benchmark) to predict LM
agent performance for frontier models on three benchmarks: SWE-Bench Verified
(software development), Cybench (cybersecurity assessment), and RE-Bench (ML
research engineering). Our forecast predicts that by the beginning of 2026,
non-specialized LM agents with low capability elicitation will reach a success
rate of 54% on SWE-Bench Verified, while state-of-the-art LM agents will reach
an 87% success rate. Our approach does not account for recent advances in
inference-compute scaling and might thus be too conservative.
| no_new_dataset | 0.944689 |
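Note: the validated two-step forecast (Release Date -> Elo -> Benchmark) amounts to chaining two fitted curves. The sketch below uses plain linear fits on fabricated numbers purely for illustration; the paper's actual functional forms and data differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: release dates (days since a reference date),
# competitive Elo ratings, and benchmark success rates in [0, 1].
dates = np.array([[0], [120], [240], [360], [480]], dtype=float)
elo = np.array([1000.0, 1060.0, 1130.0, 1180.0, 1250.0])
bench = np.array([0.20, 0.28, 0.41, 0.47, 0.58])

step1 = LinearRegression().fit(dates, elo)                 # date -> Elo
step2 = LinearRegression().fit(elo.reshape(-1, 1), bench)  # Elo -> benchmark

future_date = np.array([[720.0]])
predicted_elo = step1.predict(future_date)
predicted_bench = step2.predict(predicted_elo.reshape(-1, 1))
print(float(predicted_elo[0]), float(np.clip(predicted_bench[0], 0.0, 1.0)))
```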
2502.16190 | Xianghong Xu | Xianghong Xu, Tieying Zhang, Xiao He, Haoyang Li, Rong Kang, Shuai
Wang, Linhui Xu, Zhimin Liang, Shangyu Luo, Lei Zhang, Jianjun Chen | AdaNDV: Adaptive Number of Distinct Value Estimation via Learning to
Select and Fuse Estimators | Accepted by VLDB 2025 | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Estimating the Number of Distinct Values (NDV) is fundamental for numerous
data management tasks, especially within database applications. However, most
existing works primarily focus on introducing new statistical or learned
estimators, while identifying the most suitable estimator for a given scenario
remains largely unexplored. Therefore, we propose AdaNDV, a learned method
designed to adaptively select and fuse existing estimators to address this
issue. Specifically, (1) we propose to use learned models to distinguish
between overestimated and underestimated estimators and then select appropriate
estimators from each category. This strategy provides a complementary
perspective by integrating overestimations and underestimations for error
correction, thereby improving the accuracy of NDV estimation. (2) To further
integrate the estimation results, we introduce a novel fusion approach that
employs a learned model to predict the weights of the selected estimators and
then applies a weighted sum to merge them. By combining these strategies, the
proposed AdaNDV fundamentally distinguishes itself from previous works that
directly estimate NDV. Moreover, extensive experiments conducted on real-world
datasets, with the number of individual columns being several orders of
magnitude larger than in previous studies, demonstrate the superior performance
of our method.
| [
{
"version": "v1",
"created": "Sat, 22 Feb 2025 11:28:15 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 02:47:36 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Xu",
"Xianghong",
""
],
[
"Zhang",
"Tieying",
""
],
[
"He",
"Xiao",
""
],
[
"Li",
"Haoyang",
""
],
[
"Kang",
"Rong",
""
],
[
"Wang",
"Shuai",
""
],
[
"Xu",
"Linhui",
""
],
[
"Liang",
"Zhimin",
""
],
[
"Luo",
"Shangyu",
""
],
[
"Zhang",
"Lei",
""
],
[
"Chen",
"Jianjun",
""
]
]
| TITLE: AdaNDV: Adaptive Number of Distinct Value Estimation via Learning to
Select and Fuse Estimators
ABSTRACT: Estimating the Number of Distinct Values (NDV) is fundamental for numerous
data management tasks, especially within database applications. However, most
existing works primarily focus on introducing new statistical or learned
estimators, while identifying the most suitable estimator for a given scenario
remains largely unexplored. Therefore, we propose AdaNDV, a learned method
designed to adaptively select and fuse existing estimators to address this
issue. Specifically, (1) we propose to use learned models to distinguish
between overestimated and underestimated estimators and then select appropriate
estimators from each category. This strategy provides a complementary
perspective by integrating overestimations and underestimations for error
correction, thereby improving the accuracy of NDV estimation. (2) To further
integrate the estimation results, we introduce a novel fusion approach that
employs a learned model to predict the weights of the selected estimators and
then applies a weighted sum to merge them. By combining these strategies, the
proposed AdaNDV fundamentally distinguishes itself from previous works that
directly estimate NDV. Moreover, extensive experiments conducted on real-world
datasets, with the number of individual columns being several orders of
magnitude larger than in previous studies, demonstrate the superior performance
of our method.
| no_new_dataset | 0.942135 |
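Note: the fusion stage of AdaNDV reduces to a learned weighted sum over the selected over- and under-estimators. A minimal sketch of that final step, with toy estimates and weights standing in for a trained selector and weight regressor:

```python
import numpy as np

def fuse_ndv(estimates: np.ndarray, weights: np.ndarray) -> float:
    """Weighted-sum fusion of selected NDV estimates.
    estimates: shape (m,) values from the m selected estimators
    weights:   shape (m,) non-negative weights, normalized here to sum to 1
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(estimates, dtype=float)))

# Toy example: one overestimator and one underestimator of a column's true NDV,
# with weights a trained model might output for this column's features.
over_est, under_est = 12000.0, 8500.0
print(fuse_ndv(np.array([over_est, under_est]), np.array([0.45, 0.55])))
```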
2502.16826 | Xiangbin Wei | Xiangbin Wei | Noise2Score3D: Unsupervised Tweedie's Approach for Point Cloud Denoising | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building on recent advances in Bayesian statistics and image denoising, we
propose Noise2Score3D, a fully unsupervised framework for point cloud denoising
that addresses the critical challenge of limited availability of clean data.
Noise2Score3D learns the gradient of the underlying point cloud distribution
directly from noisy data, eliminating the need for clean data during training.
By leveraging Tweedie's formula, our method performs inference in a single
step, avoiding the iterative processes used in existing unsupervised methods,
thereby improving both performance and efficiency. Experimental results
demonstrate that Noise2Score3D achieves state-of-the-art performance on
standard benchmarks, outperforming other unsupervised methods in Chamfer
distance and point-to-mesh metrics, and rivaling some supervised approaches.
Furthermore, Noise2Score3D demonstrates strong generalization ability beyond
training datasets. Additionally, we introduce Total Variation for Point Cloud,
a criterion that allows for the estimation of unknown noise parameters, which
further enhances the method's versatility and real-world utility.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 04:23:21 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:09:49 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wei",
"Xiangbin",
""
]
]
| TITLE: Noise2Score3D: Unsupervised Tweedie's Approach for Point Cloud Denoising
ABSTRACT: Building on recent advances in Bayesian statistics and image denoising, we
propose Noise2Score3D, a fully unsupervised framework for point cloud denoising
that addresses the critical challenge of limited availability of clean data.
Noise2Score3D learns the gradient of the underlying point cloud distribution
directly from noisy data, eliminating the need for clean data during training.
By leveraging Tweedie's formula, our method performs inference in a single
step, avoiding the iterative processes used in existing unsupervised methods,
thereby improving both performance and efficiency. Experimental results
demonstrate that Noise2Score3D achieves state-of-the-art performance on
standard benchmarks, outperforming other unsupervised methods in Chamfer
distance and point-to-mesh metrics, and rivaling some supervised approaches.
Furthermore, Noise2Score3D demonstrates strong generalization ability beyond
training datasets. Additionally, we introduce Total Variation for Point Cloud,
a criterion that allows for the estimation of unknown noise parameters, which
further enhances the method's versatility and real-world utility.
| no_new_dataset | 0.947721 |
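Note: the single-step inference follows Tweedie's formula: for Gaussian noise of variance sigma^2, the posterior mean is E[x | y] = y + sigma^2 * grad_y log p(y). Given a network that predicts the score of the noisy point distribution (an assumed interface, not the released model), denoising is one forward pass:

```python
import torch

@torch.no_grad()
def tweedie_denoise(noisy_points: torch.Tensor, score_net, sigma: float) -> torch.Tensor:
    """One-step point cloud denoising via Tweedie's formula.
    noisy_points: (N, 3) tensor of noisy 3D coordinates
    score_net:    callable mapping (N, 3) -> (N, 3) estimate of grad_y log p(y)
    sigma:        assumed Gaussian noise standard deviation
    """
    score = score_net(noisy_points)               # estimated score of p(y)
    return noisy_points + (sigma ** 2) * score    # posterior mean E[x | y]

# Toy usage with a dummy score network (the score of a standard Gaussian is -y).
pts = torch.randn(1024, 3)
print(tweedie_denoise(pts, score_net=lambda y: -y, sigma=0.05).shape)
```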
2502.16880 | Yepeng Weng | Yepeng Weng, Dianwen Mei, Huishi Qiu, Xujie Chen, Li Liu, Jiang Tian,
Zhongchao Shi | CORAL: Learning Consistent Representations across Multi-step Training
with Lighter Speculative Drafter | Under Review | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speculative decoding is a powerful technique that accelerates Large Language
Model (LLM) inference by leveraging a lightweight speculative draft model.
However, existing designs suffer performance degradation due to misalignment between
training and inference. Recent methods have tried to solve this issue by
adopting a multi-step training strategy, but the complex inputs of different
training steps make it harder for the draft model to converge. To address this,
we propose CORAL, a novel framework that improves both accuracy and efficiency
in speculative drafting. CORAL introduces Cross-Step Representation Alignment,
a method that enhances consistency across multiple training steps,
significantly improving speculative drafting performance. Additionally, we
identify the LM head as a major bottleneck in the inference speed of the draft
model. We introduce a weight-grouping mechanism that selectively activates a
subset of LM head parameters during inference, substantially reducing the
latency of the draft model. We evaluate CORAL on three LLM families and three
benchmark datasets, achieving speedup ratios of 2.50x-4.07x, outperforming
state-of-the-art methods such as EAGLE-2 and HASS. Our results demonstrate that
CORAL effectively mitigates training-inference misalignment and delivers
significant speedup for modern LLMs with large vocabularies.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 06:28:26 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 06:13:45 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Weng",
"Yepeng",
""
],
[
"Mei",
"Dianwen",
""
],
[
"Qiu",
"Huishi",
""
],
[
"Chen",
"Xujie",
""
],
[
"Liu",
"Li",
""
],
[
"Tian",
"Jiang",
""
],
[
"Shi",
"Zhongchao",
""
]
]
| TITLE: CORAL: Learning Consistent Representations across Multi-step Training
with Lighter Speculative Drafter
ABSTRACT: Speculative decoding is a powerful technique that accelerates Large Language
Model (LLM) inference by leveraging a lightweight speculative draft model.
However, existing designs suffer performance degradation due to misalignment between
training and inference. Recent methods have tried to solve this issue by
adopting a multi-step training strategy, but the complex inputs of different
training steps make it harder for the draft model to converge. To address this,
we propose CORAL, a novel framework that improves both accuracy and efficiency
in speculative drafting. CORAL introduces Cross-Step Representation Alignment,
a method that enhances consistency across multiple training steps,
significantly improving speculative drafting performance. Additionally, we
identify the LM head as a major bottleneck in the inference speed of the draft
model. We introduce a weight-grouping mechanism that selectively activates a
subset of LM head parameters during inference, substantially reducing the
latency of the draft model. We evaluate CORAL on three LLM families and three
benchmark datasets, achieving speedup ratios of 2.50x-4.07x, outperforming
state-of-the-art methods such as EAGLE-2 and HASS. Our results demonstrate that
CORAL effectively mitigates training-inference misalignment and delivers
significant speedup for modern LLMs with large vocabularies.
| no_new_dataset | 0.944382 |
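Note: the LM-head bottleneck comes from projecting hidden states onto the full vocabulary at every draft step. One way to picture the weight-grouping idea is to compute logits only over a pre-selected subset of head rows; the toy subset below is an illustrative assumption, not CORAL's exact grouping mechanism.

```python
import torch

def subset_lm_head_logits(hidden: torch.Tensor, lm_head_weight: torch.Tensor,
                          active_token_ids: torch.Tensor) -> torch.Tensor:
    """Compute draft-model logits over a subset of the vocabulary only.
    hidden:           (B, D) final hidden states
    lm_head_weight:   (V, D) full LM head weight matrix
    active_token_ids: (K,) ids of the tokens kept active (K << V)
    Returns (B, K) logits; column j corresponds to active_token_ids[j].
    """
    active_rows = lm_head_weight[active_token_ids]   # (K, D) gathered rows
    return hidden @ active_rows.T                    # (B, K) instead of (B, V)

# Toy usage: an 8k-token subset of a 32k vocabulary.
hidden = torch.randn(2, 512)
W = torch.randn(32000, 512)
active = torch.arange(8192)                          # assumed selection policy
print(subset_lm_head_logits(hidden, W, active).shape)  # torch.Size([2, 8192])
```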
2502.17173 | Xueru Wen | Xueru Wen, Jie Lou, Zichao Li, Yaojie Lu, Xing Yu, Yuqiu Ji, Guohai
Xu, Hongyu Lin, Ben He, Xianpei Han, Le Sun, Debing Zhang | Cheems: A Practical Guidance for Building and Evaluating Chinese Reward
Models from Scratch | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Reward models (RMs) are crucial for aligning large language models (LLMs)
with human preferences. However, most RM research is centered on English and
relies heavily on synthetic resources, which leads to limited and less reliable
datasets and benchmarks for Chinese. To address this gap, we introduce
CheemsBench, a fully human-annotated RM evaluation benchmark within Chinese
contexts, and CheemsPreference, a large-scale and diverse preference dataset
annotated through human-machine collaboration to support Chinese RM training.
We systematically evaluate open-source discriminative and generative RMs on
CheemsBench and observe significant limitations in their ability to capture
human preferences in Chinese scenarios. Additionally, based on
CheemsPreference, we construct an RM that achieves state-of-the-art performance
on CheemsBench, demonstrating the necessity of human supervision in RM
training. Our findings reveal that scaled AI-generated data struggles to fully
capture human preferences, emphasizing the importance of high-quality human
supervision in RM development.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 14:09:45 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 17:23:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Wen",
"Xueru",
""
],
[
"Lou",
"Jie",
""
],
[
"Li",
"Zichao",
""
],
[
"Lu",
"Yaojie",
""
],
[
"Yu",
"Xing",
""
],
[
"Ji",
"Yuqiu",
""
],
[
"Xu",
"Guohai",
""
],
[
"Lin",
"Hongyu",
""
],
[
"He",
"Ben",
""
],
[
"Han",
"Xianpei",
""
],
[
"Sun",
"Le",
""
],
[
"Zhang",
"Debing",
""
]
]
| TITLE: Cheems: A Practical Guidance for Building and Evaluating Chinese Reward
Models from Scratch
ABSTRACT: Reward models (RMs) are crucial for aligning large language models (LLMs)
with human preferences. However, most RM research is centered on English and
relies heavily on synthetic resources, which leads to limited and less reliable
datasets and benchmarks for Chinese. To address this gap, we introduce
CheemsBench, a fully human-annotated RM evaluation benchmark within Chinese
contexts, and CheemsPreference, a large-scale and diverse preference dataset
annotated through human-machine collaboration to support Chinese RM training.
We systematically evaluate open-source discriminative and generative RMs on
CheemsBench and observe significant limitations in their ability to capture
human preferences in Chinese scenarios. Additionally, based on
CheemsPreference, we construct an RM that achieves state-of-the-art performance
on CheemsBench, demonstrating the necessity of human supervision in RM
training. Our findings reveal that scaled AI-generated data struggles to fully
capture human preferences, emphasizing the importance of high-quality human
supervision in RM development.
| new_dataset | 0.959039 |
2502.17204 | Jie Zeng | Jie Zeng, Qianyu He, Qingyu Ren, Jiaqing Liang, Yanghua Xiao, Weikang
Zhou, Zeye Sun, Fei Yu | Order Matters: Investigate the Position Bias in Multi-constraint
Instruction Following | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world instructions with multiple constraints pose a significant
challenge to existing large language models (LLMs). An observation is that the
LLMs exhibit dramatic performance fluctuations when the order of the
incorporated constraints is perturbed. Yet, none of the existing works has systematically
investigated this position bias problem in the field of multi-constraint
instruction following. To bridge this gap, we design a probing task where we
quantitatively measure the difficulty distribution of the constraints by a
novel Constraint Difficulty Distribution Index (CDDI). Through the experimental
results, we find that LLMs are more performant when presented with the
constraints in a ``hard-to-easy'' order. This preference generalizes to LLMs
with different architectures and different parameter sizes. Additionally, we
conduct an explanation study, providing an intuitive insight into the
correlation between the LLM's attention and constraint orders. Our code and
dataset are publicly available at https://github.com/meowpass/PBIF.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 14:39:28 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 06:29:31 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zeng",
"Jie",
""
],
[
"He",
"Qianyu",
""
],
[
"Ren",
"Qingyu",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Xiao",
"Yanghua",
""
],
[
"Zhou",
"Weikang",
""
],
[
"Sun",
"Zeye",
""
],
[
"Yu",
"Fei",
""
]
]
| TITLE: Order Matters: Investigate the Position Bias in Multi-constraint
Instruction Following
ABSTRACT: Real-world instructions with multiple constraints pose a significant
challenge to existing large language models (LLMs). An observation is that the
LLMs exhibit dramatic performance fluctuations when the order of the
incorporated constraints is perturbed. Yet, none of the existing works has systematically
investigated this position bias problem in the field of multi-constraint
instruction following. To bridge this gap, we design a probing task where we
quantitatively measure the difficulty distribution of the constraints by a
novel Constraint Difficulty Distribution Index (CDDI). Through the experimental
results, we find that LLMs are more performant when presented with the
constraints in a ``hard-to-easy'' order. This preference generalizes to LLMs
with different architectures and different parameter sizes. Additionally, we
conduct an explanation study, providing an intuitive insight into the
correlation between the LLM's attention and constraint orders. Our code and
dataset are publicly available at https://github.com/meowpass/PBIF.
| no_new_dataset | 0.93744 |
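Note: the ``hard-to-easy'' finding suggests a cheap prompt-side intervention: score each constraint's difficulty and sort in descending order before composing the instruction. The difficulty scores below are hand-assigned stand-ins for whatever estimate is available (e.g., measured per-constraint failure rates):

```python
from typing import Callable, List

def order_constraints_hard_to_easy(constraints: List[str],
                                   difficulty: Callable[[str], float]) -> List[str]:
    """Sort constraints from hardest to easiest before building the prompt."""
    return sorted(constraints, key=difficulty, reverse=True)

# Toy usage with hypothetical difficulty scores.
scores = {"use exactly 3 paragraphs": 0.8,
          "mention Paris": 0.2,
          "rhyme every line": 0.9}
ordered = order_constraints_hard_to_easy(list(scores), scores.get)
prompt = "Write a short text. Constraints:\n" + "\n".join(f"- {c}" for c in ordered)
print(prompt)
```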
2502.17810 | Ruiqi Yan | Ruiqi Yan, Xiquan Li, Wenxi Chen, Zhikang Niu, Chen Yang, Ziyang Ma,
Kai Yu, Xie Chen | URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue
Models | null | null | null | null | cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, with advances in large language models (LLMs), end-to-end
spoken dialogue models (SDMs) have made significant strides. Compared to
text-based LLMs, the evaluation of SDMs needs to take speech-related aspects
into account, such as paralinguistic information and speech quality. However,
there is still a lack of comprehensive evaluations for SDMs in speech-to-speech
(S2S) scenarios. To address this gap, we propose URO-Bench, an extensive
benchmark for SDMs. Notably, URO-Bench is the first S2S benchmark that covers
evaluations of multilingualism, multi-round dialogues, and paralinguistics.
Our benchmark is divided into two difficulty levels: basic track and pro track,
consisting of 16 and 20 datasets respectively, evaluating the model's abilities
in Understanding, Reasoning, and Oral conversation. Evaluations on our proposed
benchmark reveal that current open-source SDMs perform rather well in daily QA
tasks, but lag behind their backbone LLMs in terms of instruction-following
ability and also suffer from catastrophic forgetting. Their performance in
advanced evaluations of paralinguistic information and audio understanding
remains subpar, highlighting the need for further research in this direction.
We hope that URO-Bench can effectively facilitate the development of spoken
dialogue models by providing a multifaceted evaluation of existing models and
helping to track progress in this area.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 03:31:48 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 11:14:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yan",
"Ruiqi",
""
],
[
"Li",
"Xiquan",
""
],
[
"Chen",
"Wenxi",
""
],
[
"Niu",
"Zhikang",
""
],
[
"Yang",
"Chen",
""
],
[
"Ma",
"Ziyang",
""
],
[
"Yu",
"Kai",
""
],
[
"Chen",
"Xie",
""
]
]
| TITLE: URO-Bench: A Comprehensive Benchmark for End-to-End Spoken Dialogue
Models
ABSTRACT: In recent years, with advances in large language models (LLMs), end-to-end
spoken dialogue models (SDMs) have made significant strides. Compared to
text-based LLMs, the evaluation of SDMs needs to take speech-related aspects
into account, such as paralinguistic information and speech quality. However,
there is still a lack of comprehensive evaluations for SDMs in speech-to-speech
(S2S) scenarios. To address this gap, we propose URO-Bench, an extensive
benchmark for SDMs. Notably, URO-Bench is the first S2S benchmark that covers
evaluations of multilingualism, multi-round dialogues, and paralinguistics.
Our benchmark is divided into two difficulty levels: basic track and pro track,
consisting of 16 and 20 datasets respectively, evaluating the model's abilities
in Understanding, Reasoning, and Oral conversation. Evaluations on our proposed
benchmark reveal that current open-source SDMs perform rather well in daily QA
tasks, but lag behind their backbone LLMs in terms of instruction-following
ability and also suffer from catastrophic forgetting. Their performance in
advanced evaluations of paralinguistic information and audio understanding
remains subpar, highlighting the need for further research in this direction.
We hope that URO-Bench can effectively facilitate the development of spoken
dialogue models by providing a multifaceted evaluation of existing models and
helping to track progress in this area.
| new_dataset | 0.787605 |
2502.17924 | Lin Hongzhan | Hongzhan Lin, Yang Deng, Yuxuan Gu, Wenxuan Zhang, Jing Ma, See-Kiong
Ng, Tat-Seng Chua | FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking
Evaluation of Large Language Models | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have significantly advanced fact-checking
studies. However, existing automated fact-checking evaluation methods rely on
static datasets and classification metrics, which fail to automatically
evaluate the justification production and uncover the nuanced limitations of
LLMs in fact-checking. In this work, we introduce FACT-AUDIT, an agent-driven
framework that adaptively and dynamically assesses LLMs' fact-checking
capabilities. Leveraging importance sampling principles and multi-agent
collaboration, FACT-AUDIT generates adaptive and scalable datasets, performs
iterative model-centric evaluations, and updates assessments based on
model-specific responses. By incorporating justification production alongside
verdict prediction, this framework provides a comprehensive and evolving audit
of LLMs' factual reasoning capabilities, to investigate their trustworthiness.
Extensive experiments demonstrate that FACT-AUDIT effectively differentiates
among state-of-the-art LLMs, providing valuable insights into model strengths
and limitations in model-centric fact-checking analysis.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 07:44:22 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 06:46:48 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Lin",
"Hongzhan",
""
],
[
"Deng",
"Yang",
""
],
[
"Gu",
"Yuxuan",
""
],
[
"Zhang",
"Wenxuan",
""
],
[
"Ma",
"Jing",
""
],
[
"Ng",
"See-Kiong",
""
],
[
"Chua",
"Tat-Seng",
""
]
]
| TITLE: FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking
Evaluation of Large Language Models
ABSTRACT: Large Language Models (LLMs) have significantly advanced fact-checking
studies. However, existing automated fact-checking evaluation methods rely on
static datasets and classification metrics, which fail to automatically
evaluate the justification production and uncover the nuanced limitations of
LLMs in fact-checking. In this work, we introduce FACT-AUDIT, an agent-driven
framework that adaptively and dynamically assesses LLMs' fact-checking
capabilities. Leveraging importance sampling principles and multi-agent
collaboration, FACT-AUDIT generates adaptive and scalable datasets, performs
iterative model-centric evaluations, and updates assessments based on
model-specific responses. By incorporating justification production alongside
verdict prediction, this framework provides a comprehensive and evolving audit
of LLMs' factual reasoning capabilities, to investigate their trustworthiness.
Extensive experiments demonstrate that FACT-AUDIT effectively differentiates
among state-of-the-art LLMs, providing valuable insights into model strengths
and limitations in model-centric fact-checking analysis.
| no_new_dataset | 0.944638 |
2502.17941 | Mingyuan Sun | Mingyuan Sun, Zheng Fang, Jiaxu Wang, Junjie Jiang, Delei Kong,
Chenming Hu, Yuetong Fang, Renjing Xu | Optimal Brain Apoptosis | Accepted to ICLR 2025 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The increasing complexity and parameter count of Convolutional Neural
Networks (CNNs) and Transformers pose challenges in terms of computational
efficiency and resource demands. Pruning has been identified as an effective
strategy to address these challenges by removing redundant elements such as
neurons, channels, or connections, thereby enhancing computational efficiency
without heavily compromising performance. This paper builds on the foundational
work of Optimal Brain Damage (OBD) by advancing the methodology of parameter
importance estimation using the Hessian matrix. Unlike previous approaches that
rely on approximations, we introduce Optimal Brain Apoptosis (OBA), a novel
pruning method that calculates the Hessian-vector product value directly for
each parameter. By decomposing the Hessian matrix across network layers and
identifying conditions under which inter-layer Hessian submatrices are
non-zero, we propose a highly efficient technique for computing the
second-order Taylor expansion of parameters. This approach allows for a more
precise pruning process, particularly in the context of CNNs and Transformers,
as validated in our experiments including VGG19, ResNet32, ResNet50, and
ViT-B/16 on the CIFAR10, CIFAR100, and ImageNet datasets. Our code is available at
https://github.com/NEU-REAL/OBA.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 08:03:04 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 12:00:57 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Sun",
"Mingyuan",
""
],
[
"Fang",
"Zheng",
""
],
[
"Wang",
"Jiaxu",
""
],
[
"Jiang",
"Junjie",
""
],
[
"Kong",
"Delei",
""
],
[
"Hu",
"Chenming",
""
],
[
"Fang",
"Yuetong",
""
],
[
"Xu",
"Renjing",
""
]
]
| TITLE: Optimal Brain Apoptosis
ABSTRACT: The increasing complexity and parameter count of Convolutional Neural
Networks (CNNs) and Transformers pose challenges in terms of computational
efficiency and resource demands. Pruning has been identified as an effective
strategy to address these challenges by removing redundant elements such as
neurons, channels, or connections, thereby enhancing computational efficiency
without heavily compromising performance. This paper builds on the foundational
work of Optimal Brain Damage (OBD) by advancing the methodology of parameter
importance estimation using the Hessian matrix. Unlike previous approaches that
rely on approximations, we introduce Optimal Brain Apoptosis (OBA), a novel
pruning method that calculates the Hessian-vector product value directly for
each parameter. By decomposing the Hessian matrix across network layers and
identifying conditions under which inter-layer Hessian submatrices are
non-zero, we propose a highly efficient technique for computing the
second-order Taylor expansion of parameters. This approach allows for a more
precise pruning process, particularly in the context of CNNs and Transformers,
as validated in our experiments including VGG19, ResNet32, ResNet50, and
ViT-B/16 on the CIFAR10, CIFAR100, and ImageNet datasets. Our code is available at
https://github.com/NEU-REAL/OBA.
| no_new_dataset | 0.950041 |
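Note: the direct Hessian-vector products that OBA relies on are available from autograd without materializing the Hessian, via Hv = grad_theta(grad_theta L . v). A minimal PyTorch sketch of the HVP plus the generic second-order Taylor term (shown for illustration; the paper's layer-wise decomposition is omitted):

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute Hv for the Hessian of `loss` w.r.t. `params` by double backward,
    without ever forming H."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    g_dot_v = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(g_dot_v, params)

# Toy usage: second-order Taylor estimate of the loss change for perturbation delta,
#   dL ~= g . delta + 0.5 * delta . (H delta)
w = torch.randn(5, requires_grad=True)
loss = (w ** 4).sum()                      # any twice-differentiable loss
delta = 0.01 * torch.randn(5)
(g,) = torch.autograd.grad(loss, [w], create_graph=True)
(hv,) = hessian_vector_product(loss, [w], [delta])
taylor = (g * delta).sum() + 0.5 * (hv * delta).sum()
print(float(taylor))
```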
2502.18176 | Mingkun Zhang | Mingkun Zhang, Keping Bi, Wei Chen, Jiafeng Guo, Xueqi Cheng | CLIPure: Purification in Latent Space via CLIP for Adversarially Robust
Zero-Shot Classification | accepted by ICLR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to build an adversarially robust zero-shot image
classifier. We ground our work on CLIP, a vision-language pre-trained encoder
model that can perform zero-shot classification by matching an image with text
prompts ``a photo of a <class-name>.''. Purification is the path we choose
since it does not require adversarial training on specific attack types and
thus can cope with any foreseen attacks. We then formulate purification risk as
the KL divergence between the joint distributions of the purification process
of denoising the adversarial samples and the attack process of adding
perturbations to benign samples, through bidirectional Stochastic Differential
Equations (SDEs). The final derived results inspire us to explore purification
in the multi-modal latent space of CLIP. We propose two variants for our
CLIPure approach: CLIPure-Diff which models the likelihood of images' latent
vectors with the DiffusionPrior module in DALL-E 2 (modeling the generation
process of CLIP's latent vectors), and CLIPure-Cos which models the likelihood
with the cosine similarity between the embeddings of an image and ``a photo of
a.''. As far as we know, CLIPure is the first purification method in
multi-modal latent space and CLIPure-Cos is the first purification method that
is not based on generative models, which substantially improves defense
efficiency. We conducted extensive experiments on CIFAR-10, ImageNet, and 13
datasets that previous CLIP-based defense methods used for evaluating zero-shot
classification robustness. Results show that CLIPure boosts the SOTA robustness
by a large margin, e.g., from 71.7% to 91.1% on CIFAR10, from 59.6% to 72.6% on
ImageNet, and 108% relative improvements of average robustness on the 13
datasets over previous SOTA. The code is available at
https://github.com/TMLResearchGroup-CAS/CLIPure.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 13:09:34 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 09:22:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Mingkun",
""
],
[
"Bi",
"Keping",
""
],
[
"Chen",
"Wei",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
]
| TITLE: CLIPure: Purification in Latent Space via CLIP for Adversarially Robust
Zero-Shot Classification
ABSTRACT: In this paper, we aim to build an adversarially robust zero-shot image
classifier. We ground our work on CLIP, a vision-language pre-trained encoder
model that can perform zero-shot classification by matching an image with text
prompts ``a photo of a <class-name>.''. Purification is the path we choose
since it does not require adversarial training on specific attack types and
thus can cope with any foreseen attacks. We then formulate purification risk as
the KL divergence between the joint distributions of the purification process
of denoising the adversarial samples and the attack process of adding
perturbations to benign samples, through bidirectional Stochastic Differential
Equations (SDEs). The final derived results inspire us to explore purification
in the multi-modal latent space of CLIP. We propose two variants for our
CLIPure approach: CLIPure-Diff which models the likelihood of images' latent
vectors with the DiffusionPrior module in DALL-E 2 (modeling the generation
process of CLIP's latent vectors), and CLIPure-Cos which models the likelihood
with the cosine similarity between the embeddings of an image and ``a photo of
a.''. As far as we know, CLIPure is the first purification method in
multi-modal latent space and CLIPure-Cos is the first purification method that
is not based on generative models, which substantially improves defense
efficiency. We conducted extensive experiments on CIFAR-10, ImageNet, and 13
datasets that previous CLIP-based defense methods used for evaluating zero-shot
classification robustness. Results show that CLIPure boosts the SOTA robustness
by a large margin, e.g., from 71.7% to 91.1% on CIFAR10, from 59.6% to 72.6% on
ImageNet, and 108% relative improvements of average robustness on the 13
datasets over previous SOTA. The code is available at
https://github.com/TMLResearchGroup-CAS/CLIPure.
| no_new_dataset | 0.949949 |
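Note: CLIPure-Cos's likelihood proxy is a cosine similarity in CLIP's latent space. The sketch below shows only that scoring step, using the openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers as an assumed stand-in; the purification loop itself is omitted.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def cos_likelihood(image: Image.Image) -> float:
    """Cosine similarity between an image embedding and the anchor text
    'a photo of a.', used as a likelihood proxy in CLIP latent space."""
    inputs = processor(text=["a photo of a."], images=image, return_tensors="pt")
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return float((img_emb * txt_emb).sum())
```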
2502.18411 | Xiangyu Zhao | Xiangyu Zhao, Shengyuan Ding, Zicheng Zhang, Haian Huang, Maosong Cao,
Weiyun Wang, Jiaqi Wang, Xinyu Fang, Wenhai Wang, Guangtao Zhai, Haodong
Duan, Hua Yang, Kai Chen | OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advancements in open-source multi-modal large language models (MLLMs)
have primarily focused on enhancing foundational capabilities, leaving a
significant gap in human preference alignment. This paper introduces
OmniAlign-V, a comprehensive dataset of 200K high-quality training samples
featuring diverse images, complex questions, and varied response formats to
improve MLLMs' alignment with human preferences. We also present MM-AlignBench,
a human-annotated benchmark specifically designed to evaluate MLLMs' alignment
with human values. Experimental results show that finetuning MLLMs with
OmniAlign-V, using Supervised Fine-Tuning (SFT) or Direct Preference
Optimization (DPO), significantly enhances human preference alignment while
maintaining or enhancing performance on standard VQA benchmarks, preserving
their fundamental capabilities. Our datasets, benchmark, code and checkpoints
have been released at https://github.com/PhoenixZ810/OmniAlign-V.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 18:05:14 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 03:09:28 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhao",
"Xiangyu",
""
],
[
"Ding",
"Shengyuan",
""
],
[
"Zhang",
"Zicheng",
""
],
[
"Huang",
"Haian",
""
],
[
"Cao",
"Maosong",
""
],
[
"Wang",
"Weiyun",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Fang",
"Xinyu",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Zhai",
"Guangtao",
""
],
[
"Duan",
"Haodong",
""
],
[
"Yang",
"Hua",
""
],
[
"Chen",
"Kai",
""
]
]
| TITLE: OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference
ABSTRACT: Recent advancements in open-source multi-modal large language models (MLLMs)
have primarily focused on enhancing foundational capabilities, leaving a
significant gap in human preference alignment. This paper introduces
OmniAlign-V, a comprehensive dataset of 200K high-quality training samples
featuring diverse images, complex questions, and varied response formats to
improve MLLMs' alignment with human preferences. We also present MM-AlignBench,
a human-annotated benchmark specifically designed to evaluate MLLMs' alignment
with human values. Experimental results show that finetuning MLLMs with
OmniAlign-V, using Supervised Fine-Tuning (SFT) or Direct Preference
Optimization (DPO), significantly enhances human preference alignment while
maintaining or enhancing performance on standard VQA benchmarks, preserving
their fundamental capabilities. Our datasets, benchmark, code and checkpoints
have been released at https://github.com/PhoenixZ810/OmniAlign-V.
| new_dataset | 0.960547 |
2502.18883 | Viet Duong | Yanfu Yan, Viet Duong, Huajie Shao, Denys Poshyvanyk | Towards More Trustworthy Deep Code Models by Enabling
Out-of-Distribution Detection | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Numerous machine learning (ML) models have been developed, including those
for software engineering (SE) tasks, under the assumption that training and
testing data come from the same distribution. However, training and testing
distributions often differ, as training datasets rarely encompass the entire
distribution, while the testing distribution tends to shift over time. Hence, when
confronted with out-of-distribution (OOD) instances that differ from the
training data, a reliable and trustworthy SE ML model must be capable of
detecting them to either abstain from making predictions, or potentially
forward these OODs to appropriate models handling other categories or tasks.
In this paper, we develop two types of SE-specific OOD detection models,
unsupervised and weakly-supervised OOD detection for code. The unsupervised OOD
detection approach is trained solely on in-distribution samples while the
weakly-supervised approach utilizes a tiny number of OOD samples to further
enhance the detection performance in various OOD scenarios. Extensive
experimental results demonstrate that our proposed methods significantly
outperform the baselines in detecting OOD samples from four different scenarios
simultaneously and also positively impact a main code understanding task.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 06:59:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yan",
"Yanfu",
""
],
[
"Duong",
"Viet",
""
],
[
"Shao",
"Huajie",
""
],
[
"Poshyvanyk",
"Denys",
""
]
]
| TITLE: Towards More Trustworthy Deep Code Models by Enabling
Out-of-Distribution Detection
ABSTRACT: Numerous machine learning (ML) models have been developed, including those
for software engineering (SE) tasks, under the assumption that training and
testing data come from the same distribution. However, training and testing
distributions often differ, as training datasets rarely encompass the entire
distribution, while testing distribution tends to shift over time. Hence, when
confronted with out-of-distribution (OOD) instances that differ from the
training data, a reliable and trustworthy SE ML model must be capable of
detecting them to either abstain from making predictions, or potentially
forward these OODs to appropriate models handling other categories or tasks.
In this paper, we develop two types of SE-specific OOD detection models,
unsupervised and weakly-supervised OOD detection for code. The unsupervised OOD
detection approach is trained solely on in-distribution samples while the
weakly-supervised approach utilizes a tiny number of OOD samples to further
enhance the detection performance in various OOD scenarios. Extensive
experimental results demonstrate that our proposed methods significantly
outperform the baselines in detecting OOD samples from four different scenarios
simultaneously and also positively impact a main code understanding task.
| no_new_dataset | 0.945147 |
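Note: the abstract does not spell out the detector, so the sketch below uses a standard unsupervised baseline in the same spirit: fit a Gaussian to in-distribution code embeddings and score new inputs by Mahalanobis distance. This is a generic stand-in, not the paper's specific method.

```python
import numpy as np

class MahalanobisOOD:
    """Unsupervised OOD scoring: fit a Gaussian to in-distribution embeddings,
    flag inputs far from it (larger score = more out-of-distribution)."""
    def fit(self, embeddings: np.ndarray) -> "MahalanobisOOD":
        self.mu = embeddings.mean(axis=0)
        cov = np.cov(embeddings, rowvar=False)
        self.prec = np.linalg.pinv(cov)        # pseudo-inverse for stability
        return self

    def score(self, x: np.ndarray) -> float:
        d = x - self.mu
        return float(d @ self.prec @ d)

# Usage with stand-in embeddings (in practice, from any code encoder).
rng = np.random.default_rng(0)
ind = rng.normal(size=(500, 32))               # in-distribution embeddings
detector = MahalanobisOOD().fit(ind)
print(detector.score(rng.normal(size=32)),     # near the ID cluster
      detector.score(rng.normal(5, 1, 32)))    # far from it
```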
2502.18960 | Weilin Chen | Weilin Chen, Ruichu Cai, Junjie Wan, Zeqin Yang, Jos\'e Miguel
Hern\'andez-Lobato | Nonparametric Heterogeneous Long-term Causal Effect Estimation via Data
Combination | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term causal inference has drawn increasing attention in many scientific
domains. Existing methods mainly focus on estimating average long-term causal
effects by combining long-term observational data and short-term experimental
data. However, it is still understudied how to robustly and effectively
estimate heterogeneous long-term causal effects, significantly limiting
practical applications. In this paper, we propose several two-stage style
nonparametric estimators for heterogeneous long-term causal effect estimation,
including propensity-based, regression-based, and multiple robust estimators.
We conduct a comprehensive theoretical analysis of their asymptotic properties
under mild assumptions, with the ultimate goal of building a better
understanding of the conditions under which some estimators can be expected to
perform better. Extensive experiments across several semi-synthetic and
real-world datasets validate the theoretical results and demonstrate the
effectiveness of the proposed estimators.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 09:17:04 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 16:14:51 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Chen",
"Weilin",
""
],
[
"Cai",
"Ruichu",
""
],
[
"Wan",
"Junjie",
""
],
[
"Yang",
"Zeqin",
""
],
[
"Hernández-Lobato",
"José Miguel",
""
]
]
| TITLE: Nonparametric Heterogeneous Long-term Causal Effect Estimation via Data
Combination
ABSTRACT: Long-term causal inference has drawn increasing attention in many scientific
domains. Existing methods mainly focus on estimating average long-term causal
effects by combining long-term observational data and short-term experimental
data. However, it is still understudied how to robustly and effectively
estimate heterogeneous long-term causal effects, significantly limiting
practical applications. In this paper, we propose several two-stage style
nonparametric estimators for heterogeneous long-term causal effect estimation,
including propensity-based, regression-based, and multiple robust estimators.
We conduct a comprehensive theoretical analysis of their asymptotic properties
under mild assumptions, with the ultimate goal of building a better
understanding of the conditions under which some estimators can be expected to
perform better. Extensive experiments across several semi-synthetic and
real-world datasets validate the theoretical results and demonstrate the
effectiveness of the proposed estimators.
| no_new_dataset | 0.946941 |
2502.19252 | Li Ju | Li Ju, Xingyi Yang, Qi Li, Xinchao Wang | GraphBridge: Towards Arbitrary Transfer Learning in GNNs | 10 pages, 3 figures, 6 tables, to be published in ICLR 2025 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Graph neural networks (GNNs) are conventionally trained on a per-domain,
per-task basis. This creates a significant barrier to transferring the acquired
knowledge to different, heterogeneous data setups. This paper introduces
GraphBridge, a novel framework to enable knowledge transfer across disparate
tasks and domains in GNNs, circumventing the need for modifications to task
configurations or graph structures. Specifically, GraphBridge allows for the
augmentation of any pre-trained GNN with prediction heads and a bridging
network that connects the input to the output layer. This architecture not only
preserves the intrinsic knowledge of the original model but also supports
outputs of arbitrary dimensions. To mitigate the negative transfer problem,
GraphBridge merges the source model with a concurrently trained model, thereby
reducing the source bias when applied to the target domain. Our method is
thoroughly evaluated across diverse transfer learning scenarios, including
Graph2Graph, Node2Node, Graph2Node, and graph2point-cloud. Empirical
validation, conducted over 16 datasets representative of these scenarios,
confirms the framework's capacity for task- and domain-agnostic transfer
learning within graph-like data, marking a significant advancement in the field
of GNNs. Code is available at https://github.com/jujulili888/GraphBridge.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 15:57:51 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 16:10:27 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ju",
"Li",
""
],
[
"Yang",
"Xingyi",
""
],
[
"Li",
"Qi",
""
],
[
"Wang",
"Xinchao",
""
]
]
| TITLE: GraphBridge: Towards Arbitrary Transfer Learning in GNNs
ABSTRACT: Graph neural networks (GNNs) are conventionally trained on a per-domain,
per-task basis. This creates a significant barrier to transferring the acquired
knowledge to different, heterogeneous data setups. This paper introduces
GraphBridge, a novel framework to enable knowledge transfer across disparate
tasks and domains in GNNs, circumventing the need for modifications to task
configurations or graph structures. Specifically, GraphBridge allows for the
augmentation of any pre-trained GNN with prediction heads and a bridging
network that connects the input to the output layer. This architecture not only
preserves the intrinsic knowledge of the original model but also supports
outputs of arbitrary dimensions. To mitigate the negative transfer problem,
GraphBridge merges the source model with a concurrently trained model, thereby
reducing the source bias when applied to the target domain. Our method is
thoroughly evaluated across diverse transfer learning scenarios, including
Graph2Graph, Node2Node, Graph2Node, and graph2point-cloud. Empirical
validation, conducted over 16 datasets representative of these scenarios,
confirms the framework's capacity for task- and domain-agnostic transfer
learning within graph-like data, marking a significant advancement in the field
of GNNs. Code is available at https://github.com/jujulili888/GraphBridge.
| no_new_dataset | 0.950503 |
2502.19260 | Nadya Abdel Madjid | Nadya Abdel Madjid, Murad Mebrahtu, Abdelmoamen Nasser, Bilal Hassan,
Naoufel Werghi, Jorge Dias, and Majid Khonji | EMT: A Visual Multi-Task Benchmark Dataset for Autonomous Driving in the
Arab Gulf Region | 19 pages, 6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the Emirates Multi-Task (EMT) dataset - the first
publicly available dataset for autonomous driving collected in the Arab Gulf
region. The EMT dataset captures the unique road topology, high traffic
congestion, and distinctive characteristics of the Gulf region, including
variations in pedestrian clothing and weather conditions. It contains over
30,000 frames from a dash-camera perspective, along with 570,000 annotated
bounding boxes, covering approximately 150 kilometers of driving routes. The
EMT dataset supports three primary tasks: tracking, trajectory forecasting and
intention prediction. Each benchmark dataset is complemented with corresponding
evaluations: (1) multi-agent tracking experiments, focusing on multi-class
scenarios and occlusion handling; (2) trajectory forecasting evaluation using
deep sequential and interaction-aware models; and (3) intention benchmark
experiments conducted for predicting agents intentions from observed
trajectories. The dataset is publicly available at avlab.io/emt-dataset, and
pre-processing scripts along with evaluation models can be accessed at
github.com/AV-Lab/emt-dataset.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 16:06:35 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 06:08:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Madjid",
"Nadya Abdel",
""
],
[
"Mebrahtu",
"Murad",
""
],
[
"Nasser",
"Abdelmoamen",
""
],
[
"Hassan",
"Bilal",
""
],
[
"Werghi",
"Naoufel",
""
],
[
"Dias",
"Jorge",
""
],
[
"Khonji",
"Majid",
""
]
]
| TITLE: EMT: A Visual Multi-Task Benchmark Dataset for Autonomous Driving in the
Arab Gulf Region
ABSTRACT: This paper introduces the Emirates Multi-Task (EMT) dataset - the first
publicly available dataset for autonomous driving collected in the Arab Gulf
region. The EMT dataset captures the unique road topology, high traffic
congestion, and distinctive characteristics of the Gulf region, including
variations in pedestrian clothing and weather conditions. It contains over
30,000 frames from a dash-camera perspective, along with 570,000 annotated
bounding boxes, covering approximately 150 kilometers of driving routes. The
EMT dataset supports three primary tasks: tracking, trajectory forecasting and
intention prediction. Each benchmark dataset is complemented with corresponding
evaluations: (1) multi-agent tracking experiments, focusing on multi-class
scenarios and occlusion handling; (2) trajectory forecasting evaluation using
deep sequential and interaction-aware models; and (3) intention benchmark
experiments conducted for predicting agents' intentions from observed
trajectories. The dataset is publicly available at avlab.io/emt-dataset, and
pre-processing scripts along with evaluation models can be accessed at
github.com/AV-Lab/emt-dataset.
| new_dataset | 0.962673 |
2502.19412 | Shir Ashury-Tahan | Shir Ashury-Tahan, Yifan Mai, Rajmohan C, Ariel Gera, Yotam Perlitz,
Asaf Yehudai, Elron Bandel, Leshem Choshen, Eyal Shnarch, Percy Liang and
Michal Shmueli-Scheuer | The Mighty ToRR: A Benchmark for Table Reasoning and Robustness | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Despite its real-world significance, model performance on tabular data
remains underexplored, leaving uncertainty about which model to rely on and
which prompt configuration to adopt. To address this gap, we create ToRR, a
benchmark for Table Reasoning and Robustness, measuring model performance and
robustness on table-related tasks. The benchmark includes 10 datasets that
cover different types of table reasoning capabilities across varied domains.
ToRR goes beyond model performance rankings, and is designed to reflect whether
models can handle tabular data consistently and robustly, across a variety of
common table representation formats. We present a leaderboard as well as
comprehensive analyses of the results of leading models over ToRR. Our results
reveal a striking pattern of brittle model behavior, where even strong models
are unable to perform robustly on tabular data tasks. Although no specific
table format leads to consistently better performance, we show that testing
over multiple formats is crucial for reliably estimating model capabilities.
Moreover, we show that the reliability boost from testing multiple prompts can
be equivalent to adding more test examples. Overall, our findings show that
table understanding and reasoning tasks remain a significant challenge.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 18:56:38 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 16:16:39 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ashury-Tahan",
"Shir",
""
],
[
"Mai",
"Yifan",
""
],
[
"C",
"Rajmohan",
""
],
[
"Gera",
"Ariel",
""
],
[
"Perlitz",
"Yotam",
""
],
[
"Yehudai",
"Asaf",
""
],
[
"Bandel",
"Elron",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Shnarch",
"Eyal",
""
],
[
"Liang",
"Percy",
""
],
[
"Shmueli-Scheuer",
"Michal",
""
]
]
| TITLE: The Mighty ToRR: A Benchmark for Table Reasoning and Robustness
ABSTRACT: Despite its real-world significance, model performance on tabular data
remains underexplored, leaving uncertainty about which model to rely on and
which prompt configuration to adopt. To address this gap, we create ToRR, a
benchmark for Table Reasoning and Robustness, measuring model performance and
robustness on table-related tasks. The benchmark includes 10 datasets that
cover different types of table reasoning capabilities across varied domains.
ToRR goes beyond model performance rankings, and is designed to reflect whether
models can handle tabular data consistently and robustly, across a variety of
common table representation formats. We present a leaderboard as well as
comprehensive analyses of the results of leading models over ToRR. Our results
reveal a striking pattern of brittle model behavior, where even strong models
are unable to perform robustly on tabular data tasks. Although no specific
table format leads to consistently better performance, we show that testing
over multiple formats is crucial for reliably estimating model capabilities.
Moreover, we show that the reliability boost from testing multiple prompts can
be equivalent to adding more test examples. Overall, our findings show that
table understanding and reasoning tasks remain a significant challenge.
| new_dataset | 0.949342 |
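ToRR's central robustness claim is that answers shift with the table's serialization. As an illustration of that evaluation idea only (not the benchmark's released harness; the helper functions and example table below are invented), here is a minimal sketch of rendering one table in several formats so the same question can be posed under each:

```python
# Illustrative sketch: serialize one table into several common text formats,
# so a model's answers can be compared across representations (the core idea
# behind ToRR's robustness testing). Not the benchmark's actual code.
import csv
import io
import json

def to_markdown(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(map(str, r)) + " |" for r in rows]
    return "\n".join(lines)

def to_csv(header, rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue().strip()

def to_json_records(header, rows):
    return json.dumps([dict(zip(header, r)) for r in rows], indent=2)

header = ["city", "population_m"]
rows = [["Abu Dhabi", 1.5], ["Dubai", 3.6]]

question = "Which city has the larger population?"
for fmt, render in [("markdown", to_markdown), ("csv", to_csv),
                    ("json", to_json_records)]:
    prompt = f"{render(header, rows)}\n\nQuestion: {question}"
    # send `prompt` to the model under test and record the answer per format
    print(f"--- {fmt} ---\n{prompt}\n")
```

A robust model should give the same answer under all three renderings; ToRR's finding is that in practice answers often diverge.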
2502.19454 | Menghao Li | Menghao Li, Zhenghao Zhang, Junchao Liao, Long Qin, Weizhi Wang | TransVDM: Motion-Constrained Video Diffusion Model for Transparent Video
Synthesis | null | null | null | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | Recent developments in Video Diffusion Models (VDMs) have demonstrated
remarkable capability to generate high-quality video content. Nonetheless, the
potential of VDMs for creating transparent videos remains largely uncharted. In
this paper, we introduce TransVDM, the first diffusion-based model specifically
designed for transparent video generation. TransVDM integrates a Transparent
Variational Autoencoder (TVAE) and a pretrained UNet-based VDM, along with a
novel Alpha Motion Constraint Module (AMCM). The TVAE captures the alpha
channel transparency of video frames and encodes it into the latent space of
the VDMs, facilitating a seamless transition to transparent video diffusion
models. To improve the detection of transparent areas, the AMCM integrates
motion constraints from the foreground within the VDM, helping to reduce
undesirable artifacts. Moreover, we curate a dataset containing 250K
transparent frames for training. Experimental results demonstrate the
effectiveness of our approach across various benchmarks.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2025 07:17:22 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 08:09:34 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Li",
"Menghao",
""
],
[
"Zhang",
"Zhenghao",
""
],
[
"Liao",
"Junchao",
""
],
[
"Qin",
"Long",
""
],
[
"Wang",
"Weizhi",
""
]
]
| TITLE: TransVDM: Motion-Constrained Video Diffusion Model for Transparent Video
Synthesis
ABSTRACT: Recent developments in Video Diffusion Models (VDMs) have demonstrated
remarkable capability to generate high-quality video content. Nonetheless, the
potential of VDMs for creating transparent videos remains largely uncharted. In
this paper, we introduce TransVDM, the first diffusion-based model specifically
designed for transparent video generation. TransVDM integrates a Transparent
Variational Autoencoder (TVAE) and a pretrained UNet-based VDM, along with a
novel Alpha Motion Constraint Module (AMCM). The TVAE captures the alpha
channel transparency of video frames and encodes it into the latent space of
the VDMs, facilitating a seamless transition to transparent video diffusion
models. To improve the detection of transparent areas, the AMCM integrates
motion constraints from the foreground within the VDM, helping to reduce
undesirable artifacts. Moreover, we curate a dataset containing 250K
transparent frames for training. Experimental results demonstrate the
effectiveness of our approach across various benchmarks.
| new_dataset | 0.960212 |
2502.19842 | Reza Abbasi | Reza Abbasi, Ali Nazari, Aminreza Sefid, Mohammadali Banayeeanzade,
Mohammad Hossein Rohban, Mahdieh Soleymani Baghshah | CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object
Representation | Accepted at CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Contrastive Language-Image Pre-training (CLIP) models excel in zero-shot
classification, yet face challenges in complex multi-object scenarios. This
study offers a comprehensive analysis of CLIP's limitations in these contexts
using a specialized dataset, ComCO, designed to evaluate CLIP's encoders in
diverse multi-object scenarios. Our findings reveal significant biases: the
text encoder prioritizes first-mentioned objects, and the image encoder favors
larger objects. Through retrieval and classification tasks, we quantify these
biases across multiple CLIP variants and trace their origins to CLIP's training
process, supported by analyses of the LAION dataset and training progression.
Our image-text matching experiments show substantial performance drops when
object size or token order changes, underscoring CLIP's instability with
rephrased but semantically similar captions. Extending this to longer captions
and text-to-image models like Stable Diffusion, we demonstrate how prompt order
influences object prominence in generated images. For more details and access
to our dataset and analysis code, visit our project repository:
https://clip-oscope.github.io.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 07:34:42 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2025 19:00:13 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Abbasi",
"Reza",
""
],
[
"Nazari",
"Ali",
""
],
[
"Sefid",
"Aminreza",
""
],
[
"Banayeeanzade",
"Mohammadali",
""
],
[
"Rohban",
"Mohammad Hossein",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
]
| TITLE: CLIP Under the Microscope: A Fine-Grained Analysis of Multi-Object
Representation
ABSTRACT: Contrastive Language-Image Pre-training (CLIP) models excel in zero-shot
classification, yet face challenges in complex multi-object scenarios. This
study offers a comprehensive analysis of CLIP's limitations in these contexts
using a specialized dataset, ComCO, designed to evaluate CLIP's encoders in
diverse multi-object scenarios. Our findings reveal significant biases: the
text encoder prioritizes first-mentioned objects, and the image encoder favors
larger objects. Through retrieval and classification tasks, we quantify these
biases across multiple CLIP variants and trace their origins to CLIP's training
process, supported by analyses of the LAION dataset and training progression.
Our image-text matching experiments show substantial performance drops when
object size or token order changes, underscoring CLIP's instability with
rephrased but semantically similar captions. Extending this to longer captions
and text-to-image models like Stable Diffusion, we demonstrate how prompt order
influences object prominence in generated images. For more details and access
to our dataset and analysis code, visit our project repository:
https://clip-oscope.github.io.
| new_dataset | 0.968501 |
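The order bias reported above can be probed with off-the-shelf CLIP. The sketch below is a rough approximation of the ComCO-style test, not the paper's code; it scores one multi-object image against two captions that differ only in mention order, and the image path and captions are placeholders:

```python
# Illustrative probe of CLIP's sensitivity to object mention order, in the
# spirit of the ComCO analysis above. Requires `transformers` and `torch`;
# the image and captions here are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene_with_dog_and_ball.jpg")  # placeholder multi-object image
captions = [
    "a photo of a dog and a ball",   # object A mentioned first
    "a photo of a ball and a dog",   # same semantics, order swapped
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each caption; a large gap
# between the two rephrasings would indicate order sensitivity.
print(outputs.logits_per_image.softmax(dim=-1))
```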
2502.20108 | Ziang Guo | Ziang Guo, Konstantin Gubernatorov, Selamawit Asfaw, Zakhar Yagudin,
Dzmitry Tsetserukou | VDT-Auto: End-to-end Autonomous Driving with VLM-Guided Diffusion
Transformers | Submitted paper | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In autonomous driving, dynamic environments and corner cases pose significant
challenges to the robustness of the ego vehicle's decision-making. To address these
challenges, commencing with the representation of state-action mapping in the
end-to-end autonomous driving paradigm, we introduce a novel pipeline,
VDT-Auto. Leveraging advances in the state understanding of Visual Language
Models (VLMs), combined with diffusion Transformer-based action generation,
our VDT-Auto parses the environment geometrically and contextually
for the conditioning of the diffusion process. Geometrically, we use a
bird's-eye view (BEV) encoder to extract feature grids from the surrounding
images. Contextually, the structured output of our fine-tuned VLM is processed
into textual embeddings and noisy paths. During our diffusion process, the
added noise for the forward process is sampled from the noisy path output of
the fine-tuned VLM, while the extracted BEV feature grids and embedded texts
condition the reverse process of our diffusion Transformers. Our VDT-Auto
achieved 0.52m on average L2 errors and 21% on average collision rate in the
nuScenes open-loop planning evaluation. Moreover, the real-world demonstration
exhibited prominent generalizability of our VDT-Auto. The code and dataset will
be released after acceptance.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 14:02:14 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Mar 2025 23:17:26 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Guo",
"Ziang",
""
],
[
"Gubernatorov",
"Konstantin",
""
],
[
"Asfaw",
"Selamawit",
""
],
[
"Yagudin",
"Zakhar",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
]
| TITLE: VDT-Auto: End-to-end Autonomous Driving with VLM-Guided Diffusion
Transformers
ABSTRACT: In autonomous driving, dynamic environments and corner cases pose significant
challenges to the robustness of the ego vehicle's decision-making. To address these
challenges, commencing with the representation of state-action mapping in the
end-to-end autonomous driving paradigm, we introduce a novel pipeline,
VDT-Auto. Leveraging advances in the state understanding of Visual Language
Models (VLMs), combined with diffusion Transformer-based action generation,
our VDT-Auto parses the environment geometrically and contextually
for the conditioning of the diffusion process. Geometrically, we use a
bird's-eye view (BEV) encoder to extract feature grids from the surrounding
images. Contextually, the structured output of our fine-tuned VLM is processed
into textual embeddings and noisy paths. During our diffusion process, the
added noise for the forward process is sampled from the noisy path output of
the fine-tuned VLM, while the extracted BEV feature grids and embedded texts
condition the reverse process of our diffusion Transformers. Our VDT-Auto
achieved 0.52m on average L2 errors and 21% on average collision rate in the
nuScenes open-loop planning evaluation. Moreover, the real-world demonstration
exhibited prominent generalizability of our VDT-Auto. The code and dataset will
be released after acceptance.
| no_new_dataset | 0.94887 |
2502.20209 | Luis Marquez-Carpintero | Luis Marquez-Carpintero, Sergio Suescun-Ferrandiz, Carolina Lorenzo
\'Alvarez, Jorge Fernandez-Herrero, Diego Viejo, Rosabel Roig-Vila, and
Miguel Cazorla | DIPSER: A Dataset for In-Person Student Engagement Recognition in the
Wild | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, a novel dataset is introduced, designed to assess student
attention within in-person classroom settings. This dataset encompasses RGB
camera data, featuring multiple cameras per student to capture both posture and
facial expressions, in addition to smartwatch sensor data for each individual.
This dataset allows machine learning algorithms to be trained to predict
attention and correlate it with emotion. A comprehensive suite of attention and
emotion labels for each student is provided, generated through self-reporting
as well as evaluations by four different experts. Our dataset uniquely combines
facial and environmental camera data, smartwatch metrics, and includes
underrepresented ethnicities in similar datasets, all within in-the-wild,
in-person settings, making it the most comprehensive dataset of its kind
currently available.
The dataset presented offers an extensive and diverse collection of data
pertaining to student interactions across different educational contexts,
augmented with additional metadata from other tools. This initiative addresses
existing deficiencies by offering a valuable resource for the analysis of
student attention and emotion in face-to-face lessons.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 15:50:21 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 13:36:57 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Marquez-Carpintero",
"Luis",
""
],
[
"Suescun-Ferrandiz",
"Sergio",
""
],
[
"Álvarez",
"Carolina Lorenzo",
""
],
[
"Fernandez-Herrero",
"Jorge",
""
],
[
"Viejo",
"Diego",
""
],
[
"Roig-Vila",
"Rosabel",
""
],
[
"Cazorla",
"Miguel",
""
]
]
| TITLE: DIPSER: A Dataset for In-Person Student Engagement Recognition in the
Wild
ABSTRACT: In this paper, a novel dataset is introduced, designed to assess student
attention within in-person classroom settings. This dataset encompasses RGB
camera data, featuring multiple cameras per student to capture both posture and
facial expressions, in addition to smartwatch sensor data for each individual.
This dataset allows machine learning algorithms to be trained to predict
attention and correlate it with emotion. A comprehensive suite of attention and
emotion labels for each student is provided, generated through self-reporting
as well as evaluations by four different experts. Our dataset uniquely combines
facial and environmental camera data, smartwatch metrics, and includes
underrepresented ethnicities in similar datasets, all within in-the-wild,
in-person settings, making it the most comprehensive dataset of its kind
currently available.
The dataset presented offers an extensive and diverse collection of data
pertaining to student interactions across different educational contexts,
augmented with additional metadata from other tools. This initiative addresses
existing deficiencies by offering a valuable resource for the analysis of
student attention and emotion in face-to-face lessons.
| new_dataset | 0.960694 |
2502.20627 | Li Yang | Li Yang, Shimaa Naser, Abdallah Shami, Sami Muhaidat, Lyndon Ong, and
M\'erouane Debbah | Towards Zero Touch Networks: Cross-Layer Automated Security Solutions
for 6G Wireless Networks | Accepted and To Appear in IEEE Transactions on Communications (TCOM);
Code is available at Github:
https://github.com/Western-OC2-Lab/Cross-Layer-Autonomous-Cybersecurity-Framework | null | 10.1109/TCOMM.2025.3547764 | null | cs.CR cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The transition from 5G to 6G mobile networks necessitates network automation
to meet the escalating demands for high data rates, ultra-low latency, and
integrated technology. Recently, Zero-Touch Networks (ZTNs), driven by
Artificial Intelligence (AI) and Machine Learning (ML), are designed to
automate the entire lifecycle of network operations with minimal human
intervention, presenting a promising solution for enhancing automation in 5G/6G
networks. However, the implementation of ZTNs brings forth the need for
autonomous and robust cybersecurity solutions, as ZTNs rely heavily on
automation. AI/ML algorithms are widely used to develop cybersecurity
mechanisms, but require substantial specialized expertise and encounter model
drift issues, posing significant challenges in developing autonomous
cybersecurity measures. Therefore, this paper proposes an automated security
framework targeting Physical Layer Authentication (PLA) and Cross-Layer
Intrusion Detection Systems (CLIDS) to address security concerns at multiple
Internet protocol layers. The proposed framework employs drift-adaptive online
learning techniques and a novel enhanced Successive Halving (SH)-based
Automated ML (AutoML) method to automatically generate optimized ML models for
dynamic networking environments. Experimental results illustrate that the
proposed framework achieves high performance on the public Radio Frequency (RF)
fingerprinting and the Canadian Institute for Cybersecurity CICIDS2017
datasets, showcasing
its effectiveness in addressing PLA and CLIDS tasks within dynamic and complex
networking environments. Furthermore, the paper explores open challenges and
research directions in the 5G/6G cybersecurity domain. This framework
represents a significant advancement towards fully autonomous and secure 6G
networks, paving the way for future innovations in network automation and
cybersecurity.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 01:16:11 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yang",
"Li",
""
],
[
"Naser",
"Shimaa",
""
],
[
"Shami",
"Abdallah",
""
],
[
"Muhaidat",
"Sami",
""
],
[
"Ong",
"Lyndon",
""
],
[
"Debbah",
"Mérouane",
""
]
]
| TITLE: Towards Zero Touch Networks: Cross-Layer Automated Security Solutions
for 6G Wireless Networks
ABSTRACT: The transition from 5G to 6G mobile networks necessitates network automation
to meet the escalating demands for high data rates, ultra-low latency, and
integrated technology. Recently, Zero-Touch Networks (ZTNs), driven by
Artificial Intelligence (AI) and Machine Learning (ML), are designed to
automate the entire lifecycle of network operations with minimal human
intervention, presenting a promising solution for enhancing automation in 5G/6G
networks. However, the implementation of ZTNs brings forth the need for
autonomous and robust cybersecurity solutions, as ZTNs rely heavily on
automation. AI/ML algorithms are widely used to develop cybersecurity
mechanisms, but require substantial specialized expertise and encounter model
drift issues, posing significant challenges in developing autonomous
cybersecurity measures. Therefore, this paper proposes an automated security
framework targeting Physical Layer Authentication (PLA) and Cross-Layer
Intrusion Detection Systems (CLIDS) to address security concerns at multiple
Internet protocol layers. The proposed framework employs drift-adaptive online
learning techniques and a novel enhanced Successive Halving (SH)-based
Automated ML (AutoML) method to automatically generate optimized ML models for
dynamic networking environments. Experimental results illustrate that the
proposed framework achieves high performance on the public Radio Frequency (RF)
fingerprinting and the Canadian Institute for Cybersecurity CICIDS2017
datasets, showcasing
its effectiveness in addressing PLA and CLIDS tasks within dynamic and complex
networking environments. Furthermore, the paper explores open challenges and
research directions in the 5G/6G cybersecurity domain. This framework
represents a significant advancement towards fully autonomous and secure 6G
networks, paving the way for future innovations in network automation and
cybersecurity.
| no_new_dataset | 0.946448 |
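The framework above builds on Successive Halving for AutoML. For orientation, plain successive halving (without the paper's enhancements) ships in scikit-learn; the sketch below runs it on a synthetic classification task and is purely illustrative:

```python
# Sketch of plain successive halving for model selection with scikit-learn;
# the paper proposes an *enhanced* SH-based AutoML method, which this does
# not reproduce -- it only illustrates the underlying search strategy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [25, 50, 100, 200],
    "max_depth": [4, 8, 16, None],
}
search = HalvingGridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    factor=3,              # keep the top 1/3 of candidates each round
    resource="n_samples",  # give survivors progressively more data
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Successive halving spends little compute on weak configurations and progressively more on survivors, which is what makes it attractive for the dynamic networking settings the paper targets.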
2502.20854 | Xujie Yuan | Xujie Yuan, Yongxu Liu, Shimin Di, Shiwen Wu, Libin Zheng, Rui Meng,
Lei Chen, Xiaofang Zhou, Jian Yin | A Pilot Empirical Study on When and How to Use Knowledge Graphs as
Retrieval Augmented Generation | 8 pages, 2 figures, 14 tables | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of Knowledge Graphs (KGs) into the Retrieval Augmented
Generation (RAG) framework has attracted significant interest, with early
studies showing promise in mitigating hallucinations and improving model
accuracy. However, a systematic understanding and comparative analysis of the
rapidly emerging KG-RAG methods are still lacking. This paper seeks to lay the
foundation for systematically answering the question of when and how to use
KG-RAG by analyzing their performance in various application scenarios
associated with different technical configurations. After outlining the mind
map of the KG-RAG framework and summarizing its popular pipeline, we conduct a
pilot empirical study of KG-RAG works to reimplement and evaluate 6 KG-RAG
methods across 7 datasets in diverse scenarios, analyzing the impact of 9
KG-RAG configurations in combination with 17 LLMs. Our results underscore the
critical role of appropriate application conditions and optimal configurations
of KG-RAG components.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:53:08 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:00:59 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Yuan",
"Xujie",
""
],
[
"Liu",
"Yongxu",
""
],
[
"Di",
"Shimin",
""
],
[
"Wu",
"Shiwen",
""
],
[
"Zheng",
"Libin",
""
],
[
"Meng",
"Rui",
""
],
[
"Chen",
"Lei",
""
],
[
"Zhou",
"Xiaofang",
""
],
[
"Yin",
"Jian",
""
]
]
| TITLE: A Pilot Empirical Study on When and How to Use Knowledge Graphs as
Retrieval Augmented Generation
ABSTRACT: The integration of Knowledge Graphs (KGs) into the Retrieval Augmented
Generation (RAG) framework has attracted significant interest, with early
studies showing promise in mitigating hallucinations and improving model
accuracy. However, a systematic understanding and comparative analysis of the
rapidly emerging KG-RAG methods are still lacking. This paper seeks to lay the
foundation for systematically answering the question of when and how to use
KG-RAG by analyzing their performance in various application scenarios
associated with different technical configurations. After outlining the mind
map of the KG-RAG framework and summarizing its popular pipeline, we conduct a
pilot empirical study of KG-RAG works to reimplement and evaluate 6 KG-RAG
methods across 7 datasets in diverse scenarios, analyzing the impact of 9
KG-RAG configurations in combination with 17 LLMs. Our results underscore the
critical role of appropriate application conditions and optimal configurations
of KG-RAG components.
| no_new_dataset | 0.936807 |
2502.21093 | Jingqiu Zhou | Jingqiu Zhou, Lue Fan, Linjiang Huang, Xiaoyu Shi, Si Liu, Zhaoxiang
Zhang, Hongsheng Li | FlexDrive: Toward Trajectory Flexibility in Driving Scene Reconstruction
and Rendering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driving scene reconstruction and rendering have advanced significantly using
3D Gaussian Splatting. However, most prior research has focused on the
rendering quality along a pre-recorded vehicle path and struggles to generalize
to out-of-path viewpoints, which is caused by the lack of high-quality
supervision in those out-of-path views. To address this issue, we introduce an
Inverse View Warping technique to create compact and high-quality images as
supervision for the reconstruction of the out-of-path views, enabling
high-quality rendering results for those views. For accurate and robust inverse
view warping, a depth bootstrap strategy is proposed to obtain on-the-fly dense
depth maps during the optimization process, overcoming the sparsity and
incompleteness of LiDAR depth data. Our method achieves superior in-path and
out-of-path reconstruction and rendering performance on the widely used Waymo
Open dataset. In addition, a simulator-based benchmark is proposed to obtain
the out-of-path ground truth and quantitatively evaluate the performance of
out-of-path rendering, where our method outperforms previous methods by a
significant margin.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 14:32:04 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 03:48:47 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhou",
"Jingqiu",
""
],
[
"Fan",
"Lue",
""
],
[
"Huang",
"Linjiang",
""
],
[
"Shi",
"Xiaoyu",
""
],
[
"Liu",
"Si",
""
],
[
"Zhang",
"Zhaoxiang",
""
],
[
"Li",
"Hongsheng",
""
]
]
| TITLE: FlexDrive: Toward Trajectory Flexibility in Driving Scene Reconstruction
and Rendering
ABSTRACT: Driving scene reconstruction and rendering have advanced significantly using
3D Gaussian Splatting. However, most prior research has focused on the
rendering quality along a pre-recorded vehicle path and struggles to generalize
to out-of-path viewpoints, which is caused by the lack of high-quality
supervision in those out-of-path views. To address this issue, we introduce an
Inverse View Warping technique to create compact and high-quality images as
supervision for the reconstruction of the out-of-path views, enabling
high-quality rendering results for those views. For accurate and robust inverse
view warping, a depth bootstrap strategy is proposed to obtain on-the-fly dense
depth maps during the optimization process, overcoming the sparsity and
incompleteness of LiDAR depth data. Our method achieves superior in-path and
out-of-path reconstruction and rendering performance on the widely used Waymo
Open dataset. In addition, a simulator-based benchmark is proposed to obtain
the out-of-path ground truth and quantitatively evaluate the performance of
out-of-path rendering, where our method outperforms previous methods by a
significant margin.
| no_new_dataset | 0.94801 |
2502.21130 | Jiuyang Dong | Jiuyang Dong, Junjun Jiang, Kui Jiang, Jiahan Li, Yongbing Zhang | Fast and Accurate Gigapixel Pathological Image Classification with
Hierarchical Distillation Multi-Instance Learning | 11 pages, 4 figures, accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although multi-instance learning (MIL) has succeeded in pathological image
classification, it faces the challenge of high inference costs due to
processing numerous patches from gigapixel whole slide images (WSIs). To
address this, we propose HDMIL, a hierarchical distillation multi-instance
learning framework that achieves fast and accurate classification by
eliminating irrelevant patches. HDMIL consists of two key components: the
dynamic multi-instance network (DMIN) and the lightweight instance
pre-screening network (LIPN). DMIN operates on high-resolution WSIs, while LIPN
operates on the corresponding low-resolution counterparts. During training,
DMIN is trained for WSI classification while generating attention-score-based
masks that indicate irrelevant patches. These masks then guide the training of
LIPN to predict the relevance of each low-resolution patch. During testing,
LIPN first determines the useful regions within low-resolution WSIs, which
indirectly enables us to eliminate irrelevant regions in high-resolution WSIs,
thereby reducing inference time without causing performance degradation. In
addition, we further design the first Chebyshev-polynomials-based
Kolmogorov-Arnold classifier in computational pathology, which enhances the
performance of HDMIL through learnable activation layers. Extensive experiments
on three public datasets demonstrate that HDMIL outperforms previous
state-of-the-art methods, e.g., achieving improvements of 3.13% in AUC while
reducing inference time by 28.6% on the Camelyon16 dataset.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 15:10:07 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 08:39:54 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Dong",
"Jiuyang",
""
],
[
"Jiang",
"Junjun",
""
],
[
"Jiang",
"Kui",
""
],
[
"Li",
"Jiahan",
""
],
[
"Zhang",
"Yongbing",
""
]
]
| TITLE: Fast and Accurate Gigapixel Pathological Image Classification with
Hierarchical Distillation Multi-Instance Learning
ABSTRACT: Although multi-instance learning (MIL) has succeeded in pathological image
classification, it faces the challenge of high inference costs due to
processing numerous patches from gigapixel whole slide images (WSIs). To
address this, we propose HDMIL, a hierarchical distillation multi-instance
learning framework that achieves fast and accurate classification by
eliminating irrelevant patches. HDMIL consists of two key components: the
dynamic multi-instance network (DMIN) and the lightweight instance
pre-screening network (LIPN). DMIN operates on high-resolution WSIs, while LIPN
operates on the corresponding low-resolution counterparts. During training,
DMIN is trained for WSI classification while generating attention-score-based
masks that indicate irrelevant patches. These masks then guide the training of
LIPN to predict the relevance of each low-resolution patch. During testing,
LIPN first determines the useful regions within low-resolution WSIs, which
indirectly enables us to eliminate irrelevant regions in high-resolution WSIs,
thereby reducing inference time without causing performance degradation. In
addition, we further design the first Chebyshev-polynomials-based
Kolmogorov-Arnold classifier in computational pathology, which enhances the
performance of HDMIL through learnable activation layers. Extensive experiments
on three public datasets demonstrate that HDMIL outperforms previous
state-of-the-art methods, e.g., achieving improvements of 3.13% in AUC while
reducing inference time by 28.6% on the Camelyon16 dataset.
| no_new_dataset | 0.947817 |
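The Chebyshev-polynomials-based Kolmogorov-Arnold classifier mentioned above follows a construction that can be sketched generically: expand each input feature in Chebyshev polynomials with learnable coefficients. The PyTorch layer below is a minimal sketch of that construction under standard assumptions, not HDMIL's released implementation:

```python
# Minimal sketch of a Chebyshev-basis KAN-style layer in PyTorch: each
# input feature is expanded in Chebyshev polynomials T_0..T_K (via the
# recurrence T_k = 2x T_{k-1} - T_{k-2}) with learnable coefficients.
# This follows the generic construction, not HDMIL's actual classifier.
import torch
import torch.nn as nn

class ChebyKANLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, degree: int = 4):
        super().__init__()
        self.degree = degree
        # one coefficient per (input feature, output unit, polynomial order)
        self.coeffs = nn.Parameter(torch.randn(in_dim, out_dim, degree + 1) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.tanh(x)                    # squash into [-1, 1], Chebyshev domain
        T = [torch.ones_like(x), x]          # T_0(x) = 1, T_1(x) = x
        for _ in range(2, self.degree + 1):
            T.append(2 * x * T[-1] - T[-2])  # T_k = 2x T_{k-1} - T_{k-2}
        basis = torch.stack(T, dim=-1)       # (batch, in_dim, degree + 1)
        return torch.einsum("bik,iok->bo", basis, self.coeffs)

layer = ChebyKANLayer(in_dim=512, out_dim=2, degree=4)
logits = layer(torch.randn(8, 512))          # e.g. slide-level features -> 2 classes
print(logits.shape)                          # torch.Size([8, 2])
```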
2502.21228 | Omer Goldman | Omer Goldman, Uri Shaham, Dan Malkin, Sivan Eiger, Avinatan Hassidim,
Yossi Matias, Joshua Maynez, Adi Mayrav Gilady, Jason Riesa, Shruti Rijhwani,
Laura Rimell, Idan Szpektor, Reut Tsarfaty, Matan Eyal | ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual
Knowledge Transfer | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | To achieve equitable performance across languages, multilingual large
language models (LLMs) must be able to abstract knowledge beyond the language
in which it was acquired. However, the current literature lacks reliable ways
to measure LLMs' capability of cross-lingual knowledge transfer. To that end,
we present ECLeKTic, a multilingual closed-book QA (CBQA) dataset that
Evaluates Cross-Lingual Knowledge Transfer in a simple, black-box manner. We
detected information with uneven coverage across languages by controlling for
presence and absence of Wikipedia articles in 12 languages. We generated
knowledge-seeking questions in a source language, for which the answer appears
in a relevant Wikipedia article and translated them to all other 11 languages,
for which the respective Wikipedias lack equivalent articles. Assuming that
Wikipedia reflects the prominent knowledge in the LLM's training data, to solve
ECLeKTic's CBQA task the model is required to transfer knowledge between
languages. Experimenting with 8 LLMs, we show that SOTA models struggle to
effectively share knowledge across languages, even if they can predict the
answer well for queries in the same language the knowledge was acquired in.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 16:59:30 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 09:11:46 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Goldman",
"Omer",
""
],
[
"Shaham",
"Uri",
""
],
[
"Malkin",
"Dan",
""
],
[
"Eiger",
"Sivan",
""
],
[
"Hassidim",
"Avinatan",
""
],
[
"Matias",
"Yossi",
""
],
[
"Maynez",
"Joshua",
""
],
[
"Gilady",
"Adi Mayrav",
""
],
[
"Riesa",
"Jason",
""
],
[
"Rijhwani",
"Shruti",
""
],
[
"Rimell",
"Laura",
""
],
[
"Szpektor",
"Idan",
""
],
[
"Tsarfaty",
"Reut",
""
],
[
"Eyal",
"Matan",
""
]
]
| TITLE: ECLeKTic: a Novel Challenge Set for Evaluation of Cross-Lingual
Knowledge Transfer
ABSTRACT: To achieve equitable performance across languages, multilingual large
language models (LLMs) must be able to abstract knowledge beyond the language
in which it was acquired. However, the current literature lacks reliable ways
to measure LLMs' capability of cross-lingual knowledge transfer. To that end,
we present ECLeKTic, a multilingual closed-book QA (CBQA) dataset that
Evaluates Cross-Lingual Knowledge Transfer in a simple, black-box manner. We
detected information with uneven coverage across languages by controlling for
presence and absence of Wikipedia articles in 12 languages. We generated
knowledge-seeking questions in a source language, for which the answer appears
in a relevant Wikipedia article and translated them to all other 11 languages,
for which the respective Wikipedias lack equivalent articles. Assuming that
Wikipedia reflects the prominent knowledge in the LLM's training data, to solve
ECLeKTic's CBQA task the model is required to transfer knowledge between
languages. Experimenting with 8 LLMs, we show that SOTA models struggle to
effectively share knowledge across languages, even if they can predict the
answer well for queries in the same language the knowledge was acquired in.
| new_dataset | 0.963609 |
2503.00018 | Siyang Liu | Siyang Liu, Bianca Brie, Wenda Li, Laura Biester, Andrew Lee, James
Pennebaker, Rada Mihalcea | Eeyore: Realistic Depression Simulation via Supervised and Preference
Optimization | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have been previously explored for mental
healthcare training and therapy client simulation, but they still fall short in
authentically capturing diverse client traits and psychological conditions. We
introduce \textbf{Eeyore}, an 8B model optimized for realistic depression
simulation through a structured alignment framework, incorporating expert input
at every stage. First, we systematically curate real-world depression-related
conversations, extracting depressive traits to guide data filtering and
psychological profile construction, and use this dataset to instruction-tune
Eeyore for profile adherence. Next, to further enhance realism, Eeyore
undergoes iterative preference optimization -- first leveraging model-generated
preferences and then calibrating with a small set of expert-annotated
preferences. Throughout the entire pipeline, we actively collaborate with
domain experts, developing interactive interfaces to validate trait extraction
and iteratively refine structured psychological profiles for clinically
meaningful role-play customization. Despite its smaller model size, the Eeyore
depression simulation outperforms GPT-4o with SOTA prompting strategies, both
in linguistic authenticity and profile adherence.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 20:29:44 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Liu",
"Siyang",
""
],
[
"Brie",
"Bianca",
""
],
[
"Li",
"Wenda",
""
],
[
"Biester",
"Laura",
""
],
[
"Lee",
"Andrew",
""
],
[
"Pennebaker",
"James",
""
],
[
"Mihalcea",
"Rada",
""
]
]
| TITLE: Eeyore: Realistic Depression Simulation via Supervised and Preference
Optimization
ABSTRACT: Large Language Models (LLMs) have been previously explored for mental
healthcare training and therapy client simulation, but they still fall short in
authentically capturing diverse client traits and psychological conditions. We
introduce \textbf{Eeyore}, an 8B model optimized for realistic depression
simulation through a structured alignment framework, incorporating expert input
at every stage. First, we systematically curate real-world depression-related
conversations, extracting depressive traits to guide data filtering and
psychological profile construction, and use this dataset to instruction-tune
Eeyore for profile adherence. Next, to further enhance realism, Eeyore
undergoes iterative preference optimization -- first leveraging model-generated
preferences and then calibrating with a small set of expert-annotated
preferences. Throughout the entire pipeline, we actively collaborate with
domain experts, developing interactive interfaces to validate trait extraction
and iteratively refine structured psychological profiles for clinically
meaningful role-play customization. Despite its smaller model size, the Eeyore
depression simulation outperforms GPT-4o with SOTA prompting strategies, both
in linguistic authenticity and profile adherence.
| no_new_dataset | 0.938801 |
2503.00020 | Rakeen Rouf | Rakeen Rouf, Trupti Bavalatti, Osama Ahmed, Dhaval Potdar, Faraz Jawed | A Systematic Review of Open Datasets Used in Text-to-Image (T2I) Gen AI
Model Safety | Accepted for publication in IEEE Access, DOI:
10.1109/ACCESS.2025.3539933 | IEEE Access 2025 | 10.1109/ACCESS.2025.3539933 | null | cs.CL cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Novel research aimed at text-to-image (T2I) generative AI safety often relies
on publicly available datasets for training and evaluation, making the quality
and composition of these datasets crucial. This paper presents a comprehensive
review of the key datasets used in the T2I research, detailing their collection
methods, compositions, semantic and syntactic diversity of prompts and the
quality, coverage, and distribution of harm types in the datasets. By
highlighting the strengths and limitations of the datasets, this study enables
researchers to find the most relevant datasets for a use case, critically
assess the downstream impacts of their work given the dataset distribution,
particularly regarding model safety and ethical considerations, and also
identify the gaps in dataset coverage and quality that future research may
address.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 00:59:04 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Rouf",
"Rakeen",
""
],
[
"Bavalatti",
"Trupti",
""
],
[
"Ahmed",
"Osama",
""
],
[
"Potdar",
"Dhaval",
""
],
[
"Jawed",
"Faraz",
""
]
]
| TITLE: A Systematic Review of Open Datasets Used in Text-to-Image (T2I) Gen AI
Model Safety
ABSTRACT: Novel research aimed at text-to-image (T2I) generative AI safety often relies
on publicly available datasets for training and evaluation, making the quality
and composition of these datasets crucial. This paper presents a comprehensive
review of the key datasets used in the T2I research, detailing their collection
methods, compositions, semantic and syntactic diversity of prompts and the
quality, coverage, and distribution of harm types in the datasets. By
highlighting the strengths and limitations of the datasets, this study enables
researchers to find the most relevant datasets for a use case, critically
assess the downstream impacts of their work given the dataset distribution,
particularly regarding model safety and ethical considerations, and also
identify the gaps in dataset coverage and quality that future research may
address.
| no_new_dataset | 0.955068 |
2503.00029 | Hongming Zhang | Hongming Zhang, Ruixin Hong, Dong Yu | Streaming Looking Ahead with Token-level Self-reward | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Autoregressive decoding algorithms that use only past information often
cannot guarantee the best performance. Recently, people discovered that
looking-ahead algorithms such as Monte Carlo Tree Search (MCTS) with external
reward models (RMs) can significantly improve models' output by allowing them
to think ahead and leverage future outputs and associated rewards to guide the
current generation. Such techniques can help the reinforcement fine-tuning
phase by sampling better trajectories and the inference phase by selecting the
better output. However, their high computational cost limits their
applications, especially in streaming scenarios. To address this issue, we
propose equipping the policy model with token-level self-reward modeling (TRM)
capability to eliminate the need for external models and extra communication.
We name the new architecture as Reward Transformer. In addition, we propose a
streaming-looking-ahead (SLA) algorithm to further boost search efficiency with
better parallelization. Experiments show that SLA achieves an overall win rate
of 79.7\% against the baseline greedy decoding algorithm on three
general-domain datasets with a frozen policy model while maintaining streaming
efficiency. If we combine SLA with reinforcement fine-tuning techniques such as
DPO, SLA achieves an overall win rate of 89.4\%.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 22:35:53 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhang",
"Hongming",
""
],
[
"Hong",
"Ruixin",
""
],
[
"Yu",
"Dong",
""
]
]
| TITLE: Streaming Looking Ahead with Token-level Self-reward
ABSTRACT: Autoregressive decoding algorithms that use only past information often
cannot guarantee the best performance. Recently, people discovered that
looking-ahead algorithms such as Monte Carlo Tree Search (MCTS) with external
reward models (RMs) can significantly improve models' output by allowing them
to think ahead and leverage future outputs and associated rewards to guide the
current generation. Such techniques can help the reinforcement fine-tuning
phase by sampling better trajectories and the inference phase by selecting the
better output. However, their high computational cost limits their
applications, especially in streaming scenarios. To address this issue, we
propose equipping the policy model with token-level self-reward modeling (TRM)
capability to eliminate the need for external models and extra communication.
We name the new architecture as Reward Transformer. In addition, we propose a
streaming-looking-ahead (SLA) algorithm to further boost search efficiency with
better parallelization. Experiments show that SLA achieves an overall win rate
of 79.7\% against the baseline greedy decoding algorithm on three
general-domain datasets with a frozen policy model while maintaining streaming
efficiency. If we combine SLA with reinforcement fine-tuning techniques such as
DPO, SLA achieves an overall win rate of 89.4\%.
| no_new_dataset | 0.947962 |
2503.00031 | Chengsong Huang | Chengsong Huang, Langlin Huang, Jixuan Leng, Jiacheng Liu, Jiaxin
Huang | Efficient Test-Time Scaling via Self-Calibration | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasing test-time computation is a straightforward approach to enhancing
the quality of responses in Large Language Models (LLMs). While Best-of-N
sampling and Self-Consistency with majority voting are simple and effective,
they require a fixed number of sampling responses for each query, regardless of
its complexity. This could result in wasted computation for simpler questions
and insufficient exploration for more challenging ones. In this work, we argue
that model confidence of responses can be used for improving the efficiency of
test-time scaling. Unfortunately, LLMs are known to be overconfident and
provide unreliable confidence estimation. To address this limitation, we
introduce Self-Calibration by distilling Self-Consistency-derived confidence
into the model itself. This enables reliable confidence estimation at test time
with one forward pass. We then design confidence-based efficient test-time
scaling methods to handle queries of various difficulty, such as Early-Stopping
for Best-of-N and Self-Consistency with calibrated confidence. Experiments on
three LLMs across six datasets demonstrate the effectiveness of our approach.
Specifically, applying confidence-based Early Stopping to Best-of-N improves
MathQA accuracy from 81.0 to 83.6 with a sample budget of 16 responses,
indicating the efficacy of confidence-based sampling strategy at inference
time.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 00:21:14 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Huang",
"Chengsong",
""
],
[
"Huang",
"Langlin",
""
],
[
"Leng",
"Jixuan",
""
],
[
"Liu",
"Jiacheng",
""
],
[
"Huang",
"Jiaxin",
""
]
]
| TITLE: Efficient Test-Time Scaling via Self-Calibration
ABSTRACT: Increasing test-time computation is a straightforward approach to enhancing
the quality of responses in Large Language Models (LLMs). While Best-of-N
sampling and Self-Consistency with majority voting are simple and effective,
they require a fixed number of sampling responses for each query, regardless of
its complexity. This could result in wasted computation for simpler questions
and insufficient exploration for more challenging ones. In this work, we argue
that model confidence of responses can be used for improving the efficiency of
test-time scaling. Unfortunately, LLMs are known to be overconfident and
provide unreliable confidence estimation. To address this limitation, we
introduce Self-Calibration by distilling Self-Consistency-derived confidence
into the model itself. This enables reliable confidence estimation at test time
with one forward pass. We then design confidence-based efficient test-time
scaling methods to handle queries of various difficulty, such as Early-Stopping
for Best-of-N and Self-Consistency with calibrated confidence. Experiments on
three LLMs across six datasets demonstrate the effectiveness of our approach.
Specifically, applying confidence-based Early Stopping to Best-of-N improves
MathQA accuracy from 81.0 to 83.6 with a sample budget of 16 responses,
indicating the efficacy of confidence-based sampling strategy at inference
time.
| no_new_dataset | 0.947332 |
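The confidence-based Early Stopping for Best-of-N described above admits a compact sketch: draw samples one at a time and stop once the calibrated confidence clears a threshold. In the code below, `generate_with_confidence` is a hypothetical stand-in for a self-calibrated model:

```python
# Sketch of confidence-based Early Stopping for Best-of-N, as described
# above: draw samples one at a time and stop as soon as a response's
# calibrated confidence clears a threshold. `generate_with_confidence`
# is a hypothetical stand-in for the self-calibrated model.
import random

def generate_with_confidence(query: str) -> tuple[str, float]:
    # placeholder: a real implementation returns a sampled response plus
    # the confidence distilled into the model via Self-Calibration
    return f"answer to {query!r}", random.random()

def early_stopping_best_of_n(query: str, n_max: int = 16, threshold: float = 0.9):
    best_response, best_conf = None, -1.0
    for _ in range(n_max):
        response, conf = generate_with_confidence(query)
        if conf > best_conf:
            best_response, best_conf = response, conf
        if best_conf >= threshold:  # confident enough -- skip remaining samples
            break
    return best_response, best_conf

print(early_stopping_best_of_n("What is 17 * 24?"))
```

Easy queries stop after one or two samples while hard ones use the full budget, which is the source of the efficiency gain the abstract reports.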
2503.00034 | Hongyi Cai | Hongyi Cai, Yuqian Fu, Hongming Fu and Bo Zhao | MergeIT: From Selection to Merging for Efficient Instruction Tuning | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Instruction tuning is crucial for optimizing Large Language Models (LLMs),
yet mainstream data selection methods heavily rely on LLMs as instruction
quality scorers, leading to high computational costs and reduced data
diversity. To address these limitations, we propose MergeIT, a novel LLM-based
Merging strategy for better Instruction Tuning that shifts the focus from
selection to synthesis. MergeIT operates in two stages: first, topic-aware
filtering clusters and refines the dataset, preserving diversity while
eliminating redundancy without relying on LLM-based scoring. Second, LLM-based
merging synthesizes semantically similar instructions into more informative and
compact training data, enhancing data richness while further reducing dataset
size. Experimental results demonstrate that MergeIT enables efficient, diverse,
and scalable instruction selection and synthesis, establishing LLM-based
merging as a promising alternative to conventional scoring-based selection
methods for instruction tuning. Our source code and datasets are now available
at https://github.com/XcloudFance/MergeIT
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 03:43:20 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Cai",
"Hongyi",
""
],
[
"Fu",
"Yuqian",
""
],
[
"Fu",
"Hongming",
""
],
[
"Zhao",
"Bo",
""
]
]
| TITLE: MergeIT: From Selection to Merging for Efficient Instruction Tuning
ABSTRACT: Instruction tuning is crucial for optimizing Large Language Models (LLMs),
yet mainstream data selection methods heavily rely on LLMs as instruction
quality scorers, leading to high computational costs and reduced data
diversity. To address these limitations, we propose MergeIT, a novel LLM-based
Merging strategy for better Instruction Tuning that shifts the focus from
selection to synthesis. MergeIT operates in two stages: first, topic-aware
filtering clusters and refines the dataset, preserving diversity while
eliminating redundancy without relying on LLM-based scoring. Second, LLM-based
merging synthesizes semantically similar instructions into more informative and
compact training data, enhancing data richness while further reducing dataset
size. Experimental results demonstrate that MergeIT enables efficient, diverse,
and scalable instruction selection and synthesis, establishing LLM-based
merging as a promising alternative to conventional scoring-based selection
methods for instruction tuning. Our source code and datasets are now available
at https://github.com/XcloudFance/MergeIT
| no_new_dataset | 0.947575 |
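MergeIT's first stage, topic-aware filtering, can be approximated with generic tooling: embed instructions, cluster them by topic, and keep one representative per cluster before any LLM-based merging. The sketch below uses sentence-transformers and k-means as stand-ins and is not the released pipeline:

```python
# Rough sketch of the "topic-aware filtering" stage described above:
# embed instructions, cluster them by topic, and keep one representative
# per cluster to cut redundancy before any LLM-based merging. This is a
# generic approximation, not MergeIT's released pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

instructions = [
    "Explain photosynthesis to a child.",
    "Describe how plants make food from sunlight.",
    "Write a haiku about autumn.",
    "Compose a short poem about fall leaves.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(instructions, normalize_embeddings=True)

k = 2  # number of topic clusters; a real pipeline would tune this
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)

kept = []
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    # keep the instruction closest to the cluster centroid
    center = km.cluster_centers_[c]
    best = idx[np.argmin(np.linalg.norm(emb[idx] - center, axis=1))]
    kept.append(instructions[best])

print(kept)  # one representative per topic; near-duplicates become merge candidates
```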
2503.00036 | Miao Ye | Miao Ye, Zhibang Jiang, Xingsi Xue, Xingwang Li, Peng Wen, Yong Wang | A Novel Spatiotemporal Correlation Anomaly Detection Method Based on
Time-Frequency-Domain Feature Fusion and a Dynamic Graph Neural Network in
Wireless Sensor Network | null | null | null | null | eess.SP cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention-based transformers have played an important role in wireless sensor
network (WSN) timing anomaly detection due to their ability to capture
long-term dependencies. However, there are several issues that must be
addressed, such as the fact that their ability to capture long-term
dependencies is not completely reliable, their computational complexity levels
are high, and the spatiotemporal features of WSN timing data are not
sufficiently extracted for detecting the correlation anomalies of multinode WSN
timing data. To address these limitations, this paper proposes a WSN anomaly
detection method that integrates frequency-domain features with dynamic graph
neural networks (GNN) under a designed self-encoder reconstruction framework.
First, the discrete wavelet transform effectively decomposes trend and seasonal
components of time series to solve the poor long-term reliability of
transformers. Second, a frequency-domain attention mechanism is designed to
make full use of the difference between the amplitude distributions of normal
data and anomalous data in this domain. Finally, a multimodal fusion-based
dynamic graph convolutional network (MFDGCN) is designed by combining an
attention mechanism and a graph convolutional network (GCN) to adaptively
extract spatial correlation features. A series of experiments conducted on
public datasets and their results demonstrate that the anomaly detection method
designed in this paper exhibits superior precision and recall than the existing
methods do, with an F1 score of 93.5%, representing an improvement of 2.9% over
that of the existing models.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 04:34:18 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Ye",
"Miao",
""
],
[
"Jiang",
"Zhibang",
""
],
[
"Xue",
"Xingsi",
""
],
[
"Li",
"Xingwang",
""
],
[
"Wen",
"Peng",
""
],
[
"Wang",
"Yong",
""
]
]
| TITLE: A Novel Spatiotemporal Correlation Anomaly Detection Method Based on
Time-Frequency-Domain Feature Fusion and a Dynamic Graph Neural Network in
Wireless Sensor Network
ABSTRACT: Attention-based transformers have played an important role in wireless sensor
network (WSN) timing anomaly detection due to their ability to capture
long-term dependencies. However, there are several issues that must be
addressed, such as the fact that their ability to capture long-term
dependencies is not completely reliable, their computational complexity levels
are high, and the spatiotemporal features of WSN timing data are not
sufficiently extracted for detecting the correlation anomalies of multinode WSN
timing data. To address these limitations, this paper proposes a WSN anomaly
detection method that integrates frequency-domain features with dynamic graph
neural networks (GNN) under a designed self-encoder reconstruction framework.
First, the discrete wavelet transform effectively decomposes trend and seasonal
components of time series to solve the poor long-term reliability of
transformers. Second, a frequency-domain attention mechanism is designed to
make full use of the difference between the amplitude distributions of normal
data and anomalous data in this domain. Finally, a multimodal fusion-based
dynamic graph convolutional network (MFDGCN) is designed by combining an
attention mechanism and a graph convolutional network (GCN) to adaptively
extract spatial correlation features. A series of experiments conducted on
public datasets demonstrates that the anomaly detection method designed in
this paper achieves higher precision and recall than existing methods, with an
F1 score of 93.5%, an improvement of 2.9% over existing models.
| no_new_dataset | 0.950595 |
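The first step in the method above, decomposing trend and seasonal components with a discrete wavelet transform, is straightforward with PyWavelets. A minimal sketch on a synthetic sensor signal (illustrative only):

```python
# Minimal sketch of the first step described above: a discrete wavelet
# transform that separates a sensor reading into a coarse trend
# (approximation coefficients) and finer seasonal/detail components.
# Uses PyWavelets; the synthetic signal is for illustration only.
import numpy as np
import pywt

t = np.linspace(0, 1, 512)
signal = 0.5 * t + np.sin(2 * np.pi * 16 * t) + 0.1 * np.random.randn(512)

# 3-level decomposition: cA3 captures the trend, cD1..cD3 the detail bands
cA3, cD3, cD2, cD1 = pywt.wavedec(signal, wavelet="db4", level=3)

# reconstruct the trend alone by zeroing the detail coefficients
trend = pywt.waverec([cA3, np.zeros_like(cD3), np.zeros_like(cD2),
                      np.zeros_like(cD1)], wavelet="db4")
print(trend.shape, signal.shape)
```

In the paper's pipeline the separated components then feed the frequency-domain attention and dynamic GNN stages.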
2503.00037 | Wei Zhao | Wei Zhao, Zhe Li, Yige Li, Jun Sun | Zero-Shot Defense Against Toxic Images via Inherent Multimodal Alignment
in LVLMs | null | null | null | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Vision-Language Models (LVLMs) have made significant strides in
multimodal comprehension, thanks to extensive pre-training and fine-tuning on
large-scale visual datasets. However, despite their robust textual safety
mechanisms, they remain vulnerable to harmful visual inputs. Existing
safeguards, which typically rely on pre-filtering or fine-tuning, incur high
costs and diminish overall utility. To address this critical vulnerability, we
introduce SafeCLIP, a lightweight method that leverages LVLMs' inherent
multimodal alignment for zero-shot toxic image detection. By projecting CLIP's
discarded CLS token into its text space and matching it with toxic descriptors,
SafeCLIP detects harmful content without any architectural changes, adding
minimal latency and enabling dynamic safety corrections during inference and
fine-tuning. Experiments show that SafeCLIP achieves a 66.9% defense success
rate with only 3.2% false positive rate and 7.2% overhead. In contrast,
state-of-the-art methods achieve 52.9% success but have a 10.7% false positive
rate and 210% overhead. Our work demonstrates that leveraging inherent
multimodal alignment can yield efficient, low-cost LVLM safety. Code is
available at anonymous.4open.science/r/safeclip-2C01.
| [
{
"version": "v1",
"created": "Tue, 25 Feb 2025 06:51:16 GMT"
}
]
| 2025-03-04T00:00:00 | [
[
"Zhao",
"Wei",
""
],
[
"Li",
"Zhe",
""
],
[
"Li",
"Yige",
""
],
[
"Sun",
"Jun",
""
]
]
| TITLE: Zero-Shot Defense Against Toxic Images via Inherent Multimodal Alignment
in LVLMs
ABSTRACT: Large Vision-Language Models (LVLMs) have made significant strides in
multimodal comprehension, thanks to extensive pre-training and fine-tuning on
large-scale visual datasets. However, despite their robust textual safety
mechanisms, they remain vulnerable to harmful visual inputs. Existing
safeguards, which typically rely on pre-filtering or fine-tuning, incur high
costs and diminish overall utility. To address this critical vulnerability, we
introduce SafeCLIP, a lightweight method that leverages LVLMs' inherent
multimodal alignment for zero-shot toxic image detection. By projecting CLIP's
discarded CLS token into its text space and matching it with toxic descriptors,
SafeCLIP detects harmful content without any architectural changes, adding
minimal latency and enabling dynamic safety corrections during inference and
fine-tuning. Experiments show that SafeCLIP achieves a 66.9% defense success
rate with only 3.2% false positive rate and 7.2% overhead. In contrast,
state-of-the-art methods achieve 52.9% success but have a 10.7% false positive
rate and 210% overhead. Our work demonstrates that leveraging inherent
multimodal alignment can yield efficient, low-cost LVLM safety. Code is
available at anonymous.4open.science/r/safeclip-2C01.
| no_new_dataset | 0.949763 |
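The matching step described above, scoring an image against toxic-concept descriptors in CLIP's joint space, can be approximated with a standard zero-shot check. SafeCLIP projects the otherwise-discarded CLS token; the sketch below uses the ordinary pooled CLIP embedding instead, and the descriptors and image path are placeholders:

```python
# Approximate sketch of the matching step described above: score an image
# embedding against toxic-concept text descriptors in CLIP's joint space.
# SafeCLIP projects the otherwise-discarded CLS token; this simplified
# version uses the ordinary pooled CLIP embedding, so it is illustrative only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

descriptors = ["a violent scene", "graphic gore", "a safe everyday photo"]
image = Image.open("input.jpg")  # placeholder path

inputs = processor(text=descriptors, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

# flag the image if a toxic descriptor dominates the safe one
is_toxic = probs[:2].max() > probs[2]
print(dict(zip(descriptors, probs.tolist())), "flag:", bool(is_toxic))
```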